Fix: Laravel Queue Job Not Processing — Jobs Stuck in Queue
Quick Answer
How to fix Laravel queue jobs not running — queue worker not started, wrong connection config, failed jobs, job timeouts, horizon setup, and database vs Redis queue differences.
The Problem
A Laravel queued job is dispatched but never executes:
ProcessOrder::dispatch($order);
// Job added to queue... but nothing happens
The job sits in the jobs table indefinitely:
SELECT * FROM jobs;
-- id | queue | payload | attempts | reserved_at | available_at | created_at
-- 1 | default | {...} | 0 | NULL | 1711234567 | 1711234567
-- Job never gets picked up
Or the worker runs but jobs fail silently:
php artisan queue:work
# [2026-03-22 10:00:00] Processing: App\Jobs\ProcessOrder
# [2026-03-22 10:00:01] Failed: App\Jobs\ProcessOrder
# No error message visible
Or in a fresh environment, jobs dispatch but workers can’t connect:
[Illuminate\Queue\InvalidPayloadException]
Unable to JSON encode payload. Error code: 5
Why This Happens
Laravel’s queue system requires a separate worker process to poll and execute jobs. The most common cause of “jobs not processing” is simply that no worker is running. Beyond that:
- No worker running — queue:work or queue:listen must be running continuously as a separate process. Dispatching a job only writes it to the queue store (database, Redis, SQS, etc.); the worker reads and executes it.
- Wrong queue connection — the app dispatches to redis but the worker is polling database, or the job specifies a named queue (emails) but the worker only listens to default.
- Worker stopped after a code change — queue:work caches the application in memory on startup. After deploying new code, workers must be restarted (queue:restart) to pick up changes.
- Failed job not visible — jobs that fail are moved to the failed_jobs table, not the jobs table. They appear “gone” but actually failed.
- Job serialization error — Eloquent models in job constructors are serialized by ID via SerializesModels. If the model is deleted before the job runs, the job fails with a ModelNotFoundException.
- Queue driver not configured — .env has QUEUE_CONNECTION=sync, which runs the job immediately in the dispatching process (nothing is ever queued), instead of database or redis.
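When a row is stuck in the jobs table and it isn’t obvious which job class it carries, the payload column can be decoded directly. A minimal sketch — the payload string below is a trimmed, hypothetical example of the shape Laravel stores (real payloads carry more keys), though displayName is present in standard payloads:

```php
<?php
// Trimmed, hypothetical payload string — illustrates the shape, not a real row
$payload = '{"displayName":"App\\\\Jobs\\\\ProcessOrder","job":"Illuminate\\\\Queue\\\\CallQueuedHandler@call","maxTries":null}';

$decoded = json_decode($payload, true);

// displayName identifies the stuck job's class; the queue name itself
// lives in the jobs table's separate `queue` column
echo $decoded['displayName'], PHP_EOL; // App\Jobs\ProcessOrder
```

Knowing the class and the `queue` column value makes it easy to check whether any running worker is actually listening to that queue.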
Fix 1: Start the Queue Worker
The most common fix — ensure a worker is actually running:
# Start a worker for the default queue on the default connection
php artisan queue:work
# Start with verbose output to see job processing
php artisan queue:work --verbose
# Process a specific connection and queue
php artisan queue:work redis --queue=emails,default
# Process only one job then exit (useful for testing)
php artisan queue:work --once
# Run with a timeout (kills jobs taking longer than N seconds)
php artisan queue:work --timeout=60
# queue:listen vs queue:work:
# queue:listen — restarts the worker after every job (picks up code changes automatically, slower)
# queue:work — keeps the worker alive (faster, but requires restart after code changes)
php artisan queue:listen
Check if a worker is running:
# Linux
ps aux | grep "queue:work"
# Or check if there's any artisan worker process
ps aux | grep "artisan"
For production — use Supervisor to keep the worker alive:
; /etc/supervisor/conf.d/laravel-worker.conf
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/artisan queue:work redis --sleep=3 --tries=3 --timeout=90
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=www-data
; Run 2 worker processes
numprocs=2
redirect_stderr=true
stdout_logfile=/var/www/html/storage/logs/worker.log
# Apply the Supervisor config
supervisorctl reread
supervisorctl update
supervisorctl start laravel-worker:*
Fix 2: Verify Queue Configuration
Check that the app’s queue connection matches where the worker is listening:
# .env file
QUEUE_CONNECTION=database # Must match what the worker polls
# Options: sync, database, redis, beanstalkd, sqs
# For Redis, also set:
REDIS_HOST=127.0.0.1
REDIS_PORT=6379
REDIS_PASSWORD=null
// config/queue.php — verify the connection is configured correctly
'connections' => [
'database' => [
'driver' => 'database',
'table' => 'jobs', // Must match the migration table name
'queue' => 'default',
'retry_after' => 90,
],
'redis' => [
'driver' => 'redis',
'connection' => 'default', // Must match a connection in config/database.php redis section
'queue' => env('REDIS_QUEUE', 'default'),
'retry_after' => 90,
'block_for' => null,
],
],
Mismatch between dispatch queue and worker queue:
// Job dispatched to 'emails' queue
ProcessOrder::dispatch($order)->onQueue('emails');
// But worker only listens to 'default':
// php artisan queue:work
// Fix: specify the queue explicitly
// php artisan queue:work --queue=emails,default
Check the queue the job was dispatched to:
-- Check what queue the job is waiting in
SELECT queue, COUNT(*) FROM jobs GROUP BY queue;
-- Result: emails | 5 → worker must listen to 'emails'
Fix 3: Restart Workers After Deployment
queue:work bootstraps the Laravel application once on startup and keeps it in memory. New code deployed after the worker started isn’t picked up:
# Signal all workers to gracefully restart after the current job finishes
php artisan queue:restart
# Workers poll for this signal and restart when idle
# The restart command stores a timestamp in the cache — workers check it periodically
Add queue restart to your deploy script:
#!/bin/bash
# deploy.sh
git pull origin main
composer install --no-dev --optimize-autoloader
php artisan migrate --force
php artisan config:cache
php artisan route:cache
php artisan queue:restart # ← Restart workers to pick up code changes
With Supervisor — restart automatically after deploy:
# Restart the Supervisor-managed workers (queue:restart also works here —
# with autorestart=true, Supervisor relaunches any worker that exits)
supervisorctl restart laravel-worker:*
Note: queue:restart stores its restart signal in the cache, so it won’t work with the array cache driver (the signal isn’t shared between processes), and the file driver only works when all workers share the same filesystem. Use redis or database as the cache driver in production.
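The restart mechanism itself is simple. Internally (an implementation detail that may change between versions), queue:restart writes a timestamp to the cache, and each worker compares it with the time it booted — which is exactly why a cache driver that isn’t shared between processes breaks it. A simplified pure-PHP sketch of that comparison:

```php
<?php
// Simplified sketch of the check a queue worker performs between jobs:
// queue:restart stores a timestamp in the cache, and a worker exits when
// that timestamp is newer than the worker's own boot time.
function shouldRestart(?int $restartSignal, int $workerStartedAt): bool
{
    return $restartSignal !== null && $restartSignal > $workerStartedAt;
}

var_dump(shouldRestart(1500, 1000)); // bool(true)  — restart requested after boot
var_dump(shouldRestart(null, 1000)); // bool(false) — no restart ever requested
var_dump(shouldRestart(900, 1000));  // bool(false) — signal predates this worker
```

With the array cache driver, the worker process never sees the timestamp the artisan process wrote, so the first argument is effectively always null.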
Fix 4: Debug Failed Jobs
Jobs that fail move to the failed_jobs table. They don’t show in jobs:
# List all failed jobs
php artisan queue:failed
# ID | Connection | Queue | Class | Failed At
# 1 | redis | emails| App\Jobs\ProcessOrder | 2026-03-22 10:00:01
# Show the exception for a failed job — queue:failed only lists jobs;
# the full stack trace lives in the failed_jobs table:
SELECT id, exception, failed_at FROM failed_jobs ORDER BY failed_at DESC LIMIT 10;
-- The 'exception' column contains the full stack trace
Retry a failed job:
# Retry a specific failed job
php artisan queue:retry 1
# Retry all failed jobs
php artisan queue:retry all
# Delete a failed job
php artisan queue:forget 1
# Clear all failed jobs
php artisan queue:flush
Make sure failed_jobs table exists:
# Create the failed_jobs table if missing
php artisan queue:failed-table
php artisan migrate
Fix 5: Fix Job Serialization and Model Binding
Jobs that store Eloquent models in their constructor use SerializesModels to serialize only the model’s ID. When the job runs, it re-fetches the model from the database. If the model was deleted between dispatch and execution, the job fails:
// WRONG — model may be deleted before job runs
class SendWelcomeEmail implements ShouldQueue
{
use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;
public function __construct(
public User $user // Serialized as App\Models\User:123
) {}
public function handle(): void
{
// If user was deleted, this throws ModelNotFoundException
mail($this->user->email, 'Welcome!', '...');
}
}
// CORRECT — with SerializesModels, a deleted model throws
// ModelNotFoundException while the job is being unserialized, *before*
// handle() runs, so a null check inside handle() can never fire.
// Tell Laravel to quietly discard the job instead:
class SendWelcomeEmail implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;
    // Delete the job instead of failing when the model no longer exists
    public $deleteWhenMissingModels = true;
    public function __construct(
        public User $user
    ) {}
    public function handle(): void
    {
        mail($this->user->email, 'Welcome!', '...');
    }
}
// Or: use the model's ID instead to avoid automatic re-fetching
public function __construct(
public int $userId
) {}
public function handle(): void
{
$user = User::find($this->userId);
if (!$user) return; // User deleted, skip
// ...
}
Avoid unserializable data in jobs:
// WRONG — Closures can't be serialized
ProcessOrder::dispatch(function() { /* ... */ });
// WRONG — Resource types (file handles, database connections) can't be serialized
class ProcessFile implements ShouldQueue
{
public function __construct(
public $fileHandle // Can't serialize a resource
) {}
}
// CORRECT — pass the file path, open the handle in handle()
class ProcessFile implements ShouldQueue
{
public function __construct(
public string $filePath
) {}
public function handle(): void
{
$handle = fopen($this->filePath, 'r');
// ...
fclose($handle);
}
}
Fix 6: Handle Job Timeouts and Retries
Jobs that run longer than the worker’s --timeout are killed with a SIGKILL, and the job is marked as failed:
class ProcessLargeReport implements ShouldQueue
{
use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;
// Set a per-job timeout (takes precedence over --timeout flag)
public int $timeout = 300; // 5 minutes
// Number of retry attempts
public int $tries = 3;
// Delay between retries
public int $backoff = 60; // 60 seconds
// Only retry until this time
public function retryUntil(): \DateTime
{
return now()->addHours(2);
}
public function handle(): void
{
// Job logic
}
// Called when all retries are exhausted
public function failed(\Throwable $exception): void
{
// Notify team, update database status, etc.
Log::error('Report generation failed', [
'exception' => $exception->getMessage(),
]);
}
}
Configure retries with exponential backoff:
// Return different delay per attempt
public function backoff(): array
{
return [30, 60, 120]; // 30s after 1st failure, 60s after 2nd, 120s after 3rd
}
Fix 7: Use Laravel Horizon for Redis Queue Monitoring
For Redis-backed queues, Laravel Horizon provides a dashboard for monitoring job throughput, failed jobs, and worker status:
composer require laravel/horizon
php artisan horizon:install
php artisan migrate
# Start Horizon (replaces queue:work for Redis queues)
php artisan horizon
# Horizon dashboard available at /horizon
// config/horizon.php — configure worker pools
'environments' => [
'production' => [
'supervisor-1' => [
    'connection' => 'redis',
    'queue' => ['default'],
    'balance' => 'auto', // balanceMaxShift/balanceCooldown only apply to auto balancing
    'maxProcesses' => 10,
    'balanceMaxShift' => 1,
    'balanceCooldown' => 3,
],
],
'local' => [
'supervisor-1' => [
'maxProcesses' => 3,
],
],
],
Horizon also shows:
- Real-time job throughput (jobs per minute)
- Failed jobs with full stack traces
- Queue depth per queue
- Worker process count
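One related setup step worth knowing: outside the local environment, Horizon blocks the /horizon dashboard unless the viewHorizon gate allows the current user. A sketch of the provider that horizon:install generates — only the allow-list email below is a hypothetical placeholder:

```php
<?php
// app/Providers/HorizonServiceProvider.php (created by horizon:install)

namespace App\Providers;

use Illuminate\Support\Facades\Gate;
use Laravel\Horizon\HorizonApplicationServiceProvider;

class HorizonServiceProvider extends HorizonApplicationServiceProvider
{
    protected function gate(): void
    {
        // Horizon checks this gate on every dashboard request
        // in non-local environments
        Gate::define('viewHorizon', function ($user) {
            return in_array($user->email, [
                'admin@example.com', // hypothetical admin account
            ]);
        });
    }
}
```

If the dashboard returns 403 in staging or production, this gate is usually the reason.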
Still Not Working?
QUEUE_CONNECTION=sync in .env — sync runs jobs immediately in the current process (useful for local development/testing), not asynchronously. Set to database or redis for true async processing. After changing .env, run php artisan config:clear.
Cache config is stale — if you’ve changed queue.php or .env, clear the cached config: php artisan config:clear && php artisan config:cache.
Database queue: jobs table not created — run php artisan queue:table && php artisan migrate to create the jobs table.
Redis connection refused — if using Redis queue, verify Redis is running: redis-cli ping should return PONG. Check the REDIS_HOST and REDIS_PORT in .env.
Job dispatched in a test with Queue::fake() — if Queue::fake() is called in a test, jobs are intercepted and not actually dispatched to a queue. This is intentional for testing, but make sure production code doesn’t have Queue::fake() active:
// In tests
Queue::fake();
ProcessOrder::dispatch($order);
Queue::assertPushed(ProcessOrder::class); // Verify it was dispatched (not executed)
// For integration tests where you want jobs to run:
// Don't call Queue::fake() — or use a real queue connection
For related issues, see Fix: Celery Task Not Executing and Fix: Redis Pub/Sub Not Working.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
Related Articles
Fix: BullMQ Not Working — Jobs Not Processing, Workers Not Starting, or Redis Connection Failing
How to fix BullMQ issues — queue and worker setup, Redis connection, job scheduling, retry strategies, concurrency, rate limiting, event listeners, and dashboard monitoring.
Fix: Redis Cluster Not Working — MOVED, CROSSSLOT, or Connection Errors
How to fix Redis Cluster errors — MOVED redirects, CROSSSLOT multi-key operations, cluster-aware client setup, hash tags for key grouping, and failover handling.
Fix: PHP Session Not Working — $_SESSION Variables Lost Between Requests
How to fix PHP session variables that don't persist between requests — session_start() placement, cookie settings, session storage, shared hosting, and session fixation security.
Fix: Redis Pub/Sub Not Working — Messages Not Received by Subscribers
How to fix Redis Pub/Sub issues — subscriber not receiving messages, channel name mismatches, connection handling, pattern subscriptions, and scaling with multiple processes.