
Running Laravel Queue Workers in Production – Supervisor, Shared Hosting & Deployment

Shahroz Javed
Mar 17, 2026

The Core Problem

Running php artisan queue:work in your terminal works fine for development. But the moment you close that terminal, the worker dies and no jobs are processed.

In production, you need a process manager — a system that:

  • Starts the worker automatically when the server boots

  • Restarts the worker if it crashes or runs out of memory

  • Runs multiple worker processes in parallel for high throughput

  • Gracefully restarts workers after code deployments

The solution depends on your hosting environment. We'll cover both VPS/dedicated servers (Supervisor) and shared hosting (Cron workarounds).

Worker Options Reference

Before setting up process management, understand all the options available to queue:work:

php artisan queue:work \
  --connection=redis \
  --queue=emails,default \
  --tries=3 \
  --timeout=60 \
  --sleep=3 \
  --max-jobs=500 \
  --max-time=3600 \
  --memory=128 \
  --force

(Note: in bash, nothing may follow the trailing backslash on a continued line — inline comments there break the command — so the options are explained below instead.)

  • --connection=redis — which queue connection to use

  • --queue=emails,default — queues to process in priority order (leftmost = highest)

  • --tries=3 — maximum attempts before a job is marked as failed

  • --timeout=60 — kill the job if it runs longer than 60 seconds

  • --sleep=3 — seconds to sleep when the queue is empty (reduces CPU usage)

  • --max-jobs=500 — stop the worker after processing 500 jobs (guards against memory leaks)

  • --max-time=3600 — stop the worker after 1 hour (guards against memory leaks)

  • --memory=128 — stop the worker if memory usage exceeds 128 MB

  • --force — process jobs even when the application is in maintenance mode
⚠️ A job's $timeout property overrides the worker's --timeout flag for that job. Whichever value applies, it should always be several seconds shorter than the retry_after value in config/queue.php — otherwise a job that is still running can be released back onto the queue and processed twice. The worker timeout kills the process at the OS level (via the pcntl extension), while the job timeout is a PHP-level setting, so keep the two aligned.
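As a concrete sketch (the class and property values here are illustrative, not from a real app), a job can declare its own limits, which take precedence over the worker's flags for that job:

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;

class SendWelcomeEmail implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    // Per-job limits — these override the worker's --timeout / --tries flags.
    public int $timeout = 30;   // kill this job after 30 seconds
    public int $tries = 3;      // mark as failed after 3 attempts

    public function handle(): void
    {
        // ... send the email ...
    }
}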

Supervisor on a VPS / Dedicated Server

Supervisor is a process control system for Linux. It's the standard tool for keeping Laravel queue workers alive in production on VPS or dedicated servers.

Step 1: Install Supervisor

# Ubuntu / Debian
sudo apt-get install supervisor

# CentOS / RHEL
sudo yum install supervisor

Step 2: Create a Supervisor Configuration File

Create a config file for your Laravel worker in /etc/supervisor/conf.d/:

sudo nano /etc/supervisor/conf.d/laravel-worker.conf
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=/usr/local/bin/php /var/www/html/artisan queue:work redis --queue=emails,default --sleep=3 --tries=3 --timeout=90 --max-jobs=1000
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=www-data
numprocs=4
redirect_stderr=true
stdout_logfile=/var/www/html/storage/logs/worker.log
stdout_logfile_maxbytes=10MB
stdout_logfile_backups=3
stopwaitsecs=3600

Key Configuration Explained

  • numprocs=4 — runs 4 parallel worker processes. Each handles one job at a time. Increase for higher throughput.

  • autostart=true — starts the worker when the server boots

  • autorestart=true — restarts the worker if it exits for any reason

  • user=www-data — run as the web server user to match file permissions

  • stopwaitsecs=3600 — wait up to 1 hour for current jobs to finish before force-killing on stop. Set this to your longest possible job duration.

  • stdout_logfile — logs all output to Laravel's storage folder. Check this file when debugging worker issues.

Step 3: Start Supervisor

# Reload supervisor config
sudo supervisorctl reread
sudo supervisorctl update

# Start the workers
sudo supervisorctl start laravel-worker:*

# Check status
sudo supervisorctl status

# Stop all workers
sudo supervisorctl stop laravel-worker:*

# Restart all workers
sudo supervisorctl restart laravel-worker:*

Multiple Worker Groups (Advanced)

For high-traffic apps, run separate Supervisor programs for different queues, each with its own worker count:

[program:laravel-worker-high]
command=/usr/local/bin/php /var/www/html/artisan queue:work redis --queue=payments --tries=3
numprocs=4
; ...

[program:laravel-worker-default]
command=/usr/local/bin/php /var/www/html/artisan queue:work redis --queue=emails,default --tries=3
numprocs=2
; ...

Shared Hosting – Cron & Workarounds

Shared hosting is the most common situation for beginner developers. You don't have root access, can't install Supervisor, and can't run long-running processes.

Here are two real-world solutions:

Option 1: Cron + stop-when-empty

Set up a cron job that launches a worker every minute. The --stop-when-empty flag makes the worker process all available jobs and then exit. This way, you never have a permanent process — just a short burst every minute.

# Add to crontab (via cPanel Cron Jobs or SSH)
* * * * * /usr/local/bin/php /home/yourusername/public_html/artisan queue:work --stop-when-empty --tries=3 >> /dev/null 2>&1

This works for low to moderate job volumes. The downside is jobs can wait up to 1 minute before processing.

Option 2: Background Process via PHP (Advanced Shared Hosting)

Some hosting providers allow launching background processes from PHP. You can trigger this from a controller or a route that gets called during the request lifecycle:

// Trigger a background queue:work process (Windows hosting — IIS)
// Note: "start /B" is the Windows way to background a process; the Unix "&" does not apply here.
pclose(popen('start /B php ' . base_path() . '/artisan queue:work --stop-when-empty > NUL 2>NUL', 'r'));

// Linux shared hosting
exec('nohup /usr/local/bin/php ' . base_path() . '/artisan queue:work --stop-when-empty > /dev/null 2>&1 &');
⚠️ This approach is unreliable and not recommended for critical jobs. If the server kills background processes (many shared hosts do), your jobs won't run. Use cron as a safer fallback. For production apps with real traffic, invest in a VPS.

Option 3: Laravel Scheduler to Start the Worker

If you already have the Laravel scheduler running via cron, you can use it to trigger the queue worker:

// In app/Console/Kernel.php (Laravel 10 and earlier)
$schedule->command('queue:work --stop-when-empty')->everyMinute()->withoutOverlapping();

// In routes/console.php (Laravel 11+), use the Schedule facade instead:
// Schedule::command('queue:work --stop-when-empty')->everyMinute()->withoutOverlapping();

// And your crontab just has the scheduler (one cron entry manages everything):
// * * * * * php /home/username/public_html/artisan schedule:run >> /dev/null 2>&1

Deployment Strategy – Zero Downtime

This is where most developers make critical mistakes. When you deploy new code, your queue workers are still running the old code in memory. If you push new job classes or change existing ones, workers processing old code will fail or behave incorrectly.

The Right Deployment Flow

# Step 1: Pull your latest code
git pull origin main

# Step 2: Install/update dependencies
composer install --no-dev --optimize-autoloader

# Step 3: Run migrations
php artisan migrate --force

# Step 4: Clear and cache config/routes/views
php artisan config:cache
php artisan route:cache
php artisan view:cache

# Step 5: Signal workers to gracefully restart
# (they finish current job, then restart with new code)
php artisan queue:restart

How queue:restart Works

queue:restart doesn't kill workers immediately. It stores a timestamp in the cache. Each worker checks this timestamp after finishing each job. When a worker sees the timestamp has changed, it exits gracefully — and Supervisor restarts it with the new code.

⚠️ For queue:restart to work, your cache driver must be shared by the web process and every worker. The array driver won't work at all, and the file driver only works when all workers run on the same server — Redis or database cache is the safe choice. Workers use the cache to detect the restart signal.
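For example, in .env (assuming Redis is installed and configured as a cache store):

# Laravel 11+ uses CACHE_STORE
CACHE_STORE=redis
# Laravel 10 and earlier use CACHE_DRIVER
# CACHE_DRIVER=redis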

Laravel Envoyer — Zero Downtime Deployment

If you use Laravel Envoyer for deployments, add php artisan queue:restart as a deployment hook and workers will restart cleanly after every deploy.

Memory Leaks & Long-Running Workers

PHP was designed for short-lived request cycles, not long-running processes. Queue workers run for hours or days — and PHP memory doesn't always get freed properly. Over time, workers can grow to consume hundreds of MB.

Use --max-jobs and --max-time

The safest protection against memory leaks is to periodically restart workers:

# Stop after 1000 jobs (Supervisor will restart it immediately)
php artisan queue:work --max-jobs=1000

# Stop after 1 hour (Supervisor restarts it)
php artisan queue:work --max-time=3600

# Or set a memory limit — stop if usage exceeds 256MB
php artisan queue:work --memory=256

With Supervisor's autorestart=true, the worker is immediately replaced when it exits. No jobs are lost — the new process picks up from where the queue left off.

Common Causes of Memory Leaks in Workers

  • Eloquent models with $with relationships loading large datasets

  • Event listeners that accumulate fired events in memory

  • Log handlers that buffer too much in memory

  • Static properties or singletons that grow over time

Run the worker with --memory=128 during development. If it triggers often, you have a memory leak to fix.
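As an illustration of the last point (a hypothetical class, for demonstration only), a static property that is never cleared grows with every job a long-running worker processes:

<?php

class ReportCache
{
    // Grows forever inside a long-running worker process —
    // each job appends entries and nothing ever removes them.
    public static array $results = [];
}

// Inside a job's handle() method:
// ReportCache::$results[] = $report;  // leaked: survives across jobs

// Fix: reset the static state when the job finishes,
// or avoid static accumulation in queued code entirely.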

Monitoring Your Queue

Flying blind in production is dangerous. You should always know how many jobs are in the queue, how fast they're being processed, and how many are failing.

Artisan Commands

# List all failed jobs
php artisan queue:failed

# Monitor queue sizes (connection:queue pairs, threshold set via --max)
php artisan queue:monitor redis:default,redis:emails --max=100
# Dispatches an Illuminate\Queue\Events\QueueBusy event when a queue
# exceeds the threshold
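You can listen for the QueueBusy event Laravel dispatches and turn it into an alert — a minimal sketch (logging here stands in for a real mail or Slack notification):

<?php

// In the boot() method of App\Providers\AppServiceProvider

use Illuminate\Queue\Events\QueueBusy;
use Illuminate\Support\Facades\Event;
use Illuminate\Support\Facades\Log;

Event::listen(function (QueueBusy $event) {
    // Log the backlog; swap this for a notification in practice.
    Log::warning(
        "Queue {$event->queue} on {$event->connection} has {$event->size} pending jobs."
    );
});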

# Retry all failed jobs
php artisan queue:retry all

# Clear all pending jobs from a queue
php artisan queue:clear redis --queue=emails

Laravel Horizon (Redis Only)

If you're using the Redis driver, install Horizon for a real-time dashboard:

composer require laravel/horizon
php artisan horizon:install
php artisan horizon  # replaces queue:work

Horizon provides: job throughput charts, wait time per queue, failed job browser, tag-based filtering, and auto-balancing worker count based on queue load. Access it at yourdomain.com/horizon.
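Auto-balancing is configured per environment in config/horizon.php — a sketch with example values (queue names and process counts are illustrative):

// config/horizon.php (excerpt)
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection'   => 'redis',
            'queue'        => ['emails', 'default'],
            'balance'      => 'auto',  // scale workers per queue based on load
            'minProcesses' => 1,
            'maxProcesses' => 10,
            'tries'        => 3,
        ],
    ],
],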

Protect the Horizon Dashboard

// In app/Providers/HorizonServiceProvider.php
protected function gate(): void
{
    Gate::define('viewHorizon', function ($user) {
        return in_array($user->email, [
            'admin@yoursite.com',
        ]);
    });
}

Queue Health Check Endpoint

For server monitoring tools (UptimeRobot, Pingdom, etc.), create a simple health check endpoint that verifies the queue is processing:

// routes/web.php
// Note: the `jobs` and `failed_jobs` tables only exist with the
// database queue driver; adapt the counts for Redis or other drivers.
Route::get('/health/queue', function () {
    $failedCount = DB::table('failed_jobs')->count();
    $pendingCount = DB::table('jobs')->count();

    return response()->json([
        'status'  => $failedCount > 10 ? 'warning' : 'ok',
        'pending' => $pendingCount,
        'failed'  => $failedCount,
    ]);
})->middleware('auth.basic');
💡 Series: This is part of our complete Laravel Queue series. Start with Laravel Queue Complete Introduction if you're new to queues.

Conclusion

Getting queue workers running reliably in production is one of the most important infrastructure tasks for any Laravel application. Here's the complete checklist:

  • Use Supervisor on VPS/dedicated servers — it's the standard solution

  • Set numprocs based on expected job volume and server resources

  • For shared hosting, use cron with --stop-when-empty as the safest option

  • Always run php artisan queue:restart after every deployment

  • Use --max-jobs or --max-time to periodically restart workers and prevent memory leaks

  • Use stopwaitsecs in Supervisor to allow long jobs to finish before forcing a stop

  • Use Laravel Horizon for Redis queues — it transforms how you monitor production queues

  • Add a health check endpoint so your monitoring tools can alert you when queues back up
