N+1 Queries Inside Jobs
The N+1 problem is twice as dangerous inside jobs because it's invisible.
Your app seems to work fine — until a batch of 10,000 jobs runs and your database melts.
// ❌ Anti-pattern: N+1 in handle()
class SendOrderConfirmation implements ShouldQueue
{
public function __construct(protected Order $order) {}
public function handle(): void
{
// $order->items triggers a new query inside every job
foreach ($this->order->items as $item) {
// $item->product triggers ANOTHER query per item
echo $item->product->name;
}
}
}
// ✅ Correct: Eager load inside handle()
public function handle(): void
{
$order = $this->order->load('items.product', 'customer');
foreach ($order->items as $item) {
echo $item->product->name; // no extra queries
}
}
⚠️ Never rely on relationships being magically available in jobs. SerializesModels stores only the model's ID; when the job runs, the model is re-fetched fresh, and any relationships that were loaded at dispatch time are re-queried — once per job. Always eager load explicitly inside handle() so the query count stays visible and predictable.
No Timeout = Zombie Workers
A job that calls an external API with no timeout can hang forever.
The worker process sits frozen, holding one of your limited worker slots.
Add more frozen jobs and you've paralyzed your entire queue.
// ❌ Anti-pattern: No timeout anywhere
class CallExternalService implements ShouldQueue
{
public function handle(): void
{
// If the API hangs, this worker hangs forever
Http::post('https://slow-api.example.com/data', $this->payload);
}
}
// ✅ Correct: Timeout at every level
class CallExternalService implements ShouldQueue
{
public int $timeout = 30; // Kill this job if it runs > 30 seconds
public function handle(): void
{
// Also set HTTP client timeout — don't wait more than 10 seconds for API
Http::timeout(10)->post('https://api.example.com/data', $this->payload);
}
}
You need timeouts at three levels:
Job $timeout — the worker kills the job if it runs longer than this (requires the pcntl PHP extension; a job-level value overrides the worker default)
HTTP client timeout — stops waiting for the external API to respond
Worker --timeout — the default for jobs that don't set their own $timeout; keep it a few seconds below your connection's retry_after so a job is never retried while it's still running
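For reference, here is a worker invocation that pairs with the job above (queue connection and values are illustrative):

```shell
# Worker-level fallback timeout: keep it >= the largest job $timeout on this
# connection, and a few seconds below the connection's retry_after setting
php artisan queue:work redis --timeout=35 --tries=3
```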
Dispatching in a Loop
This is one of the most common beginner mistakes. Dispatching hundreds or thousands of jobs
in a foreach loop blocks the request and creates enormous overhead on your queue backend.
// ❌ Anti-pattern: Dispatch inside a loop
public function sendNewsletter(Newsletter $newsletter)
{
$users = User::subscribed()->get(); // loads 50,000 users into memory
foreach ($users as $user) {
// Each dispatch = one database write (or Redis call)
// 50,000 dispatches blocks the request for seconds
SendNewsletterToUser::dispatch($user, $newsletter);
}
}
// ✅ Correct: One dispatcher job that batches work internally
public function sendNewsletter(Newsletter $newsletter)
{
// Instant response — all heavy lifting goes to the queue
DispatchNewsletter::dispatch($newsletter);
}
The dispatcher job uses User::chunk() so it never loads all records into memory,
and dispatches jobs in bulk using Bus::batch().
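A minimal sketch of that dispatcher job (class names match the example above; the chunk size and batch wiring are illustrative):

```php
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Bus;

class DispatchNewsletter implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(protected Newsletter $newsletter) {}

    public function handle(): void
    {
        // chunkById keeps memory flat no matter how many subscribers exist
        User::subscribed()->chunkById(1000, function ($users) {
            // One batch per chunk: a single bulk insert into the queue
            // backend instead of 1,000 individual dispatch calls
            Bus::batch(
                $users->map(fn ($user) => new SendNewsletterToUser($user, $this->newsletter))
            )->dispatch();
        });
    }
}
```

Note that SendNewsletterToUser must use the Batchable trait, and Bus::batch() requires the job_batches table migration.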
Passing Full Models / Large Data as Payload
Jobs are serialized and stored in your queue backend. Passing large objects creates huge payloads.
This matters especially for SQS (256KB limit) and Redis memory usage.
// ❌ Anti-pattern: Passing the full model with all its data
class ProcessReport implements ShouldQueue
{
// Serialized User could include all loaded relationships: orders, profiles, etc.
public function __construct(protected User $user) {}
}
// ❌ Even worse: Passing raw data arrays
class GenerateInvoice implements ShouldQueue
{
public function __construct(
protected array $invoiceData // could be megabytes of data
) {}
}
// ✅ Correct: Pass IDs and re-fetch inside handle()
class ProcessReport implements ShouldQueue
{
public function __construct(protected int $userId) {}
public function handle(): void
{
// Fresh fetch — no stale data, no huge payload
$user = User::findOrFail($this->userId);
// ...
}
}
⚠️ The SerializesModels trait already helps by only storing the model ID. But if you pass arrays or collections with large data, there's no automatic protection — you must control the payload manually.
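A related gotcha: SerializesModels also records which relationships were loaded and reloads them all when the job runs. If you must pass a model that happens to have heavy relations loaded, strip them at dispatch time with Eloquent's withoutRelations() helper:

```php
// Dispatch a clean copy: the queue stores only the model ID,
// with no list of loaded relationships to restore later
ProcessReport::dispatch($user->withoutRelations());
```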
Ignoring Failed Jobs
Many developers set up queues and never look at the failed_jobs table.
Silent failures accumulate — thousands of orders not processed, thousands of emails not sent.
// ❌ Anti-pattern: No failed() method, no monitoring
class ProcessPayment implements ShouldQueue
{
public function handle(): void
{
// If this throws, the job fails silently
$this->chargeCard();
}
}
// ✅ Correct: Always implement failed()
class ProcessPayment implements ShouldQueue
{
public int $tries = 3;
public function __construct(protected Order $order) {}
public function handle(): void
{
$this->chargeCard();
}
public function failed(\Throwable $exception): void
{
// 1. Update the record so users see the failure
$this->order->update(['status' => 'payment_failed']);
// 2. Alert your team immediately
\Notification::route('slack', config('services.slack.alerts'))
->notify(new PaymentFailedNotification($this->order, $exception));
// 3. Log the full exception
\Log::error('Payment failed', [
'order_id' => $this->order->id,
'exception' => $exception->getMessage(),
'trace' => $exception->getTraceAsString(),
]);
}
}
Using sync Driver in Production
The sync driver runs jobs synchronously in the same request. It exists for local development only.
In production it defeats the entire purpose of queuing — the user still waits, and failures crash the request.
# ❌ In production .env
QUEUE_CONNECTION=sync
# ✅ Use a real queue driver in production
QUEUE_CONNECTION=redis # or database, sqs
In .env.example, set QUEUE_CONNECTION=sync (safe for new devs),
but ensure your production environment variables override this.
Put the check on your deployment checklist — and remember that config:cache freezes whatever value is set at deploy time, so verify it before caching.
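One way to catch it mechanically in a deploy script (illustrative — adapt to your tooling):

```shell
# Fail the deploy if production is still configured for sync
php artisan tinker --execute="exit(config('queue.default') === 'sync' ? 1 : 0);"
```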
Non-Idempotent Jobs
Queue systems guarantee at-least-once delivery — not exactly-once.
A job may run more than once due to worker crashes, network issues, or manual retries.
If your job isn't idempotent, running it twice causes real damage.
// ❌ Non-idempotent: Charges the card every time the job runs
class ChargeCustomer implements ShouldQueue
{
public function handle(): void
{
Stripe::charge($this->amount, $this->paymentMethodId);
// If this runs twice, customer is double-charged!
}
}
// ✅ Idempotent: Uses idempotency key to deduplicate at the API level
class ChargeCustomer implements ShouldQueue
{
// Use a stable, unique ID so Stripe deduplicates for you
private string $idempotencyKey;
public function __construct(protected Order $order)
{
// Key based on order ID — same order always = same key
$this->idempotencyKey = 'order-charge-' . $order->id;
}
public function handle(): void
{
// Check first
if ($this->order->fresh()->status === 'paid') {
return; // already charged, skip
}
Stripe::charges()->create([
'amount' => $this->order->total_cents,
'currency' => 'usd',
'source' => $this->order->payment_method_id,
], ['idempotency_key' => $this->idempotencyKey]); // Stripe deduplicates
$this->order->update(['status' => 'paid', 'paid_at' => now()]);
}
}
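As a complementary guard, Laravel ships a WithoutOverlapping job middleware that stops two workers from processing the same order concurrently (the lock key and expiry here are illustrative):

```php
use Illuminate\Queue\Middleware\WithoutOverlapping;

public function middleware(): array
{
    // Lock on the order ID; auto-release after 3 minutes
    // in case a worker dies while holding the lock
    return [(new WithoutOverlapping($this->order->id))->expireAfter(180)];
}
```

This narrows the double-run window but doesn't replace idempotency — the status check and Stripe's idempotency key above remain the real safety net.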
Dispatching Inside Database Transactions
This is a subtle but catastrophic bug. If you dispatch a job inside a database transaction
and the transaction rolls back, the job was already sent to the queue.
The job runs, tries to find data that was rolled back, and either fails or creates inconsistent state.
// ❌ Anti-pattern: Job dispatched before transaction commits
DB::transaction(function () use ($order) {
$order->update(['status' => 'paid']);
// This job is dispatched NOW — the transaction hasn't committed yet!
// If the transaction rolls back, the job was already sent
ProcessShipment::dispatch($order);
});
// ✅ Correct: Dispatch AFTER the transaction commits
DB::transaction(function () use ($order) {
$order->update(['status' => 'paid']);
// No dispatch here
});
ProcessShipment::dispatch($order); // safe — transaction is committed
// ✅ Alternative: Use afterCommit flag on the job
class ProcessShipment implements ShouldQueue
{
// Wait for the current transaction to commit before sending to queue
public bool $afterCommit = true;
}
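If you'd rather decide this per call site instead of per job class, Laravel also lets you chain it onto the dispatch:

```php
DB::transaction(function () use ($order) {
    $order->update(['status' => 'paid']);
    // Held back until the surrounding transaction commits;
    // discarded entirely if it rolls back
    ProcessShipment::dispatch($order)->afterCommit();
});
```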
Never Restarting Workers After Deploy
When you deploy new code, your workers are still running the old version in memory.
This means:
Jobs using old class structures fail if you changed constructor arguments
Bug fixes don't apply to already-running workers
New job classes don't exist in the old process memory
# ❌ Forgetting to restart workers after deploy
# ✅ Always include this in your deploy script
php artisan queue:restart # signals workers to gracefully restart
# In CI/CD (GitHub Actions, Envoyer, Forge):
- run: php artisan queue:restart
Retry Storms on External APIs
If an external API goes down and you have 10,000 jobs in the queue, all retrying with no backoff,
you create a retry storm that floods the already-struggling API with thousands of requests per second.
This makes recovery slower or impossible.
// ❌ All retries happen immediately — retry storm
class CallStripeApi implements ShouldQueue
{
public int $tries = 10;
// No backoff — all 10 retries happen as fast as possible
}
// ✅ Exponential backoff — gentler on the external service
class CallStripeApi implements ShouldQueue
{
public int $tries = 5;
public array $backoff = [30, 60, 120, 300, 600]; // 30s, 1m, 2m, 5m, 10m
public function middleware(): array
{
// Also cap exception-driven retries across all workers: after 10 thrown
// exceptions, pause further attempts for 5 minutes
// (Illuminate\Queue\Middleware\ThrottlesExceptions — the second argument
// is the decay period, in seconds on recent Laravel versions)
return [new ThrottlesExceptions(10, 5 * 60)];
}
}
Storing Sensitive Data Unencrypted
Queue payloads are stored in your database or Redis in plain text.
Anyone with database access can read job payloads.
Never put tokens, passwords, PII, or payment data in unencrypted job payloads.
// ❌ Sensitive data in plain payload
class ProcessPayment implements ShouldQueue
{
public function __construct(
protected string $cardNumber, // readable in DB!
protected string $cvv, // readable in DB!
protected string $expiryDate
) {}
}
// ✅ Option 1: Pass a token reference, not the actual data
class ProcessPayment implements ShouldQueue
{
public function __construct(
protected string $paymentMethodId // Stripe token — not raw card data
) {}
}
// ✅ Option 2: Encrypt the entire job payload
class ProcessPayment implements ShouldQueue, ShouldBeEncrypted
{
public function __construct(
protected string $cardToken, // encrypted at rest in queue backend
protected int $orderId
) {}
}
No Queue Monitoring
Queues can back up silently. A worker crash, a config change, or a code error can cause
thousands of jobs to pile up while your team is asleep.
No monitoring = you find out when users complain.
// Add queue monitoring to your scheduler (routes/console.php or app/Console/Kernel.php)
$schedule->command('queue:monitor redis:default,database:default --max=100')
->everyFiveMinutes();
// Dispatches a QueueBusy event (attach a Slack notification to it) when a queue's size exceeds --max
// Custom check in a route or scheduler:
$schedule->call(function () {
$failedCount = DB::table('failed_jobs')
->where('failed_at', '>=', now()->subHour())
->count();
if ($failedCount > 20) {
\Notification::route('slack', config('services.slack.alerts'))
->notify(new QueueAlertNotification("$failedCount failed jobs in the last hour"));
}
})->everyFifteenMinutes();
Conclusion
Most queue bugs in production come from a handful of repeatable mistakes.
Run through this checklist for every job you write:
Eager load relationships inside handle(), never rely on lazy loading
Set $timeout on every job that calls external services
Never dispatch in a loop — use a dispatcher job + batch
Pass IDs or small references, not full models or arrays
Always implement failed() — silence is not acceptable in production
Never use sync in production
Make every job idempotent — it will run more than once eventually
Use $afterCommit = true when dispatching inside transactions
Run queue:restart in every deployment
Use exponential backoff — protect struggling external services
Encrypt job payloads that contain sensitive data
Set up queue monitoring — know before your users do