
Laravel Unique Jobs & Encrypted Jobs – Stop Duplicate Processing

Shahroz Javed
Mar 15, 2026

The Duplicate Job Problem

Imagine a user clicks "Send Invoice" and something causes a slow network response — so they click again. Your controller dispatches the SendInvoice job twice. Two emails go out. The customer is confused. Your company looks unprofessional.

Or consider a webhook that fires every time a payment status changes. If the payment provider retries the webhook, you process the same payment twice. That's a financial bug.

These are real production problems. Laravel provides built-in tools to solve them: unique jobs and job deduplication.

ShouldBeUnique Interface

The simplest solution is to implement ShouldBeUnique on your job class. Laravel will refuse to dispatch a second copy of the job if one is already in the queue.

<?php

namespace App\Jobs;

use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Bus\Queueable;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class GenerateSitemap implements ShouldQueue, ShouldBeUnique
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function handle(): void
    {
        // regenerate the sitemap
    }
}

Now if you dispatch GenerateSitemap multiple times rapidly, only the first one will enter the queue. The rest are silently discarded.

How it Works Under the Hood

When you dispatch a unique job, Laravel tries to acquire an atomic cache lock using a key derived from the job class name (and the uniqueId(), if one is defined). If the lock is already held, meaning the job is already queued or running, the new dispatch is ignored. The lock is released once the job finishes processing or fails.
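Conceptually, the mechanism can be sketched with Laravel's atomic cache locks. This is only an illustration of the idea, not the framework's actual internals; the real key format and lifetime are managed by the queue layer:

```php
use Illuminate\Support\Facades\Cache;

// Hypothetical sketch of what happens on dispatch of a unique job.
// The real lock key also incorporates uniqueId(), when defined.
$key = 'unique_job:' . GenerateSitemap::class;

// Attempt to acquire the lock atomically; it expires after 3600s as a safety net
$lock = Cache::lock($key, 3600);

if ($lock->get()) {
    // Lock acquired: the job is pushed onto the queue.
    // A worker releases the lock after handle() completes or fails.
} else {
    // Lock already held: this dispatch is silently dropped.
}
```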

Custom uniqueId – Per-Model Uniqueness

By default, ShouldBeUnique uses the job class name as the unique key. This means only one instance of the entire job class can be queued at a time.

That's too strict. You often want uniqueness per entity — one job per product, one per user, etc. Define a uniqueId() method to return a unique identifier:

class UpdateProductInventory implements ShouldQueue, ShouldBeUnique
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(protected Product $product) {}

    // Unique per product — multiple products can be queued simultaneously
    public function uniqueId(): string
    {
        return (string) $this->product->id;
    }

    public function handle(): void
    {
        // update inventory for this product
    }
}

Now you can dispatch UpdateProductInventory for product #5 and product #12 simultaneously — they have different unique IDs. But dispatching it twice for product #5 will drop the second one.

// This works — different products
UpdateProductInventory::dispatch($product5);
UpdateProductInventory::dispatch($product12);

// Second dispatch for same product is silently ignored
UpdateProductInventory::dispatch($product5);
UpdateProductInventory::dispatch($product5); // ignored

uniqueFor – Time-Windowed Uniqueness

By default, the uniqueness lock is held until the job finishes processing. The $uniqueFor property lets you set a time limit (in seconds) for how long the lock is held. After that window, the same job can be dispatched again even if the first one hasn't run yet.

This is useful to prevent a backlog of retries from blocking new dispatches indefinitely:

class SyncUserToExternalCrm implements ShouldQueue, ShouldBeUnique
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    // The uniqueness lock expires after 1 hour
    public int $uniqueFor = 3600;

    public function __construct(protected User $user) {}

    public function uniqueId(): string
    {
        return (string) $this->user->id;
    }

    public function handle(): void
    {
        // sync user data to CRM
    }
}

Practical scenario: a user updates their profile 10 times in a minute. With $uniqueFor = 3600 and a per-user uniqueId, the redundant dispatches are dropped while the lock is held; the lock clears when the job finishes or after an hour, whichever comes first. You avoid hammering the CRM with redundant updates.
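To make the time window concrete, here is how repeated dispatches behave in that scenario (the timestamps are illustrative):

```php
// 09:00: the first dispatch acquires the per-user lock and is queued
SyncUserToExternalCrm::dispatch($user);

// 09:00-09:59: repeat dispatches for the same user are dropped
SyncUserToExternalCrm::dispatch($user); // ignored

// After 10:00, or as soon as the first job finishes (whichever comes
// first), the lock is free and a new dispatch is accepted
SyncUserToExternalCrm::dispatch($user);
```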

uniqueVia – Custom Cache Driver for Locks

By default, Laravel uses the application's default cache driver to store unique job locks. For unique jobs to work correctly at scale, the lock must be stored in a shared, atomic cache — like Redis. If you use a file-based or array cache driver, uniqueness won't work across multiple servers.

Use uniqueVia() to explicitly set which cache driver handles the locks:

use Illuminate\Contracts\Cache\Repository;
use Illuminate\Support\Facades\Cache;

class SyncUserToExternalCrm implements ShouldQueue, ShouldBeUnique
{
    // ...

    public function uniqueVia(): Repository
    {
        // Always use Redis for the lock, regardless of default cache driver
        return Cache::driver('redis');
    }
}
⚠️ If you run multiple queue workers across multiple servers (horizontal scaling), you MUST use a shared cache driver like Redis for unique job locks. The file driver is local to each server, and the array driver to each process, so locks won't be shared and duplicates will slip through.
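In practice this pairs with making Redis available as both the queue connection and the lock store. Assuming a standard setup, the relevant .env entries look roughly like this (note the variable was named CACHE_DRIVER before Laravel 11 and CACHE_STORE from 11 onward):

```shell
# Jobs themselves go through Redis
QUEUE_CONNECTION=redis

# Cache store used for unique-job locks
CACHE_STORE=redis

REDIS_HOST=127.0.0.1
REDIS_PORT=6379
```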

ShouldBeUniqueUntilProcessing

There's a subtle difference between two unique behaviors:

  • ShouldBeUnique — holds the lock until the job finishes processing. A new copy can't be dispatched while the job is in the queue OR while it's running.

  • ShouldBeUniqueUntilProcessing — releases the lock as soon as the job starts processing. A new copy can be dispatched while the first one is still running.

use Illuminate\Contracts\Queue\ShouldBeUniqueUntilProcessing;

class SendDailyReport implements ShouldQueue, ShouldBeUniqueUntilProcessing
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function uniqueId(): string
    {
        return 'daily-report-' . now()->format('Y-m-d');
    }

    public function handle(): void
    {
        // generate and email the daily report
        // The lock releases as soon as processing begins, so the same
        // report can be re-queued while this run is still executing
    }
}

Use ShouldBeUniqueUntilProcessing when you want to allow re-queuing of a job while the previous run is still going — for example, a long-running report that you want to allow scheduling again while the current one executes.

WithoutOverlapping Middleware

The WithoutOverlapping job middleware prevents concurrent processing of the same job — meaning two workers can't run the same job at the same time. This is different from ShouldBeUnique, which prevents queuing duplicates.

Use WithoutOverlapping when you want jobs of the same type to queue normally but only run one at a time per entity:

use Illuminate\Queue\Middleware\WithoutOverlapping;

class ProcessUserData implements ShouldQueue
{
    public function __construct(protected int $userId) {}

    public function middleware(): array
    {
        // Only one ProcessUserData job per user can run at a time
        return [new WithoutOverlapping($this->userId)];
    }

    public function handle(): void
    {
        // process user data...
    }
}

Release vs Skip on Overlap

// Overlapping jobs are released back onto the queue to be retried (default behavior)
return [new WithoutOverlapping($this->userId)];

// Or skip the job entirely if another is already running (don't retry)
return [(new WithoutOverlapping($this->userId))->dontRelease()];
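The middleware exposes a couple of further knobs; the timings below are example values:

```php
use Illuminate\Queue\Middleware\WithoutOverlapping;

public function middleware(): array
{
    return [
        (new WithoutOverlapping($this->userId))
            // Wait 10 seconds before a released job is retried
            ->releaseAfter(10)
            // Safety net: force-expire the lock after 3 minutes in case
            // a worker crashes without releasing it
            ->expireAfter(180),
    ];
}
```

There is also a shared() method that makes different job classes share one lock key, useful when two job types must not touch the same resource concurrently.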

ShouldBeUnique vs WithoutOverlapping

  • ShouldBeUnique — prevents the same job from being queued multiple times. Only one copy ever exists in the queue at a time.

  • WithoutOverlapping — allows multiple copies in the queue, but only lets one process at a time. Others wait or are released back.

Encrypted Jobs

Queue jobs are serialized and stored in your queue backend (database, Redis, SQS). By default, the payload is stored in plain text. If your job contains sensitive data — like personal information, tokens, or API keys — you should encrypt the job payload.

Implement ShouldBeEncrypted to automatically encrypt the job before storing it:

use Illuminate\Contracts\Queue\ShouldBeEncrypted;

class ProcessPaymentData implements ShouldQueue, ShouldBeEncrypted
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(
        protected string $cardToken,
        protected float  $amount,
        protected string $customerId
    ) {}

    public function handle(): void
    {
        // process payment with $this->cardToken
    }
}

Laravel uses your application's APP_KEY (from .env) to encrypt and decrypt the payload. The job is encrypted before being stored and decrypted when the worker picks it up.

⚠️ If you rotate your APP_KEY while encrypted jobs are in the queue, those jobs will fail to decrypt and be lost. Always drain (empty) your queues before rotating the application key.
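One way to drain before rotating, assuming a single default queue, is to run a worker until it empties and only then regenerate the key:

```shell
# Process everything currently on the queue, then exit
php artisan queue:work --stop-when-empty

# Only rotate the key once the queue is empty
php artisan key:generate
```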

When to Use Encrypted Jobs

  • Job contains payment tokens or card data

  • Job contains personally identifiable information (PII) that must be protected at rest

  • Your queue backend (SQS, Redis) is shared across teams or environments

  • Regulatory compliance requires data encryption at rest (GDPR, HIPAA)

Conclusion

Unique and encrypted jobs solve real production problems that most basic tutorials skip over entirely. Here's what to remember:

  • Use ShouldBeUnique to prevent duplicate jobs from entering the queue

  • Define uniqueId() for per-entity uniqueness (per user, per product, etc.)

  • Use $uniqueFor to set a time window after which the same job can be re-queued

  • Always use uniqueVia() with Redis in multi-server deployments

  • Use ShouldBeUniqueUntilProcessing when you want to allow re-queuing once processing starts

  • Use WithoutOverlapping middleware when you want only one job of a type running at a time, but multiple can queue up

  • Use ShouldBeEncrypted whenever your job payload contains sensitive or regulated data
