Events, Queues, and Workers: The Building Blocks

In Article 1 we discovered why event-driven architecture exists — how it moves work off the critical path, makes systems more resilient, and lets you respond to users faster.

But we glossed over something important. When the article said "fire an event and let consumers handle it in the background," you might have wondered: what actually holds that event while the consumer isn't ready? How does the consumer know the event arrived? What stops two consumers from picking up the same event at the same time?

These are exactly the right questions. The answers live in three foundational concepts that every event-driven system is built from: events, queues, and workers.

Let's explore each one properly, with concrete analogies and code. By the end of this article, you'll have the vocabulary to read Inngest's documentation fluently — and understand what it's actually doing for you under the hood.


Quick Reference

Events — records that something happened, carrying data about what occurred.

Queues — durable storage that holds events/jobs between when they're produced and when they're processed.

Workers — functions or processes that pull work from queues and execute it.

The pattern in one sentence: A producer creates an event → the event enters a queue → a worker picks it up and processes it.


What You Need to Know First

Required reading: Article 1 of this series, which covered why event-driven architecture exists.

You should also be comfortable with:

  • JavaScript/TypeScript basics: async/await, objects, functions
  • The concept of an HTTP request/response cycle

What We'll Cover in This Article

By the end of this guide, you'll understand:

  • What an event is — precisely defined, not loosely used
  • What a queue is, how it works, and why it exists
  • What a worker is and how it picks up jobs
  • The relationship between all three, with a visual mental model
  • The difference between a message queue and a job queue
  • What "at-least-once delivery" means and why it matters
  • How these concepts map directly to Inngest

What We'll Explain Along the Way

We'll define these as we encounter them:

  • Dead letter queues (what happens to jobs that keep failing)
  • Acknowledgements (how a queue knows a job was processed)
  • Concurrency (how many jobs a worker handles at once)
  • Fan-out (one event, many consumers)

Part 1: Events

What is an event, precisely?

We introduced events in the previous article, but let's sharpen the definition.

An event is a lightweight, immutable record that describes a state change that has already occurred. Three words in that definition matter most:

Immutable — once an event is created, it never changes. It describes what happened at a moment in time. You don't edit events; you just create new ones.

Past tense — events describe things that have happened, not things you want to happen. user.signed_up, not sign_up_user. payment.failed, not fail_payment. This isn't just a naming convention — it's a fundamental design signal. An event is a fact. What happens because of that fact is someone else's concern.

State change — as Wikipedia's EDA definition puts it, an event represents "a significant change in state." Something that was one way is now a different way. That change is worth recording.

Here's what a well-structured event looks like in TypeScript:

// A well-formed event: descriptive name, structured data, metadata
const event = {
  // What happened — past tense, namespaced by domain
  name: "user.signed_up",

  // When it happened
  timestamp: "2025-03-15T14:22:00.000Z",

  // A unique ID for this specific occurrence
  id: "evt_01J8G4470QYGE0DH65PZM8DPM",

  // The data describing what happened
  data: {
    userId: "usr_01J8G44701QYGE0DH65PZM8DPM",
    email: "alex@example.com",
    name: "Alex Chen",
    plan: "free_trial",
    referralSource: "organic",
  },
};

Notice the name field uses a dot-separated format: domain.action. This is a widely-used convention that makes events scannable. You can look at a list of event names and immediately understand what each one represents:

user.signed_up
user.email_verified
user.subscription_upgraded
order.placed
order.shipped
order.delivered
payment.succeeded
payment.failed
payment.refunded
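
These names are plain strings, but in TypeScript you can get payload checking for free by modelling them as a discriminated union. A small sketch (the event names come from the list above; the payload fields are illustrative, not a fixed schema):

```typescript
// Sketch: dot-separated event names as a discriminated union, so the
// compiler narrows the payload type based on the event name.
type AppEvent =
  | { name: "user.signed_up"; data: { userId: string; email: string } }
  | { name: "order.placed"; data: { orderId: string; total: number } }
  | { name: "payment.failed"; data: { paymentId: string; reason: string } };

function describe(event: AppEvent): string {
  switch (event.name) {
    case "user.signed_up":
      // TypeScript knows data has userId and email here
      return `New user: ${event.data.email}`;
    case "order.placed":
      return `Order ${event.data.orderId} for $${event.data.total}`;
    case "payment.failed":
      return `Payment ${event.data.paymentId} failed: ${event.data.reason}`;
  }
}
```

The payoff: if a consumer tries to read `event.data.total` on a `user.signed_up` event, the compiler catches it before the queue ever does.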

What an event is NOT

It's worth being precise about what events are not, because the word gets overloaded.

An event is not a command. A command says "please do this." An event says "this happened." send_welcome_email is a command. user.signed_up is an event. The distinction matters because a command has one intended recipient, while an event can be picked up by any number of consumers who care about it.

An event is not a request. It doesn't wait for a response. You fire it and move on. Any processing that follows is decoupled from the moment the event was created.

An event is not ephemeral. It should be stored somewhere durable. If a consumer is offline when the event fires, the event shouldn't just disappear — it should wait until the consumer is ready.

That last point is exactly why queues exist.


Part 2: Queues

The problem that queues solve

Imagine you've just fired a user.signed_up event and you want the email service to pick it up and send a welcome email.

Here's a naive approach: call the email service directly.

// Naive approach: call the email service directly
async function handleSignup(userData: UserData) {
  const user = await db.users.create(userData);

  // Directly call the email service
  await emailService.sendWelcome(user); // What if this is down?

  return { userId: user.id };
}

We already know the problems with this: the user waits, and if emailService is down, the call fails. But what's the fix?

You could try to keep retrying in a loop. But then your API endpoint is blocked while it retries. And if your server restarts in the middle of a retry loop, you lose track of everything.

What you need is a persistent middleman — something that holds onto the work, survives server restarts, and hands it to a worker whenever one is available.

That's a queue.

What a queue is

A queue is a durable data store that holds jobs (or messages, or events — the vocabulary varies) in an ordered list. Producers add things to the queue. Workers take things off the queue and process them.

The classic analogy, used well by OpenReplay's job queues guide, is a restaurant kitchen:

The waiter (your API) takes orders and passes them to the kitchen (the queue). Cooks (workers) prepare dishes at their own pace. Customers don't stand at the stove waiting.

The waiter doesn't cook the food. The kitchen doesn't greet the customer. They're separate concerns, linked by a slip of paper in a ticket rail. That slip of paper is the queue entry.

Here's what this looks like in code when thinking abstractly (before we introduce Inngest):

// Producer side — your API endpoint
async function handleSignup(userData: UserData) {
  const user = await db.users.create(userData);

  // Instead of calling the email service directly,
  // we add a job to a queue
  await queue.add("send-welcome-email", {
    userId: user.id,
    email: user.email,
    name: user.name,
  });

  // Respond immediately — the queue holds the job
  return { userId: user.id };
}

// Worker side — a separate process that runs continuously
worker.process("send-welcome-email", async (job) => {
  await emailService.sendWelcome({
    userId: job.data.userId,
    email: job.data.email,
    name: job.data.name,
  });
});

The API endpoint and the email worker are now completely decoupled. The API doesn't need to know when the email gets sent — only that the job was added to the queue. The worker doesn't need to know how the user signed up — only what data it needs to send the email.

How a queue actually works under the hood

Let's slow down and trace what happens step by step when a job enters a queue:

Step 1: Producer adds a job
─────────────────────────────────
Your API calls queue.add(...)
The queue stores the job in a database (Redis, PostgreSQL, etc.)
The job is now in "waiting" state
The API responds to the user — job hand-off complete

Step 2: Worker picks up the job
─────────────────────────────────
The worker is polling (or listening for) the queue
It sees a job in "waiting" state
It "claims" the job — moves it to "active" state
No other worker can claim this same job now

Step 3: Worker processes the job
─────────────────────────────────
The worker runs your function with the job's data
If it succeeds → job moves to "completed" state
If it throws an error → job moves to "failed" state (or retry)

Step 4: Acknowledgement
─────────────────────────────────
When the worker is done, it "acknowledges" the job
The queue removes it from active and marks it complete
If the worker crashes before acknowledging, the queue
re-queues the job for another worker to try
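
The waiting → active → completed lifecycle above can be sketched as a toy in-memory queue (for illustration only; all names here are ours, and a real queue persists these records to Redis or PostgreSQL so they survive restarts):

```typescript
// Toy sketch of the job lifecycle: waiting → active → completed.
type JobStatus = "waiting" | "active" | "completed" | "failed";

interface Job<T> {
  id: string;
  data: T;
  status: JobStatus;
}

class ToyQueue<T> {
  private jobs: Job<T>[] = [];
  private counter = 0;

  // Step 1: producer adds a job in "waiting" state
  add(data: T): string {
    const id = `job_${++this.counter}`;
    this.jobs.push({ id, data, status: "waiting" });
    return id;
  }

  // Step 2: a worker claims the next waiting job; moving it to
  // "active" means no other worker can claim it
  claim(): Job<T> | undefined {
    const job = this.jobs.find((j) => j.status === "waiting");
    if (job) job.status = "active";
    return job;
  }

  // Step 4: acknowledgement — the worker reports the job is done
  ack(id: string): void {
    const job = this.jobs.find((j) => j.id === id);
    if (job) job.status = "completed";
  }
}
```

What this toy deliberately omits is Step 4's failure path: a real queue also runs a timeout that moves a stale "active" job back to "waiting" if the acknowledgement never arrives.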

That last point — about re-queuing on crash — is crucial. Let's look at it more carefully.

Acknowledgements: how the queue knows you're done

When a worker picks up a job, the queue doesn't just delete it immediately. It keeps it in an "active" (or "in-flight") state until the worker sends an acknowledgement saying "done."

RabbitMQ's documentation describes the mechanism clearly: if a worker dies — its connection closes, the process crashes, the server restarts — without sending an acknowledgement, the queue understands that the message wasn't fully processed and re-queues it for another worker.

This is what makes queues durable. Jobs don't disappear because a worker crashed. They're retried.

// Pseudocode showing the acknowledgement lifecycle

queue.add("resize-image", { imageId: "img_123", width: 800 });
// Queue state: [{ id: "job_001", data: {...}, status: "waiting" }]

// Worker picks it up
// Queue state: [{ id: "job_001", data: {...}, status: "active", workerId: "w1" }]

// If worker crashes here → queue re-queues after timeout
// If worker succeeds → it calls ack()
// Queue state: [{ id: "job_001", data: {...}, status: "completed" }]

At-least-once delivery — and why it matters

Here's a subtlety that catches developers off guard.

Most queue systems guarantee at-least-once delivery: every job will be processed at least once. Not exactly once. At least once.

Why? Because of the acknowledgement gap. Consider this scenario:

1. Worker picks up job "send-invoice"
2. Worker sends the invoice email ✅
3. Worker is about to send the acknowledgement...
4. ...but the server crashes before the ack is sent ❌

What happens?
- The queue never got the acknowledgement
- The queue sees the job as "active" with no response
- After a timeout, it re-queues the job
- Another worker picks it up and sends the invoice again

The user gets two invoices. Not ideal.

This is why OpenReplay's job queues guide emphasises that your job handlers should be idempotent: running them twice should produce the same result as running them once.

We'll cover idempotency in depth in Article 10: Idempotency — Running Functions Safely More Than Once (coming soon). For now, just file this away: queues might run your job more than once, and your code should be written expecting that.

Message queues vs. job queues — what's the difference?

You'll encounter both terms, and they're related but not identical.

As GeeksforGeeks explains and OpenReplay's guide clarifies:

A message queue is a general communication channel between services. It's focused on reliable delivery of messages from one system to another. Examples: RabbitMQ, Apache Kafka, Amazon SQS. These are infrastructure-level tools.

A job queue (or task queue) is built on top of the message queue concept but adds task-specific features: retries with backoff, scheduling, progress tracking, priority, concurrency limits, dead letter handling. Examples: BullMQ, Celery, Sidekiq.

You can think of it as a spectrum:

Raw message passing  ←——————— Job queues ———————→  Full workflow engine
(Kafka, SQS)            (BullMQ, Celery)           (Inngest, Temporal)

Inngest sits toward the right of this spectrum — it's not just a queue, it's a complete workflow platform. But understanding queues is still essential, because they're what Inngest is built on under the hood.

Dead letter queues: what happens when a job keeps failing

One more queue concept worth knowing before we move on.

What if a job fails, retries, fails again, retries again... and keeps failing? After a configurable number of attempts, most queue systems move it to a dead letter queue (DLQ) — a special holding area for jobs that couldn't be processed.

Jobs in the DLQ don't just vanish. They sit there, waiting for a human to investigate. You might inspect a failed job, find that the data was malformed, fix the underlying bug, and re-queue the job manually.

Normal flow:
Job added → Worker processes → ✅ Completed

Retry flow:
Job added → Worker fails → Retry #1 → Fail → Retry #2 → Fail → ...

Dead letter flow:
... → Retry #5 → Fail → Moved to Dead Letter Queue
→ Developer investigates → Bug fixed → Job re-queued manually → ✅

This is an important safety net. Without a DLQ, a malformed job could retry indefinitely, clogging your queue and blocking other work.
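
The retry-vs-dead-letter decision boils down to an attempt counter. A minimal sketch (the limit of five attempts mirrors the diagram above; in real systems it's configurable, and all names here are ours):

```typescript
// Sketch: after a configurable number of failed attempts, park the job
// in a dead letter queue instead of retrying again.
const MAX_ATTEMPTS = 5; // assumed limit, configurable in real systems

interface FailedJob {
  id: string;
  attempts: number;
}

function onJobFailure(job: FailedJob, deadLetter: FailedJob[]): "retry" | "dead-letter" {
  job.attempts += 1;
  if (job.attempts >= MAX_ATTEMPTS) {
    deadLetter.push(job); // parked for a human to investigate
    return "dead-letter";
  }
  return "retry"; // a real queue would re-queue with backoff here
}
```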


Part 3: Workers

What is a worker?

A worker is a process or function that continuously listens to a queue, picks up jobs, and executes them.

BullMQ's documentation defines it plainly: workers are "the actual instances that perform some job based on the jobs that are added in the queue." If the job succeeds, it moves to completed. If it throws an exception, it moves to failed.

Think of a worker as a dedicated, patient employee whose only job is to check the "to-do" pile, do the next task, and repeat. It doesn't know or care where the tasks came from. It just processes whatever is in the queue.

// A simple worker — continuously picks up and processes jobs
const worker = new Worker("email-jobs", async (job) => {
  // job.name tells us what kind of job this is
  // job.data contains the payload

  if (job.name === "send-welcome-email") {
    await emailService.sendWelcome({
      to: job.data.email,
      name: job.data.name,
    });
  }

  if (job.name === "send-invoice") {
    await emailService.sendInvoice({
      to: job.data.email,
      orderId: job.data.orderId,
    });
  }

  // When this function returns without throwing,
  // the queue marks the job as completed
});

Workers run in parallel

You're not limited to one worker. You can run many workers simultaneously — either as multiple processes, multiple server instances, or a single process with high concurrency.

Microsoft's Azure architecture guide describes the core benefit: "The web front end and worker are both stateless. You can store session state in a distributed cache... the worker handles long-running work asynchronously."

Because workers are stateless — they don't hold any context between jobs — you can scale them independently of your API. If your queue is growing faster than workers can process it, you spin up more workers:

Queue: [job1, job2, job3, job4, job5, job6, job7, job8...]

One worker:   |job1|→|job2|→|job3|→|job4|→...  (slow)

Four workers: |job1|→|job5|→...
              |job2|→|job6|→...
              |job3|→|job7|→...
              |job4|→|job8|→...  (4x faster)

Concurrency: how many jobs one worker handles at once

Workers can also process multiple jobs at the same time within a single process, using async concurrency. A worker with concurrency: 10 can handle 10 jobs simultaneously — important for I/O-heavy work like sending emails or making API calls, where most of the time is spent waiting for a response rather than executing code.

// This single worker handles up to 10 jobs simultaneously
const worker = new Worker("email-jobs", processJob, {
  concurrency: 10,
  // While one job waits for email service to respond,
  // other jobs can be running concurrently
});

Be careful not to set concurrency too high for CPU-heavy tasks (like image processing), where running 10 jobs simultaneously would actually slow each one down by competing for CPU time.
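
The idea behind concurrency doesn't require a queue library at all. Here's a small sketch of a limiter that keeps at most N promises in flight at once (the function name and structure are ours, not from any particular library):

```typescript
// Sketch: run a list of async jobs with at most `limit` in flight.
// Each "lane" picks up the next unstarted job as soon as it finishes one.
async function runWithConcurrency<T>(
  jobs: (() => Promise<T>)[],
  limit: number,
): Promise<T[]> {
  const results: T[] = new Array(jobs.length);
  let next = 0;

  async function lane(): Promise<void> {
    while (next < jobs.length) {
      const index = next++; // claim the next job
      results[index] = await jobs[index]();
    }
  }

  // Start `limit` lanes; for I/O-bound jobs, while one lane waits on a
  // response, the others keep making progress.
  await Promise.all(Array.from({ length: Math.min(limit, jobs.length) }, lane));
  return results;
}
```

This is why concurrency helps I/O-bound work so much: the lanes spend most of their time awaiting, not computing.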

Graceful shutdown: why it matters

Workers should finish their current job before stopping. If you deploy a new version of your app and your worker process is killed mid-job, what happens to that job?

If the worker hasn't sent an acknowledgement yet, the queue re-queues the job. If your job is idempotent, this is fine — it'll just run again. If it's not idempotent (say, it's already charged a customer's card partway through), a re-run could cause problems.

This is another reason idempotency matters — and why Inngest's step system, which we'll cover in Article 5: Steps — Breaking Work into Durable Units (coming soon), is such a valuable abstraction. Steps are checkpointed, so if a worker dies mid-function, execution resumes from the last completed step rather than restarting from scratch.
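
The "finish the current job, then stop" behaviour can be sketched as a flag that's checked between jobs, never mid-job (the class and method names are ours; real worker libraries wire this flag to a SIGTERM handler):

```typescript
// Sketch: a worker loop that drains jobs but checks a shutdown flag
// *between* jobs, so the in-flight job always completes and acks.
class GracefulWorker {
  private shuttingDown = false;
  processed: string[] = [];

  // In a real process: process.on("SIGTERM", () => worker.requestShutdown())
  requestShutdown(): void {
    this.shuttingDown = true;
  }

  async drain(jobs: string[], handler: (job: string) => Promise<void>): Promise<void> {
    for (const job of jobs) {
      if (this.shuttingDown) break; // stop before starting a new job
      await handler(job);           // current job runs to completion
      this.processed.push(job);     // "ack" only after the handler finishes
    }
  }
}
```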


Part 4: How All Three Work Together

Let's put the three pieces together with a complete, annotated example. We'll trace a user.signed_up event from the moment it's fired to the moment all its downstream work is done.

┌─────────────────────────────────────────────────────────────────┐
│ API Layer (Producer)                                            │
│                                                                 │
│ POST /api/signup                                                │
│   1. Save user to DB                                            │
│   2. Add event to queue: { name: "user.signed_up", data: {} }   │
│   3. Respond: 200 OK  ← user gets this immediately              │
└─────────────────────────────────────────────────────────────────┘
                                 │
                                 ▼
┌─────────────────────────────────────────────────────────────────┐
│ Queue (Durable Storage)                                         │
│                                                                 │
│ Holds the event persistently                                    │
│ Survives server restarts, crashes                               │
│ Distributes to available workers                                │
│ Tracks job status: waiting → active → completed/failed          │
└─────────────────────────────────────────────────────────────────┘
       │                      │                       │
       ▼                      ▼                       ▼
┌──────────────┐    ┌──────────────────┐    ┌──────────────────┐
│ Worker A     │    │ Worker B         │    │ Worker C         │
│              │    │                  │    │                  │
│ Sends        │    │ Sets up billing  │    │ Posts to Slack   │
│ welcome      │    │ trial            │    │                  │
│ email        │    │                  │    │ Completes in     │
│              │    │ Retries 3x if    │    │ ~200ms           │
│ Completes    │    │ billing API is   │    │                  │
│ in ~800ms    │    │ slow             │    │                  │
└──────────────┘    └──────────────────┘    └──────────────────┘

Three workers, running in parallel, each independently processing their own piece of the work triggered by a single event. If Worker B fails because the billing API is down, Workers A and C succeed normally. Worker B retries on its own, without affecting anyone else.

This is the power of decoupled, queue-backed workers.

The fan-out pattern in one picture

What we just described is called fan-out: one event, many consumers. The event is published once. The queue delivers it to all consumers who have registered interest in user.signed_up.

Gábor Farkas's guide to queues and workflows describes this: "Messaging systems can also support fan-out to distribute copies of certain messages to multiple consumers or consumer groups. This is useful when multiple systems need to act on the same message."
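
The mechanism at the heart of fan-out is simple: a registry mapping event names to lists of handlers, where publishing delivers to every registered handler. A toy in-process sketch (all names are ours; real systems deliver each copy via a durable queue, with independent retries per consumer):

```typescript
// Toy fan-out: one published event reaches every registered consumer.
type Handler = (data: unknown) => void;

class FanOutBus {
  private handlers = new Map<string, Handler[]>();

  // Consumer registration: "I care about this event name"
  on(eventName: string, handler: Handler): void {
    const list = this.handlers.get(eventName) ?? [];
    list.push(handler);
    this.handlers.set(eventName, list);
  }

  // Publish once; every registered consumer gets a copy.
  // Returns how many consumers received it.
  publish(eventName: string, data: unknown): number {
    const list = this.handlers.get(eventName) ?? [];
    for (const handler of list) handler(data);
    return list.length;
  }
}
```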

We'll cover the fan-out pattern in detail in Article 7: Fan-Out — Triggering Multiple Tasks from One Event (coming soon).


Part 5: How This Maps to Inngest

Now that you understand the raw building blocks, let's see how Inngest expresses them — because Inngest uses slightly different vocabulary that maps directly to what we've covered.

Raw concept             Inngest equivalent
───────────             ──────────────────
Event                   inngest.send({ name: "...", data: {} })
Queue                   Inngest's internal durable event stream
Worker                  inngest.createFunction(...)
Consumer registration   The { event: "user.signed_up" } trigger
Job data                event.data inside the function
Retry config            { retries: 3 } in function options
Steps                   step.run(...) — checkpointed units inside a function

Here's the signup example from Article 1, now written in actual Inngest code. Don't worry about the syntax details yet — that's Article 4. Just notice how cleanly the concepts map:

import { inngest } from "./inngest-client";

// PRODUCER: the API endpoint fires an event
// POST /api/signup
export async function POST(req: Request) {
  const userData = await req.json();

  // Save user — this is the critical path
  const user = await db.users.create(userData);

  // Fire the event — hand it to Inngest's queue
  await inngest.send({
    name: "user.signed_up", // ← event name
    data: {                 // ← event payload
      userId: user.id,
      email: user.email,
      name: user.name,
    },
  });

  // Respond immediately
  return Response.json({ userId: user.id });
}

// WORKER (Consumer A): sends the welcome email
export const sendWelcomeEmail = inngest.createFunction(
  { id: "send-welcome-email" },
  { event: "user.signed_up" }, // ← registers interest in this event
  async ({ event }) => {
    await emailService.sendWelcome({
      to: event.data.email, // ← accesses event payload
      name: event.data.name,
    });
  },
);

// WORKER (Consumer B): sets up the billing trial
export const setupBillingTrial = inngest.createFunction(
  { id: "setup-billing-trial" },
  { event: "user.signed_up" }, // ← same event, different consumer
  async ({ event }) => {
    await billingService.createTrial(event.data.userId);
  },
);

// WORKER (Consumer C): notifies Slack
export const notifySlack = inngest.createFunction(
  { id: "notify-slack-signup" },
  { event: "user.signed_up" }, // ← all three listen to the same event
  async ({ event }) => {
    await slack.notify(`New signup: ${event.data.email}`);
  },
);

Three functions, all listening to user.signed_up, all running in parallel as independent workers. Inngest handles the queue, the fan-out, the retries, and the durability — you just write the functions.


Common Misconceptions

❌ Misconception: A queue is just an array

Reality: A queue has to survive crashes, server restarts, and network failures. A JavaScript array lives in memory and is gone the moment your process exits. A real queue persists to disk or a database (Redis, PostgreSQL, etc.), which is what makes it reliable.

Why this matters: If you store jobs in an in-memory array "for now" while prototyping, you will lose jobs the moment your server restarts — and it will restart, during deployments if nothing else.

❌ Misconception: One event, one consumer

Reality: Multiple consumers can listen to the same event. This is one of the most powerful features of event-driven systems. When you fire order.placed, your email worker, inventory worker, warehouse worker, and analytics worker can all pick it up — independently, in parallel, with independent retry logic.

❌ Misconception: Workers are always running

Reality: In serverless environments (like Vercel, Cloudflare Workers, or AWS Lambda), your "workers" are actually functions that spin up on demand when an event arrives. Inngest is designed to work in this model — it calls your functions over HTTPS when an event needs processing, so you don't need a long-running server.

❌ Misconception: Queues guarantee order

Reality: Most queues are roughly FIFO (first in, first out), but they don't guarantee strict ordering — especially when multiple workers process in parallel. If strict order matters for your use case (like processing payments for a user one at a time), you need to configure your queue with concurrency limits per entity. Inngest supports this with flow control, covered in Article 3: Inngest — What It Is and How It Fits In.
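
The "per-entity" idea can be sketched in plain TypeScript: serialize jobs that share a key, while letting different keys run in parallel (this is our own illustration of the concept, not Inngest's implementation):

```typescript
// Sketch: per-key serialization. Jobs with the same key run one after
// another; jobs with different keys don't block each other.
class PerKeySerializer {
  private tails = new Map<string, Promise<void>>();

  // Chain each task onto the previous task for the same key.
  run(key: string, task: () => Promise<void>): Promise<void> {
    const prev = this.tails.get(key) ?? Promise.resolve();
    const next = prev.then(task);
    // Keep the chain alive even if a task rejects
    this.tails.set(key, next.catch(() => {}));
    return next;
  }
}
```

With this in place, two payments for the same user run in order, but payments for different users still run concurrently.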


Troubleshooting Common Issues

Problem: Jobs are running twice

Symptoms: Customers receive duplicate emails, orders are double-charged, duplicate records appear in the database.

Common causes:

  1. Worker crashes after completing work but before acknowledging the job (~70% of cases)
  2. Network timeout causes the queue to think the job failed when it succeeded (~20%)
  3. Two workers picked up the same job due to a race condition (~10%)

Solution: Make your job handlers idempotent. Check if the work was already done before doing it:

// ❌ Not idempotent — sends duplicate emails
async function sendWelcomeEmail(job) {
  await emailService.send({ to: job.data.email });
}

// ✅ Idempotent — checks if already sent first
async function sendWelcomeEmail(job) {
  const alreadySent = await db.emailLog.find({
    userId: job.data.userId,
    type: "welcome",
  });

  if (alreadySent) {
    console.log("Welcome email already sent, skipping.");
    return;
  }

  await emailService.send({ to: job.data.email });
  await db.emailLog.create({ userId: job.data.userId, type: "welcome" });
}

Problem: Jobs are piling up in the queue faster than workers can process them

Symptoms: Queue depth grows over time, jobs take longer and longer to start, eventually causing delays visible to users.

Diagnostic steps:

// Step 1: Check queue depth
const waiting = await queue.getWaiting();
console.log(`Jobs waiting: ${waiting.length}`);

// Step 2: Check worker concurrency
// Are workers processing one job at a time when they could do more?

// Step 3: Check for slow individual jobs
// Is one job type taking much longer than expected?

Solutions:

  • Increase worker concurrency (if jobs are I/O-bound)
  • Add more worker instances
  • Identify and optimise slow jobs
  • Use priority queues to ensure critical jobs aren't delayed

Problem: A job keeps failing but you don't know why

Symptoms: A job appears in the dead letter queue, but the error message isn't helpful.

Solution: Add structured error logging inside your worker, and use Inngest's run dashboard to inspect the full execution trace:

// Add context to your errors so the DLQ gives you actionable info
async function processPayment(job) {
  try {
    await paymentService.charge({
      userId: job.data.userId,
      amount: job.data.amount,
    });
  } catch (error) {
    // Log with context before re-throwing
    console.error("Payment failed", {
      userId: job.data.userId,
      amount: job.data.amount,
      errorCode: error.code,
      errorMessage: error.message,
    });
    throw error; // Re-throw so queue marks as failed and retries
  }
}

Check Your Understanding

Quick Quiz

1. What is the difference between a message queue and a job queue?

Show Answer

A message queue is a general-purpose communication channel between services — focused on reliable delivery. A job queue is built on top of that concept and adds task-specific features: retries with exponential backoff, scheduling, priority, concurrency limits, progress tracking, and dead letter queues. Think of a message queue as infrastructure (like a postal service) and a job queue as a task management system built on top of it.

2. A worker picks up a job, successfully sends an email, but crashes before sending the acknowledgement. What happens next?

Show Answer

The queue sees the job as "active" with no acknowledgement. After a timeout (usually configurable), it re-queues the job. Another worker picks it up and runs the function again — potentially sending a second email. This is "at-least-once delivery." To prevent duplicate emails, your function should check whether the email was already sent before sending it (idempotency).

3. What's wrong with this approach?

// On startup, load all pending jobs into memory
const pendingJobs = await db.getPendingJobs();
const jobArray = [...pendingJobs];

// Process them one by one
while (jobArray.length > 0) {
  const job = jobArray.shift();
  await processJob(job);
}
Show Answer

Several problems:

  1. Not durable — jobs loaded into jobArray are in memory. If the server crashes mid-loop, all those jobs are lost forever.
  2. Not scalable — this single while loop processes one job at a time. You can't add a second worker to help without them both pulling the same jobs.
  3. Not observable — there's no retry logic, no dead letter queue, and no way to inspect what's happening.
  4. Misses new jobs — once pendingJobs is loaded, any new jobs added after startup are missed until the next restart.

This is exactly the kind of thing a proper queue solves.

Hands-On Exercise

Think through the design for a file upload feature:

A user uploads a video file. The system needs to:

  1. Store the raw video
  2. Transcode it to multiple resolutions (720p, 1080p, 4K)
  3. Generate a thumbnail
  4. Send a "your video is ready" email
  5. Update the video's status in the database

Questions to answer:

  • Which step(s) are on the critical path?
  • How many workers/consumers would you create?
  • What would you name the event(s)?
  • What data would each event carry?
  • What should happen if the transcoder fails? Should it retry? How many times?
See a Suggested Design

Critical path (must complete before responding to user):

  • Step 1 only: store the raw video. Respond immediately with { videoId: "...", status: "processing" }.

Events:

  • video.uploaded — fires when the raw file is stored, carries { videoId, userId, filename, fileSize, storageUrl }
  • video.transcoding_complete — fires when all resolutions are done, triggers thumbnail + email

Workers:

  • Worker 1: listens to video.uploaded → transcodes to 720p
  • Worker 2: listens to video.uploaded → transcodes to 1080p
  • Worker 3: listens to video.uploaded → transcodes to 4K
  • Worker 4: listens to video.transcoding_complete → generates thumbnail
  • Worker 5: listens to video.transcoding_complete → sends email

Retry strategy:

  • Transcoding: retry up to 3 times with exponential backoff — transient failures are likely (memory pressure, external service issues)
  • Email: retry up to 5 times — email services are occasionally slow but almost always recover
  • Thumbnail generation: retry up to 3 times

Summary: Key Takeaways

  • Events are immutable, past-tense records of state changes. They carry data and have unique IDs. They are not commands or requests.
  • Queues are durable, persistent stores that hold jobs between producers and workers. They survive crashes, handle acknowledgements, and provide retry logic.
  • Workers are processes or functions that pull jobs from queues and execute them. They can run in parallel and scale independently.
  • At-least-once delivery means jobs may run more than once. Write your handlers to be idempotent — running twice should be safe.
  • Dead letter queues catch jobs that fail permanently after all retries. They're your safety net for investigating failures.
  • Inngest implements all three concepts as a unified platform: inngest.send() produces events, Inngest's platform queues them durably, and inngest.createFunction() creates workers that consume them.

What's Next?

You now have the full vocabulary of event-driven systems. Events, queues, workers, producers, consumers, acknowledgements, fan-out, dead letters — you know what all of these mean and why they exist.

In Article 3: Inngest — What It Is and How It Fits In, we'll position Inngest precisely in this landscape. It's not just a queue. It's not just a job runner. And understanding exactly what makes it different will make everything in the practical articles make much more sense.


Version Information

Concepts covered are universal — they apply to any queue system (BullMQ, Celery, Sidekiq, SQS, RabbitMQ) and are not specific to any version.

Further reading: