Async Processing Patterns
Not every background job is the same. A welcome email doesn’t need the same coordination as a multi-step data pipeline. Picking the wrong pattern creates unnecessary complexity or gaps in reliability.
This guide covers four async processing patterns and how to implement each one with AsyncQueue.
Step 1: Fire-and-Forget
The simplest pattern: send a task and move on without waiting for the result.
```javascript
// Send a welcome email — you don't need the result
app.post('/api/signup', async (req, res) => {
  const user = await createUser(req.body);
  await aq.tasks.create({
    callbackUrl: 'https://your-app.com/api/send-welcome-email',
    payload: { userId: user.id, email: user.email },
  });
  res.json({ userId: user.id }); // respond immediately
});
```
No webhookUrl needed — AsyncQueue calls your endpoint and that’s the end of the flow.
Best for:
- Sending emails and push notifications
- Logging and analytics events
- Cache warming and precomputation
- Cleanup jobs (deleting temp files, expiring sessions)
Trade-off: You won’t know about failures unless you check the AsyncQueue dashboard or set up a dead-letter queue webhook.
Adding a failure webhook
If you want fire-and-forget but still need to know about failures:
```javascript
await aq.tasks.create({
  callbackUrl: 'https://your-app.com/api/send-welcome-email',
  payload: { userId: user.id, email: user.email },
  webhookUrl: 'https://your-app.com/api/on-task-failed',
  retries: 3,
});
```
The webhook fires only after all retries are exhausted, so you only hear about permanent failures.
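What that failure webhook does is up to you. As a minimal sketch, the handler below classifies the failure and decides whether to page someone; the webhook body shape (`taskId`, `error`, `payload`) is an assumption here, so check the fields AsyncQueue actually delivers:

```javascript
// Hypothetical handler behind /api/on-task-failed.
// Assumes the webhook body carries taskId, error, and the original payload.
function handleTaskFailed(body) {
  const { taskId, error, payload } = body;
  if (payload && payload.email) {
    // A user-facing email never went out: surface it to a human.
    return {
      action: 'alert',
      message: `Welcome email to ${payload.email} failed permanently (task ${taskId}): ${error}`,
    };
  }
  // Anything else: record it and move on.
  return { action: 'log', message: `Task ${taskId} failed permanently: ${error}` };
}
```

Wire this into your Express route and route the `alert` case to whatever alerting you already use.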
Step 2: Request-Reply with Webhooks
Offload work and receive the result asynchronously. This is the most common pattern for user-facing features.
```javascript
// User requests a report — respond with a job ID, deliver the result later
app.post('/api/reports', async (req, res) => {
  const jobId = crypto.randomUUID();
  await saveToDatabase({ id: jobId, status: 'pending' });
  await aq.tasks.create({
    callbackUrl: 'https://your-app.com/api/generate-report',
    payload: { jobId, params: req.body },
    webhookUrl: 'https://your-app.com/api/on-report-ready',
    retries: 2,
    timeout: 120,
  });
  res.json({ jobId, status: 'pending' });
});

// Callback: do the work
app.post('/api/generate-report', async (req, res) => {
  const { jobId, params } = req.body;
  const report = await buildReport(params);
  res.json({ jobId, reportUrl: report.url });
});

// Webhook: receive the result
app.post('/api/on-report-ready', async (req, res) => {
  const { jobId, reportUrl } = req.body.result;
  await updateDatabase(jobId, { status: 'completed', reportUrl });
  await notifyUser(jobId, 'Your report is ready.');
  res.status(200).json({ received: true });
});
```
Best for:
- Report and export generation
- AI/ML inference (image generation, LLM calls)
- Payment processing
- Any user-triggered operation that takes more than a few seconds
Trade-off: More moving parts (callback + webhook endpoints), but you gain reliable delivery and automatic retries.
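One consequence of automatic retries: your callback can run more than once for the same job (for example, the work finished but the HTTP response timed out). A minimal idempotency guard, sketched with an in-memory set; in production, back it with a unique-keyed database row instead:

```javascript
// Hypothetical guard: skip work already completed for this jobId.
// The Set is process-local; a real deployment would use a unique
// constraint in the database so the guard survives restarts.
const processedJobs = new Set();

async function runOnce(jobId, work) {
  if (processedJobs.has(jobId)) {
    return { skipped: true }; // duplicate delivery: do nothing
  }
  const result = await work();
  // Mark as done only after work succeeds, so a thrown error
  // leaves the job eligible for the next retry.
  processedJobs.add(jobId);
  return { skipped: false, result };
}
```

Inside the `/api/generate-report` callback you would wrap `buildReport` in `runOnce(jobId, ...)`.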
Step 3: Fan-Out / Fan-In
Split a large job into parallel tasks, then aggregate the results. Use this when work can be decomposed into independent chunks.
Fan-out: create parallel tasks
```javascript
app.post('/api/analyze-portfolio', async (req, res) => {
  const { stocks } = req.body;
  const batchId = crypto.randomUUID();
  await saveToDatabase({
    id: batchId,
    totalTasks: stocks.length,
    completedTasks: 0,
    results: [],
  });
  // Create one task per stock — they all run concurrently
  for (const symbol of stocks) {
    await aq.tasks.create({
      callbackUrl: 'https://your-app.com/api/analyze-stock',
      payload: { batchId, symbol },
      webhookUrl: 'https://your-app.com/api/on-stock-analyzed',
      retries: 2,
      timeout: 30,
    });
  }
  res.json({ batchId, status: 'processing', totalTasks: stocks.length });
});
```
Fan-in: aggregate results
```javascript
app.post('/api/on-stock-analyzed', async (req, res) => {
  const { batchId, symbol, analysis } = req.body.result;
  const batch = await getFromDatabase(batchId);
  batch.results.push({ symbol, analysis });
  batch.completedTasks += 1;
  if (batch.completedTasks === batch.totalTasks) {
    // All tasks done — build the final report
    const portfolio = aggregateAnalysis(batch.results);
    batch.status = 'completed';
    batch.portfolioSummary = portfolio;
    await notifyUser(batchId, 'Portfolio analysis complete.');
  }
  await saveToDatabase(batch);
  res.status(200).json({ received: true });
});
```
Best for:
- Analyzing multiple items in parallel (stocks, products, URLs)
- Batch image or video processing
- Sending bulk notifications (one task per recipient segment)
- Running the same test against multiple environments
Trade-off: You must track completion state yourself. Use atomic database operations (e.g., INCR in Redis) to avoid race conditions when multiple webhooks arrive at once.
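The webhook above does a read-modify-write, which races when two webhooks arrive together. A single-process sketch of the atomic alternative is below; the synchronous increment is safe within one Node process because nothing can interleave between the read and the write, but across multiple server processes you would replace it with Redis `INCR` or an atomic SQL `UPDATE`:

```javascript
// Single-process sketch of atomic fan-in counting. The increment and
// comparison happen synchronously, so Node's event loop guarantees no
// other webhook handler runs in between. Multi-process deployments need
// Redis INCR or an atomic UPDATE ... RETURNING instead.
const batches = new Map();

function createBatch(batchId, totalTasks) {
  batches.set(batchId, { totalTasks, completedTasks: 0, results: [] });
}

// Returns true exactly once: when the final task reports in.
function recordCompletion(batchId, result) {
  const batch = batches.get(batchId);
  batch.results.push(result);
  batch.completedTasks += 1;
  return batch.completedTasks === batch.totalTasks;
}
```

The webhook handler then calls `recordCompletion` and runs the aggregation only when it returns `true`, so the "all done" branch fires exactly once per batch.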
Step 4: Task Chaining (Sequential Pipelines)
Build multi-step workflows where each step depends on the previous step’s output. The webhook from one task triggers creation of the next.
```javascript
// Step 1: Start the pipeline
app.post('/api/onboard-user', async (req, res) => {
  const pipelineId = crypto.randomUUID();
  await saveToDatabase({ id: pipelineId, step: 1, status: 'running' });
  await aq.tasks.create({
    callbackUrl: 'https://your-app.com/api/pipeline/verify-identity',
    payload: { pipelineId, userId: req.body.userId },
    webhookUrl: 'https://your-app.com/api/pipeline/on-step-done',
    retries: 2,
  });
  res.json({ pipelineId, status: 'running' });
});

// Central webhook: route to the next step.
// Each step's callback must echo pipelineId and its own step number in
// its JSON response so this router knows where the pipeline is.
app.post('/api/pipeline/on-step-done', async (req, res) => {
  const { pipelineId, step, result } = req.body.result;
  const steps = [
    null, // index 0 unused
    { callback: 'verify-identity', next: 2 },
    { callback: 'run-credit-check', next: 3 },
    { callback: 'provision-account', next: null },
  ];
  const nextStep = steps[step]?.next;
  if (nextStep) {
    await aq.tasks.create({
      callbackUrl: `https://your-app.com/api/pipeline/${steps[nextStep].callback}`,
      payload: { pipelineId, step: nextStep, previousResult: result },
      webhookUrl: 'https://your-app.com/api/pipeline/on-step-done',
      retries: 2,
    });
    await updateDatabase(pipelineId, { step: nextStep });
  } else {
    await updateDatabase(pipelineId, { status: 'completed' });
    await notifyUser(pipelineId, 'Onboarding complete.');
  }
  res.status(200).json({ received: true });
});
```
Best for:
- User onboarding (verify → credit check → provision)
- Order fulfillment (validate → charge → ship → notify)
- Data pipelines (extract → transform → load → validate)
- Document processing (parse → enrich → store → index)
Trade-off: Harder to debug since state spans multiple tasks. Log the pipelineId and step number in every callback so you can trace the full execution path.
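One structured log line per event makes that practical; a minimal sketch of a hypothetical helper every pipeline callback and webhook would log through, so a search for one pipelineId reconstructs the full path:

```javascript
// Hypothetical structured logger. Emitting JSON keeps the lines
// machine-searchable: grep (or your log platform) for the pipelineId
// and sort by timestamp to see the whole execution in order.
function logPipelineEvent(pipelineId, step, event, extra = {}) {
  const entry = {
    ts: new Date().toISOString(),
    pipelineId,
    step,
    event,
    ...extra,
  };
  console.log(JSON.stringify(entry));
  return entry;
}
```

For example, the router above would call `logPipelineEvent(pipelineId, nextStep, 'task-created')` just after creating the next task.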
Step 5: Choose the Right Pattern
| Pattern | Complexity | When to Use |
|---|---|---|
| Fire-and-forget | Low | You don’t need the result |
| Request-reply | Medium | User needs the result, but not instantly |
| Fan-out/fan-in | High | Work can be split into independent parallel chunks |
| Task chaining | High | Steps must run in order, each depends on the last |
Decision flowchart
- Do you need the result? No → Fire-and-forget
- Can the work be parallelized? Yes → Fan-out/fan-in
- Are there multiple dependent steps? Yes → Task chaining
- Otherwise → Request-reply
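The flowchart above can be expressed as a tiny helper, purely illustrative, with the three questions as boolean flags:

```javascript
// The decision flowchart as code. Questions are evaluated in the same
// order as the list above; the first match wins.
function choosePattern({ needResult, parallelizable, multiStep }) {
  if (!needResult) return 'fire-and-forget';
  if (parallelizable) return 'fan-out/fan-in';
  if (multiStep) return 'task-chaining';
  return 'request-reply';
}
```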
You can also combine patterns. A pipeline step might fan out parallel tasks before advancing to the next stage. Start simple and add complexity only when the workload demands it.