Backpressure
Backpressure is a flow control mechanism in systems that process streams of work. When a consumer (like a worker or API) can’t keep up with the rate of incoming tasks, backpressure signals the producer to slow down — preventing memory exhaustion, dropped messages, and cascading failures.
The Problem Without Backpressure
If producers generate work faster than consumers process it, the queue grows unbounded:
    Producer: 1,000 tasks/sec → Queue: [growing...] → Worker: 100 tasks/sec
                                          ↑
                                Queue grows by 900/sec
                                Memory exhausted in minutes
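The arithmetic behind this diagram can be sketched directly. The rates come from the diagram above; the per-task size and available memory are illustrative assumptions:

```typescript
// Time until an unbounded queue exhausts memory.
// Assumed for illustration: ~1 KB per queued task, 4 GiB of memory.
function secondsUntilExhaustion(
  produceRate: number,  // tasks/sec entering the queue
  consumeRate: number,  // tasks/sec leaving the queue
  bytesPerTask: number,
  memoryBytes: number,
): number {
  const growthRate = produceRate - consumeRate; // net tasks/sec
  if (growthRate <= 0) return Infinity;         // consumers keep up
  return memoryBytes / (growthRate * bytesPerTask);
}

// 900 tasks/sec of net growth at 1 KB each fills 4 GiB in about 78 minutes;
// larger payloads exhaust memory far faster.
const secs = secondsUntilExhaustion(1_000, 100, 1_024, 4 * 1_024 ** 3);
```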
Without backpressure, the system crashes once the queue fills available memory.
How Backpressure Works
Backpressure mechanisms limit how fast work enters the system:
- Bounded queues: Reject or block new tasks when the queue reaches a maximum size
- Rate limiting at ingress: Throttle producers to match consumer throughput
- Dynamic scaling: Spin up more workers when queue depth increases
- Load shedding: Drop low-priority work to protect high-priority tasks
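The first mechanism above, a bounded queue, can be sketched in a few lines. This is a minimal in-memory illustration, not AsyncQueue's implementation:

```typescript
// A bounded queue: rejects new tasks once `capacity` is reached,
// pushing backpressure onto the producer instead of growing without limit.
class BoundedQueue<T> {
  private items: T[] = [];
  constructor(private capacity: number) {}

  // Returns false when the queue is full; the producer must slow down or retry.
  offer(item: T): boolean {
    if (this.items.length >= this.capacity) return false;
    this.items.push(item);
    return true;
  }

  // Consumers pull at their own pace; undefined when the queue is empty.
  poll(): T | undefined {
    return this.items.shift();
  }

  get depth(): number {
    return this.items.length;
  }
}

const q = new BoundedQueue<string>(2);
q.offer('a'); // true
q.offer('b'); // true
q.offer('c'); // false: queue is full, the producer sees backpressure
```

Rejecting (rather than blocking) keeps the producer responsive; a blocking variant would instead make `offer` wait until space frees up.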
Backpressure in Task Queues
Task queues like AsyncQueue handle backpressure naturally:
- Tasks are persisted to durable storage (Redis), not held in memory
- Workers pull tasks at their own pace — they’re never force-fed
- Queue depth is monitored and can trigger auto-scaling
- Rate limiting can cap how fast tasks are created
// Workers control their own concurrency
// If processing is slow, the queue buffers — no data is lost
await aq.tasks.create({
  callbackUrl: 'https://your-app.com/api/process',
  payload: data,
  retries: 3,
});
The queue acts as a buffer between producers and consumers, absorbing traffic spikes without overloading either end.
Monitoring for Backpressure
Key metrics to watch:
- Queue depth: Number of pending tasks — growing means consumers are falling behind
- Processing latency: Time from task creation to completion — increasing means backpressure is building
- Worker utilization: If workers are pinned at 100%, add capacity or throttle producers
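The queue-depth signal above can be turned into a simple check by comparing two samples. The sample shape and thresholds here are illustrative, not part of any real monitoring API:

```typescript
// Detect building backpressure from two queue-depth samples.
interface DepthSample {
  depth: number;      // pending tasks at sample time
  timestamp: number;  // milliseconds since epoch
}

function isBackpressureBuilding(
  earlier: DepthSample,
  later: DepthSample,
  maxGrowthPerSec = 0, // any sustained growth means consumers are behind
): boolean {
  const elapsedSec = (later.timestamp - earlier.timestamp) / 1_000;
  if (elapsedSec <= 0) return false;
  const growthPerSec = (later.depth - earlier.depth) / elapsedSec;
  return growthPerSec > maxGrowthPerSec;
}

// Depth grew from 100 to 1,000 over 10 seconds: consumers are falling behind.
isBackpressureBuilding(
  { depth: 100, timestamp: 0 },
  { depth: 1_000, timestamp: 10_000 },
); // true
```

In practice you would alert on sustained growth across several samples rather than a single pair, to avoid paging on short traffic spikes the queue is designed to absorb.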