Backpressure: When Consumers Can’t Keep Up
Producer sends 1000 messages/second. Consumer processes 100 messages/second. Queue grows. Memory fills. System crashes.
This is the backpressure problem.
The Problem
Fast producer, slow consumer. Messages pile up in between. Eventually you run out of memory or disk.
Real scenario: Kafka producer sending click events at 10,000/sec. Consumer writing to database at 1,000/sec. Lag grows to millions of messages. Consumer falls hours behind real-time.
You need to slow down the producer or speed up the consumer. Or reject work.
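A quick back-of-envelope with the numbers from the Kafka scenario above shows how fast the gap compounds:

```java
public class LagMath {
    // Net lag growth = (produce rate - consume rate) × elapsed time
    static long lagAfter(long produceRate, long consumeRate, long seconds) {
        return (produceRate - consumeRate) * seconds;
    }

    public static void main(String[] args) {
        // 10,000/sec in, 1,000/sec out: lag grows by 9,000 messages every second
        System.out.println(lagAfter(10_000, 1_000, 3_600)); // one hour of imbalance
    }
}
```

One hour of that imbalance is 32.4 million messages of lag. "Hours behind real-time" is not an exaggeration.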
Three Strategies
1. Reject Work (Fail Fast)

Queue is full? Reject new messages. Let the producer handle the failure.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BoundedQueue<T> {
    private final BlockingQueue<T> queue;

    public BoundedQueue(int capacity) {
        this.queue = new LinkedBlockingQueue<>(capacity);
    }

    public void offer(T item) {
        // offer() returns false instead of blocking when the queue is full
        if (!queue.offer(item)) {
            throw new QueueFullException("Cannot accept new items");
        }
    }

    public T take() throws InterruptedException {
        return queue.take();
    }

    public static class QueueFullException extends RuntimeException {
        public QueueFullException(String message) {
            super(message);
        }
    }
}
```
Producer gets immediate feedback. Can retry later, drop message, or alert.
Use when: Losing some data is acceptable (metrics, logs) or producer can handle rejection gracefully.
Bounded queue rejects when full. Producer must handle rejection.
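On the producer side, the simplest rejection handling is to count and drop. A sketch using a plain `ArrayBlockingQueue` (the drop counter and loop are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class RejectingProducer {
    static int run() {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(2); // tiny on purpose
        int dropped = 0;
        for (int i = 0; i < 5; i++) {
            // offer() returns false immediately when the queue is full
            if (!queue.offer("event-" + i)) {
                dropped++; // fail fast: count the drop; could also retry or alert
            }
        }
        return dropped;
    }

    public static void main(String[] args) {
        System.out.println("dropped " + run() + " of 5 events"); // dropped 3 of 5
    }
}
```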
2. Block Producer (Apply Backpressure)

Queue is full? Block the producer until space is available. The producer slows down to the consumer's pace.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BlockingBackpressure<T> {
    private final BlockingQueue<T> queue;

    public BlockingBackpressure(int capacity) {
        this.queue = new LinkedBlockingQueue<>(capacity);
    }

    public void put(T item) throws InterruptedException {
        queue.put(item); // blocks if the queue is full
    }

    public T take() throws InterruptedException {
        return queue.take();
    }
}
```
Producer thread blocks until consumer catches up. Natural flow control.
Problem: Blocks producer thread. If producer is handling HTTP requests, now those requests hang. Not acceptable for user-facing services.
Use when: Producer and consumer are both internal systems, blocking is acceptable.
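This pattern is built into `java.util.concurrent` thread pools: a bounded work queue plus `CallerRunsPolicy` pushes overflow work back onto the submitting thread, which slows the producer automatically. A minimal sketch (pool sizes and task counts are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CallerRunsDemo {
    static int run() throws InterruptedException {
        AtomicInteger processed = new AtomicInteger();
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(4),                // bounded work queue
                new ThreadPoolExecutor.CallerRunsPolicy()); // backpressure policy

        for (int i = 0; i < 100; i++) {
            // When pool and queue are saturated, execute() runs the task
            // on THIS thread, so the producer can't outrun the consumers.
            pool.execute(processed::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return processed.get(); // every task ran somewhere; none were lost
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("processed " + run() + " tasks");
    }
}
```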
3. Rate Limiting (Smooth Backpressure)

Limit the producer rate to match consumer capacity. Instead of hard blocking, apply a gradual slowdown.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

import com.google.common.util.concurrent.RateLimiter; // Guava

public class RateLimitedProducer {
    private final RateLimiter rateLimiter;
    private final BlockingQueue<Message> queue; // Message is the app's event type

    public RateLimitedProducer(double messagesPerSecond, int queueCapacity) {
        this.rateLimiter = RateLimiter.create(messagesPerSecond);
        this.queue = new LinkedBlockingQueue<>(queueCapacity);
    }

    public void produce(Message message) throws InterruptedException {
        rateLimiter.acquire(); // blocks just long enough to honor the rate

        // Try to add to the queue; give up after 100 ms
        if (!queue.offer(message, 100, TimeUnit.MILLISECONDS)) {
            // Queue is building up: back off the rate. Note the message is
            // dropped here; a real producer would retry, buffer, or alert.
            rateLimiter.setRate(rateLimiter.getRate() * 0.9);
        }
    }
}
```
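`RateLimiter` here is Guava's. If you'd rather avoid the dependency, the core mechanism is a token bucket, which fits in a few lines of plain Java (a minimal single-threaded sketch; names are illustrative):

```java
public class TokenBucket {
    private final double ratePerSecond; // refill rate
    private final double capacity;      // maximum burst size
    private double tokens;
    private long lastRefillNanos;

    public TokenBucket(double ratePerSecond, double capacity) {
        this.ratePerSecond = ratePerSecond;
        this.capacity = capacity;
        this.tokens = capacity; // start full
        this.lastRefillNanos = System.nanoTime();
    }

    /** Returns true if a permit was available; false means "slow down". */
    public boolean tryAcquire() {
        long now = System.nanoTime();
        // Refill proportionally to elapsed time, capped at capacity
        tokens = Math.min(capacity,
                tokens + (now - lastRefillNanos) / 1e9 * ratePerSecond);
        lastRefillNanos = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }
}
```

The capacity bounds bursts; the refill rate sets the steady-state throughput. Guava's implementation adds smoothing and blocking `acquire()`, but this is the skeleton.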
Adaptive rate limiting: Monitor queue depth. Growing? Reduce producer rate. Shrinking? Increase rate.
```java
import java.util.concurrent.BlockingQueue;

import com.google.common.util.concurrent.RateLimiter; // Guava
import org.springframework.scheduling.annotation.Scheduled;

public class AdaptiveRateLimiter {
    private final BlockingQueue<?> queue;
    private final RateLimiter rateLimiter;
    private final int targetQueueSize;

    public AdaptiveRateLimiter(BlockingQueue<?> queue, RateLimiter rateLimiter,
                               int targetQueueSize) {
        this.queue = queue;
        this.rateLimiter = rateLimiter;
        this.targetQueueSize = targetQueueSize;
    }

    @Scheduled(fixedRate = 1000) // Spring scheduling: check every second
    public void adjustRate() {
        int currentSize = queue.size();
        double currentRate = rateLimiter.getRate();

        if (currentSize > targetQueueSize * 1.5) {
            // Queue growing: slow down
            rateLimiter.setRate(currentRate * 0.8);
        } else if (currentSize < targetQueueSize * 0.5) {
            // Queue shrinking: speed up
            rateLimiter.setRate(currentRate * 1.2);
        }
    }
}
```
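The adjustment rule itself is pure arithmetic, so it's worth extracting and unit-testing on its own. A sketch with the same illustrative thresholds (more than 150% of target backs off 20%, less than 50% speeds up 20%):

```java
public class RateAdjuster {
    // Pure function: current queue size and target in, new rate out
    static double adjust(int queueSize, int targetSize, double currentRate) {
        if (queueSize > targetSize * 1.5) {
            return currentRate * 0.8; // queue growing: back off
        }
        if (queueSize < targetSize * 0.5) {
            return currentRate * 1.2; // queue draining: speed up
        }
        return currentRate; // within band: leave the rate alone
    }

    public static void main(String[] args) {
        System.out.println(adjust(1600, 1000, 100.0)); // growing -> 80.0
        System.out.println(adjust(400, 1000, 100.0));  // draining -> 120.0
        System.out.println(adjust(1000, 1000, 100.0)); // steady -> 100.0
    }
}
```

The dead band between the two thresholds matters: without it, the rate oscillates every tick instead of settling.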
Use when: Need smooth throttling, can’t afford hard rejections, have control over producer rate.
Reactive Streams (RxJava, Project Reactor)
Built-in backpressure support. Consumer tells producer how many items it can handle.
```java
Flux.range(1, 1000)
    .onBackpressureBuffer(100) // buffer up to 100 items
    .subscribe(new BaseSubscriber<Integer>() {
        @Override
        protected void hookOnSubscribe(Subscription subscription) {
            request(10); // request 10 items initially
        }

        @Override
        protected void hookOnNext(Integer value) {
            processSlowly(value);
            request(1); // request the next item after processing
        }
    });
```
Consumer controls flow. Producer only sends what consumer requests.
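The same request-n protocol ships in the JDK as `java.util.concurrent.Flow` (Java 9+), so you can see the mechanics without a Reactor dependency. In this sketch the subscriber pulls one item at a time, and `SubmissionPublisher.submit` blocks the producer when the subscriber's buffer fills (the same-thread executor and item counts are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowBackpressureDemo {
    static int run() throws InterruptedException {
        List<Integer> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        // Same-thread executor keeps the demo deterministic; 16-item buffer
        SubmissionPublisher<Integer> publisher =
                new SubmissionPublisher<>(Runnable::run, 16);

        publisher.subscribe(new Flow.Subscriber<Integer>() {
            private Flow.Subscription subscription;

            @Override public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(1); // pull-based: ask for one item to start
            }

            @Override public void onNext(Integer item) {
                received.add(item);      // "process" the item
                subscription.request(1); // only then ask for the next one
            }

            @Override public void onError(Throwable t) { done.countDown(); }
            @Override public void onComplete() { done.countDown(); }
        });

        for (int i = 1; i <= 100; i++) {
            publisher.submit(i); // blocks if the subscriber's buffer is full
        }
        publisher.close();
        done.await();
        return received.size();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("received " + run() + " items");
    }
}
```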
What I’ve Dealt With
Event processing pipeline at one of the companies I worked with. Producer generating events from API traffic (thousands/sec). Consumer writing to analytics database (hundreds/sec).
Started with unbounded queue. Queue grew to millions. OOM crashes. Added bounded queue with rejection. Lost data during spikes. Not acceptable.
Final solution: adaptive rate limiting with queue monitoring. Producer rate adjusted based on queue depth. Maintained 80% queue capacity. No data loss, no crashes. System self-regulated.
Backpressure isn’t optional in producer-consumer systems. Plan for it or watch your system crash.
How do you handle slow consumers?