Posts for: #Redis

The Redlock Algorithm

A single Redis instance holds your lock. Redis crashes. The lock entry is gone. But your client already received “acquired” before the crash and is happily running. Another client acquires the same lock on the recovered instance. Two lock holders. The single-instance Redis lock has a fundamental flaw.

Quorum Locking

Redlock is Redis creator Antirez’s answer. Instead of one Redis instance, use N independent instances (typically 5). To acquire the lock:
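In outline, acquisition means asking every node for the lock and counting the yes votes. A minimal Python sketch of that quorum step, using an in-memory `FakeNode` stand-in (`FakeNode` and `redlock_acquire` are illustrative names, not real redis-py calls):

```python
import time
import uuid

class FakeNode:
    """In-memory stand-in for one independent Redis instance."""
    def __init__(self):
        self.store = {}

    def set_nx_px(self, key, value, ttl_ms):
        # Models SET key value NX PX ttl: succeeds only if key is absent.
        if key in self.store:
            return False
        self.store[key] = value
        return True

def redlock_acquire(nodes, resource, ttl_ms):
    """Acquire the lock on a majority of nodes, or fail."""
    token = uuid.uuid4().hex  # unique value identifying this holder
    start = time.monotonic()
    votes = sum(node.set_nx_px(resource, token, ttl_ms) for node in nodes)
    elapsed_ms = (time.monotonic() - start) * 1000
    # Valid only if a majority agreed and the TTL hasn't already burned down.
    if votes >= len(nodes) // 2 + 1 and elapsed_ms < ttl_ms:
        return token
    return None

nodes = [FakeNode() for _ in range(5)]
assert redlock_acquire(nodes, "job:42", 10_000) is not None  # first caller wins
assert redlock_acquire(nodes, "job:42", 10_000) is None      # second caller is refused
```

The full algorithm also subtracts the elapsed time from the lock’s validity window and releases the partial locks on failure; this sketch shows only the quorum vote.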
[Read more]

Redis Distributed Locks

Two services start the same batch job at the same time. Both read the same data, both process it, both write conflicting results. Your database row lock didn’t help because the services are on different JVMs. This is the distributed lock problem.

Why Database Locks Don’t Work Here

A SELECT FOR UPDATE on a MySQL row holds a lock only for the lifetime of that connection. Cross-service, that’s useless. You’d need a shared coordination point, something every instance can talk to.
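The usual Redis answer to that coordination point is SET with NX and PX: create the lock key only if it is absent, with a TTL so a crashed holder can’t block everyone forever. A minimal sketch against a dict-backed `FakeRedis` stub (illustrative names, not redis-py; TTL expiry isn’t simulated):

```python
class FakeRedis:
    """Dict-backed stand-in for a shared Redis instance."""
    def __init__(self):
        self.store = {}

    def set(self, key, value, nx=False, px=None):
        # Models SET key value NX PX ttl.
        if nx and key in self.store:
            return None
        self.store[key] = value
        return True

    def get(self, key):
        return self.store.get(key)

    def delete(self, key):
        self.store.pop(key, None)

def acquire(r, name, token, ttl_ms=30_000):
    # Atomic "create if absent, with expiry" -- the whole lock in one command.
    return r.set(f"lock:{name}", token, nx=True, px=ttl_ms) is True

def release(r, name, token):
    # Compare-then-delete so a holder can't free someone else's lock.
    if r.get(f"lock:{name}") == token:
        r.delete(f"lock:{name}")
```

Against real Redis the compare-then-delete in `release` must run as a single Lua script; otherwise the lock can expire and be re-acquired by someone else between the GET and the DEL.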
[Read more]

Cache Write Strategies

Reading from cache is easy. Writing is where it gets complicated. Three strategies, each with a different answer to the question: when does the cache get updated relative to the database? Write-through updates the cache and the database synchronously on every write. The cache is always consistent with the DB. The downside is that every write pays double the cost: serialize the object, write to cache, write to DB, all in the same request path.
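The write-through path described above fits in a few lines; `WriteThroughCache` is an illustrative name and a plain dict stands in for the database:

```python
class WriteThroughCache:
    """Every write hits cache and DB synchronously, in the request path."""
    def __init__(self, db):
        self.cache = {}
        self.db = db  # any dict-like store stands in for the database

    def put(self, key, value):
        # Both writes complete before put() returns: cache never lags the DB.
        self.cache[key] = value
        self.db[key] = value

    def get(self, key):
        if key in self.cache:
            return self.cache[key]
        value = self.db[key]     # miss: fall back to the DB...
        self.cache[key] = value  # ...and fill the cache on the way out
        return value
```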
[Read more]

Hot Key Detection and Mitigation

Redis is single-threaded per instance. One key receiving 50,000 reads per second will pin a single CPU core, and nothing else on that shard gets processed quickly. This is the hot key problem. Unlike a database, where you might add replicas or indexes, a single Redis key is owned by a single shard. Traffic concentration on that key concentrates CPU on that node. Detection is straightforward: redis-cli --hotkeys scans the keyspace and reports the most frequently accessed keys (it requires maxmemory-policy to be set to an LFU variant).
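One common mitigation is key splitting: keep N copies of the hot key under suffixed names so reads fan out across shards. A sketch with a dict-backed stub (`Store`, `hot_set`, `hot_get`, and `COPIES` are illustrative names, not from the post):

```python
import random

class Store:
    """Dict-backed stand-in for a Redis client."""
    def __init__(self):
        self.data = {}

    def get(self, key):
        return self.data.get(key)

    def set(self, key, value):
        self.data[key] = value

COPIES = 8  # how many copies of the hot key to maintain

def hot_set(r, key, value):
    # Write every copy so any suffix a reader picks is up to date.
    for i in range(COPIES):
        r.set(f"{key}:{i}", value)

def hot_get(r, key):
    # Random suffix per read; in a Redis Cluster the suffixed keys hash
    # to different slots, so the read load spreads over several nodes.
    return r.get(f"{key}:{random.randrange(COPIES)}")
```

Writes become N times more expensive, so this only pays off for read-heavy keys.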
[Read more]

Cache Eviction Policies

Cache fills up. Something has to go. The question is: which thing? LRU (Least Recently Used) evicts whatever was accessed longest ago. Simple, intuitive, and O(1) when implemented with a doubly-linked list and hash map. LFU (Least Frequently Used) evicts whatever was accessed least often. More accurate in theory, more expensive in practice. The LFU decay problem tripped me up: new items start with zero frequency. A fresh key that’s about to become hot looks identical to a stale key nobody cares about.
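An LRU of the doubly-linked-list-plus-hash-map kind is essentially what Python’s OrderedDict provides for free. A minimal sketch (`LRUCache` is an illustrative name):

```python
from collections import OrderedDict

class LRUCache:
    """OrderedDict keeps insertion order; we repurpose it as recency order."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # touched: now most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used
```

Both `get` and `put` are O(1): `move_to_end` and `popitem` are the list splice and tail pop of the hand-rolled version.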
[Read more]