Posts for: #Caching

Cache Write Strategies

Reading from cache is easy. Writing is where it gets complicated. Three strategies, each with a different answer to the question: when does the cache get updated relative to the database? Write-through updates the cache and the database synchronously on every write. The cache is always consistent with the DB. The downside is that every write pays double the cost: serialize the object, write to cache, write to DB, all in the same request path.
[Read more]
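The write-through flow in the excerpt can be sketched in a few lines. This is an illustrative toy, not a real client library: the two dicts stand in for the cache and the database, and the class name is invented for the example.

```python
# Write-through sketch: the cache and the database are updated
# synchronously, in the same request path. Dicts stand in for a real
# cache (e.g. Redis) and a real database.

class WriteThroughStore:
    def __init__(self):
        self.cache = {}  # stand-in for the cache tier
        self.db = {}     # stand-in for the database

    def write(self, key, value):
        # Both writes complete before the request returns, so the cache
        # never holds a value the DB doesn't have -- that's the
        # consistency guarantee, and also the double write cost.
        self.db[key] = value      # durable write first
        self.cache[key] = value   # then populate the cache

    def read(self, key):
        # Reads are served from cache; on a miss, fall back to the DB
        # and backfill.
        if key in self.cache:
            return self.cache[key]
        value = self.db.get(key)
        if value is not None:
            self.cache[key] = value
        return value
```

Note the ordering inside `write`: DB first, then cache, so a crash between the two steps leaves a stale-but-recoverable cache entry rather than a cached value the database never saw.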

Hot Key Detection and Mitigation

Redis is single-threaded per instance. One key receiving 50,000 reads per second will pin a single CPU core, and nothing else on that shard gets processed quickly. This is the hot key problem. Unlike a database, where you might add replicas or indexes, a single Redis key is owned by a single shard: traffic concentration on that key concentrates CPU on that node. Detection is straightforward: redis-cli --hotkeys scans the keyspace and reports access frequency (it relies on Redis's LFU counters, so it requires an LFU-based maxmemory-policy).
[Read more]
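Besides server-side scanning, hot keys can be detected in the client by sampling accesses. Here is a minimal sketch of that idea; the function name, sample rate, and threshold are all hypothetical choices for the example, not anything Redis provides.

```python
# Client-side hot-key sampling sketch (hypothetical helper, not a Redis
# API): sample a fraction of key accesses and flag any key that accounts
# for more than `threshold` of the sampled traffic.
from collections import Counter
import random

def find_hot_keys(access_log, sample_rate=0.1, threshold=0.2, seed=42):
    rng = random.Random(seed)  # seeded for reproducibility in this demo
    # Keep roughly sample_rate of accesses so counting stays cheap even
    # at high request rates.
    counts = Counter(key for key in access_log if rng.random() < sample_rate)
    total = sum(counts.values()) or 1
    # A key is "hot" if it dominates the sampled traffic.
    return {key for key, c in counts.items() if c / total > threshold}
```

In production this counting would live in the client library or a sidecar and run over a sliding time window rather than a full log, but the shape is the same: sample, count, compare against a share-of-traffic threshold.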

Cache Eviction Policies

Cache fills up. Something has to go. The question is: which thing? LRU (Least Recently Used) evicts whatever was accessed longest ago. Simple, intuitive, fast to implement with a doubly-linked list and hash map. LFU (Least Frequently Used) evicts whatever was accessed least often. More accurate in theory, more expensive in practice. The LFU decay problem tripped me up: new items start with zero frequency. A fresh key that’s about to become hot looks identical to a stale key nobody cares about.
[Read more]