Most of your data is accessed once and then never again. Storing it on fast, expensive storage forever is just burning money.
Hot, Warm, Cold
The canonical model is three tiers based on access frequency. Hot storage (SSD-backed, high IOPS) handles recent data that’s accessed constantly. Warm storage (standard HDD or S3 Standard-IA) holds data accessed occasionally. Cold storage (archival, like Glacier) holds data that might never be touched again but legally must be retained.
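As a rough illustration, a tiering policy can be expressed as a single age-based decision; the thresholds and tier names below are assumptions, not recommendations:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical age thresholds; real policies are tuned per workload.
WARM_AFTER = timedelta(days=30)
COLD_AFTER = timedelta(days=180)

def pick_tier(last_accessed: datetime) -> str:
    """Map an object's last-access time to a storage tier."""
    age = datetime.now(timezone.utc) - last_accessed
    if age < WARM_AFTER:
        return "hot"    # SSD-backed, high IOPS
    if age < COLD_AFTER:
        return "warm"   # standard HDD / infrequent-access object storage
    return "cold"       # archival, retained for compliance
```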
You save a 200 MB file. One word changed. Re-uploading 200 MB to sync that change is absurd. Delta sync is how you avoid it.
The Core Idea
Split the file into blocks. On an update, compare the new version’s blocks against the stored version’s blocks. Transfer only the blocks that changed.
Rsync pioneered this. It computes a fast rolling checksum for each block on the remote side and sends those checksums to the client; the client finds which local blocks match and which don’t, and transmits only the mismatches.
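A toy sketch of the block-comparison step, using a strong hash over fixed block positions; real rsync pairs a cheap rolling checksum with a strong hash so it can match blocks at arbitrary offsets:

```python
import hashlib

BLOCK_SIZE = 4096  # an assumption; rsync negotiates block size

def block_hashes(data: bytes) -> list[str]:
    """Hash each fixed-size block of the stored version."""
    return [
        hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
        for i in range(0, len(data), BLOCK_SIZE)
    ]

def changed_blocks(new_data: bytes, remote_hashes: list[str]) -> dict[int, bytes]:
    """Return only the blocks whose hashes differ from the remote copy."""
    delta = {}
    for idx, local_hash in enumerate(block_hashes(new_data)):
        if idx >= len(remote_hashes) or local_hash != remote_hashes[idx]:
            # Only these blocks go over the wire; matching blocks are skipped.
            delta[idx] = new_data[idx * BLOCK_SIZE:(idx + 1) * BLOCK_SIZE]
    return delta
```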
Two users upload the same 50 MB file. Naive storage keeps two copies. Content-addressable storage keeps one.
What “Content-Addressable” Means
Instead of locating data by where it lives (a path, a filename), you locate it by what it is. Hash the content, use the hash as the key. Same content, same hash, same storage location. SHA-256 a file and store the result as its address.
The practical consequence: deduplication becomes automatic.
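A minimal in-memory sketch of the idea, with SHA-256 as the address:

```python
import hashlib

class ContentStore:
    """Toy content-addressable store: the SHA-256 of the bytes is the key."""

    def __init__(self):
        self._blobs: dict[str, bytes] = {}

    def put(self, data: bytes) -> str:
        address = hashlib.sha256(data).hexdigest()
        # Identical content hashes to the same address, so the second
        # upload of the same 50 MB file stores nothing new.
        self._blobs.setdefault(address, data)
        return address

    def get(self, address: str) -> bytes:
        return self._blobs[address]
```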
The field rep drove into a dead zone. The mobile app kept working: they filled out three forms, updated two account records, closed a deal. Forty minutes later, connectivity returned and the sync ran. Two of those records had been updated by a desktop user in the meantime. The mobile changes were silently dropped. No error. No prompt. Just gone.
The Core Problem
The client operates against a local snapshot while offline, so every change it makes is based on state that may already be stale by the time the sync runs.
A user hits Ctrl+Z forty times and expects to land exactly where they were yesterday. That is not just undo. That is a complete audit trail of every edit, stored efficiently, queryable at any point in time. The naive approach: store a full copy of the document after every change. Works for ten users. Collapses at ten thousand.
Deltas, Not Copies
Instead of storing full document state after every edit, store only what changed: the operation (insert 3 chars at position 12, delete 5 chars at position 20).
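A sketch of what an operation log might look like; the Op fields here are one possible encoding, not a standard:

```python
from dataclasses import dataclass

@dataclass
class Op:
    kind: str        # "insert" or "delete"
    position: int
    text: str = ""   # characters inserted (for inserts)
    length: int = 0  # characters removed (for deletes)

def apply(doc: str, op: Op) -> str:
    if op.kind == "insert":
        return doc[:op.position] + op.text + doc[op.position:]
    return doc[:op.position] + doc[op.position + op.length:]

def replay(base: str, ops: list[Op]) -> str:
    """Reconstruct any historical version by replaying its deltas onto a base snapshot."""
    for op in ops:
        base = apply(base, op)
    return base

# "Where was I yesterday?" is just replay() stopped at yesterday's last op.
print(replay("hello", [Op("insert", 5, " world"), Op("delete", 0, length=1)]))  # "ello world"
```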
Two users edit the same document simultaneously. User A inserts “X” at position 5. User B deletes the character at position 3. Apply both naively and the result is corrupted: the positions shift once B’s deletion runs first, and A’s insertion lands in the wrong place.
The Position Problem
Operations encode positions at generation time, not application time. When document state changes between generation and application, positions are stale. Operational Transformation (OT) transforms an incoming op relative to already-applied ops before executing it.
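A sketch of the transform step for the insert-versus-delete case above; a real OT implementation covers every pairing of operation types plus tie-breaking rules:

```python
def transform_insert(pos: int, other_kind: str, other_pos: int, other_len: int = 1) -> int:
    """Shift an insert position so it still points at the intended spot
    after a concurrent op has already been applied."""
    if other_kind == "insert" and other_pos <= pos:
        return pos + other_len  # an earlier insert pushed our spot to the right
    if other_kind == "delete" and other_pos < pos:
        return pos - min(other_len, pos - other_pos)  # an earlier delete pulled it left
    return pos

# User A's insert at 5, transformed against B's single-char delete at 3,
# lands at 4: the same character boundary A originally meant.
assert transform_insert(5, "delete", 3) == 4
```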
Real-time results are fast and approximate. Historical results are slow and accurate. The tension between them is where the Lambda and Kappa architectures come from.
Lambda: Two Pipelines
Lambda runs two parallel systems. The batch layer processes all historical data on a schedule (Spark on HDFS, every few hours) and produces ground truth. The speed layer processes the live stream (Kafka Streams or Flink) for low-latency results. The serving layer merges both: “latest batch result plus stream delta since the last batch.”
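A sketch of the serving-layer merge, assuming the results are simple counters that can be summed:

```python
def serve(metric: str, batch_view: dict, speed_view: dict) -> int:
    """Serving-layer merge: authoritative batch result plus the stream's
    delta for events that arrived after the last batch run."""
    return batch_view.get(metric, 0) + speed_view.get(metric, 0)

# batch_view is rebuilt from all history every few hours;
# speed_view only covers events since that rebuild and is reset afterwards.
print(serve("clicks:checkout", {"clicks:checkout": 10_400}, {"clicks:checkout": 37}))
```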
There are two clocks in any stream processing system. Event time: when the click actually happened, recorded in the payload. Processing time: when your system received it. On a healthy network they’re close. In reality they’re not.
Mobile clients buffer events when offline. Retries add delay. A click at 10:00:05 might reach your processor at 10:00:47. The 10:00 window has long since closed.
The Problem With Never Waiting
If you never close a window, you never produce output.
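The usual escape hatch is a watermark: track the highest event time seen so far and close a window once the watermark has passed its end. A minimal sketch, where the one-minute windows and five-second lateness bound are arbitrary assumptions:

```python
WINDOW = 60               # one-minute tumbling windows (an assumption)
ALLOWED_LATENESS = 5      # seconds of lateness tolerated before a window closes

open_windows: dict[int, int] = {}   # window start -> click count
max_event_time = 0                  # highest event time seen so far

def on_click(event_time: int) -> None:
    """Bucket by event time, then emit every window the watermark has passed."""
    global max_event_time
    start = event_time - (event_time % WINDOW)
    open_windows[start] = open_windows.get(start, 0) + 1

    max_event_time = max(max_event_time, event_time)
    watermark = max_event_time - ALLOWED_LATENESS
    for s in [s for s in open_windows if s + WINDOW <= watermark]:
        count = open_windows.pop(s)
        print(f"window [{s}, {s + WINDOW}): {count} clicks")
        # Events arriving after this point are "too late"; real systems send
        # them to a side output or patch the emitted result instead.
```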
Aggregating over an infinite stream sounds easy until you realize you have no idea when it ends. You need to cut it into chunks. That’s what windows are.
Three Window Types
Tumbling windows are fixed, non-overlapping buckets. “Clicks per minute” is a tumbling window: minute 1, minute 2, minute 3, no overlap. Simple to implement, but events that span the boundary get split across buckets.
Sliding windows overlap. “Average clicks in the last 5 minutes, recomputed every minute” means each event can appear in up to 5 windows.
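A sketch of window assignment for both shapes, using a one-minute tumbling window and a five-minute sliding window that advances every minute:

```python
def tumbling_window(ts: int, size: int = 60) -> int:
    """Each timestamp belongs to exactly one fixed bucket."""
    return ts - (ts % size)

def sliding_windows(ts: int, size: int = 300, slide: int = 60) -> list[int]:
    """Each timestamp belongs to every overlapping window that covers it
    (up to size / slide of them)."""
    latest_start = ts - (ts % slide)
    return [start
            for start in range(latest_start - size + slide, latest_start + 1, slide)
            if start <= ts < start + size]

print(tumbling_window(125))   # 120: the minute-2 bucket
print(sliding_windows(125))   # five overlapping 5-minute windows contain this event
```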
Redis locks expire after a TTL. If the process holding the lock crashes, everyone else waits out the TTL (30 seconds, say) before the lock becomes available again. ZooKeeper takes a different approach: tie the lock to the session, not a timer.
Ephemeral Nodes
ZooKeeper has two kinds of nodes: persistent (survive until explicitly deleted) and ephemeral (automatically deleted when the client session expires). A session is kept alive by a heartbeat. If the client crashes, heartbeats stop, the session expires after a configurable timeout, and the ephemeral node vanishes.
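A sketch using the kazoo client; the connection string and lock path are placeholders:

```python
from kazoo.client import KazooClient
from kazoo.exceptions import NodeExistsError

zk = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")
zk.start()

def try_acquire(path: str = "/locks/nightly-batch") -> bool:
    """Hold the lock only as long as this client's session is alive."""
    try:
        # Ephemeral: ZooKeeper deletes the node itself when our session expires,
        # so a crashed holder releases the lock without any TTL guesswork.
        zk.create(path, b"", ephemeral=True, makepath=True)
        return True
    except NodeExistsError:
        return False   # someone else's live session owns it
```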
A single Redis instance holds your lock. Redis crashes. The lock entry is gone. But your client already received “acquired” before the crash and is happily running. Another client acquires the same lock on the recovered instance. Two lock holders. The single-instance Redis lock has a fundamental flaw.
Quorum Locking
Redlock is Redis creator Antirez’s answer. Instead of one Redis, use N independent instances (typically 5). To acquire the lock, the client tries to set the same key on every instance in turn, each with a short per-instance timeout. The lock is held only if a majority (at least N/2 + 1) of the instances granted it and the total time spent acquiring is comfortably inside the lock’s validity period; otherwise the client releases whatever it did acquire and retries.
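A sketch of the acquisition round using redis-py against five placeholder instances; a complete Redlock also releases with a compare-and-delete script so it can never remove a lock it doesn’t own:

```python
import time
import uuid
import redis

# Five independent Redis instances; the hostnames are placeholders.
instances = [redis.Redis(host=f"redis{i}", port=6379) for i in range(1, 6)]

def acquire(resource: str, ttl_ms: int = 30_000):
    """Try to take the lock on a majority of instances within its validity window."""
    token = str(uuid.uuid4())       # unique value identifying this lock holder
    start = time.monotonic()
    acquired = 0
    for r in instances:
        try:
            if r.set(resource, token, nx=True, px=ttl_ms):
                acquired += 1
        except redis.RedisError:
            pass                    # a dead instance simply doesn't count toward quorum
    elapsed_ms = (time.monotonic() - start) * 1000
    if acquired >= len(instances) // 2 + 1 and elapsed_ms < ttl_ms:
        return token
    for r in instances:             # failed: best-effort release of anything we grabbed
        try:
            r.delete(resource)      # (a real implementation checks the token first)
        except redis.RedisError:
            pass
    return None
```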
Two services start the same batch job at the same time. Both read the same data, both process it, both write conflicting results. Your database row lock didn’t help because the services are on different JVMs. This is the distributed lock problem.
Why Database Locks Don’t Work Here
A SELECT FOR UPDATE on a MySQL row holds a lock only until that transaction ends. Cross-service, that’s useless. You’d need a shared coordination point, something every instance can talk to.
Reading from cache is easy. Writing is where it gets complicated.
Three strategies, each with a different answer to the question: when does the cache get updated relative to the database?
Write-through updates the cache and the database synchronously on every write. The cache is always consistent with the DB. The downside is that every write pays double the cost: serialize the object, write to cache, write to DB, all in the same request path.
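A sketch of the write path, where `db.execute` and `cache.set` stand in for your database driver and Redis client:

```python
import json

def write_through(user_id: int, profile: dict, cache, db) -> None:
    """Both stores are updated in the same request path, so a subsequent
    cache read can never be older than the database row."""
    payload = json.dumps(profile)
    db.execute(
        "UPDATE profiles SET data = %s WHERE id = %s", (payload, user_id)
    )                                         # durable write first
    cache.set(f"profile:{user_id}", payload)  # then keep the cache in step
```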
Redis is single-threaded per instance. One key receiving 50,000 reads per second will pin a single CPU core, and everything else on that shard slows down behind it.
This is the hot key problem. Unlike a database where you might add replicas or indexes, a single Redis key is owned by a single shard. Traffic concentration on that key concentrates CPU on that node.
Detection is straightforward: redis-cli --hotkeys scans the keyspace and reports the keys with the highest access counts. (It reads the LFU counters, so it requires an LFU maxmemory policy.)
Cache fills up. Something has to go. The question is: which thing?
LRU (Least Recently Used) evicts whatever was accessed longest ago. Simple, intuitive, fast to implement with a doubly-linked list and hash map. LFU (Least Frequently Used) evicts whatever was accessed least often. More accurate in theory, more expensive in practice.
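A minimal LRU sketch; Python’s OrderedDict is exactly the hash-map-plus-linked-list combination described above:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU: an ordered map where the front entry is always the
    least recently used and therefore the first to be evicted."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._items: OrderedDict = OrderedDict()

    def get(self, key):
        if key not in self._items:
            return None
        self._items.move_to_end(key)         # a hit makes the key "recent" again
        return self._items[key]

    def put(self, key, value) -> None:
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)   # evict the least recently used
```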
The LFU decay problem tripped me up: new items start with zero frequency. A fresh key that’s about to become hot looks identical to a stale key nobody cares about.
You write a record, immediately read it back, and assert equality. The test fails. Not because of a bug, but because the read hit a replica that hasn’t caught up yet. Your test is correct. Your assertion timing isn’t.
Team A changes their API response. Team B’s service breaks in production. The integration test suite passed because it was running against a mock from 3 months ago.
Your system passed all tests. Every health check is green. You’re confident it handles failures. Then a network partition happens in production and everything falls apart. You never actually tested failure.
You have 3 consumers reading from 6 Kafka partitions. One consumer crashes. The remaining 2 need to pick up its partitions. That handoff isn’t as smooth as you’d hope.
Your event log has 100 million records. Key ‘user-42’ has been updated 500 times. You only care about the latest value. But deleting old entries would break consumers who haven’t caught up yet.
Two database replicas should have identical data. One has 50 million rows. Comparing row by row would take hours. Merkle trees find the differences starting from a single root-hash comparison, descending only into the subtrees whose hashes disagree.
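A sketch of the comparison, assuming both replicas bucket their rows identically (say, by primary-key range) before hashing:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(buckets: list[bytes]) -> list[list[bytes]]:
    """levels[0] holds one hash per bucket of rows; levels[-1][0] is the root."""
    levels = [[_h(b) for b in buckets]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([_h(prev[i] + (prev[i + 1] if i + 1 < len(prev) else b""))
                       for i in range(0, len(prev), 2)])
    return levels

def differing_buckets(a, b, level=None, idx=0) -> list[int]:
    """Descend only into subtrees whose hashes disagree; return differing bucket indexes."""
    if level is None:
        level = len(a) - 1                    # start at the root
    if a[level][idx] == b[level][idx]:
        return []                             # whole subtree identical: skip it
    if level == 0:
        return [idx]                          # a leaf bucket that really differs
    out = []
    for child in (2 * idx, 2 * idx + 1):
        if child < len(a[level - 1]):
            out += differing_buckets(a, b, level - 1, child)
    return out
```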
Three replicas, one write. How many replicas need to acknowledge before the write is ‘done’? One? All three? The answer determines your consistency guarantees.
Should your services push metrics to a collector, or should the collector pull metrics from your services? Sounds like a minor detail. It changes your entire monitoring architecture.
You’re storing metrics at 1-second granularity. After a year, that’s 31 million data points per metric. Nobody looks at second-level data from 6 months ago. But you still need the trends.
Your monitoring system ingests 100,000 metrics per second. Each is a timestamp, a name, and a value. A regular database buckles. Time-series databases are designed for exactly this shape of data.
User uploads one video file. Your system needs to produce 240p, 480p, 720p, and 1080p versions, each with multiple audio tracks. That’s a distributed workflow problem.
User starts watching in 1080p. They walk into an elevator. Bandwidth drops. The video freezes and buffers. Adaptive bitrate streaming would have dropped to 480p and kept playing.
Your origin server is in us-east-1. Your user is in Mumbai. That’s 200ms of latency before a single byte transfers. CDNs put your content on a server down the street.
User opens the app. Show the nearest 10 coffee shops. Sounds simple until you realize ‘nearest’ means computing distance against millions of locations in under 100ms.
Manhattan has 50,000 restaurants. Rural Wyoming has 3 per county. A fixed-size grid wastes cells on empty space and overloads dense areas. Quadtrees adapt.
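A sketch of the adaptive part: a cell splits into four children only once it holds more points than a fixed capacity (the capacity of 8 here is arbitrary):

```python
from dataclasses import dataclass, field

CAPACITY = 8   # max points per cell before it splits; an arbitrary choice

@dataclass
class QuadTree:
    """A square region that splits into four children only where data is dense."""
    x: float
    y: float
    size: float
    points: list = field(default_factory=list)
    children: list | None = None

    def insert(self, px: float, py: float) -> None:
        if self.children is not None:
            self._child_for(px, py).insert(px, py)
            return
        self.points.append((px, py))
        if len(self.points) > CAPACITY:        # dense cell: subdivide
            half = self.size / 2
            self.children = [QuadTree(self.x + dx, self.y + dy, half)
                             for dx in (0, half) for dy in (0, half)]
            for p in self.points:              # push existing points down a level
                self._child_for(*p).insert(*p)
            self.points = []

    def _child_for(self, px: float, py: float) -> "QuadTree":
        half = self.size / 2
        i = (2 if px >= self.x + half else 0) + (1 if py >= self.y + half else 0)
        return self.children[i]
```

Sparse regions stay as single large cells holding a handful of points, while dense regions keep subdividing until each leaf is small enough to scan quickly.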