Posts for: #Architecture

Sequenced Writes

Two events arrive out of order. You don’t know they’re out of order. You process them anyway. The system ends up in a state that never should have existed.

Sequence Numbers as the Foundation

A global sequence number assigned to every write event is the most direct solution to ordering problems. Event 1, event 2, event 3. If event 6 arrives right after event 4, you know event 5 is missing. You wait, or request a replay, rather than blindly processing forward.
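
The wait-or-replay logic can be sketched in a few lines. This is a minimal illustration, not code from the post; the names (`SequencedProcessor`, `receive`, `gap`) are made up for the example.

```python
# Sketch: apply events strictly in sequence order, buffering anything
# that arrives early and reporting gaps that need a replay.
class SequencedProcessor:
    def __init__(self, handler):
        self.handler = handler   # called once per event, in sequence order
        self.next_seq = 1        # the next sequence number we expect
        self.pending = {}        # out-of-order events, buffered by seq

    def receive(self, seq, event):
        """Buffer the event; apply it and any unblocked successors in order."""
        if seq < self.next_seq:
            return  # duplicate or already-processed event: drop it
        self.pending[seq] = event
        while self.next_seq in self.pending:
            self.handler(self.pending.pop(self.next_seq))
            self.next_seq += 1

    def gap(self):
        """Sequence numbers we are still waiting for (replay candidates)."""
        if not self.pending:
            return []
        return [s for s in range(self.next_seq, max(self.pending) + 1)
                if s not in self.pending]
```

If event 3 arrives while event 2 is missing, it sits in `pending` and `gap()` reports `[2]`; when 2 finally arrives, both apply in order.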
[Read more]

Market Data Distribution

Every trade generates a tick: a price, a volume, a timestamp. An active stock might generate thousands of ticks per second. Distributing that data to thousands of subscribers simultaneously is its own problem.

What Tick Data Looks Like

A tick is small: instrument ID, price, quantity, timestamp. The volume is the problem. During market open or a news event, tick rates spike dramatically. Subscribers range from high-frequency algorithms (latency-sensitive, need every tick) to dashboards (showing “current price,” don’t care about ticks they missed).
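
The two subscriber styles can be made concrete with a toy sketch (class names are illustrative, not from the post): a firehose subscriber queues every tick, while a conflating subscriber keeps only the latest tick per instrument.

```python
from collections import namedtuple

Tick = namedtuple("Tick", "instrument price quantity timestamp")

class FirehoseSubscriber:
    """Latency-sensitive consumer: must see every tick, in order."""
    def __init__(self):
        self.queue = []
    def publish(self, tick):
        self.queue.append(tick)

class ConflatingSubscriber:
    """Dashboard-style consumer: only the current price matters."""
    def __init__(self):
        self.latest = {}  # instrument -> most recent tick
    def publish(self, tick):
        self.latest[tick.instrument] = tick  # overwrite, never queue
```

During a spike, the conflating subscriber's memory use stays bounded by the number of instruments, not the tick rate.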
[Read more]

Order Matching Engine

A stock exchange doesn’t just record trades. It runs an algorithm that decides which buyer gets matched with which seller. That algorithm is the matching engine, and its design choices are unusually interesting.

The Limit Order Book

The core data structure is the limit order book (LOB): two sorted collections of orders, bids (buy orders) and asks (sell orders). Bids are sorted by price descending (highest buyer first), asks by price ascending (lowest seller first).
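
A minimal sketch of that structure, using two heaps for the sort orders described above. This only covers best-bid/best-ask; the matching loop itself is beyond the excerpt.

```python
import heapq

class LimitOrderBook:
    def __init__(self):
        self.bids = []   # max-heap via negated price: highest bid first
        self.asks = []   # min-heap: lowest ask first

    def add_bid(self, price, qty):
        heapq.heappush(self.bids, (-price, qty))

    def add_ask(self, price, qty):
        heapq.heappush(self.asks, (price, qty))

    def best_bid(self):
        return -self.bids[0][0] if self.bids else None

    def best_ask(self):
        return self.asks[0][0] if self.asks else None

    def crossed(self):
        """True when the highest bid meets or exceeds the lowest ask."""
        return bool(self.bids and self.asks
                    and self.best_bid() >= self.best_ask())
```

When `crossed()` is true, the matching engine has a trade to execute at the top of the book.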
[Read more]

Storage Tiering

Most of your data is accessed once and then never again. Storing it on fast, expensive storage forever is just burning money.

Hot, Warm, Cold

The canonical model is three tiers based on access frequency. Hot storage (SSD-backed, high IOPS) handles recent data that’s accessed constantly. Warm storage (standard HDD or S3 Standard-IA) holds data accessed occasionally. Cold storage (archival, like Glacier) holds data that might never be touched again but legally must be retained.
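
A tier policy is often keyed on days since last access as a proxy for access frequency. The cutoffs below (30 and 365 days) are illustrative, not from the post.

```python
def tier_for(age_days):
    """Pick a storage tier from the days since the object was last accessed."""
    if age_days <= 30:
        return "hot"    # SSD-backed, high IOPS
    if age_days <= 365:
        return "warm"   # standard HDD / S3 Standard-IA
    return "cold"       # archival, e.g. Glacier
```

A background job periodically re-evaluates objects against this policy and migrates them downward.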
[Read more]

Content-Addressable Storage

Two users upload the same 50 MB file. Naive storage keeps two copies. Content-addressable storage keeps one.

What “Content-Addressable” Means

Instead of locating data by where it lives (a path, a filename), you locate it by what it is. Hash the content, use the hash as the key. Same content, same hash, same storage location. SHA-256 a file and store the result as its address. The practical consequence: deduplication becomes automatic.
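
The whole idea fits in a few lines. A minimal in-memory sketch: the SHA-256 of the bytes is the key, so duplicate uploads collapse to one stored copy.

```python
import hashlib

class ContentStore:
    def __init__(self):
        self.blobs = {}  # hash -> content

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self.blobs[key] = data  # idempotent: same bytes, same key
        return key

    def get(self, key: str) -> bytes:
        return self.blobs[key]
```

Two `put` calls with identical bytes return the same key and occupy one slot; deduplication falls out of the addressing scheme.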
[Read more]

Offline-First Sync

The field rep drove into a dead zone. The mobile app kept working: they filled out three forms, updated two account records, closed a deal. Forty minutes later, connectivity returned and the sync ran. Two of those records had been updated by a desktop user in the meantime. The mobile changes were silently dropped. No error. No prompt. Just gone.

The Core Problem

The client operates against a local snapshot while offline.
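
The silent drop happens because the server accepts the last write regardless of what it was based on. A sketch of the alternative, conflict detection via base revisions: each change carries the revision the client saw, and a stale base is surfaced instead of overwritten. The names and the rejection policy here are illustrative; the excerpt stops before the post's own solution.

```python
class SyncServer:
    def __init__(self):
        self.records = {}  # record id -> (revision, value)

    def apply(self, record_id, base_rev, new_value):
        """Accept the write only if the client saw the current revision."""
        current_rev, _ = self.records.get(record_id, (0, None))
        if base_rev != current_rev:
            return ("conflict", current_rev)  # surface it, don't drop it
        self.records[record_id] = (current_rev + 1, new_value)
        return ("ok", current_rev + 1)
```

The mobile client's stale write comes back as a conflict it can show the user, instead of vanishing.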
[Read more]

Revision History and Snapshotting

A user hits Ctrl+Z forty times and expects to land exactly where they were yesterday. That is not just undo. That is a complete audit trail of every edit, stored efficiently, queryable at any point in time. The naive approach: store a full copy of the document after every change. Works for ten users. Collapses at ten thousand.

Deltas, Not Copies

Instead of storing full document state after every edit, store only what changed: the operation (insert 3 chars at position 12, delete 5 chars at position 20).
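
Storing operations means any revision can be rebuilt by replaying the log from empty. A minimal sketch with a made-up tuple encoding (`"insert"`/`"delete"`, position, payload):

```python
def apply_op(doc, op):
    kind, pos, arg = op
    if kind == "insert":
        return doc[:pos] + arg + doc[pos:]   # arg is the inserted text
    if kind == "delete":
        return doc[:pos] + doc[pos + arg:]   # arg is a character count
    raise ValueError(kind)

def revision(ops, n):
    """Reconstruct the document as of revision n by replaying n ops."""
    doc = ""
    for op in ops[:n]:
        doc = apply_op(doc, op)
    return doc
```

Replay from empty gets slow as the log grows, which is where periodic snapshots come in: replay from the nearest snapshot instead of from zero.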
[Read more]

Operational Transformation

Two users edit the same document simultaneously. User A inserts “X” at position 5. User B deletes the character at position 3. Apply both naively and the result is corrupted: if B’s deletion runs first, the positions shift and A’s insertion lands in the wrong place.

The Position Problem

Operations encode positions at generation time, not application time. When document state changes between generation and application, positions are stale. Operational Transformation (OT) transforms an incoming op relative to already-applied ops before executing it.
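
The transform for the insert-versus-delete case in the example can be sketched in one function. This handles single-character deletes only and ignores the tie-breaking policies a full OT system needs.

```python
def transform_insert_against_delete(ins_pos, del_pos):
    """Adjust an insert position after a concurrent one-char delete applied."""
    if del_pos < ins_pos:
        return ins_pos - 1   # deletion happened before the insert point
    return ins_pos           # deletion was at or after it: unchanged
```

In the scenario above, A's insert at position 5 is transformed against B's delete at position 3 and lands at position 4, where the user actually meant it to go.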
[Read more]

Lambda and Kappa Architecture

Real-time results are fast and approximate. Historical results are slow and accurate. The tension between them is where the Lambda and Kappa architectures come from.

Lambda: Two Pipelines

Lambda runs two parallel systems. The batch layer processes all historical data on a schedule (Spark on HDFS, every few hours) and produces ground truth. The speed layer processes the live stream (Kafka Streams or Flink) for low-latency results. The serving layer merges both: “latest batch result plus stream delta since the last batch.”
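
The serving-layer merge is easy to sketch for a count-style workload. The function and the event-count example are illustrative, not from the post.

```python
def serve(batch_counts, stream_events, batch_cutoff):
    """Merge batch ground truth with the stream delta since the cutoff."""
    merged = dict(batch_counts)
    for key, timestamp in stream_events:
        if timestamp > batch_cutoff:  # only count the post-batch delta
            merged[key] = merged.get(key, 0) + 1
    return merged
```

Events at or before the cutoff are already reflected in the batch result, so counting them again would double-count; the timestamp filter is what makes the merge correct.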
[Read more]

ZooKeeper Ephemeral Nodes

Redis locks expire after a TTL. If your process crashes, everyone else waits for the TTL to run out, up to 30 seconds, before the lock becomes available. ZooKeeper takes a different approach: tie the lock to the session, not to a timer.

Ephemeral Nodes

ZooKeeper has two kinds of nodes: persistent (survive until explicitly deleted) and ephemeral (automatically deleted when the client session expires). A session is kept alive by a heartbeat. If the client crashes, heartbeats stop, the session expires after a configurable timeout, and the ephemeral node vanishes.
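
A toy in-memory model makes the lifecycle concrete: persistent nodes outlive sessions, ephemeral nodes vanish with their owning session. This mimics the semantics; it is not the ZooKeeper API.

```python
class ToyZk:
    def __init__(self):
        self.nodes = {}  # path -> owning session (None for persistent nodes)

    def create(self, path, session=None, ephemeral=False):
        if path in self.nodes:
            raise KeyError(f"node exists: {path}")  # e.g. lock already held
        self.nodes[path] = session if ephemeral else None

    def expire_session(self, session):
        """Heartbeats stopped: delete every ephemeral node the session owns."""
        self.nodes = {p: s for p, s in self.nodes.items() if s != session}

    def exists(self, path):
        return path in self.nodes
```

A lock is just an ephemeral node: when the holder's session expires, the node disappears and the next client's `create` succeeds.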
[Read more]

The Redlock Algorithm

A single Redis instance holds your lock. Redis crashes. The lock entry is gone. But your client already received “acquired” before the crash and is happily running. Another client acquires the same lock on the recovered instance. Two lock holders. The single-instance Redis lock has a fundamental flaw.

Quorum Locking

Redlock is Redis creator Antirez’s answer. Instead of one Redis, use N independent instances (typically 5). To acquire the lock:
[Read more]
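
The excerpt cuts off before the acquisition steps, but the quorum idea itself can be sketched with plain dicts standing in for Redis nodes. This omits the TTL, validity-time, and clock-drift checks of the full algorithm; the function name is illustrative.

```python
import uuid

def try_acquire(instances, resource):
    """instances: dict-like stores standing in for N independent Redis nodes."""
    token = str(uuid.uuid4())        # unique value identifies this holder
    acquired = 0
    for store in instances:
        if resource not in store:    # stands in for SET resource token NX
            store[resource] = token
            acquired += 1
    quorum = len(instances) // 2 + 1  # majority, e.g. 3 of 5
    return token if acquired >= quorum else None
```

One crashed or already-locked node no longer matters: a client holds the lock only if a majority of independent instances granted it, so two clients can never both reach quorum.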