Column-Family Storage

Your query is always “give me all events for user X, sorted by time.” A row-oriented database gives you rows where you pay for every column you didn’t ask for. Wide-column stores flip the model: you design the schema around your query, not the other way around.

How It Works

In a wide-column store like Cassandra or HBase, the primary key has two parts: the partition key and the clustering key.
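A minimal in-memory sketch of that two-part key (not Cassandra or HBase themselves): `user_id` stands in for the partition key and `event_time` for the clustering key; the class and field names are illustrative.

```python
from bisect import insort
from collections import defaultdict

# Toy model of a wide-column layout: rows grouped by partition key,
# kept sorted by clustering key within each partition.
class EventsByUser:
    def __init__(self):
        self._partitions = defaultdict(list)  # user_id -> [(event_time, payload), ...]

    def insert(self, user_id, event_time, payload):
        # Partition key (user_id) picks the partition; clustering key
        # (event_time) defines the sort order inside it.
        insort(self._partitions[user_id], (event_time, payload))

    def events_for_user(self, user_id, limit=None):
        # The target query is a single-partition, already-sorted scan.
        rows = self._partitions[user_id]
        return rows if limit is None else rows[:limit]

store = EventsByUser()
store.insert("user-42", "2024-05-01T10:00:00Z", {"type": "login"})
store.insert("user-42", "2024-05-01T09:00:00Z", {"type": "page_view"})
print(store.events_for_user("user-42"))  # already sorted by time
```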
[Read more]

Blue-Green Deployments

Deploy the new version. Test it. Switch traffic. If something breaks, switch back. Instant rollback. Sounds ideal. The database migrations are where it gets complicated.

The Pattern

Blue-green runs two identical production environments. Blue is live. Green is idle. You deploy your new version to green. You test it against real infrastructure but with no live traffic. When you’re confident, you flip the load balancer to point to green. Green is now live.
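The cut-over itself is just an atomic pointer flip at the load balancer. A toy sketch, with made-up environment names and URLs:

```python
import threading

# The "which environment is live" pointer a load balancer would hold.
ENVIRONMENTS = {
    "blue": "http://blue.internal:8080",
    "green": "http://green.internal:8080",
}

class BlueGreenRouter:
    def __init__(self, live="blue"):
        self._live = live
        self._lock = threading.Lock()

    def upstream(self):
        return ENVIRONMENTS[self._live]

    def cut_over(self, target):
        # A single atomic flip; rollback is the same flip reversed.
        with self._lock:
            previous, self._live = self._live, target
        return previous

router = BlueGreenRouter(live="blue")
router.cut_over("green")   # go live on green
router.cut_over("blue")    # instant rollback
```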
[Read more]

Canary Releases

CI passed. Staging tests passed. You’ve reviewed the code three times. Then you ship to production and something you never predicted breaks at scale.

What Canary Means

A canary release sends a small fraction of real traffic to the new version before switching everyone over. 1% of users hit v2, 99% hit v1. You watch your metrics. If v2 behaves well, you expand: 5%, then 20%, then 100%. If metrics degrade, you route that 1% back to v1 and investigate without affecting anyone else.
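A sketch of the routing decision, assuming a deterministic hash of the user ID so each user sticks to one version; the function names are illustrative.

```python
import hashlib

def bucket(user_id: str) -> float:
    # Deterministic: the same user always lands in the same bucket,
    # so they don't flip between v1 and v2 on every request.
    digest = hashlib.sha256(user_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") / 0xFFFFFFFF

def route(user_id: str, canary_fraction: float) -> str:
    return "v2" if bucket(user_id) < canary_fraction else "v1"

# Start at 1%, then widen as metrics hold.
print(route("user-123", 0.01))
print(route("user-123", 0.20))
```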
[Read more]

Feature Flags

You ship a feature. Three minutes later, on-call pings you: error rate spiked. You need to roll back. A full redeploy takes 20 minutes. With a feature flag, rollback takes 30 seconds.

What a Flag Is

A feature flag is a conditional in your code. If the flag is on, the new code path runs. If it’s off, the old behavior runs. The flag is a config value read at runtime, not at deploy time.
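A minimal sketch of a runtime flag check, assuming flags live in a local JSON file re-read on a short interval (real systems use a flag service, but the shape is the same); file, flag, and function names are made up.

```python
import json, time

FLAG_FILE = "flags.json"               # illustrative flag source
_cache = {"flags": {}, "loaded_at": 0.0}

def flag_enabled(name: str, ttl_seconds: float = 30.0) -> bool:
    # Re-read the config at runtime so flipping the flag needs no redeploy.
    now = time.monotonic()
    if now - _cache["loaded_at"] > ttl_seconds:
        try:
            with open(FLAG_FILE) as f:
                _cache["flags"] = json.load(f)
        except FileNotFoundError:
            _cache["flags"] = {}
        _cache["loaded_at"] = now
    return bool(_cache["flags"].get(name, False))

def price_v1(cart): return sum(cart)                       # old behavior
def price_v2(cart): return round(sum(cart) * 0.95, 2)      # new code path

def checkout(cart):
    if flag_enabled("new_pricing_engine"):
        return price_v2(cart)
    return price_v1(cart)   # rollback is one config change away

print(checkout([10.0, 5.0]))
```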
[Read more]

The Sidecar Pattern and Service Mesh

Every team writes the same retry logic. The same circuit breaker boilerplate. The same mTLS handshake setup. The platform team changes the retry policy and now has to update 30 services. There’s a better way.

The Sidecar Pattern

A sidecar is a separate process running in the same pod as your service. It intercepts all network traffic in and out. Your service code is unchanged. The sidecar handles retries, timeouts, circuit breaking, load balancing, and observability.
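The sidecar itself is usually a proxy like Envoy. As an in-process stand-in for the responsibilities it takes over, here is a hedged sketch of retry-with-backoff plus a crude circuit breaker; the class and parameter names are invented.

```python
import time

class SidecarPolicy:
    """In-process stand-in for what a sidecar proxy does on the wire:
    retries with backoff and a simple circuit breaker."""

    def __init__(self, max_retries=3, failure_threshold=5, reset_after=30.0):
        self.max_retries = max_retries
        self.failure_threshold = failure_threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, send):
        # Circuit open: fail fast instead of piling onto a sick upstream.
        if self.opened_at and time.monotonic() - self.opened_at < self.reset_after:
            raise RuntimeError("circuit open")
        self.opened_at = None
        for attempt in range(self.max_retries + 1):
            try:
                result = send()
                self.failures = 0
                return result
            except Exception:
                self.failures += 1
                if self.failures >= self.failure_threshold:
                    self.opened_at = time.monotonic()
                    raise
                time.sleep(0.1 * 2 ** attempt)  # exponential backoff
        raise RuntimeError("retries exhausted")

policy = SidecarPolicy()
print(policy.call(lambda: "200 OK"))
```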
[Read more]

Service Discovery

Your service starts. It gets an IP. Three days later it restarts and gets a different IP. Every service that had the old IP hardcoded is now broken. This is why you need service discovery.

The Problem With Static Config

In a small system, hardcoding IPs in config files works. Then you move to containers. Containers restart, scale up, scale down. IPs change constantly. You need a way for services to find each other without knowing addresses in advance.
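A toy registry sketch: instances register with a heartbeat TTL, stale entries fall out, clients resolve by service name. Names, addresses, and the TTL are illustrative.

```python
import time, random

class Registry:
    """Toy service registry: register with a heartbeat TTL, resolve by name."""

    def __init__(self, ttl_seconds=15):
        self.ttl = ttl_seconds
        self.instances = {}  # (service, instance_id) -> (address, last_heartbeat)

    def register(self, service, instance_id, address):
        self.instances[(service, instance_id)] = (address, time.monotonic())

    heartbeat = register  # re-registering simply refreshes the TTL

    def resolve(self, service):
        now = time.monotonic()
        live = [addr for (svc, _), (addr, seen) in self.instances.items()
                if svc == service and now - seen <= self.ttl]
        if not live:
            raise LookupError(f"no healthy instances of {service}")
        return random.choice(live)  # naive client-side load balancing

registry = Registry()
registry.register("payments", "pod-abc", "10.0.3.17:8080")
print(registry.resolve("payments"))
```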
[Read more]

API Gateway Patterns

You have 12 microservices. Every mobile client talks to all 12. Each service handles its own auth, its own rate limiting, its own CORS. Adding a 13th service means updating every client app. The gateway pattern fixes that.

What a Gateway Does

An API gateway sits between clients and your services. Clients make one call. The gateway routes it, authenticates the caller, applies rate limits, then proxies to the right service.
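A sketch of that single front door, with a made-up route table, a stand-in authenticator, and a naive per-caller rate limiter.

```python
import time
from collections import defaultdict

# Illustrative route table: path prefix -> internal service address.
ROUTES = {"/orders": "http://orders.internal", "/users": "http://users.internal"}
RATE_LIMIT = 100  # requests per minute per caller
_counters = defaultdict(lambda: [0, 0.0])  # caller -> [count, window_start]

def authenticate(token):
    if token != "valid-demo-token":      # stand-in for real token verification
        raise PermissionError("unauthenticated")
    return "caller-1"

def forward(upstream, request):
    return {"status": 200, "proxied_to": upstream + request["path"]}

def handle(request):
    """One front door: authenticate, rate-limit, then route to a backend."""
    caller = authenticate(request["token"])
    count, started = _counters[caller]
    if time.monotonic() - started > 60:
        _counters[caller] = [0, time.monotonic()]   # new rate-limit window
    elif count >= RATE_LIMIT:
        return {"status": 429}
    _counters[caller][0] += 1

    for prefix, upstream in ROUTES.items():
        if request["path"].startswith(prefix):
            return forward(upstream, request)       # proxy to the right service
    return {"status": 404}

print(handle({"token": "valid-demo-token", "path": "/orders/42"}))
```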
[Read more]

Feature Stores

You train a model using yesterday’s data. You serve it using today’s data. The feature computation logic is slightly different between the two. The model degrades silently and you spend a week figuring out why.

The Training-Serving Skew Problem

ML models are trained on offline batches: historical data, features computed via Spark jobs, labels aggregated over time. At serving time, features are computed online: live data, lower latency budget, different code path.
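One mitigation is a single feature definition shared by both paths. A sketch with an invented feature, `days_since_last_order`, computed “as of” a historical timestamp offline and “as of now” online:

```python
from datetime import datetime

def days_since_last_order(order_timestamps, as_of):
    """Single feature definition: the batch (training) path and the online
    (serving) path call this same code instead of re-implementing it twice."""
    past = [t for t in order_timestamps if t <= as_of]
    if not past:
        return None
    return (as_of - max(past)).days

history = [datetime(2024, 5, 1), datetime(2024, 5, 20)]

# Offline: compute the feature as of each historical label's timestamp.
training_value = days_since_last_order(history, as_of=datetime(2024, 6, 1))

# Online: same function, "as of now", fed from the live store.
serving_value = days_since_last_order(history, as_of=datetime.now())
print(training_value, serving_value)
```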
[Read more]

Embedding Vectors and ANN Search

“Find the 10 most similar items to this one” sounds simple. With millions of items represented as 256-dimensional vectors, exact search is too slow to be useful in production.

What Embeddings Are

An ML model maps an item (a product, a document, a user’s history) to a dense numeric vector. The geometry of that vector space encodes semantic similarity: similar items land close together. You train the model on interaction data and the embeddings learn to represent “things that users treat similarly.”
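For scale, the brute-force baseline that ANN indexes (HNSW, IVF, and friends) approximate looks like this; a numpy sketch with random stand-in embeddings:

```python
import numpy as np

def top_k_similar(query, items, k=10):
    """Exact nearest-neighbour search by cosine similarity: one dot product
    per item, O(N * d) per query. ANN indexes trade a little recall to avoid
    this full scan."""
    items_n = items / np.linalg.norm(items, axis=1, keepdims=True)
    query_n = query / np.linalg.norm(query)
    scores = items_n @ query_n
    order = np.argsort(-scores)[:k]
    return order, scores[order]

rng = np.random.default_rng(0)
catalog = rng.normal(size=(100_000, 256)).astype(np.float32)  # toy embeddings
ids, scores = top_k_similar(catalog[0], catalog, k=10)
print(ids[:3], scores[:3])
```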
[Read more]

Collaborative Filtering

You don’t know what a user wants. But you know what people like them have wanted. That’s the intuition behind collaborative filtering.

The Two Approaches

User-based CF finds users similar to you, then recommends what they liked. Item-based CF finds items similar to what you’ve already liked. Item-based is generally more stable because user behavior shifts rapidly (you might buy a couch once), while item similarity changes slowly (a couch is similar to other furniture regardless of who buys it).
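A toy item-based sketch: item-item cosine similarity computed from a small user-item matrix, then scores for items one user hasn’t touched yet. The matrix is invented.

```python
import numpy as np

# Rows are users, columns are items, 1 means "liked / bought".
R = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
], dtype=float)

def item_similarity(R):
    """Item-based CF: two items are similar if the same users interacted
    with both (cosine similarity between item columns)."""
    norms = np.linalg.norm(R, axis=0, keepdims=True)
    norms[norms == 0] = 1.0
    normalized = R / norms
    return normalized.T @ normalized

def recommend(user_index, R, sim, top_n=2):
    scores = R[user_index] @ sim          # weight items by similarity to liked ones
    scores[R[user_index] > 0] = -np.inf   # don't recommend what they already have
    return np.argsort(-scores)[:top_n]

sim = item_similarity(R)
print(recommend(0, R, sim))  # items user 0 hasn't interacted with yet
```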
[Read more]

Token Revocation and Blacklisting

You log out. Your JWT is still valid. The server has no record it was ever issued. This is the stateless token revocation problem.

Why Revocation Is Hard

JWTs are stateless by design. The server validates a token by checking the signature and expiry. It doesn’t consult a database. This is what makes them fast and scalable. But it means there’s no central list of “valid tokens” to update when a token should no longer be accepted.
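The usual compromise is a small denylist consulted at validation time, holding only revoked-but-not-yet-expired tokens. A sketch keyed by the token’s `jti` claim; in production the dict would typically be a shared Redis set.

```python
import time

_revoked = {}   # jti -> the token's own expiry time

def revoke(jti: str, token_exp: float):
    _revoked[jti] = token_exp

def is_revoked(jti: str) -> bool:
    now = time.time()
    # Entries only need to live until the token would have expired anyway.
    for key, exp in list(_revoked.items()):
        if exp <= now:
            del _revoked[key]
    return jti in _revoked

def validate(claims: dict) -> bool:
    """Signature/expiry stay stateless; revocation adds exactly one lookup."""
    if claims["exp"] <= time.time():
        return False
    return not is_revoked(claims["jti"])

revoke("token-123", token_exp=time.time() + 900)
print(validate({"jti": "token-123", "exp": time.time() + 900}))  # False: revoked
```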
[Read more]

OAuth 2.0 Authorization Flows

OAuth 2.0 is not an authentication protocol. It’s an authorization protocol. That confusion is the root of most OAuth misuse.

What OAuth Actually Does

OAuth lets a user grant a third-party application limited access to their account without sharing their password. The user sees a consent screen listing what the app wants to access. They approve. The app gets a token with exactly those permissions. The user’s password never leaves the authorization server.
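A hedged sketch of the first two steps of the authorization-code flow; the endpoints, client ID, and redirect URI below are placeholders, not any real provider’s values.

```python
import secrets
from urllib.parse import urlencode

AUTHORIZE_URL = "https://auth.example.com/oauth/authorize"   # placeholder
TOKEN_URL = "https://auth.example.com/oauth/token"           # placeholder
CLIENT_ID = "my-app"
REDIRECT_URI = "https://my-app.example.com/callback"

def build_consent_url(scopes):
    """Step 1: send the user to the authorization server with the exact
    permissions (scopes) the app is asking for; this backs the consent screen."""
    state = secrets.token_urlsafe(16)   # CSRF protection, checked on callback
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": " ".join(scopes),
        "state": state,
    }
    return f"{AUTHORIZE_URL}?{urlencode(params)}", state

def token_request(code):
    """Step 2: the parameters the app POSTs to the token endpoint to swap the
    short-lived code for an access token scoped to what was granted."""
    return {"url": TOKEN_URL, "grant_type": "authorization_code",
            "code": code, "redirect_uri": REDIRECT_URI, "client_id": CLIENT_ID}

url, state = build_consent_url(["contacts.read"])
print(url)
```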
[Read more]

JWT and Token-Based Auth

The server doesn’t remember you. Every request carries proof of who you are. That’s the point of a token.

The Structure

A JWT is three base64url-encoded segments joined by dots: header, payload, signature. The header says which algorithm signed it. The payload carries claims: user ID, roles, expiry time. The signature is a cryptographic proof that the header and payload haven’t been tampered with. The server doesn’t need a database lookup to verify a JWT.
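A self-contained HS256 sketch using only the standard library, to make the three segments and the signature check concrete; the secret and claims are illustrative.

```python
import base64, hashlib, hmac, json, time

SECRET = b"demo-secret"   # illustrative shared secret for HS256

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    segments = [b64url(json.dumps(header).encode()),
                b64url(json.dumps(payload).encode())]
    signing_input = ".".join(segments).encode()
    signature = hmac.new(SECRET, signing_input, hashlib.sha256).digest()
    return ".".join(segments + [b64url(signature)])

def verify(token: str) -> dict:
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(SECRET, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig_b64):
        raise ValueError("bad signature")   # tampered header or payload
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    payload = json.loads(base64.urlsafe_b64decode(padded))
    if payload["exp"] <= time.time():
        raise ValueError("expired")
    return payload                          # no database lookup needed

token = sign({"sub": "user-42", "roles": ["admin"], "exp": time.time() + 900})
print(verify(token))
```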
[Read more]

Sequenced Writes

Two events arrive out of order. You don’t know they’re out of order. You process them anyway. The system ends up in a state that never should have existed.

Sequence Numbers as the Foundation

A global sequence number assigned to every write event is the most direct solution to ordering problems. Event 1, event 2, event 3. If event 6 arrives while you’re still waiting on event 4, you know something is missing. You wait, or request a replay, rather than blindly processing forward.
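A sketch of applying writes strictly in sequence order: out-of-order arrivals are buffered, and a gap is something to wait on or replay, never to skip. Class and method names are invented.

```python
import heapq

class SequencedApplier:
    """Apply writes strictly in sequence order; buffer anything early."""

    def __init__(self):
        self.next_seq = 1
        self.pending = []  # min-heap of (seq, event)

    def receive(self, seq, event):
        heapq.heappush(self.pending, (seq, event))
        applied = []
        while self.pending and self.pending[0][0] == self.next_seq:
            _, evt = heapq.heappop(self.pending)
            applied.append(evt)      # safe: nothing earlier can still be missing
            self.next_seq += 1
        return applied

    def missing(self):
        # Everything between next_seq and the lowest buffered sequence is a gap.
        if not self.pending:
            return []
        return list(range(self.next_seq, self.pending[0][0]))

a = SequencedApplier()
print(a.receive(1, "e1"))   # ['e1']
print(a.receive(3, "e3"))   # []  -- still waiting on 2
print(a.missing())          # [2] -- candidate for replay
print(a.receive(2, "e2"))   # ['e2', 'e3']
```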
[Read more]

Market Data Distribution

Every trade generates a tick: a price, a volume, a timestamp. An active stock might generate thousands of ticks per second. Distributing that data to thousands of subscribers simultaneously is its own problem.

What Tick Data Looks Like

A tick is small: instrument ID, price, quantity, timestamp. The volume of ticks is the problem. During market open or a news event, tick rates spike dramatically. Subscribers range from high-frequency algorithms (latency-sensitive, need every tick) to dashboards (showing “current price,” don’t care about ticks they missed).
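One common way to serve both ends of that range is conflation: keep the full stream for latency-sensitive consumers and only the latest value per instrument for everyone else. A toy sketch with invented names:

```python
from collections import namedtuple

Tick = namedtuple("Tick", "instrument price qty ts")

class ConflatingFeed:
    """Fast consumers read the full tick stream; dashboards read a conflated
    'latest value' snapshot and simply never see ticks they missed."""

    def __init__(self):
        self.stream = []   # full ordered history (stand-in for a real queue)
        self.latest = {}   # instrument -> most recent tick only

    def publish(self, tick: Tick):
        self.stream.append(tick)              # every tick, for consumers that need them all
        self.latest[tick.instrument] = tick   # overwrite: conflation for slow readers

    def snapshot(self, instrument):
        return self.latest.get(instrument)

feed = ConflatingFeed()
feed.publish(Tick("AAPL", 191.10, 100, 1))
feed.publish(Tick("AAPL", 191.12, 250, 2))
print(feed.snapshot("AAPL"))   # dashboards only ever need the last one
print(len(feed.stream))        # the full stream is still there
```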
[Read more]

Order Matching Engine

A stock exchange doesn’t just record trades. It runs an algorithm that decides which buyer gets matched with which seller. That algorithm is the matching engine, and its design choices are unusually interesting.

The Limit Order Book

The core data structure is the limit order book (LOB): two sorted collections of orders, bids (buy orders) and asks (sell orders). Bids are sorted by price descending (highest buyer first), asks by price ascending (lowest seller first).
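A minimal price-time-priority sketch: two heaps, best price first, earliest order first within a price level, incoming orders crossing against the opposite side. Order IDs and prices are made up.

```python
import heapq
from itertools import count

class OrderBook:
    """Toy limit order book with price-time priority."""

    def __init__(self):
        self.bids = []        # max-price first -> store (-price, seq, qty, id)
        self.asks = []        # min-price first -> store (price, seq, qty, id)
        self.seq = count()    # arrival order breaks price ties

    def submit(self, side, price, qty, order_id):
        book, opposite, sign = ((self.bids, self.asks, -1) if side == "buy"
                                else (self.asks, self.bids, 1))
        # Cross against the opposite side while prices overlap.
        while qty and opposite:
            best_key, best_seq, best_qty, best_id = opposite[0]
            best_price = abs(best_key)
            crosses = price >= best_price if side == "buy" else price <= best_price
            if not crosses:
                break
            fill = min(qty, best_qty)
            print(f"trade {fill} @ {best_price} ({order_id} x {best_id})")
            qty -= fill
            if fill == best_qty:
                heapq.heappop(opposite)
            else:
                opposite[0] = (best_key, best_seq, best_qty - fill, best_id)
        if qty:               # remainder rests in the book
            heapq.heappush(book, (sign * price, next(self.seq), qty, order_id))

book = OrderBook()
book.submit("sell", 10.00, 5, "A")
book.submit("sell", 10.05, 5, "B")
book.submit("buy", 10.05, 7, "C")   # fills 5 @ 10.00, then 2 @ 10.05
```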
[Read more]

Delivery Receipts and Read Tracking

“Sent” is not “delivered.” “Delivered” is not “opened.” These are three different states and conflating them causes subtle bugs in badge counts and notification UIs.

The Delivery Gap

APNs and FCM give you delivery confirmation at the gateway level, not the device level. You know the gateway accepted your payload. You don’t know if the device received it, displayed it, or was offline when it arrived. For most notifications this is fine.
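One small but useful guard on the server side: model the three states explicitly and only ever move forward, so a late or duplicated receipt can’t regress “opened” back to “delivered.” A sketch with invented names:

```python
from enum import IntEnum

class State(IntEnum):
    SENT = 1        # accepted by the push gateway
    DELIVERED = 2   # the device acknowledged receipt (client-reported)
    OPENED = 3      # the user actually tapped or viewed it

_notifications = {}  # notification_id -> State

def record(notification_id, new_state: State):
    """Receipts arrive late, duplicated, and out of order; only move forward."""
    current = _notifications.get(notification_id, State.SENT)
    _notifications[notification_id] = max(current, new_state)

record("n1", State.SENT)
record("n1", State.OPENED)
record("n1", State.DELIVERED)      # late receipt, ignored by max()
print(_notifications["n1"].name)   # OPENED
```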
[Read more]

Notification Deduplication

Your retry logic fires. The user gets the same notification twice. They think your app is broken. They’re not wrong.

The Problem with Retries

Push delivery is at-least-once by design. Your server sends to APNs/FCM, the network hiccups, you don’t get a response, so you retry. APNs might have delivered the first one. The user now sees two identical alerts. The fix lives at two levels: your server and the gateway.
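A sketch of the server-side half: derive a stable deduplication key from the triggering event and make retries no-ops (the gateway-side half is collapse identifiers such as APNs collapse IDs and FCM collapse keys). Names and the TTL are illustrative; in production the dict would be something like a Redis SET NX with a TTL.

```python
import time

_sent = {}  # dedup_key -> time first sent

def send_once(user_id, event_id, payload, ttl_seconds=3600, send=print):
    """Record the dedup key before sending, so a retry of the same event
    becomes a no-op instead of a second alert."""
    key = f"{user_id}:{event_id}"
    now = time.time()
    first_sent = _sent.get(key)
    if first_sent is not None and now - first_sent < ttl_seconds:
        return False                  # retry of something already sent
    _sent[key] = now
    send(payload)
    return True

print(send_once("u1", "order-917-shipped", "Your order shipped"))
print(send_once("u1", "order-917-shipped", "Your order shipped"))  # suppressed
```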
[Read more]

Push Notification Delivery

You don’t send a push notification directly to a phone. You send it to Apple or Google, and they deliver it for you. That indirection has consequences most backend engineers don’t think about until something breaks.

APNs and FCM

Apple Push Notification Service (APNs) handles iOS. Firebase Cloud Messaging (FCM) handles Android (and can handle iOS too). Your server maintains a persistent HTTP/2 connection to these gateways and submits payloads. The gateway handles the actual delivery to the device, retries if the device is offline, and tells you when a token is no longer valid.
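A hedged sketch of the feedback loop that matters most in practice: treat the gateway’s “this token is gone” response as authoritative and prune it. `gateway_send` and its response strings are stand-ins, not a real APNs/FCM client.

```python
device_tokens = {"user-42": ["token-a", "token-b"]}   # illustrative token store

def gateway_send(token, payload):
    # Stand-in for a real gateway call; pretend token-b belongs to an
    # uninstalled app and the gateway reports it as unregistered.
    return "Unregistered" if token == "token-b" else "Success"

def push_to_user(user_id, payload):
    for token in list(device_tokens.get(user_id, [])):
        status = gateway_send(token, payload)
        if status == "Unregistered":
            # The gateway is the source of truth: stop sending to dead tokens.
            device_tokens[user_id].remove(token)

push_to_user("user-42", {"title": "Hi"})
print(device_tokens["user-42"])   # ['token-a']
```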
[Read more]

Storage Tiering

Most of your data is accessed once and then never again. Storing it on fast, expensive storage forever is just burning money.

Hot, Warm, Cold

The canonical model is three tiers based on access frequency. Hot storage (SSD-backed, high IOPS) handles recent data that’s accessed constantly. Warm storage (standard HDD or S3 Standard-IA) holds data accessed occasionally. Cold storage (archival, like Glacier) holds data that might never be touched again but legally must be retained.
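Tier assignment is usually just a policy over last-access age. A sketch with invented thresholds; the right cut-offs come from your own access patterns and the price of each tier.

```python
from datetime import datetime, timedelta

TIERS = [
    ("hot",  timedelta(days=30)),    # SSD-backed, high IOPS
    ("warm", timedelta(days=180)),   # S3 Standard-IA or HDD
    ("cold", None),                  # Glacier-style archive
]

def tier_for(last_accessed, now=None):
    age = (now or datetime.utcnow()) - last_accessed
    for name, limit in TIERS:
        if limit is None or age <= limit:
            return name
    return TIERS[-1][0]

print(tier_for(datetime.utcnow() - timedelta(days=3)))     # hot
print(tier_for(datetime.utcnow() - timedelta(days=400)))   # cold
```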
[Read more]

Delta Sync

You save a 200 MB file. One word changed. Re-uploading 200 MB to sync that change is absurd. Delta sync is how you avoid it.

The Core Idea

Split the file into blocks. On an update, compare the new version’s blocks against the stored version’s blocks. Transfer only the blocks that changed. Rsync pioneered this. It computes a fast rolling checksum for each block on the remote side and sends those checksums to the client; the client finds which local blocks match and which don’t, and transmits only the mismatches.
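A simplified fixed-offset sketch of the idea: hash each block and upload only the blocks whose hash changed. Real rsync adds the rolling checksum so an insertion that shifts the rest of the file doesn’t invalidate every later block. Block size is a tuning choice.

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB blocks; illustrative

def block_hashes(data: bytes):
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old_hashes, new_data: bytes):
    """Fixed-offset comparison: transfer only blocks whose hash differs."""
    changed = []
    for index, new_hash in enumerate(block_hashes(new_data)):
        if index >= len(old_hashes) or old_hashes[index] != new_hash:
            start = index * BLOCK_SIZE
            changed.append((index, new_data[start:start + BLOCK_SIZE]))
    return changed

old = b"a" * 10_000_000
new = old[:5_000_000] + b"B" + old[5_000_001:]   # one byte changed in the middle
print(len(changed_blocks(block_hashes(old), new)), "block(s) to upload")
```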
[Read more]

Content-Addressable Storage

Two users upload the same 50 MB file. Naive storage keeps two copies. Content-addressable storage keeps one.

What “Content-Addressable” Means

Instead of locating data by where it lives (a path, a filename), you locate it by what it is. Hash the content, use the hash as the key. Same content, same hash, same storage location. SHA-256 a file and store the result as its address. The practical consequence: deduplication becomes automatic.
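A minimal sketch: the SHA-256 of the bytes is the key, so the second identical upload is a no-op.

```python
import hashlib

class BlobStore:
    """Content-addressable store: identical uploads collapse into one object."""

    def __init__(self):
        self.blobs = {}   # hash -> bytes (on disk, a path derived from the hash)

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self.blobs.setdefault(digest, data)   # duplicate content is a no-op
        return digest

    def get(self, digest: str) -> bytes:
        return self.blobs[digest]

store = BlobStore()
a = store.put(b"same 50 MB file")
b = store.put(b"same 50 MB file")
print(a == b, len(store.blobs))   # True 1  -- deduplicated
```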
[Read more]

Offline-First Sync

The field rep drove into a dead zone. The mobile app kept working: they filled out three forms, updated two account records, closed a deal. Forty minutes later, connectivity returned and the sync ran. Two of those records had been updated by a desktop user in the meantime. The mobile changes were silently dropped. No error. No prompt. Just gone.

The Core Problem

The client operates against a local snapshot while offline.
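The minimum viable fix is conflict detection rather than silent last-write-wins: every offline edit carries the revision it was based on. A sketch with invented record and field names:

```python
# Server-side state: each record carries a monotonically increasing revision.
server = {"rec-1": {"rev": 4, "value": "desktop edit"}}

def sync_offline_change(record_id, base_rev, new_value):
    current = server[record_id]
    if current["rev"] != base_rev:
        # Someone else changed the record while this client was offline:
        # surface the conflict instead of silently dropping either side.
        return {"status": "conflict",
                "server_value": current["value"],
                "client_value": new_value}
    server[record_id] = {"rev": base_rev + 1, "value": new_value}
    return {"status": "applied", "rev": base_rev + 1}

# Mobile client edited rev 3 while offline; the desktop user already produced rev 4.
print(sync_offline_change("rec-1", base_rev=3, new_value="mobile edit"))
```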
[Read more]

Revision History and Snapshotting

A user hits Ctrl+Z forty times and expects to land exactly where they were yesterday. That is not just undo. That is a complete audit trail of every edit, stored efficiently, queryable at any point in time. The naive approach: store a full copy of the document after every change. Works for ten users. Collapses at ten thousand.

Deltas, Not Copies

Instead of storing full document state after every edit, store only what changed: the operation (insert 3 chars at position 12, delete 5 chars at position 20).
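A sketch of the delta approach: keep an operation log and reconstruct any revision by replaying a prefix of it. Real systems add periodic snapshots to bound replay cost; that part is omitted here.

```python
# Invented op log: each entry is a delta, never a full copy of the document.
ops = [
    {"op": "insert", "pos": 0, "text": "Hello world"},
    {"op": "insert", "pos": 5, "text": ","},
    {"op": "delete", "pos": 6, "count": 6},
]

def at_revision(ops, n):
    """Replay the first n operations to materialise that revision."""
    doc = ""
    for op in ops[:n]:
        if op["op"] == "insert":
            doc = doc[:op["pos"]] + op["text"] + doc[op["pos"]:]
        else:
            doc = doc[:op["pos"]] + doc[op["pos"] + op["count"]:]
    return doc

print(at_revision(ops, 1))   # 'Hello world'
print(at_revision(ops, 2))   # 'Hello, world'
print(at_revision(ops, 3))   # 'Hello,'
```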
[Read more]

Operational Transformation

Two users edit the same document simultaneously. User A inserts “X” at position 5. User B deletes the character at position 3. Apply both naively and the result is corrupted: if B’s deletion runs first, positions shift and A’s insertion lands in the wrong place.

The Position Problem

Operations encode positions at generation time, not application time. When document state changes between generation and application, positions are stale. Operational Transformation (OT) transforms an incoming op relative to already-applied ops before executing it.
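A minimal transform sketch for exactly that scenario: adjust an incoming operation against one already-applied operation so both sites converge. The operation shapes are invented and cover only simple insert/delete cases.

```python
def transform(op, applied):
    """Shift op's position to account for an op that was applied before it."""
    op = dict(op)
    if applied["type"] == "insert" and applied["pos"] <= op["pos"]:
        op["pos"] += len(applied["text"])          # applied insert pushed us right
    elif applied["type"] == "delete":
        if applied["pos"] + applied["count"] <= op["pos"]:
            op["pos"] -= applied["count"]          # applied delete pulled us left
        elif applied["pos"] < op["pos"]:
            op["pos"] = applied["pos"]             # we fell inside the deleted range
    return op

doc = "abcdef"
op_a = {"type": "insert", "pos": 5, "text": "X"}   # user A, against "abcdef"
op_b = {"type": "delete", "pos": 3, "count": 1}    # user B, against "abcdef"

# This site applies B first, then A transformed against B: insert lands at 4, not 5.
after_b = doc[:3] + doc[4:]                        # "abcef"
a_prime = transform(op_a, op_b)
result = after_b[:a_prime["pos"]] + op_a["text"] + after_b[a_prime["pos"]:]
print(result)   # "abceXf" -- same result the other site gets applying A then B'
```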
[Read more]

Lambda and Kappa Architecture

Real-time results are fast and approximate. Historical results are slow and accurate. The tension between them is where the Lambda and Kappa architectures come from.

Lambda: Two Pipelines

Lambda runs two parallel systems. The batch layer processes all historical data on a schedule (Spark on HDFS, every few hours) and produces ground truth. The speed layer processes the live stream (Kafka Streams or Flink) for low-latency results. The serving layer merges both: “latest batch result plus stream delta since the last batch.”
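The serving-layer merge is the part worth seeing in code; a toy sketch with made-up counts and timestamps:

```python
# Batch view: recomputed every few hours, authoritative up to its cut-off time.
batch_view = {"as_of": 1_000, "clicks": {"page-a": 52_300}}
# Speed layer output: raw (key, event_ts) pairs since roughly the cut-off.
stream_delta = [("page-a", 1_004), ("page-a", 1_007), ("page-b", 1_010)]

def serve(key):
    base = batch_view["clicks"].get(key, 0)
    recent = sum(1 for k, ts in stream_delta
                 if k == key and ts > batch_view["as_of"])
    return base + recent   # latest batch result plus stream delta since the batch

print(serve("page-a"))   # 52302
```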
[Read more]

Watermarks and Late-Arriving Data

There are two clocks in any stream processing system. Event time: when the click actually happened, recorded in the payload. Processing time: when your system received it. On a healthy network they’re close. In reality they’re not. Mobile clients buffer events when offline. Retries add delay. A click at 10:00:05 might reach your processor at 10:00:47. The 10:00 window has long since closed.

The Problem With Never Waiting

If you never close a window, you never produce output.
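A sketch of a heuristic watermark: “everything up to (max event time seen minus the allowed lateness) should have arrived by now.” Windows at or below the watermark can close; anything older is late. The lateness budget is illustrative.

```python
class WatermarkTracker:
    def __init__(self, allowed_lateness):
        self.allowed_lateness = allowed_lateness
        self.max_event_time = float("-inf")

    def observe(self, event_time):
        self.max_event_time = max(self.max_event_time, event_time)

    @property
    def watermark(self):
        # "I believe all events at or before this time have arrived."
        return self.max_event_time - self.allowed_lateness

    def is_late(self, event_time):
        return event_time < self.watermark

w = WatermarkTracker(allowed_lateness=30)   # seconds of slack
for t in (5, 10, 47):                       # event times seen so far
    w.observe(t)
print(w.watermark)    # 17: windows ending at or before 17s may close
print(w.is_late(5))   # True: a replayed event at t=5 now counts as late
```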
[Read more]

Stream Processing Windows

Aggregating over an infinite stream sounds easy until you realize you have no idea when it ends. You need to cut it into chunks. That’s what windows are.

Three Window Types

Tumbling windows are fixed, non-overlapping buckets. “Clicks per minute” is a tumbling window: minute 1, minute 2, minute 3, no overlap. Simple to implement, but events that span the boundary get split across buckets. Sliding windows overlap. “Average clicks in the last 5 minutes, recomputed every minute” means each event can appear in up to 5 windows.
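A sketch of window assignment for those first two types, using integer-second event times:

```python
def tumbling_window(event_time, size):
    """Each event lands in exactly one fixed, non-overlapping bucket."""
    start = (event_time // size) * size
    return (start, start + size)

def sliding_windows(event_time, size, slide):
    """Overlapping windows: one event can belong to up to size/slide of them."""
    windows = []
    start = (event_time // slide) * slide   # latest window starting at or before the event
    while start >= 0 and start + size > event_time:
        windows.append((start, start + size))
        start -= slide
    return sorted(windows)

# "Clicks per minute" vs "clicks in the last 5 minutes, recomputed every minute"
print(tumbling_window(370, size=60))             # (360, 420)
print(sliding_windows(370, size=300, slide=60))  # 5 overlapping windows contain t=370
```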
[Read more]

ZooKeeper Ephemeral Nodes

Redis locks expire after a TTL. If your process crashes, you wait up to 30 seconds for the lock to become available. ZooKeeper takes a different approach: tie the lock to the session, not a timer.

Ephemeral Nodes

ZooKeeper has two kinds of nodes: persistent (survive until explicitly deleted) and ephemeral (automatically deleted when the client session expires). A session is kept alive by a heartbeat. If the client crashes, heartbeats stop, the session expires after a configurable timeout, and the ephemeral node vanishes.
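A sketch of an ephemeral, sequential lock node using the kazoo client library; the host and paths are illustrative, and the watch-and-wait logic for non-holders is omitted.

```python
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")   # illustrative ensemble address
zk.start()                                  # opens the session; heartbeats keep it alive
zk.ensure_path("/locks/report-job")

# Ephemeral + sequential: the node dies with our session, and the sequence
# number gives a fair queue of waiters.
me = zk.create("/locks/report-job/lock-", b"", ephemeral=True, sequence=True)

waiters = sorted(zk.get_children("/locks/report-job"))
have_lock = me.endswith(waiters[0])
print("lock held" if have_lock else f"waiting behind {waiters[0]}")

# No explicit unlock is strictly required: if this process crashes, the session
# expires and ZooKeeper deletes the node for us.
zk.delete(me)   # normal release
zk.stop()
```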
[Read more]

The Redlock Algorithm

A single Redis instance holds your lock. Redis crashes. The lock entry is gone. But your client already received “acquired” before the crash and is happily running. Another client acquires the same lock on the recovered instance. Two lock holders. The single-instance Redis lock has a fundamental flaw.

Quorum Locking

Redlock is Redis creator Antirez’s answer. Instead of one Redis, use N independent instances (typically 5). To acquire the lock:
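The numbered steps are in the full post; as a sketch of the acquire step with redis-py, here is the quorum check in code. The instance addresses are placeholders, and release, retry/backoff, and the clock-drift allowance are omitted.

```python
import time, uuid
import redis

INSTANCES = [redis.Redis(host=f"10.0.0.{i}", port=6379) for i in range(1, 6)]
TTL_MS = 10_000

def acquire(resource: str):
    token = uuid.uuid4().hex              # unique value so only the owner can release
    started = time.monotonic()
    granted = 0
    for client in INSTANCES:
        try:
            # SET NX PX: create the key only if absent, with an expiry.
            if client.set(resource, token, nx=True, px=TTL_MS):
                granted += 1
        except redis.RedisError:
            pass                          # a down instance simply doesn't vote
    elapsed_ms = (time.monotonic() - started) * 1000
    validity_ms = TTL_MS - elapsed_ms
    # Held only if a majority granted it AND meaningful time remains.
    if granted >= len(INSTANCES) // 2 + 1 and validity_ms > 0:
        return token, validity_ms
    return None   # caller must release whatever was acquired and retry later
```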
[Read more]