Reading from cache is easy. Writing is where it gets complicated.

Three strategies, each with a different answer to the question: when does the cache get updated relative to the database?

Write-through updates the cache and the database synchronously on every write. The cache is always consistent with the DB. The downside is that every write pays double the cost: serialize the object, write to cache, write to DB, all in the same request path. Read performance is great because the cache is always warm. Write throughput suffers.
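A minimal write-through sketch. Plain dicts stand in for Redis and MySQL here, and the names are illustrative, not a real client API:

```python
import json

cache: dict[str, str] = {}  # stand-in for Redis
db: dict[str, str] = {}     # stand-in for MySQL

def write_through(key: str, value: dict) -> None:
    payload = json.dumps(value)  # serialize once per write
    cache[key] = payload         # update the cache...
    db[key] = payload            # ...and the DB, synchronously, same request path

def read(key: str) -> dict:
    # Cache is always warm after a write, so reads never touch the DB.
    return json.loads(cache[key])

write_through("user:1:prefs", {"theme": "dark"})
```

Both stores see every write before the request returns, which is exactly why the write path pays twice.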

Write-around skips the cache on writes entirely. Data goes straight to the database. The cache is only populated on subsequent reads (cache miss triggers a fill). Good for bulk or infrequent writes where you don’t want to cache data that won’t be read soon. Stale cache entries expire naturally via TTL.
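Write-around can be sketched the same way. This is an assumed shape, with a TTL tracked as an expiry timestamp alongside each cached value:

```python
import time

TTL_SECONDS = 30.0

cache: dict[str, tuple[str, float]] = {}  # value, expiry timestamp
db: dict[str, str] = {}

def write_around(key: str, value: str) -> None:
    db[key] = value  # write goes straight to the DB; cache untouched
    # Any stale cache entry for this key simply ages out via TTL.

def read(key: str) -> str:
    entry = cache.get(key)
    if entry is not None and entry[1] > time.time():
        return entry[0]                          # fresh cache hit
    value = db[key]                              # miss: read from the DB
    cache[key] = (value, time.time() + TTL_SECONDS)  # fill on read
    return value
```

Note the trade: the first read after a write always pays a DB round trip, which is fine precisely when that read may never come.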

Write-back (also called write-behind) writes to the cache first and flushes to the database asynchronously. Excellent write throughput. The risk: if the cache node dies before the flush, you lose those writes. It's important to pair this with idempotency guarantees on the async flush so retries don't create duplicate records.
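A write-back sketch with an idempotency key on the flush. The queue and the `applied` set are stand-ins for whatever durable mechanism a real system would use (a log, an outbox table); the point is only that a retried flush is a no-op:

```python
import queue
import uuid

cache: dict[str, str] = {}
db: dict[str, str] = {}
applied: set[str] = set()  # idempotency keys whose flush already landed
flush_queue: "queue.Queue[tuple[str, str, str]]" = queue.Queue()

def write_back(key: str, value: str) -> None:
    cache[key] = value  # fast path: cache only, request returns immediately
    # Enqueue the DB write with a fresh idempotency key for the flusher.
    flush_queue.put((str(uuid.uuid4()), key, value))

def flush_once() -> None:
    """One step of the async flusher; a retry re-runs this with the same item."""
    idem_key, key, value = flush_queue.get()
    if idem_key in applied:
        return          # retry after a partial failure: already applied, skip
    db[key] = value
    applied.add(idem_key)
```

The data-loss window is everything sitting in `flush_queue` when the cache node dies, which is why the queue itself needs to be durable in production.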

```mermaid
graph TD
    A[Write Request] --> B{Strategy}
    B -->|Write-Through| C[Update Cache]
    C --> D[Update DB synchronously]
    B -->|Write-Around| E[Skip Cache]
    E --> F[Update DB directly]
    F --> G[Cache filled on next read miss]
    B -->|Write-Back| H[Update Cache only]
    H --> I[Async flush to DB]
    I --> J{Flush success?}
    J -->|No| K[Retry with idempotency key]
    J -->|Yes| L[Done]
    style A fill:#000000,stroke:#00ff00,stroke-width:2px,color:#fff
    style B fill:#000000,stroke:#00ff00,stroke-width:2px,color:#fff
    style C fill:#000000,stroke:#00ff00,stroke-width:2px,color:#fff
    style D fill:#000000,stroke:#00ff00,stroke-width:2px,color:#fff
    style E fill:#000000,stroke:#00ff00,stroke-width:2px,color:#fff
    style F fill:#000000,stroke:#00ff00,stroke-width:2px,color:#fff
    style G fill:#000000,stroke:#00ff00,stroke-width:2px,color:#fff
    style H fill:#000000,stroke:#00ff00,stroke-width:2px,color:#fff
    style I fill:#000000,stroke:#00ff00,stroke-width:2px,color:#fff
    style J fill:#000000,stroke:#00ff00,stroke-width:2px,color:#fff
    style K fill:#000000,stroke:#ff0000,stroke-width:2px,color:#fff
    style L fill:#000000,stroke:#00ff00,stroke-width:2px,color:#fff
```

At Oracle, notification preference objects were stored with write-through. Each preference update triggered a full object re-serialization and wrote to both Redis and MySQL in the same transaction. During a migration that bulk-updated preferences for a large customer segment, write throughput dropped 60%. The cache was doing a lot of work for objects that weren't going to be read again during the migration window anyway. Switching to write-around with a 30-second TTL meant the bulk writes hit only MySQL. Throughput recovered immediately, and users reading their preferences shortly afterward got a warm cache fill on first access.

Write-through is the right default for read-heavy workloads. Write-around is underrated for bulk operations.

## What I'm Learning

Write-back’s async flush is appealing for throughput but I’ve always been nervous about the data loss window. Have you run write-back in production? How did you handle the durability risk?