# Caching Patterns: Cache-Aside, Write-Through, and Friends
Your database is slow. You add a cache. Problem solved?
Not quite. Now you have two copies of the same data. When do you update the cache? When do you update the database? What if they disagree?
These questions led to four patterns. Each makes different trade-offs.
## Cache-Aside (Lazy Loading)
The most common pattern. Application talks to cache and database separately.
Read path:
- Check cache
- If hit, return data
- If miss, read from database
- Store in cache
- Return data
Write path:
- Write to database
- Invalidate cache (delete the key)
Why invalidate instead of update? Simpler. Updating cache on every write is wasteful if that data isn’t read often. Let the next read populate it.
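Both paths can be sketched in a few lines. This is a minimal illustration, with plain dicts standing in for the cache and the database; the class and method names are made up for this post, not any real client API:

```python
class CacheAside:
    """Application-managed caching: the app owns both the cache and DB calls."""

    def __init__(self, cache: dict, db: dict):
        self.cache = cache
        self.db = db

    def read(self, key):
        if key in self.cache:
            return self.cache[key]          # hit: serve from cache
        value = self.db.get(key)            # miss: fall back to the database
        if value is not None:
            self.cache[key] = value         # populate for the next reader
        return value

    def write(self, key, value):
        self.db[key] = value                # database is the source of truth
        self.cache.pop(key, None)           # invalidate; next read repopulates
```

Note that `write` deletes rather than updates the cache entry, for exactly the reason above: the next read decides whether the data is worth caching at all.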
Trade-offs:
- Cache miss on first request (cold start)
- Brief window of inconsistency between invalidate and next read
- Application manages both cache and DB logic
This is what I use 90% of the time. It’s simple and gives you control.
## Read-Through
Similar to cache-aside, but the cache itself fetches from the database on miss.
Read path:
- Application asks cache for data
- Cache checks itself
- If miss, cache reads from database
- Cache stores the data
- Cache returns to application
The application only talks to the cache. The cache handles DB reads.
Trade-offs:
- Cleaner application code
- Cache needs to know how to query your database
- Less flexibility in what gets cached
Works well with cache providers that support it natively. Your code just calls `cache.get(key)` and the cache figures it out.
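The shape of the pattern is easy to show: a sketch where the cache is constructed with a `loader` callable that stands in for the database query. The interface is hypothetical, not any specific provider's API:

```python
class ReadThroughCache:
    """The cache owns the load path; the application only ever calls get()."""

    def __init__(self, loader):
        self._loader = loader   # the cache knows how to fetch from the DB
        self._store = {}

    def get(self, key):
        if key not in self._store:
            # miss: the cache loads and stores the value itself;
            # the application never touches the database directly
            self._store[key] = self._loader(key)
        return self._store[key]
```

The trade-off from the list above is visible in the constructor: the cache must be configured with database knowledge (`loader`), which is what buys you the cleaner call sites.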
## Write-Through
Every write goes to cache first, then cache writes to database. Synchronously.
Write path:
- Application writes to cache
- Cache writes to database synchronously, as part of the same operation
- Return success only when both complete
Trade-offs:
- Cache is always consistent with DB
- Higher write latency (two sequential writes, both completed synchronously)
- Every write updates cache, even if data is rarely read
Good for read-heavy workloads where consistency matters and you can tolerate slower writes.
## Write-Behind (Write-Back)
Write to cache immediately. Database update happens later, asynchronously.
Write path:
- Application writes to cache
- Return success immediately
- Cache queues the write
- Background process flushes to database
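A deterministic sketch of those steps: writes are acknowledged immediately and queued, and an explicit `flush()` stands in for the background process, which in production would run on a thread or timer. Names are illustrative:

```python
from collections import deque

class WriteBehindCache:
    """Ack writes from the cache; drain them to the database later."""

    def __init__(self, db: dict):
        self.db = db
        self.cache = {}
        self._pending = deque()         # queued writes awaiting flush

    def put(self, key, value):
        self.cache[key] = value         # ack immediately; DB is now behind
        self._pending.append((key, value))

    def flush(self):
        # in production a background worker runs this; if the process dies
        # before it does, everything still in the queue is lost
        while self._pending:
            key, value = self._pending.popleft()
            self.db[key] = value
```

The window between `put` returning and `flush` running is exactly the data-loss risk in the trade-offs below.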
Trade-offs:
- Fastest writes (just cache, return immediately)
- Risk of data loss if cache dies before flush
- Complex failure handling
- Database might be stale for a while
I’ve seen this used for analytics and logging where speed matters more than durability. For user data? Too risky for my taste.
## Which One to Use?
| Pattern | Best For |
|---|---|
| Cache-Aside | Most use cases. Simple, flexible. |
| Read-Through | When you want cleaner app code. |
| Write-Through | Read-heavy, consistency matters. |
| Write-Behind | Write-heavy, can tolerate loss. |
At Oracle, we used cache-aside for most services. The control was worth the extra code. When query times dropped from 200ms to 3ms, nobody complained about the pattern being “manual.”
## What I’m Learning
Caching seems simple until you realize it’s a distributed systems problem in disguise. Two copies of data. Two sources of truth. The same consistency trade-offs we’ve seen all month.
The pattern you choose determines your failure modes. Cache-aside fails open (miss goes to DB). Write-behind can lose data. Write-through is slow but safe.
Pick based on what failure you can tolerate, not what seems cleanest.
What caching pattern does your team use? Did you choose it deliberately or inherit it?