CQRS: Separating Reads from Writes
Same model for reads and writes. Works fine until it doesn’t.
Writes need normalized schema. Reads need denormalized, fast queries. One model can’t optimize both.
CQRS splits them.
What is CQRS#
Command Query Responsibility Segregation. Fancy name for: separate write model from read model.
Traditional approach:
class UserService {
public void updateUser(User user) {
userRepository.save(user); // Write
}
public User getUser(Long id) {
return userRepository.findById(id); // Read
}
}
Same User entity, same database schema for both.
CQRS:
class UserCommandService {
public void updateUser(UpdateUserCommand cmd) {
// Map the command onto the entity, then write to normalized schema
User user = User.from(cmd);
userWriteRepository.save(user);
}
}
class UserQueryService {
public UserDTO getUser(Long id) {
// Read from denormalized view
return userReadRepository.findById(id);
}
}
Different models, potentially different databases.
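Concretely, the two sides can diverge in shape before they diverge in storage. A minimal sketch with illustrative names (UpdateUserCommand and UserDTO are not from any framework):

```java
public class CqrsTypes {
    // Write side: a command captures intent, not full entity state.
    public record UpdateUserCommand(long userId, String name, String email) {}

    // Read side: a DTO shaped for display, carrying precomputed aggregates.
    public record UserDTO(long id, String displayName, int orderCount) {}

    public static void main(String[] args) {
        UpdateUserCommand cmd = new UpdateUserCommand(42L, "Ada", "ada@example.com");
        UserDTO view = new UserDTO(42L, "Ada", 3);
        // Same identity, different shapes for different jobs.
        System.out.println(cmd.userId() == view.id());
    }
}
```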
Why Separate#
Write requirements:
- Normalized schema (avoid duplication)
- Strong consistency
- Validation, business rules
- Transactional integrity
Read requirements:
- Denormalized (no JOINs)
- Fast queries
- Eventual consistency acceptable
- Aggregated data, computed values
Single model can’t optimize both. Normalization helps writes, hurts reads. Denormalization helps reads, hurts writes.
[Diagram: Commands (create, update, delete) -> Write Model (normalized schema) -> Write Database (PostgreSQL) -CDC/Events-> Synchronization -> Read Database (Elasticsearch) -> Read Model (denormalized views) -> Queries (search, aggregations)]
CQRS: Commands write to normalized model, queries read from denormalized views.
Implementation Patterns#
1. Same Database, Different Models#
Write and read from same database, but different tables/schemas.
Write side:
-- Normalized
CREATE TABLE users (id BIGINT PRIMARY KEY, name TEXT, email TEXT);
CREATE TABLE orders (id BIGINT PRIMARY KEY, user_id BIGINT REFERENCES users(id), amount NUMERIC);
CREATE TABLE order_items (id BIGINT PRIMARY KEY, order_id BIGINT REFERENCES orders(id), product_id BIGINT, quantity INT);
Read side:
-- Denormalized view for fast queries
CREATE TABLE order_summary (
order_id BIGINT PRIMARY KEY,
user_name TEXT,
user_email TEXT,
total_amount NUMERIC,
item_count INT,
product_names TEXT -- comma-separated
);
Triggers or application code keeps read model in sync.
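The application-code variant can be sketched with in-memory maps standing in for the tables (all names illustrative; a real implementation would perform both writes inside one database transaction):

```java
import java.util.*;

public class SameDbSync {
    // In-memory stand-ins for the normalized tables and the denormalized view.
    static Map<Long, String> users = new HashMap<>();          // id -> name
    static Map<Long, long[]> orders = new HashMap<>();         // orderId -> {userId, amountCents}
    static Map<Long, String> orderSummary = new HashMap<>();   // orderId -> "name: amount"

    // Application-code sync: write the normalized row, then refresh the summary row.
    static void createOrder(long orderId, long userId, long amountCents) {
        orders.put(orderId, new long[]{userId, amountCents});
        orderSummary.put(orderId, users.get(userId) + ": " + amountCents);
    }

    public static void main(String[] args) {
        users.put(1L, "Ada");
        createOrder(100L, 1L, 2500L);
        System.out.println(orderSummary.get(100L));
    }
}
```

Same database, so both writes can share a transaction and the read model never lags; the cost is that every write path must remember to update the view.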
2. Separate Databases#
Write to PostgreSQL (transactional). Read from Elasticsearch (fast search).
CDC or events sync them.
@Service
class OrderCommandService {
@Autowired PostgresOrderRepository writeRepo;
@Autowired EventPublisher events;
@Transactional
public void createOrder(CreateOrderCommand cmd) {
Order order = new Order(cmd);
writeRepo.save(order);
events.publish(new OrderCreatedEvent(order));
}
}
@Service
class OrderQueryService {
@Autowired ElasticsearchOrderRepository readRepo;
public List<OrderDTO> searchOrders(String query) {
return readRepo.search(query);
}
}
@Component
class OrderReadModelUpdater {
@Autowired ElasticsearchOrderRepository readRepo;
@EventListener
public void on(OrderCreatedEvent event) {
// Update read model
OrderDocument doc = toDocument(event);
readRepo.save(doc);
}
}
Write model handles commands. Events propagate to read model. Eventually consistent.
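The consistency lag can be made visible with a toy in-memory setup (the Deque stands in for a message broker; all names are illustrative):

```java
import java.util.*;

public class EventualConsistencyDemo {
    // Write store, read store, and a pending-event queue standing in for the broker.
    static Map<Long, String> writeStore = new HashMap<>();
    static Map<Long, String> readStore = new HashMap<>();
    static Deque<Long> pendingEvents = new ArrayDeque<>();

    static void handleCommand(long id, String data) {
        writeStore.put(id, data);
        pendingEvents.add(id); // event published, not yet consumed
    }

    static void drainEvents() { // the projector catching up
        while (!pendingEvents.isEmpty()) {
            long id = pendingEvents.poll();
            readStore.put(id, writeStore.get(id));
        }
    }

    public static void main(String[] args) {
        handleCommand(1L, "order-1");
        System.out.println(readStore.containsKey(1L)); // false: read model lags
        drainEvents();
        System.out.println(readStore.containsKey(1L)); // true: caught up
    }
}
```

A query that runs between the command and the projector's catch-up sees stale data; that window is the price of the split.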
3. Event Sourcing + CQRS#
Store events as source of truth. Build read models by replaying events.
// Write side: append events
eventStore.append(new OrderCreatedEvent(orderId, userId, items));
// Read side: project events into views
class OrderSummaryProjection {
public void on(OrderCreatedEvent event) {
OrderSummary summary = new OrderSummary();
summary.orderId = event.orderId;
summary.totalAmount = calculateTotal(event.items);
summaryRepo.save(summary);
}
public void on(OrderCancelledEvent event) {
summaryRepo.deleteById(event.orderId);
}
}
Can rebuild read models anytime by replaying events.
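A rebuild is just a fold over the event log. A self-contained sketch with illustrative event types (no event-store library assumed):

```java
import java.util.*;

public class ProjectionRebuild {
    // Minimal event types; names are illustrative, not from a real framework.
    sealed interface Event permits OrderCreated, OrderCancelled {}
    record OrderCreated(long orderId, long totalCents) implements Event {}
    record OrderCancelled(long orderId) implements Event {}

    // Rebuild the read model from scratch by replaying the full event log.
    static Map<Long, Long> rebuild(List<Event> log) {
        Map<Long, Long> summaries = new HashMap<>(); // orderId -> totalCents
        for (Event e : log) {
            if (e instanceof OrderCreated c) summaries.put(c.orderId(), c.totalCents());
            else if (e instanceof OrderCancelled x) summaries.remove(x.orderId());
        }
        return summaries;
    }

    public static void main(String[] args) {
        List<Event> log = List.of(
            new OrderCreated(1, 500),
            new OrderCreated(2, 900),
            new OrderCancelled(1));
        System.out.println(rebuild(log)); // only order 2 survives
    }
}
```

Because the log is the source of truth, a buggy or newly added projection can be fixed and replayed against history instead of being migrated in place.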
The Complexity Trade-off#
CQRS adds:
- Two models to maintain
- Synchronization logic
- Eventual consistency (read model lags behind writes)
- More infrastructure (if using separate databases)
CQRS solves:
- Read/write contention (different databases)
- Complex queries (denormalized views optimized for queries)
- Scalability (scale reads and writes independently)
When to Use CQRS#
Good use cases:
- High read/write ratio (different optimization needs)
- Complex reporting/analytics (denormalized views help)
- Separate scaling needs (reads >> writes or vice versa)
- Event-driven architecture (events already flowing)
Overkill for:
- Simple CRUD apps
- Low traffic systems
- When eventual consistency is unacceptable
- Small teams (maintenance burden)
What I’m Thinking#
Haven’t implemented full CQRS in production. Seen related patterns: read replicas for analytics (separate read database), materialized views for complex queries (separate read model).
The appeal: optimize reads and writes separately. The concern: operational complexity. Two models means two things to maintain, debug, and keep in sync.
For systems with complex read requirements (search, aggregations, analytics) that differ significantly from write requirements (transactional, normalized), CQRS makes sense. For typical CRUD, it’s over-engineering.
The key question: Is the complexity cost worth the performance benefit? Depends on your scale and requirements.
Have you used CQRS? When did it pay off?