Database Per Service: When Microservices Need Their Own Data
Learn when and how to implement the database-per-service pattern in microservices, covering data isolation, saga pattern, and eventual consistency.
The database-per-service pattern means each microservice owns and manages its own database, and no other service can access that data store directly. This is one of the most consequential architectural decisions in microservices, because it determines how your services communicate, how you handle transactions, and how independently your teams can operate. When done right, it gives you genuine autonomy between services. When done wrong, it creates a distributed monolith that is worse than the monolith you started with.
TL;DR
Each microservice should own its data exclusively. This enables independent deployment, polyglot persistence, and team autonomy. The tradeoff is that cross-service queries become harder, distributed transactions require the saga pattern, and you must embrace eventual consistency. The shared database anti-pattern may feel simpler initially, but it creates tight coupling that defeats the purpose of microservices.
Why This Matters
Most teams adopt microservices for organizational reasons: they want independent teams that can deploy independently. But if five services share one PostgreSQL database with direct table access, you have not actually decoupled anything. A schema change in the orders table can break the shipping service, the billing service, and the analytics service simultaneously. Your "microservices" are a distributed monolith with network latency added for free.
The database-per-service pattern is the architectural boundary that makes microservices genuinely independent. Without it, you get all the operational complexity of distributed systems with none of the organizational benefits.
This matters particularly as organizations scale. When you have three developers, a shared database is manageable. When you have thirty developers across six teams, a shared database becomes a coordination bottleneck that slows everyone down. The database-per-service pattern trades technical complexity for organizational independence.
How It Works
Data Ownership Boundaries
Every microservice defines a clear boundary around its data. The Order Service owns the orders, order_items, and order_status_history tables. The Inventory Service owns products, stock_levels, and warehouses. No service reads from or writes to another service's tables.
```typescript
// Order Service - owns its own database schema
// order-service/src/schema.ts
import { pgTable, uuid, decimal, timestamp, varchar } from 'drizzle-orm/pg-core';

export const orders = pgTable('orders', {
  id: uuid('id').primaryKey().defaultRandom(),
  customerId: uuid('customer_id').notNull(),
  totalAmount: decimal('total_amount', { precision: 10, scale: 2 }).notNull(),
  status: varchar('status', { length: 50 }).default('pending'),
  createdAt: timestamp('created_at').defaultNow(),
});

export const orderItems = pgTable('order_items', {
  id: uuid('id').primaryKey().defaultRandom(),
  orderId: uuid('order_id').references(() => orders.id),
  productId: uuid('product_id').notNull(), // References the Inventory Service, but no FK constraint
  quantity: decimal('quantity').notNull(),
  unitPrice: decimal('unit_price', { precision: 10, scale: 2 }).notNull(),
});
```

Notice that `productId` in the Order Service has no foreign key constraint to the Inventory Service's database. It stores the product ID as a reference, but the constraint is enforced at the application level, not the database level. This is a fundamental shift in thinking.
Polyglot Persistence
One of the key advantages of database-per-service is that each service can choose the database technology that best fits its access patterns. This is called polyglot persistence.
```typescript
// User Service - PostgreSQL for relational user data
// Good fit: structured data, complex queries, ACID transactions

// Product Catalog - MongoDB for flexible product schemas
// Good fit: varying product attributes, nested documents, read-heavy

// Session Store - Redis for fast key-value access
// Good fit: ephemeral data, sub-millisecond reads, TTL-based expiry

// Search Service - Elasticsearch for full-text search
// Good fit: text search, faceted filtering, analytics

// Activity Feed - Cassandra for time-series write-heavy workloads
// Good fit: append-heavy writes, time-ordered data, horizontal scaling
```

The important thing is that polyglot persistence is a benefit you can adopt gradually. You do not need to use five different databases on day one. Start with PostgreSQL everywhere, and introduce specialized databases only when a service has access patterns that genuinely benefit from a different technology.
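One way to keep that gradual migration cheap is to put a repository interface between each service and its store, so a later database swap only changes the implementation. A sketch with illustrative names:

```typescript
// Hypothetical repository boundary inside the Product Catalog service.
interface Product {
  id: string;
  name: string;
  attributes: Record<string, unknown>;
}

interface ProductRepository {
  findById(id: string): Promise<Product | undefined>;
  save(product: Product): Promise<void>;
}

// Start simple: back the interface with whatever the team already runs.
// Swapping in a MongoDB-backed implementation later touches only this class.
class InMemoryProductRepository implements ProductRepository {
  private rows = new Map<string, Product>();

  async findById(id: string): Promise<Product | undefined> {
    return this.rows.get(id);
  }

  async save(product: Product): Promise<void> {
    this.rows.set(product.id, product);
  }
}
```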
Eventual Consistency and the Saga Pattern
Without a shared database, you lose ACID transactions that span multiple services. The saga pattern replaces distributed transactions with a sequence of local transactions coordinated through events.
Consider placing an order that must reserve inventory and charge payment:
```typescript
// Orchestration-based saga
class PlaceOrderSaga {
  private steps: SagaStep[] = [
    {
      execute: async (context) => {
        const order = await orderService.createOrder(context.orderData);
        context.orderId = order.id;
        return order;
      },
      compensate: async (context) => {
        await orderService.cancelOrder(context.orderId);
      },
    },
    {
      execute: async (context) => {
        const reservation = await inventoryService.reserveStock({
          items: context.orderData.items,
          orderId: context.orderId,
        });
        context.reservationId = reservation.id;
        return reservation;
      },
      compensate: async (context) => {
        await inventoryService.releaseReservation(context.reservationId);
      },
    },
    {
      execute: async (context) => {
        const payment = await paymentService.charge({
          customerId: context.orderData.customerId,
          amount: context.orderData.totalAmount,
          orderId: context.orderId,
        });
        context.paymentId = payment.id;
        return payment;
      },
      compensate: async (context) => {
        await paymentService.refund(context.paymentId);
      },
    },
  ];

  async execute(orderData: OrderData) {
    const context: SagaContext = { orderData };
    const completedSteps: SagaStep[] = [];
    for (const step of this.steps) {
      try {
        await step.execute(context);
        completedSteps.push(step);
      } catch (error) {
        // Compensate in reverse order
        for (const completed of completedSteps.reverse()) {
          await completed.compensate(context);
        }
        throw new SagaFailedError('Order placement failed', error);
      }
    }
    return context.orderId;
  }
}
```

There are two saga approaches: orchestration (a central coordinator drives the sequence, as shown above) and choreography (each service emits events and the next service reacts). Orchestration is easier to reason about and debug. Choreography scales better but can become difficult to trace when you have many services.
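The choreography variant of the same flow can be sketched with an in-memory event bus. The topic names mirror the `order.created` style used elsewhere in this article, but the bus and handlers are illustrative only:

```typescript
// Choreography sketch: no central coordinator; each service reacts to the
// previous service's event and publishes its own.
type Handler = (payload: any) => void;

class EventBus {
  private handlers = new Map<string, Handler[]>();

  subscribe(topic: string, handler: Handler): void {
    const list = this.handlers.get(topic) ?? [];
    list.push(handler);
    this.handlers.set(topic, list);
  }

  publish(topic: string, payload: any): void {
    for (const handler of this.handlers.get(topic) ?? []) handler(payload);
  }
}

const bus = new EventBus();
const log: string[] = [];

// Inventory Service reacts to order.created and emits inventory.reserved
bus.subscribe('order.created', (e) => {
  log.push(`reserve stock for ${e.orderId}`);
  bus.publish('inventory.reserved', { orderId: e.orderId });
});

// Payment Service reacts to inventory.reserved and emits payment.charged
bus.subscribe('inventory.reserved', (e) => {
  log.push(`charge payment for ${e.orderId}`);
  bus.publish('payment.charged', { orderId: e.orderId });
});

// Order Service confirms the order once payment.charged arrives
bus.subscribe('payment.charged', (e) => log.push(`order ${e.orderId} confirmed`));

bus.publish('order.created', { orderId: 'order-1' });
```

Notice that the order flow is nowhere written down as a sequence; it emerges from the subscriptions, which is exactly why choreography becomes hard to trace as the number of services grows.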
API Composition for Cross-Service Queries
When a client needs data from multiple services, you cannot join across databases. The API composition pattern solves this by having a composite service aggregate the data.
```typescript
// BFF or API Gateway composing data from multiple services
class OrderDetailsComposer {
  async getOrderDetails(orderId: string): Promise<OrderDetailsResponse> {
    // Fetch the order first; the customer and product lookups depend on it
    const order = await this.orderService.getOrder(orderId);

    // Then fetch the remaining data in parallel
    const [customer, shipment, products] = await Promise.all([
      this.customerService.getCustomer(order.customerId),
      this.shippingService.getShipmentByOrderId(orderId),
      this.catalogService.getProductsByIds(order.items.map((item) => item.productId)),
    ]);

    // Enrich order items with product details
    const productsById = new Map(products.map((p) => [p.id, p]));
    const enrichedItems = order.items.map((item) => ({
      ...item,
      productName: productsById.get(item.productId)?.name,
      productImage: productsById.get(item.productId)?.imageUrl,
    }));

    return {
      order: { ...order, items: enrichedItems },
      customer,
      shipment,
    };
  }
}
```

For complex reporting queries that would traditionally use SQL JOINs across many tables, consider implementing CQRS (Command Query Responsibility Segregation). Services publish domain events, and a dedicated read-model service subscribes to those events and builds denormalized views optimized for queries.
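A minimal sketch of such a read model, using an in-memory map where a real service would use its own database. The event shape and projection names are assumptions, not part of the services above:

```typescript
// CQRS read model sketch: a reporting service subscribes to domain events
// and maintains a denormalized per-customer summary, so queries never need
// a cross-service JOIN.
interface OrderCreated {
  orderId: string;
  customerId: string;
  totalAmount: number;
}

class CustomerOrderSummaryProjection {
  // In production this would be a table in the read model's own database
  private summaries = new Map<string, { orderCount: number; totalSpent: number }>();

  // Write side: applied for every order.created event received
  onOrderCreated(event: OrderCreated): void {
    const summary =
      this.summaries.get(event.customerId) ?? { orderCount: 0, totalSpent: 0 };
    summary.orderCount += 1;
    summary.totalSpent += event.totalAmount;
    this.summaries.set(event.customerId, summary);
  }

  // Query side: reads the precomputed view directly
  getSummary(customerId: string) {
    return this.summaries.get(customerId);
  }
}
```

The view is eventually consistent with the Order Service, so a summary may briefly lag behind the latest orders; that lag is the price of fast, JOIN-free reads.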
Practical Implementation
Setting Up Isolated Databases
In practice, "database per service" can mean separate database servers, separate databases on the same server, or even separate schemas within the same database. The key requirement is that only the owning service has credentials to access its data.
```typescript
// Per-service database connection configs
// Each service gets its own PostgreSQL instance and its own credentials
const serviceConfigs = {
  orderDb: {
    host: process.env.ORDER_DB_HOST,
    port: 5432,
    database: 'orders',
    user: 'order_svc', // Service-specific credentials
    password: process.env.ORDER_DB_PASSWORD,
  },
  inventoryDb: {
    host: process.env.INVENTORY_DB_HOST,
    port: 5432,
    database: 'inventory',
    user: 'inventory_svc',
    password: process.env.INVENTORY_DB_PASSWORD,
  },
};
```

Event-Driven Data Synchronization
When services need to react to changes in other services, use domain events published through a message broker:
```typescript
import { eq, sql } from 'drizzle-orm'; // used by the inventory handler below

// Order Service publishes events when order state changes
class OrderService {
  async createOrder(data: CreateOrderInput) {
    // .returning() yields an array; destructure the inserted row
    const [order] = await this.db.insert(orders).values(data).returning();

    // Publish domain event
    await this.eventBus.publish('order.created', {
      orderId: order.id,
      customerId: data.customerId,
      items: data.items,
      totalAmount: data.totalAmount,
      timestamp: new Date().toISOString(),
    });
    return order;
  }
}

// Inventory Service subscribes and reacts
class InventoryEventHandler {
  @Subscribe('order.created')
  async onOrderCreated(event: OrderCreatedEvent) {
    for (const item of event.items) {
      await this.db
        .update(stockLevels)
        .set({
          reserved: sql`reserved + ${item.quantity}`,
          available: sql`available - ${item.quantity}`,
        })
        .where(eq(stockLevels.productId, item.productId));
    }
  }
}
```

Handling Data Duplication
Services often need to store a local copy of data owned by another service. This is not only acceptable but expected. The Shipping Service might store the customer's delivery address locally rather than querying the Customer Service on every request. The key rule is that the owning service is the source of truth, and copies are updated via events.
```typescript
// Shipping Service maintains a local copy of customer addresses
// Updated via events from Customer Service
class CustomerAddressProjection {
  @Subscribe('customer.address.updated')
  async onAddressUpdated(event: AddressUpdatedEvent) {
    await this.db
      .insert(customerAddresses)
      .values({
        customerId: event.customerId,
        street: event.street,
        city: event.city,
        postalCode: event.postalCode,
        country: event.country,
      })
      .onConflictDoUpdate({
        target: customerAddresses.customerId,
        set: {
          street: event.street,
          city: event.city,
          postalCode: event.postalCode,
          country: event.country,
        },
      });
  }
}
```

Common Pitfalls
Starting with too many databases too early. If you have a small team and a handful of services, the operational overhead of managing separate databases can outweigh the benefits. Start with separate schemas in a shared database server and graduate to separate servers when team or data scale demands it.
Ignoring idempotency in event handlers. Events can be delivered more than once. Every event handler must be idempotent, meaning processing the same event twice produces the same result. Use event IDs and deduplication tables.
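A minimal sketch of deduplication by event ID. In production, the processed-ID set would be a database table written in the same transaction as the state change, so the check and the change succeed or fail together:

```typescript
// Idempotent event handler sketch: duplicate deliveries are detected by
// event ID and skipped, so redelivery cannot double-apply a state change.
interface DomainEvent {
  eventId: string;
  payload: unknown;
}

class IdempotentHandler {
  // Stand-in for a deduplication table keyed by event ID
  private processed = new Set<string>();
  public applied = 0;

  handle(event: DomainEvent): void {
    if (this.processed.has(event.eventId)) return; // duplicate delivery: skip
    this.processed.add(event.eventId);
    this.applied += 1; // the real state change would happen here
  }
}
```

This is why events need stable, unique IDs assigned by the publisher: without them, the consumer has no reliable way to recognize a redelivery.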
Synchronous cross-service calls in sagas. If the Payment Service is down during an order saga, the entire order flow blocks. Design sagas to handle partial failures gracefully and consider using asynchronous communication where real-time responses are not required.
Not planning for data consistency debugging. When data is eventually consistent across services, debugging inconsistencies is harder than with a single database. Invest in correlation IDs, distributed tracing, and event sourcing to make the system observable.
Treating shared reference data as service-owned data. Static reference data like country codes, currency lists, or product categories does not need the same level of isolation. A shared configuration service or even a shared read-only database for reference data is a pragmatic choice.
When to Use (and When Not To)
Use database-per-service when:
- Multiple teams need to deploy and evolve their services independently
- Services have fundamentally different data access patterns that benefit from different database technologies
- You need to scale individual services independently based on their specific load characteristics
- Strong data ownership boundaries align with your domain boundaries
Consider alternatives when:
- You have a small team (fewer than ten developers) working on a single product
- Your services share heavily overlapping data models with strong consistency requirements
- You are early in your microservices journey and still discovering service boundaries
- The operational cost of managing multiple databases exceeds your infrastructure team's capacity
The database-per-service pattern is not a prerequisite for microservices. It is an optimization for organizational independence that comes with real technical costs. Adopt it when the organizational benefits justify those costs.
FAQ
What is the database-per-service pattern?
The database-per-service pattern assigns each microservice its own dedicated database, ensuring that no two services share the same data store. This enforces loose coupling and allows each service to choose the database technology best suited to its needs, while requiring that all cross-service data access happens through well-defined APIs.
How do you handle cross-service queries with separate databases?
Cross-service queries are handled using the API composition pattern, where a composite service aggregates data from multiple services via their APIs. For complex reporting needs, implement CQRS with a dedicated read model that subscribes to domain events and builds denormalized views optimized for queries.
What is the saga pattern in microservices?
The saga pattern manages distributed transactions by breaking them into a sequence of local transactions, each within a single service. If any step fails, compensating transactions are executed in reverse order to undo the preceding steps. This replaces traditional two-phase commit protocols that do not scale well in distributed systems.
Can you use the same database server for multiple services?
Yes. Database-per-service does not require separate physical servers. You can use separate schemas or separate logical databases on the same server, as long as each service has exclusive access to its own data and cannot read or write another service's tables directly. The logical separation is what matters, not the physical separation.