Scaling a Fintech Dashboard to 10k Active Users
How we leveraged Redis, WebSockets, and heavily memoized React components to deliver real-time financial data.
TL;DR
Switching from long-polling to a Redis pub/sub model with WebSocket distribution cut server CPU dramatically, while aggressive React memoization handled hundreds of real-time price updates per second without degrading UI performance. This case study covers the full journey from a dashboard that buckled under 500 concurrent users to one that comfortably handled 10,000+.
The Challenge
A fintech startup had built a portfolio tracking dashboard that displayed real-time cryptocurrency and equity prices, portfolio valuations, P&L calculations, and trade execution interfaces. The initial architecture worked fine during beta with a few hundred users. When they launched publicly and user count climbed past 500 concurrent connections, everything broke.
The symptoms were severe:
- Server CPU spiked to 100% during market hours. The API servers were overwhelmed by polling requests from every connected client.
- Price updates lagged by 5-15 seconds. In financial applications, stale prices are not just a UX problem; they are a liability. Users were seeing outdated prices and making trade decisions based on incorrect data.
- The React frontend stuttered. When prices did update, the dashboard would freeze momentarily as hundreds of components re-rendered simultaneously.
- Database connections were exhausted. Every polling request hit the database to fetch the latest prices, and the connection pool was permanently saturated.
The core technical challenge was threefold: get real-time data from the market data providers to the server efficiently, distribute that data to thousands of connected clients simultaneously, and render hundreds of rapid-fire updates on the frontend without dropping frames.
The Architecture
The Polling Problem
The original architecture was straightforward and fatally flawed:
- The server polled market data APIs every 2 seconds and stored prices in PostgreSQL.
- Each client polled the server's REST API every 3 seconds to fetch the latest prices for their portfolio.
- The React frontend replaced the entire portfolio state on each poll, triggering a full re-render.
The math was brutal. With 1,000 concurrent users, each polling every 3 seconds, the server handled 333 requests per second just for price data. Each request queried PostgreSQL for the user's portfolio holdings, joined against the latest prices, and computed P&L. Under load, these queries took 50-200ms each; by Little's law, that meant up to roughly 67 queries in flight at any moment (333/s × 0.2s), more than enough to saturate a typical connection pool.
Solution: Redis Pub/Sub + WebSockets
The architectural redesign separated the problem into three layers: data ingestion, distribution, and rendering.
Layer 1: Market Data Ingestion
Instead of storing prices in PostgreSQL and querying them on every client request, I set up a dedicated ingestion service that received market data from the provider's WebSocket feed and published price updates to Redis pub/sub channels:
```typescript
import Redis from 'ioredis';
import WebSocket from 'ws';

const redis = new Redis(process.env.REDIS_URL);

// Connect to market data provider's WebSocket feed
const marketFeed = new WebSocket(process.env.MARKET_DATA_WS_URL);

marketFeed.on('message', (data) => {
  const priceUpdate = JSON.parse(data.toString());

  // Publish to a channel per asset
  redis.publish(
    `prices:${priceUpdate.symbol}`,
    JSON.stringify({
      symbol: priceUpdate.symbol,
      price: priceUpdate.price,
      timestamp: priceUpdate.timestamp,
      change24h: priceUpdate.change24h,
    })
  );

  // Also update the latest price in a Redis hash for new connections
  redis.hset('latest_prices', priceUpdate.symbol, JSON.stringify(priceUpdate));
});
```

The ingestion service processed the raw market feed once, regardless of how many clients were connected. This was the key architectural insight: decouple data ingestion from data distribution.
Layer 2: WebSocket Distribution
The WebSocket server subscribed to Redis pub/sub channels based on what assets each connected client cared about, then forwarded relevant updates:
```typescript
import WebSocket, { WebSocketServer } from 'ws';
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL);
const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', async (ws, req) => {
  const userId = authenticateConnection(req);
  const userHoldings = await getUserHoldings(userId);
  const symbols = userHoldings.map((h) => h.symbol);

  // Create a dedicated Redis subscriber for this connection
  const subscriber = new Redis(process.env.REDIS_URL);

  // Subscribe to channels for the user's portfolio assets
  for (const symbol of symbols) {
    subscriber.subscribe(`prices:${symbol}`);
  }

  // Send initial prices from the Redis hash
  const latestPrices = await redis.hmget('latest_prices', ...symbols);
  ws.send(
    JSON.stringify({
      type: 'initial',
      prices: latestPrices.filter(Boolean).map((p) => JSON.parse(p!)),
    })
  );

  // Forward relevant price updates
  subscriber.on('message', (channel, message) => {
    if (ws.readyState === WebSocket.OPEN) {
      ws.send(JSON.stringify({ type: 'price_update', data: JSON.parse(message) }));
    }
  });

  ws.on('close', () => {
    subscriber.disconnect();
  });
});
```

A critical optimization was avoiding one Redis subscriber per connection at scale. With 10,000 clients, that would mean 10,000 Redis subscriber connections. Instead, I implemented a shared subscription model where a single Redis subscriber handled all clients, and the server maintained an in-memory routing table to determine which clients cared about which symbols:
```typescript
import WebSocket from 'ws';
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL);
const subscriber = new Redis(process.env.REDIS_URL);

// Map: symbol -> Set of WebSocket connections
const subscriptions = new Map<string, Set<WebSocket>>();

subscriber.on('message', (channel, message) => {
  const symbol = channel.replace('prices:', '');
  const clients = subscriptions.get(symbol);
  if (clients) {
    const payload = JSON.stringify({ type: 'price_update', data: JSON.parse(message) });
    for (const ws of clients) {
      if (ws.readyState === WebSocket.OPEN) {
        ws.send(payload);
      }
    }
  }
});

function subscribeClient(ws: WebSocket, symbols: string[]) {
  for (const symbol of symbols) {
    if (!subscriptions.has(symbol)) {
      subscriptions.set(symbol, new Set());
      subscriber.subscribe(`prices:${symbol}`);
    }
    subscriptions.get(symbol)!.add(ws);
  }
}

function unsubscribeClient(ws: WebSocket, symbols: string[]) {
  for (const symbol of symbols) {
    const clients = subscriptions.get(symbol);
    if (clients) {
      clients.delete(ws);
      if (clients.size === 0) {
        subscriptions.delete(symbol);
        subscriber.unsubscribe(`prices:${symbol}`);
      }
    }
  }
}
```

This reduced Redis subscriber connections from N (number of clients) to 1, while maintaining per-client relevance filtering.
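The routing-table semantics can be sanity-checked without Redis or real sockets by stubbing both sides. The `FakeClient` interface and in-memory `channels` set below are illustrative stand-ins, not part of the production code:

```typescript
// Illustrative stand-ins: a minimal client and an in-memory channel registry,
// so the routing-table logic can be exercised without Redis or sockets.
const OPEN = 1; // mirrors WebSocket.OPEN

interface FakeClient {
  readyState: number;
  received: string[];
  send(data: string): void;
}

function makeClient(): FakeClient {
  const client: FakeClient = {
    readyState: OPEN,
    received: [],
    send(data: string) {
      client.received.push(data);
    },
  };
  return client;
}

const channels = new Set<string>(); // stands in for subscriber.subscribe/unsubscribe
const subscriptions = new Map<string, Set<FakeClient>>();

function subscribeClient(ws: FakeClient, symbols: string[]) {
  for (const symbol of symbols) {
    if (!subscriptions.has(symbol)) {
      subscriptions.set(symbol, new Set());
      channels.add(`prices:${symbol}`);
    }
    subscriptions.get(symbol)!.add(ws);
  }
}

function unsubscribeClient(ws: FakeClient, symbols: string[]) {
  for (const symbol of symbols) {
    const clients = subscriptions.get(symbol);
    if (!clients) continue;
    clients.delete(ws);
    if (clients.size === 0) {
      subscriptions.delete(symbol);
      channels.delete(`prices:${symbol}`);
    }
  }
}

function fanOut(symbol: string, payload: string): number {
  let delivered = 0;
  for (const ws of subscriptions.get(symbol) ?? []) {
    if (ws.readyState === OPEN) {
      ws.send(payload);
      delivered++;
    }
  }
  return delivered;
}

// Two clients share the BTC channel; only one follows ETH.
const a = makeClient();
const b = makeClient();
subscribeClient(a, ['BTC', 'ETH']);
subscribeClient(b, ['BTC']);

fanOut('BTC', '{"symbol":"BTC"}'); // reaches both clients
fanOut('ETH', '{"symbol":"ETH"}'); // reaches only client a

unsubscribeClient(a, ['BTC', 'ETH']); // empty ETH channel is cleaned up
```

The key invariant is the reference counting: a Redis channel stays subscribed exactly as long as at least one connected client cares about that symbol.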
Layer 3: Frontend Rendering Optimization
With the backend streaming hundreds of price updates per second to the client, the React frontend became the next bottleneck. A naive implementation would re-render the entire portfolio table on every price tick.
React Performance: Surgical Re-renders
The portfolio dashboard displayed a table of 20-50 holdings, each showing current price, P&L, and allocation percentage. With prices updating multiple times per second per asset, React's default reconciliation would re-render every row on every update.
Strategy 1: Normalize state and memoize aggressively.
I structured the state as a normalized map keyed by symbol, rather than an array of holdings:
```typescript
interface PriceUpdate {
  symbol: string;
  price: number;
  change24h: number;
  timestamp: number;
}

type PriceAction =
  | { type: 'PRICE_UPDATE'; payload: PriceUpdate }
  | { type: 'BATCH_UPDATE'; payload: PriceUpdate[] };

interface PriceState {
  [symbol: string]: {
    price: number;
    change24h: number;
    timestamp: number;
  };
}

function priceReducer(state: PriceState, action: PriceAction): PriceState {
  switch (action.type) {
    case 'PRICE_UPDATE':
      return {
        ...state,
        [action.payload.symbol]: {
          price: action.payload.price,
          change24h: action.payload.change24h,
          timestamp: action.payload.timestamp,
        },
      };
    case 'BATCH_UPDATE': {
      const updates: PriceState = {};
      for (const update of action.payload) {
        updates[update.symbol] = {
          price: update.price,
          change24h: update.change24h,
          timestamp: update.timestamp,
        };
      }
      return { ...state, ...updates };
    }
    default:
      return state;
  }
}
```

Strategy 2: Memoized row components with precise dependency tracking.
Each row component received only the data it needed and was wrapped in React.memo, so React's default shallow prop comparison skips re-rendering any row whose values have not changed:
```tsx
import React, { useMemo } from 'react';

interface HoldingRowProps {
  symbol: string;
  quantity: number;
  costBasis: number;
  currentPrice: number;
  change24h: number;
}

const HoldingRow = React.memo(function HoldingRow({
  symbol,
  quantity,
  costBasis,
  currentPrice,
  change24h,
}: HoldingRowProps) {
  const marketValue = useMemo(() => quantity * currentPrice, [quantity, currentPrice]);
  const pnl = useMemo(() => marketValue - costBasis, [marketValue, costBasis]);
  const pnlPercent = useMemo(
    () => (costBasis > 0 ? ((marketValue - costBasis) / costBasis) * 100 : 0),
    [marketValue, costBasis]
  );

  return (
    <tr>
      <td className="font-mono">{symbol}</td>
      <td>{quantity.toFixed(4)}</td>
      <td>${currentPrice.toLocaleString()}</td>
      <td className={change24h >= 0 ? 'text-green-500' : 'text-red-500'}>
        {change24h >= 0 ? '+' : ''}{change24h.toFixed(2)}%
      </td>
      <td>${marketValue.toLocaleString()}</td>
      <td className={pnl >= 0 ? 'text-green-500' : 'text-red-500'}>
        ${pnl.toLocaleString()} ({pnlPercent.toFixed(2)}%)
      </td>
    </tr>
  );
});
```

When BTC's price updates, only the BTC row re-renders. The ETH, SOL, and every other row remain untouched because their props have not changed.
Strategy 3: Batching updates with requestAnimationFrame.
Even with memoization, dispatching 50 individual price updates in rapid succession causes 50 separate React render cycles. I batched incoming WebSocket messages and dispatched them as a single state update aligned with the browser's animation frame:
```typescript
import { useEffect, useReducer, useRef } from 'react';

function useWebSocketPrices(url: string) {
  const [prices, dispatch] = useReducer(priceReducer, {});
  const batchRef = useRef<PriceUpdate[]>([]);
  const rafRef = useRef<number | null>(null);

  useEffect(() => {
    const ws = new WebSocket(url);

    ws.onmessage = (event) => {
      const data = JSON.parse(event.data);
      if (data.type === 'price_update') {
        batchRef.current.push(data.data);
        if (rafRef.current === null) {
          rafRef.current = requestAnimationFrame(() => {
            dispatch({ type: 'BATCH_UPDATE', payload: batchRef.current });
            batchRef.current = [];
            rafRef.current = null;
          });
        }
      }
    };

    return () => {
      ws.close();
      if (rafRef.current !== null) cancelAnimationFrame(rafRef.current);
    };
  }, [url]);

  return prices;
}
```

This collapses all price updates that arrive within a single animation frame (~16ms) into one dispatch and one render cycle. If 30 prices update within 16ms, that is 1 render instead of 30.
Key Decisions & Trade-offs
Redis pub/sub over Kafka. Kafka provides persistence, replay, and stronger delivery guarantees. For real-time price feeds, these properties are unnecessary. Prices become stale immediately, so replaying old messages is counterproductive. Redis pub/sub is simpler to operate and has lower latency. If the system needed to support trade execution with guaranteed delivery, Kafka would have been the right choice.
Single shared Redis subscriber over per-client subscribers. The shared model adds complexity (maintaining the routing map, handling cleanup on disconnect) but reduces Redis connections from O(n) to O(1). This was essential for scaling past a few thousand clients.
requestAnimationFrame batching over React 18 automatic batching. React 18 automatically batches state updates within event handlers and microtasks, but WebSocket messages arrive outside React's batching scope. The RAF-based batching gave me explicit control over the update cadence and ensured renders aligned with the display refresh rate.
Normalized state over array state. A normalized object keyed by symbol allows O(1) lookups and updates. With an array, each price update would require finding the matching element (O(n)), and React's reconciliation would diff the entire array. The normalized approach makes state updates and React rendering both more efficient.
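To make the trade-off concrete, here is a minimal sketch (the types are illustrative, not from the production code) contrasting the two update paths. Note that the keyed write also preserves the object identity of untouched entries, which is exactly what React.memo's shallow comparison relies on:

```typescript
interface Tick {
  price: number;
  timestamp: number;
}

// Array-shaped state: each update scans for the matching element — O(n).
function tickArray(
  holdings: { symbol: string; tick: Tick }[],
  symbol: string,
  tick: Tick
) {
  return holdings.map((h) => (h.symbol === symbol ? { ...h, tick } : h));
}

// Normalized state: one keyed write — O(1) lookup, and every untouched
// entry keeps its object identity, so memoized rows see unchanged props.
type Prices = { [symbol: string]: Tick };

function tickMap(prices: Prices, symbol: string, tick: Tick): Prices {
  return { ...prices, [symbol]: tick };
}

const before: Prices = {
  BTC: { price: 60000, timestamp: 1 },
  ETH: { price: 3000, timestamp: 1 },
};
const after = tickMap(before, 'BTC', { price: 61000, timestamp: 2 });
// after.ETH is the very same object as before.ETH; only the BTC entry is new.
```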
Not using a state management library. I evaluated Zustand and Jotai but decided that useReducer + React.memo was sufficient for this use case. The price state had a simple shape and predictable update patterns. Adding a library would have introduced indirection without meaningful benefit.
Results & Outcomes
The architecture redesign transformed the platform's scalability profile:
- Server CPU dropped from 100% saturation to comfortable headroom under full load. The elimination of per-client polling and database queries for price lookups removed the primary bottleneck.
- Price update latency went from 5-15 seconds to under 100ms. Users saw price changes within a fraction of a second of the market data feed updating.
- The frontend maintained 60fps even during volatile market periods with hundreds of price updates per second. The combination of memoization, normalized state, and RAF batching kept render times well under the 16ms frame budget.
- Database load dropped precipitously. PostgreSQL was freed from serving price queries and could focus on its actual responsibilities: storing user portfolios, trade history, and account data.
The system comfortably handled 10,000 concurrent WebSocket connections on a single server instance. The shared Redis subscriber model meant that scaling to additional server instances would be straightforward: each server subscribes to the same Redis channels and manages its own local client routing table.
What I'd Do Differently
Implement connection health monitoring from the start. WebSocket connections silently drop more often than you would expect. I would add heartbeat/ping-pong monitoring from day one and implement automatic reconnection with exponential backoff on the client side. We retrofitted this after users reported frozen dashboards that required a page refresh.
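The client-side half of that retrofit reduces to a capped exponential backoff schedule. A minimal sketch (the base delay, cap, and `connect`/`attempt` wiring are illustrative, not the production values):

```typescript
// Capped exponential backoff: 500ms, 1s, 2s, 4s, ... up to a 30s ceiling.
function reconnectDelay(attempt: number, baseMs = 500, maxMs = 30_000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Sketch of how a client would use it:
//   let attempt = 0;
//   function connect() {
//     const ws = new WebSocket(url);
//     ws.onopen = () => { attempt = 0; };                      // healthy again
//     ws.onclose = () => setTimeout(connect, reconnectDelay(attempt++));
//   }
// Pair this with server-side ping/pong heartbeats so half-open connections
// are detected and closed instead of silently holding a dead socket.
```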
Use a binary protocol instead of JSON for price updates. JSON serialization and parsing adds overhead when processing hundreds of messages per second. A binary format like MessagePack or Protocol Buffers would reduce message size and parsing CPU. For the volumes we handled, JSON was fine, but at higher scale it would become a measurable cost.
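To illustrate the size difference, here is a hand-rolled fixed-layout codec for a price update. The layout and field widths are hypothetical; a real system would more likely reach for MessagePack or Protocol Buffers than maintain this by hand:

```typescript
interface PriceTick {
  symbol: string; // up to 8 ASCII chars in this illustrative layout
  price: number;
  change24h: number;
  timestamp: number;
}

// Fixed layout: 8-byte NUL-padded symbol + three float64s = 32 bytes total,
// versus roughly 100+ bytes for the equivalent JSON payload.
function encodeTick(t: PriceTick): ArrayBuffer {
  const buf = new ArrayBuffer(32);
  const view = new DataView(buf);
  for (let i = 0; i < 8; i++) {
    view.setUint8(i, i < t.symbol.length ? t.symbol.charCodeAt(i) : 0);
  }
  view.setFloat64(8, t.price);
  view.setFloat64(16, t.change24h);
  view.setFloat64(24, t.timestamp);
  return buf;
}

function decodeTick(buf: ArrayBuffer): PriceTick {
  const view = new DataView(buf);
  let symbol = '';
  for (let i = 0; i < 8; i++) {
    const c = view.getUint8(i);
    if (c === 0) break;
    symbol += String.fromCharCode(c);
  }
  return {
    symbol,
    price: view.getFloat64(8),
    change24h: view.getFloat64(16),
    timestamp: view.getFloat64(24),
  };
}
```

Beyond the smaller payload, decoding is a handful of fixed-offset reads instead of a `JSON.parse` allocation on every message of the hot path.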
Add client-side price interpolation. When a WebSocket reconnects after a brief disconnection, the price jumps abruptly from the old value to the current value. Implementing smooth animation or interpolation between price values would make the transition less jarring for users.
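A sketch of the interpolation idea: ease the displayed price toward the live price over a short window instead of snapping. The 300ms window and linear easing below are illustrative choices:

```typescript
// Linear interpolation between the last displayed price and the live price.
// t is normalized elapsed time, clamped to [0, 1] over the animation window.
function lerpPrice(from: number, to: number, t: number): number {
  const clamped = Math.min(Math.max(t, 0), 1);
  return from + (to - from) * clamped;
}

// In the UI this would run inside a requestAnimationFrame loop, e.g.
//   displayed = lerpPrice(lastShown, livePrice, elapsedMs / 300);
// for a 300ms ease after a reconnect delivers a fresh price.
```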
Load test earlier and more aggressively. I should have set up load testing with thousands of simulated WebSocket connections from the beginning. The scaling problems were predictable in retrospect and would have been caught before real users experienced them. Tools like k6 with WebSocket support make this straightforward to automate.
FAQ
Why is long-polling bad for real-time financial dashboards?
Long-polling creates excessive server load because each client repeatedly opens new HTTP connections to check for updates, which quickly overwhelms servers when thousands of users need simultaneous real-time data. Each polling request carries the full overhead of HTTP: TCP connection setup, header parsing, authentication, database queries, and response serialization. Multiply this by thousands of clients polling every few seconds, and the server spends most of its resources handling connection overhead rather than delivering data. WebSockets maintain a persistent connection, eliminating the per-request overhead entirely. A single WebSocket connection can efficiently deliver thousands of updates over its lifetime with minimal overhead per message.
How does Redis pub/sub improve real-time data delivery?
Redis pub/sub allows the server to broadcast price updates once to all subscribed WebSocket connections simultaneously, eliminating redundant database queries and reducing CPU usage dramatically. Without pub/sub, each client request triggers its own database query to fetch the latest prices. With 10,000 clients, that is 10,000 identical queries returning the same data. Redis pub/sub inverts this model: the market data ingestion service publishes each price update exactly once, and the WebSocket server receives it once and fans it out to all relevant clients from memory. The database is completely removed from the real-time data path. Redis's pub/sub is also exceptionally fast, with message delivery typically under 1ms, making it well-suited for latency-sensitive financial data.
How do you optimize React for hundreds of updates per second?
Use React.memo to prevent unnecessary re-renders, useMemo for expensive calculations, and aggressive reconciliation management to ensure only the specific components with changed data re-render. The key is structural: normalize your state by key (not arrays) so updates are O(1) lookups, wrap each data-displaying component in React.memo so only components whose specific data changed will re-render, and batch incoming updates using requestAnimationFrame to coalesce multiple rapid-fire state changes into a single render cycle. Without batching, 50 price updates arriving within 16ms would cause 50 separate renders. With RAF batching, they collapse into one render that updates all 50 values simultaneously, staying well within the browser's frame budget.