WebSockets vs SSE vs Polling: Choosing Real-Time Communication

June 28, 2025 · 7 min read

A practical comparison of WebSockets, Server-Sent Events, and polling for real-time web apps, with code examples and scaling considerations.

Tags: WebSockets, SSE, Real-Time, Backend

WebSockets give you full-duplex, bidirectional communication. Server-Sent Events (SSE) give you efficient server-to-client push over plain HTTP. Polling gives you simplicity at the cost of latency. The right choice depends on your data flow direction, update frequency, infrastructure constraints, and how much complexity you are willing to manage. Most applications do not need WebSockets, and many would be better served by SSE or even simple polling.

TL;DR

WebSockets are for bidirectional, low-latency communication like chat or multiplayer games. SSE is for server-to-client streaming like live feeds, notifications, and dashboards. Polling is for infrequent updates where simplicity matters most. Choose the simplest option that meets your requirements. SSE is dramatically underused relative to its capabilities.

Why This Matters

Real-time features are expected in modern applications. Users want live notifications, collaborative editing, real-time dashboards, and instant messaging. But "real-time" does not always mean WebSockets. Choosing the wrong transport mechanism leads to unnecessary complexity, scaling challenges, and infrastructure costs that a simpler approach would have avoided.

The decision has downstream implications for your entire stack. WebSockets require sticky sessions, connection-state management, and WebSocket-aware load balancers. SSE works with standard HTTP infrastructure. Polling works with everything. Understanding these tradeoffs before you build is far cheaper than discovering them in production.

How It Works

Short Polling

Short polling is the simplest approach: the client makes periodic HTTP requests to check for new data.

```typescript
// Client-side short polling
class ShortPoller {
  private intervalId: ReturnType<typeof setInterval> | null = null;

  start(url: string, intervalMs: number, onData: (data: any) => void) {
    this.intervalId = setInterval(async () => {
      try {
        const response = await fetch(url);
        const data = await response.json();
        onData(data);
      } catch (error) {
        console.error('Polling error:', error);
      }
    }, intervalMs);
  }

  stop() {
    if (this.intervalId) {
      clearInterval(this.intervalId);
      this.intervalId = null;
    }
  }
}

// Usage
const poller = new ShortPoller();
poller.start('/api/notifications', 5000, (data) => {
  updateNotificationBadge(data.unreadCount);
});
```

```typescript
// Server-side endpoint for polling (Next.js Route Handler)
// Drizzle-style query; db, schema, and getUserId imports omitted
export async function GET(request: Request) {
  const userId = await getUserId(request);
  const notifications = await db
    .select()
    .from(notificationsTable)
    .where(eq(notificationsTable.userId, userId))
    .orderBy(desc(notificationsTable.createdAt))
    .limit(20);

  return Response.json({
    notifications,
    unreadCount: notifications.filter(n => !n.read).length,
  });
}
```

The problem with short polling is wasted requests. If you poll every 5 seconds and updates happen once per minute, eleven out of twelve requests return unchanged data. This wastes bandwidth, server resources, and battery on mobile devices.

Long Polling

Long polling improves on short polling by holding the connection open until new data is available or a timeout occurs.

```typescript
// Server-side long polling
export async function GET(request: Request) {
  const userId = await getUserId(request);
  const lastEventId = request.headers.get('Last-Event-ID') || '0';
  const timeoutMs = 30000;

  // Wait for new data or timeout
  const data = await waitForNewEvents(userId, lastEventId, timeoutMs);

  if (data) {
    return Response.json({ events: data, hasMore: false });
  }

  // Timeout - return empty response, client will reconnect
  return Response.json({ events: [], hasMore: false });
}

async function waitForNewEvents(
  userId: string,
  afterId: string,
  timeoutMs: number
): Promise<Event[] | null> {
  return new Promise((resolve) => {
    // Named handler so it can be removed on both paths (Node's
    // EventEmitter.on returns the emitter, not an unsubscribe function)
    const handler = (events: Event[]) => {
      clearTimeout(timeout);
      eventEmitter.off(`user:${userId}`, handler);
      resolve(events);
    };

    const timeout = setTimeout(() => {
      eventEmitter.off(`user:${userId}`, handler);
      resolve(null);
    }, timeoutMs);

    eventEmitter.on(`user:${userId}`, handler);
  });
}
```

Long polling reduces unnecessary requests, but each update cycle still pays the overhead of a full HTTP request, plus a fresh TCP and TLS handshake whenever the connection is not reused via keep-alive. The server must also hold many open connections waiting for data.
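The client half of long polling (not shown above) is a loop that re-requests as soon as each response arrives, backing off only on errors. A sketch, with the endpoint URL and `{ events, lastId }` response shape assumed for illustration:

```typescript
// Exponential backoff for failed requests: 1s, 2s, 4s, ... capped at 30s
function nextDelayMs(attempt: number): number {
  return Math.min(1000 * 2 ** attempt, 30_000);
}

// Client-side long polling loop. On success (or a clean server timeout)
// it reconnects immediately; on a network error it backs off.
async function longPoll(
  url: string,
  onEvents: (events: unknown[]) => void,
  signal: AbortSignal
) {
  let lastEventId = '0';
  let attempt = 0;

  while (!signal.aborted) {
    try {
      const response = await fetch(url, {
        headers: { 'Last-Event-ID': lastEventId },
        signal,
      });
      const { events, lastId } = await response.json();
      if (events.length > 0) {
        lastEventId = lastId;
        onEvents(events);
      }
      attempt = 0; // reset backoff after any successful round trip
    } catch {
      if (signal.aborted) break;
      await new Promise(r => setTimeout(r, nextDelayMs(attempt)));
      attempt++;
    }
  }
}
```

Tracking `lastEventId` on the client mirrors the server's `Last-Event-ID` handling above, so no events are lost between cycles.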

Server-Sent Events (SSE)

SSE provides a persistent, unidirectional connection from server to client using standard HTTP. The browser's EventSource API handles reconnection automatically.

```typescript
// Server-side SSE with Next.js Route Handler
export async function GET(request: Request) {
  const userId = await getUserId(request);

  const stream = new ReadableStream({
    start(controller) {
      const encoder = new TextEncoder();

      // Send initial connection event
      controller.enqueue(
        encoder.encode(`event: connected\ndata: {"status":"ok"}\n\n`)
      );

      // Set up event listener for this user
      const handler = (event: AppEvent) => {
        const data = JSON.stringify(event);
        controller.enqueue(
          encoder.encode(`id: ${event.id}\nevent: ${event.type}\ndata: ${data}\n\n`)
        );
      };

      eventEmitter.on(`user:${userId}`, handler);

      // Send periodic keepalive comments
      const keepalive = setInterval(() => {
        controller.enqueue(encoder.encode(': keepalive\n\n'));
      }, 15000);

      // Cleanup on disconnect
      request.signal.addEventListener('abort', () => {
        eventEmitter.off(`user:${userId}`, handler);
        clearInterval(keepalive);
        controller.close();
      });
    },
  });

  return new Response(stream, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive',
    },
  });
}
```

```typescript
import { useEffect, useState } from 'react'; // for the hook below

// Client-side SSE
class NotificationStream {
  private eventSource: EventSource | null = null;

  connect(onNotification: (event: AppEvent) => void) {
    this.eventSource = new EventSource('/api/notifications/stream');

    this.eventSource.addEventListener('notification', (e) => {
      // Custom event names fall outside EventSourceEventMap, so cast
      const data = JSON.parse((e as MessageEvent).data);
      onNotification(data);
    });

    this.eventSource.addEventListener('connected', () => {
      console.log('SSE connection established');
    });

    this.eventSource.onerror = (error) => {
      console.error('SSE error:', error);
      // EventSource automatically reconnects with Last-Event-ID header
    };
  }

  disconnect() {
    this.eventSource?.close();
    this.eventSource = null;
  }
}

// Usage in React
function useNotifications() {
  const [notifications, setNotifications] = useState<AppEvent[]>([]);

  useEffect(() => {
    const stream = new NotificationStream();
    stream.connect((event) => {
      setNotifications(prev => [event, ...prev]);
    });
    return () => stream.disconnect();
  }, []);

  return notifications;
}
```

SSE has built-in features that developers often overlook: automatic reconnection with configurable retry intervals, the Last-Event-ID header for resuming from where the client left off, and named event types for routing different kinds of messages.
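The wire format behind these features is simple enough to sketch by hand: each field goes on its own line, a blank line terminates the event, and a `retry:` field tells the browser how long to wait before auto-reconnecting. The helpers below are illustrative (the event shape is an assumption, the field names follow the SSE specification):

```typescript
interface SseEvent { id: number; type: string; data: object }

// Serialize one event in SSE wire format
function formatSSE(event: SseEvent, retryMs?: number): string {
  const lines = [
    ...(retryMs !== undefined ? [`retry: ${retryMs}`] : []),
    `id: ${event.id}`,
    `event: ${event.type}`,
    `data: ${JSON.stringify(event.data)}`,
  ];
  return lines.join('\n') + '\n\n';
}

// On reconnect the browser sends the last id it saw in the Last-Event-ID
// header, so the server can replay everything after it
function eventsAfter(all: SseEvent[], lastEventId: string): SseEvent[] {
  const after = Number(lastEventId);
  return all.filter(e => e.id > after);
}
```

Combining `id:` on every event with an `eventsAfter`-style replay is what makes SSE streams resumable for free, something WebSockets require custom protocol work to achieve.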

WebSockets

WebSockets upgrade an HTTP connection to a persistent, full-duplex TCP connection. Both client and server can send messages at any time.

```typescript
// Server-side WebSocket with ws library
import { WebSocketServer, WebSocket } from 'ws';

const wss = new WebSocketServer({ port: 8080 });

const clients = new Map<string, Set<WebSocket>>();

wss.on('connection', (ws, request) => {
  const userId = authenticateFromRequest(request);
  if (!userId) {
    ws.close(4001, 'Unauthorized');
    return;
  }

  // Track connected clients
  if (!clients.has(userId)) {
    clients.set(userId, new Set());
  }
  clients.get(userId)!.add(ws);

  // Handle incoming messages from client
  ws.on('message', async (raw) => {
    try {
      const message = JSON.parse(raw.toString());

      switch (message.type) {
        case 'chat:send':
          await handleChatMessage(userId, message.payload);
          break;
        case 'typing:start':
          broadcastToRoom(message.payload.roomId, {
            type: 'typing:indicator',
            userId,
            isTyping: true,
          });
          break;
        case 'presence:ping':
          updatePresence(userId);
          break;
      }
    } catch (error) {
      ws.send(JSON.stringify({ type: 'error', message: 'Invalid message' }));
    }
  });

  ws.on('close', () => {
    clients.get(userId)?.delete(ws);
    if (clients.get(userId)?.size === 0) {
      clients.delete(userId);
    }
  });

  // Send queued messages
  sendQueuedMessages(userId, ws);
});

function broadcastToRoom(roomId: string, message: object) {
  const roomMembers = getRoomMembers(roomId);
  const payload = JSON.stringify(message);

  for (const memberId of roomMembers) {
    const memberSockets = clients.get(memberId);
    memberSockets?.forEach(ws => {
      if (ws.readyState === WebSocket.OPEN) {
        ws.send(payload);
      }
    });
  }
}
```

```typescript
// Client-side WebSocket with reconnection
class ChatConnection {
  private ws: WebSocket | null = null;
  private reconnectAttempts = 0;
  private maxReconnectAttempts = 5;
  private handlers = new Map<string, Function[]>();

  connect(token: string) {
    this.ws = new WebSocket(`wss://api.example.com/ws?token=${token}`);

    this.ws.onopen = () => {
      this.reconnectAttempts = 0;
      this.emit('connected', {});
    };

    this.ws.onmessage = (event) => {
      const message = JSON.parse(event.data);
      this.emit(message.type, message);
    };

    this.ws.onclose = (event) => {
      // Reconnect with exponential backoff unless closed normally (1000)
      if (event.code !== 1000 && this.reconnectAttempts < this.maxReconnectAttempts) {
        const delay = Math.min(1000 * Math.pow(2, this.reconnectAttempts), 30000);
        this.reconnectAttempts++;
        setTimeout(() => this.connect(token), delay);
      }
    };
  }

  send(type: string, payload: object) {
    if (this.ws?.readyState === WebSocket.OPEN) {
      this.ws.send(JSON.stringify({ type, payload }));
    }
  }

  on(event: string, handler: Function) {
    if (!this.handlers.has(event)) this.handlers.set(event, []);
    this.handlers.get(event)!.push(handler);
  }

  private emit(event: string, data: any) {
    this.handlers.get(event)?.forEach(h => h(data));
  }
}
```

Practical Implementation

Comparison Table

| Feature | Short Polling | Long Polling | SSE | WebSockets |
| --- | --- | --- | --- | --- |
| Direction | Client to server | Client to server | Server to client | Bidirectional |
| Protocol | HTTP | HTTP | HTTP | WS (TCP) |
| Connection | New per request | Held open | Persistent | Persistent |
| Auto-reconnect | N/A (client-driven) | Manual | Built-in | Manual |
| Binary data | Via encoding | Via encoding | Text only | Native support |
| Browser support | Universal | Universal | All modern browsers | All modern browsers |
| HTTP/2 multiplexing | Yes | Yes | Yes | No (separate TCP) |
| Proxy-friendly | Yes | Mostly | Yes | Sometimes problematic |
| Max connections | N/A | ~6 per domain | ~6 per domain (HTTP/1.1) | Not capped by the HTTP limit |
| Scaling complexity | Low | Medium | Low-Medium | High |

Scaling Considerations

SSE and polling work with standard HTTP load balancers and CDNs. WebSockets require Layer 4 load balancing or sticky sessions because the connection is stateful.

```typescript
// Scaling WebSockets with Redis pub/sub
import Redis from 'ioredis';

const publisher = new Redis(process.env.REDIS_URL);
const subscriber = new Redis(process.env.REDIS_URL);

// When a message needs to reach users on other server instances
async function publishMessage(channel: string, message: object) {
  await publisher.publish(channel, JSON.stringify(message));
}

// Each server instance subscribes to relevant channel patterns.
// Glob patterns require PSUBSCRIBE, which emits 'pmessage' (plain
// SUBSCRIBE matches channel names literally).
subscriber.psubscribe('chat:*', 'notifications:*');
subscriber.on('pmessage', (_pattern, channel, data) => {
  // Deliver to locally connected clients
  const localClients = getLocalClientsForChannel(channel);
  localClients.forEach(ws => ws.send(data));
});
```

With SSE, scaling is simpler because you can use standard HTTP/2 multiplexing, and many SSE connections can share a single TCP connection. This significantly reduces the per-connection overhead compared to WebSockets.

Choosing the Right Approach

Use this decision framework based on your actual requirements:

```typescript
// Decision helper - not actual code, but a useful mental model
function chooseRealTimeTransport(requirements: Requirements): Transport {
  // Does the client need to send real-time data to the server?
  if (requirements.bidirectional) {
    // Is it high-frequency like gaming or collaborative editing?
    if (requirements.lowLatency && requirements.highFrequency) {
      return 'WebSocket';
    }
    // Can client-to-server use regular HTTP POST?
    if (requirements.clientToServerIsInfrequent) {
      return 'SSE + HTTP POST'; // Best of both worlds
    }
    return 'WebSocket';
  }

  // Server-to-client only
  if (requirements.updateFrequency === 'continuous') {
    return 'SSE'; // Live feeds, dashboards, logs
  }

  if (requirements.updateFrequency === 'periodic') {
    if (requirements.intervalSeconds > 30) {
      return 'Short Polling'; // Simple and effective
    }
    return 'SSE'; // More efficient than rapid polling
  }

  return 'Short Polling'; // Simplest option for rare updates
}
```

A pattern that is often overlooked: combining SSE for server-to-client push with standard HTTP POST for client-to-server messages. This gives you real-time updates without the operational complexity of WebSockets, and it works seamlessly with existing HTTP infrastructure.
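A minimal sketch of that hybrid: receive over a single EventSource, send over plain POST. The `/api/stream` and `/api/messages` endpoints and the message envelope are assumptions for illustration:

```typescript
// Envelope for client-to-server messages (shape is an assumption)
function buildMessage(type: string, payload: object): string {
  return JSON.stringify({ type, payload, sentAt: Date.now() });
}

class HybridChannel {
  private source: EventSource | null = null;

  // Server-to-client: one persistent SSE stream
  connect(onEvent: (type: string, data: unknown) => void) {
    this.source = new EventSource('/api/stream');
    this.source.onmessage = (e) => onEvent('message', JSON.parse(e.data));
  }

  // Client-to-server: ordinary HTTP, so it works with any load
  // balancer, CDN, or serverless platform
  async send(type: string, payload: object) {
    await fetch('/api/messages', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: buildMessage(type, payload),
    });
  }

  disconnect() {
    this.source?.close();
    this.source = null;
  }
}
```

Because the sending path is a normal request/response, you also get retries, auth middleware, and observability for free from your existing HTTP stack.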

Common Pitfalls

Choosing WebSockets when SSE would suffice. If data only flows from server to client (notifications, live scores, stock tickers), SSE is simpler to implement, simpler to scale, and works with standard HTTP infrastructure. Reserve WebSockets for genuinely bidirectional use cases.

Ignoring the connection limit. Browsers limit simultaneous HTTP connections to the same domain (typically six for HTTP/1.1). If you open multiple SSE connections or long-polling connections, you can exhaust this limit. Use HTTP/2 or consolidate streams into a single multiplexed connection.

Not implementing heartbeats. Proxies and load balancers often close idle connections after 60-120 seconds. Send periodic keepalive messages (comments in SSE, ping frames in WebSockets) to keep connections alive.
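For WebSockets, the usual shape is the ping/pong sweep pattern from the ws library's documentation: mark each socket dead, ping it, and let the pong handler mark it alive before the next sweep. Here the sweep is extracted as a standalone function; the commented wiring assumes the `wss` server from the earlier example:

```typescript
interface HeartbeatSocket {
  isAlive: boolean;
  ping(): void;
  terminate(): void;
}

// One heartbeat pass over all connected sockets
function sweep(sockets: Iterable<HeartbeatSocket>): void {
  for (const ws of sockets) {
    if (!ws.isAlive) {
      ws.terminate(); // no pong since the last sweep: connection is dead
      continue;
    }
    ws.isAlive = false; // pong handler flips this back before next sweep
    ws.ping();
  }
}

// Wiring (sketch, assuming the `wss` server from the earlier example):
// wss.on('connection', (ws) => {
//   (ws as any).isAlive = true;
//   ws.on('pong', () => { (ws as any).isAlive = true; });
// });
// setInterval(() => sweep(wss.clients as any), 30_000);
```

A 30-second interval stays comfortably under typical 60-120 second proxy idle timeouts.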

Missing reconnection logic in WebSockets. Unlike SSE, WebSockets do not reconnect automatically. You must implement exponential backoff reconnection, message queuing during disconnection, and state reconciliation after reconnection.

Not handling message ordering and deduplication. Network issues can cause messages to arrive out of order or be delivered twice. Include sequence numbers or timestamps in your messages and handle deduplication on the client side.
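A sketch of that client-side logic, assuming the server stamps each message with a monotonically increasing `seq` number (an assumption, not part of the examples above). Out-of-order messages are buffered until the gap fills; duplicates and already-delivered messages are dropped:

```typescript
interface SeqMessage { seq: number; payload: unknown }

class MessageBuffer {
  private lastSeq = 0;
  private pending = new Map<number, SeqMessage>();

  // Returns the messages that are now safe to deliver, in order
  accept(msg: SeqMessage): SeqMessage[] {
    if (msg.seq <= this.lastSeq) return []; // duplicate or already delivered
    this.pending.set(msg.seq, msg);

    const deliverable: SeqMessage[] = [];
    // Drain consecutive messages starting right after the last delivered one
    while (this.pending.has(this.lastSeq + 1)) {
      const next = this.pending.get(this.lastSeq + 1)!;
      this.pending.delete(next.seq);
      this.lastSeq = next.seq;
      deliverable.push(next);
    }
    return deliverable;
  }
}
```

The same `lastSeq` value doubles as a resume cursor: send it on reconnect (as `Last-Event-ID` for SSE, or in your own hello message for WebSockets) so the server can replay what was missed.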

When to Use (and When Not To)

Use WebSockets when: you need true bidirectional communication with low latency, such as chat applications, multiplayer games, collaborative document editing, or live trading platforms where both client and server send frequent messages.

Use SSE when: the primary data flow is server-to-client, such as live dashboards, notification streams, activity feeds, real-time search results, or log tailing. Combine with HTTP POST for occasional client-to-server messages.

Use polling when: updates are infrequent, you need maximum infrastructure compatibility, you are working with serverless functions that cannot maintain persistent connections, or the simplicity of stateless HTTP is more valuable than sub-second latency.

The most common mistake is over-engineering. A notification badge that updates every 30 seconds does not need WebSockets. Start with polling, graduate to SSE when you need real-time push, and reach for WebSockets only when bidirectional communication is a genuine requirement.

FAQ

What is the difference between WebSockets and Server-Sent Events?

WebSockets provide full-duplex bidirectional communication over a single TCP connection using the WebSocket protocol. Server-Sent Events provide a unidirectional channel where only the server can push data to the client, using standard HTTP. SSE includes automatic reconnection and event ID tracking built into the browser API, while WebSockets require you to implement these features manually.

Can Server-Sent Events replace WebSockets?

SSE can replace WebSockets when communication is primarily server-to-client. For use cases like live dashboards, notification feeds, or real-time analytics, SSE is often the better choice because it works with standard HTTP infrastructure and reconnects automatically. For bidirectional use cases like chat or collaborative editing, WebSockets remain necessary.

How do you scale WebSocket connections?

Scaling WebSockets requires a coordination layer such as Redis pub/sub to broadcast messages across multiple server instances. You also need connection-aware load balancing (sticky sessions or Layer 4 balancing) because each WebSocket connection is stateful and tied to a specific server process. This is significantly more complex than scaling stateless HTTP endpoints.

Do modern browsers support Server-Sent Events?

All modern browsers including Chrome, Firefox, Safari, and Edge support SSE natively through the EventSource API. Internet Explorer never supported SSE, but it has been retired by Microsoft. For environments that require broader compatibility, lightweight polyfills are available that implement the EventSource specification.

Is long polling still relevant in modern applications?

Long polling remains relevant in specific scenarios: when you need real-time updates but cannot use WebSockets or SSE due to infrastructure constraints, when working behind restrictive corporate proxies, or when using serverless platforms that do not support persistent connections. However, for most modern applications, SSE provides a better balance of simplicity and real-time capability.

Article Author: Sadam Hussain, Senior Full Stack Developer