March 21, 2026

OpenTelemetry for Next.js and Node.js

A practical implementation guide for adding OpenTelemetry to Next.js and Node.js apps, including traces, request flow visibility, and production diagnostics.

Tags: OpenTelemetry, Next.js, Node.js, Observability, Monitoring

5 min read

TL;DR

In this tutorial, you will instrument a Next.js app and a Node.js service with OpenTelemetry so you can trace requests across both layers, improve debugging, and understand where latency and failures actually happen. The real benefit comes from correlating the frontend-adjacent server layer and backend services, not instrumenting one side in isolation.

Prerequisites

Before starting, you should have:

  • a Next.js application using the App Router or route handlers
  • a Node.js service or API you can instrument separately
  • a destination for telemetry, such as an OTLP-compatible collector
  • basic familiarity with environment variables and server startup flow

You do not need a large microservices system for this to be useful. Even a Next.js app calling one backend API benefits from shared traces.

Step 1: Define the Visibility Goal

The reason to instrument both Next.js and Node.js is to answer end-to-end questions like:

  • did the slowdown happen in the page layer or the backend?
  • which upstream dependency is causing the issue?
  • where are errors introduced in the request path?
  • which route is responsible for a noisy backend workload?

If you only instrument the backend, you miss request-entry context. If you only instrument Next.js, you miss service-level behavior. Together, they become much more useful.

Step 2: Add OpenTelemetry to the Node.js Service

Start with the backend service first. This gives you a stable server-side foundation.

bash
npm install @opentelemetry/sdk-node \
  @opentelemetry/auto-instrumentations-node \
  @opentelemetry/exporter-trace-otlp-http \
  @opentelemetry/resources \
  @opentelemetry/semantic-conventions

Then create the instrumentation bootstrap:

ts
// instrumentation.ts
import { NodeSDK } from "@opentelemetry/sdk-node";
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
 
const traceExporter = new OTLPTraceExporter({
  url: process.env.OTEL_EXPORTER_OTLP_TRACES_ENDPOINT,
});
 
export const sdk = new NodeSDK({
  traceExporter,
  instrumentations: [getNodeAutoInstrumentations()],
});

And start it before the server boots:

ts
// server.ts
import { sdk } from "./instrumentation";
import { createServer } from "./app";
 
async function bootstrap() {
  await sdk.start();
  const app = await createServer();
  app.listen(3001);
}
 
bootstrap();
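
When the process stops, it is worth flushing buffered spans before exit; `NodeSDK.shutdown()` returns a promise that drains the exporter. A minimal sketch, assuming the same `sdk` export shown above:

```typescript
// graceful-shutdown sketch — flush pending spans on SIGTERM.
// Assumes the `sdk` instance exported from instrumentation.ts above.
import { sdk } from "./instrumentation";

process.on("SIGTERM", () => {
  sdk
    .shutdown()
    .then(() => console.log("telemetry flushed"))
    .catch((err) => console.error("telemetry shutdown failed", err))
    .finally(() => process.exit(0));
});
```

Without this, spans recorded in the last few seconds before a deploy or scale-down event can be silently dropped.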

Step 3: Name the Service Clearly

If you run multiple layers, naming discipline matters.

ts
import { Resource } from "@opentelemetry/resources";
import { SemanticResourceAttributes } from "@opentelemetry/semantic-conventions";
 
const resource = new Resource({
  [SemanticResourceAttributes.SERVICE_NAME]: "orders-api",
  [SemanticResourceAttributes.SERVICE_VERSION]: "1.0.0",
  [SemanticResourceAttributes.DEPLOYMENT_ENVIRONMENT]:
    process.env.NODE_ENV ?? "development",
});
 
// Pass the resource to the SDK so every span carries these attributes:
// new NodeSDK({ resource, traceExporter, instrumentations: [...] })

If your services do not identify themselves consistently, trace data becomes hard to navigate very quickly.

Step 4: Instrument Next.js as a Separate Service Boundary

Your Next.js application should be treated as its own service boundary, not just a UI wrapper.

Why:

  • route handlers do real server work
  • server components can trigger expensive upstream calls
  • auth, caching, and orchestration often live here

That means tracing Next.js matters for production behavior, not just for framework curiosity.

A useful model is to treat:

  • Next.js as the frontend-adjacent orchestration layer
  • Node.js APIs as backend service layers

Then you can see the full request chain clearly.
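
Next.js supports an `instrumentation.ts` file at the project root that exports a `register()` hook, which runs once when the server starts. One low-friction option is Vercel's `@vercel/otel` wrapper. A minimal sketch, assuming that package is installed; the service name is an assumption you should replace with your own:

```typescript
// instrumentation.ts (at the Next.js project root) — sketch using
// @vercel/otel; "web-frontend" is a placeholder service name.
import { registerOTel } from "@vercel/otel";

export function register() {
  registerOTel({ serviceName: "web-frontend" });
}
```

Giving the Next.js layer its own service name (distinct from `orders-api`) is what makes the two layers show up as separate nodes in the trace view.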

Step 5: Propagate Context Across the Boundary

The real payoff comes when the trace continues from Next.js into the backend service.

For example:

  1. user requests a page
  2. Next.js route or server component fetches backend data
  3. the backend continues the same trace context
  4. you see the full request chain in one view

That makes latency analysis much easier because you can tell:

  • whether the delay happened before the backend call
  • inside the backend
  • or in a downstream dependency the backend called
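
Under the hood, this propagation usually travels as the W3C `traceparent` HTTP header, which OpenTelemetry's default propagator injects into outgoing requests and extracts on the receiving side. A self-contained sketch of the header's shape (the parser here is for illustration only, not part of the OTel API):

```typescript
// W3C traceparent format: version-traceId-spanId-flags, e.g.
// "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
interface TraceParent {
  version: string;
  traceId: string; // 32 lowercase hex chars
  spanId: string;  // 16 lowercase hex chars
  sampled: boolean; // lowest bit of the flags byte
}

function parseTraceparent(header: string): TraceParent | null {
  const parts = header.split("-");
  if (parts.length !== 4) return null;
  const [version, traceId, spanId, flags] = parts;
  if (!/^[0-9a-f]{32}$/.test(traceId) || !/^[0-9a-f]{16}$/.test(spanId)) {
    return null;
  }
  return { version, traceId, spanId, sampled: (parseInt(flags, 16) & 1) === 1 };
}
```

If the backend sees the same `traceId` the Next.js layer generated, the two halves of the request join into one trace; if propagation breaks, you get two disconnected traces instead.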

Step 6: Add Manual Spans Where Domain Work Matters

Auto-instrumentation gives you transport visibility. Manual spans give you domain visibility.

In the backend:

ts
import { trace, SpanStatusCode } from "@opentelemetry/api";
 
const tracer = trace.getTracer("orders-api");
 
export async function processOrder(orderId: string) {
  return tracer.startActiveSpan("processOrder", async (span) => {
    span.setAttribute("order.id", orderId);
 
    try {
      await validateOrder(orderId);
      await reserveInventory(orderId);
      await capturePayment(orderId);
      span.setStatus({ code: SpanStatusCode.OK });
    } catch (error) {
      span.recordException(error as Error);
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw error;
    } finally {
      span.end();
    }
  });
}

In Next.js, useful manual spans often surround:

  • auth resolution
  • route-level orchestration
  • expensive aggregation calls
  • cache invalidation or mutation flows

Step 7: Correlate Logs with Trace Context

Traces tell you where time went. Logs tell you what happened.

The strongest setup links them through shared request context:

  • trace ID
  • span ID
  • route name
  • tenant or workspace identifier where appropriate

That turns debugging from "search through logs and guess" into a more structured process.
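
A minimal sketch of the idea: merge trace identifiers into every structured log line. In a real app you would read them from the active span via `trace.getActiveSpan()?.spanContext()` in `@opentelemetry/api`; the `trace_id`/`span_id` field names are a common convention, not a requirement:

```typescript
// Self-contained log formatter that carries trace context. The field
// names are conventional; adjust to what your log backend expects.
interface TraceContext {
  traceId: string;
  spanId: string;
}

function logWithTrace(
  ctx: TraceContext,
  level: string,
  message: string,
  fields: Record<string, unknown> = {}
): string {
  return JSON.stringify({
    level,
    message,
    trace_id: ctx.traceId,
    span_id: ctx.spanId,
    ...fields,
  });
}
```

Once every log line carries the trace ID, jumping from a slow span to the exact log lines emitted inside it becomes a filter, not a search.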

Step 8: Be Intentional About Metrics

Metrics should answer operational questions, not just exist because the tooling supports them.

Good starter metrics:

  • route latency
  • backend request duration
  • error rate
  • dependency call duration
  • request volume

These are enough to surface many real-world problems without drowning the system in noise.
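
With the `@opentelemetry/api` metrics API, the starter set above maps to a histogram and a couple of counters. A sketch, assuming a metrics provider is already wired into the SDK; the instrument names follow semantic-convention style but are illustrative, not verbatim spec names:

```typescript
// Metrics registration sketch — a no-op until a MeterProvider is
// configured in the SDK, which is why it is safe to declare up front.
import { metrics } from "@opentelemetry/api";

const meter = metrics.getMeter("orders-api");

// Route latency and backend request duration as a histogram (ms).
const requestDuration = meter.createHistogram("http.server.request.duration", {
  unit: "ms",
});

// Request volume and errors as counters; error rate = errors / requests.
const requestCount = meter.createCounter("http.server.request.count");
const errorCount = meter.createCounter("http.server.error.count");

export function recordRequest(route: string, ms: number, failed: boolean) {
  const attrs = { "http.route": route };
  requestDuration.record(ms, attrs);
  requestCount.add(1, attrs);
  if (failed) errorCount.add(1, attrs);
}
```

Attaching the route as an attribute rather than baking it into the metric name keeps cardinality manageable and lets the backend aggregate across routes.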

Step 9: Avoid Leaking Sensitive Data

Telemetry pipelines can become accidental data exfiltration paths if you are careless.

Avoid sending:

  • raw authorization headers
  • personal data fields
  • payment payloads
  • sensitive request bodies
  • secrets in query params or logs
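
One practical guard is scrubbing attributes before they reach a span or log line. A hypothetical sketch; the key list is illustrative and should be extended for your own domain:

```typescript
// Illustrative attribute scrubber applied before recording telemetry.
const SENSITIVE_KEYS = [
  "authorization",
  "cookie",
  "set-cookie",
  "password",
  "card_number",
  "ssn",
];

function scrubAttributes(
  attrs: Record<string, unknown>
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(attrs)) {
    out[key] = SENSITIVE_KEYS.includes(key.toLowerCase())
      ? "[REDACTED]"
      : value;
  }
  return out;
}
```

Running this at the point where attributes are set is cheaper and safer than trying to strip secrets out of the telemetry backend after the fact.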

Observability is valuable, but it must respect privacy and security boundaries.

A Practical Full-Stack Mental Model

A useful production setup looks like this:

  1. Next.js receives the request
  2. Next.js route or server component starts the trace
  3. Next.js calls the backend with propagated context
  4. Node.js service continues the trace
  5. database or external API spans appear underneath
  6. logs are correlated back to the same trace IDs

That gives you one end-to-end debugging path instead of fragmented visibility.

Common Mistakes

Instrumenting Only the Backend

This misses orchestration and latency that occurs in the application layer before the backend service is ever called.

Instrumenting Only Next.js

This leaves you blind once the request crosses into the deeper backend path.

Relying Only on Auto-Instrumentation

Auto-instrumentation is useful, but it does not understand your domain boundaries or key business workflows.

Ignoring Naming and Context Standards

If services, spans, and attributes are inconsistent, the telemetry data becomes much harder to use under pressure.

Next Steps

After the baseline is working, the next improvements usually include:

  • adding domain-specific metrics
  • tracing background jobs and queue processing
  • connecting more services into the same trace chain
  • defining alerts around latency and error budgets
  • building service maps for critical product flows

That is when OpenTelemetry starts becoming part of how the team operates, not just something installed in the repo.

FAQ

Can you use OpenTelemetry with Next.js?

Yes. Next.js apps can be instrumented with OpenTelemetry to trace route handlers, server operations, and request paths, especially when paired with backend services.

Why instrument both Next.js and Node.js services?

Instrumenting both layers helps you understand end-to-end latency, service boundaries, and where failures or slowdowns actually occur.

Is OpenTelemetry only for large systems?

No. Even mid-sized systems benefit from better request visibility, especially once multiple services, queues, or third-party integrations are involved.


Article Author

Sadam Hussain, Senior Full Stack Developer with over 7 years of experience building React, Next.js, Node.js, TypeScript, and AI-powered web platforms.
