OpenTelemetry in Node.js: Microservice Setup [2026]
OpenTelemetry is the fastest way to make a Node.js microservice observable without locking your team into one vendor. For a modern service, the baseline is simple: emit distributed traces, emit metrics, tag them with a stable service.name, and ship everything over OTLP to a Collector.
As of April 06, 2026, OpenTelemetry JavaScript documents traces and metrics as stable, while JavaScript logs are still evolving. That makes traces plus metrics the right starting point for a production Node.js microservice. In this walkthrough, you will wire an Express service to a local OpenTelemetry Collector, verify the output, and add one manual business span so your traces say something useful.
Key takeaway
For most Node.js services, the shortest reliable path is NodeSDK plus getNodeAutoInstrumentations() plus an OTLP exporter pointed at a local Collector. Auto-instrumentation gives you HTTP spans immediately; manual spans add the domain context your dashboards actually need.
Prerequisites
- Node.js 20+ installed locally. OpenTelemetry JS supports active or maintenance LTS Node releases, and the --import flow is cleanest on Node 20+.
- Docker available so you can run a local OpenTelemetry Collector.
- A small HTTP service, or willingness to create one from scratch.
- Basic familiarity with Express and npm.
Reference docs used for this setup: OpenTelemetry Node.js getting started, JavaScript exporters, and the Collector quick start. If you want to clean up the YAML or JS examples before sharing them internally, TechBytes' Code Formatter is a quick way to normalize indentation.
1. Run a local Collector
Do not point a brand-new service directly at a vendor backend during initial setup. Put a Collector in the middle first. It gives you a stable ingestion endpoint, lets you inspect payload flow locally, and keeps your app config portable.
Create otel-collector.yaml:
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
Now start the Collector:
docker run --rm \
  -p 4318:4318 \
  -v $(pwd)/otel-collector.yaml:/etc/otelcol/config.yaml \
  otel/opentelemetry-collector:0.147.0
This uses OTLP/HTTP on port 4318, which matches the OpenTelemetry OTLP exporter defaults for HTTP-based pipelines.
2. Install the Node SDK
For a microservice, start with auto-instrumentation. It captures inbound HTTP spans, Express middleware timing, and several common library integrations before you write custom telemetry code.
npm init -y
npm install express
npm install @opentelemetry/api \
  @opentelemetry/sdk-node \
  @opentelemetry/auto-instrumentations-node \
  @opentelemetry/exporter-trace-otlp-http \
  @opentelemetry/exporter-metrics-otlp-http \
  @opentelemetry/sdk-metrics \
  @opentelemetry/resources \
  @opentelemetry/semantic-conventions
Create a minimal service in app.mjs:
import express from 'express';
import { trace } from '@opentelemetry/api';

const app = express();
const tracer = trace.getTracer('orders-service');

app.get('/health', (_req, res) => {
  res.json({ ok: true });
});

app.get('/orders/:id', async (req, res) => {
  const order = await tracer.startActiveSpan('inventory.lookup', async (span) => {
    try {
      span.setAttribute('order.id', req.params.id);
      // Simulate an inventory lookup
      await new Promise((resolve) => setTimeout(resolve, 75));
      return { id: req.params.id, status: 'processing' };
    } finally {
      // End the span even if the lookup throws, so it is never leaked
      span.end();
    }
  });
  res.json(order);
});

app.listen(3000, () => {
  console.log('orders-service listening on http://localhost:3000');
});
3. Bootstrap OpenTelemetry before app startup
The most important setup rule is execution order: OpenTelemetry must initialize before your app imports the libraries you want instrumented. In Node, that means loading the SDK with --import.
Create instrumentation.mjs:
import { NodeSDK } from '@opentelemetry/sdk-node';
import { resourceFromAttributes } from '@opentelemetry/resources';
import { ATTR_SERVICE_NAME, ATTR_SERVICE_VERSION } from '@opentelemetry/semantic-conventions';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { OTLPMetricExporter } from '@opentelemetry/exporter-metrics-otlp-http';
import { PeriodicExportingMetricReader } from '@opentelemetry/sdk-metrics';

const sdk = new NodeSDK({
  resource: resourceFromAttributes({
    [ATTR_SERVICE_NAME]: 'orders-service',
    [ATTR_SERVICE_VERSION]: '1.0.0'
  }),
  traceExporter: new OTLPTraceExporter({
    url: 'http://localhost:4318/v1/traces'
  }),
  metricReader: new PeriodicExportingMetricReader({
    exporter: new OTLPMetricExporter({
      url: 'http://localhost:4318/v1/metrics'
    })
  }),
  instrumentations: [getNodeAutoInstrumentations()]
});

sdk.start();
Then run the service like this:
node --import ./instrumentation.mjs ./app.mjs
This is enough to emit server-side spans and runtime metrics. The explicit resource block matters because if you skip service.name, the SDK falls back to an unhelpful default such as unknown_service:node, and downstream tools group your service under that name.
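One thing the minimal bootstrap omits is shutdown: if the process exits before the exporters flush, the last batch of spans and metrics can be lost. A sketch of a signal handler that calls the SDK's shutdown() before exit; the stubbed sdk object here stands in for the NodeSDK instance created above so the snippet is self-contained:

```javascript
// Sketch: flush telemetry before the process exits. In instrumentation.mjs
// `sdk` would be the real NodeSDK instance; a stub is used here so the
// snippet runs standalone.
const sdk = { shutdown: () => Promise.resolve() };

function registerShutdown(sdk, signals = ['SIGTERM', 'SIGINT']) {
  for (const signal of signals) {
    process.once(signal, async () => {
      try {
        // shutdown() flushes pending spans and metrics, then stops exporters
        await sdk.shutdown();
      } finally {
        process.exit(0);
      }
    });
  }
}

registerShutdown(sdk);
```

In Kubernetes-style deployments this matters on every rolling restart, since SIGTERM is the normal termination path.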
4. Add a manual span where the business logic lives
Auto-instrumentation is necessary, but it is rarely sufficient. It tells you that GET /orders/:id took 92 ms. It does not tell you whether the slow part was inventory lookup, pricing, fraud rules, or a downstream queue publish.
That is why the example route wraps the simulated lookup in tracer.startActiveSpan(). In real services, use manual spans for:
- Cache reads and misses
- Database query groups
- External API calls that matter to latency
- Workflow boundaries such as validation, enrichment, and fulfillment
Keep attributes low-cardinality unless you know why you are breaking that rule. IDs like order.id are acceptable for a small tutorial, but in production you usually avoid indexing raw identifiers broadly. If you are capturing customer-adjacent fields during instrumentation, scrub them first; a utility like the TechBytes Data Masking Tool is useful when reviewing sample payloads for documentation or demos.
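To illustrate the low-cardinality rule, here is a sketch of a hypothetical bucketId helper that preserves an identifier's distribution on a span attribute without indexing the raw value. The helper name and bucketing scheme are assumptions for this example, not an OpenTelemetry API:

```javascript
// Hypothetical helper: reduce a raw identifier to a bounded set of bucket
// labels so span attributes stay low-cardinality. Not an OpenTelemetry API.
function bucketId(id, buckets = 16) {
  let hash = 0;
  for (const ch of String(id)) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple rolling hash
  }
  return `bucket-${hash % buckets}`; // at most `buckets` distinct values
}

// Usage on a span, keeping the raw ID out of indexed attributes:
//   span.setAttribute('order.bucket', bucketId(req.params.id));
```

The raw ID can still go into logs or span events if you need it for drill-down; the point is to keep it out of the attributes your backend indexes broadly.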
Verification and expected output
With the Collector running and the Node service started, send a few requests:
curl http://localhost:3000/health
curl http://localhost:3000/orders/42
curl http://localhost:3000/orders/99
In the service terminal, you should still see the normal application log:
orders-service listening on http://localhost:3000
In the Collector terminal, you should see trace data printed by the debug exporter. Expect spans that resemble:
Resource attributes:
  - service.name: orders-service
  - service.version: 1.0.0
Span #0
  Name: GET /orders/:id
Span #1
  Name: inventory.lookup
You should also see metric exports on an interval. The exact metric set depends on installed instrumentations and runtime, but HTTP server and process/runtime metrics are the normal result. The important validation points are:
- The Collector receives both traces and metrics.
- The resource contains service.name=orders-service.
- A request to /orders/:id produces both the automatic HTTP span and the manual inventory.lookup span.
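Note that metric exports typically run on a 60-second default interval, so the first metric batch can take up to a minute to appear. For faster local feedback you can shorten it with the standard environment variable; recent JS SDK releases read it, and if yours does not, set the exportIntervalMillis option on the PeriodicExportingMetricReader instead:

```shell
# Export metrics every 5 seconds instead of the default (local debugging only)
export OTEL_METRIC_EXPORT_INTERVAL=5000
node --import ./instrumentation.mjs ./app.mjs
```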
Troubleshooting: top 3 problems
- No spans appear at all. Usually the SDK loaded too late. Make sure you start the app with node --import ./instrumentation.mjs ./app.mjs and not plain node app.mjs. Also check that no conflicting NODE_OPTIONS are injecting another OpenTelemetry loader.
- Telemetry exports fail or never reach the Collector. Verify the endpoint and protocol. For this tutorial the app uses OTLP/HTTP to http://localhost:4318/v1/traces and /v1/metrics. If you accidentally point to port 4317, you are targeting gRPC, not HTTP.
- Spans exist, but service identity is wrong. Set service.name explicitly in code or with OTEL_SERVICE_NAME. If you skip it, cross-service views become noisy fast, especially once you have multiple local services exporting to the same Collector.
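For the third case, service identity and the export endpoint can also come from the standard OTEL_* environment variables instead of code. Keep in mind that explicit options passed to the exporters in instrumentation.mjs take precedence over the environment:

```shell
# Standard OTEL_* variables, read by the SDK at startup
export OTEL_SERVICE_NAME=orders-service
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
node --import ./instrumentation.mjs ./app.mjs
```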
What's next
Once the local path works, the next upgrade is architectural rather than syntactic. Keep the application code mostly unchanged and move the complexity into the Collector:
- Swap the debug exporter for Jaeger, Prometheus, or your vendor backend.
- Add processors for batching, filtering, and attribute enrichment.
- Introduce trace sampling policies centrally instead of baking them into each service.
- Standardize resource attributes like deployment.environment.name, service.namespace, and version metadata across every microservice.
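As a sketch of the first of those steps, here is a Collector exporters section that swaps the debug exporter for an OTLP/HTTP export to a backend. The endpoint is a placeholder and the header name varies by vendor, so treat this as a shape, not a working config:

```yaml
exporters:
  otlphttp:
    endpoint: https://otlp.example.com   # placeholder backend endpoint
    headers:
      api-key: ${env:BACKEND_API_KEY}    # vendor-specific auth header

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```

The application keeps exporting to localhost:4318 unchanged; only the Collector's outbound side moves.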
From there, add manual spans to the two or three places that define user-visible latency. That is where OpenTelemetry stops being a checkbox and starts becoming an engineering tool.