Fix: OpenTelemetry Not Working — Traces Not Appearing, Spans Missing, or Exporter Connection Refused
Quick Answer
How to fix OpenTelemetry issues — SDK initialization order, auto-instrumentation setup, OTLP exporter configuration, context propagation, and missing spans in Node.js, Python, and Java.
The Problem
OpenTelemetry is set up but no traces appear in Jaeger/Tempo/Datadog:
// index.js
import { NodeSDK } from '@opentelemetry/sdk-node';
const sdk = new NodeSDK({ /* config */ });
sdk.start();
// app runs, requests are made — but trace UI shows nothing

Or spans are created but HTTP calls aren’t traced:
const tracer = trace.getTracer('my-service');
const span = tracer.startSpan('my-operation');
// Manual spans appear, but auto-instrumented HTTP/DB spans are missing

Or the exporter throws ECONNREFUSED:
@opentelemetry/sdk-node - ERROR - Error: connect ECONNREFUSED 127.0.0.1:4317

Or traces are visible but context doesn’t propagate across services:
Service A calls Service B — but B's spans aren't linked to A's trace

Why This Happens
OpenTelemetry has strict initialization requirements:
- SDK must start before importing instrumented libraries — if you import express before starting the OpenTelemetry SDK, Express is already loaded without instrumentation hooks. The SDK monkey-patches modules at startup; modules loaded before the SDK miss the patches.
- Auto-instrumentation packages must be installed separately — @opentelemetry/sdk-node doesn’t include auto-instrumentation for Express, HTTP, or databases. You must install @opentelemetry/auto-instrumentations-node or specific packages like @opentelemetry/instrumentation-express.
- Exporter endpoint must be correct — the default OTLP gRPC endpoint is localhost:4317, HTTP/protobuf is localhost:4318. A mismatch between the configured protocol and the collector’s listener causes connection errors.
- Context propagation requires W3C Trace Context headers — for distributed tracing to work, the calling service must inject traceparent headers into outgoing requests, and the receiving service must extract them. Both sides require proper propagator configuration.
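The first cause is the subtlest. Here is a toy sketch, with no OpenTelemetry dependency, of why a reference taken before patching escapes instrumentation. `makeLibrary`, `patch`, and `recorded` are hypothetical stand-ins for a module, the SDK's require hooks, and the spans those hooks would create:

```javascript
// Toy model of monkey-patch-based instrumentation (illustrative only,
// not real OpenTelemetry APIs). patch() wraps a library method so each
// call is recorded, the way an instrumentation hook records a span.
const recorded = [];

function makeLibrary() {
  return { request: (url) => `response from ${url}` };
}

function patch(lib) {
  const original = lib.request;
  lib.request = (url) => {
    recorded.push(url); // the "span" the hook would create
    return original(url);
  };
}

// Case 1: a direct reference is taken BEFORE patching (like importing
// express before sdk.start()) — calls through it bypass the patch.
const early = makeLibrary();
const directRef = early.request;
patch(early);
directRef('http://a.example'); // not recorded

// Case 2: patch first, then use — the call is observed.
const late = makeLibrary();
patch(late);
late.request('http://b.example'); // recorded

console.log(recorded); // only the second call was observed
```

This is why loading order, not configuration, is the first thing to check when auto-instrumented spans are missing.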
Fix 1: Initialize SDK Before Everything Else
The SDK must be the very first thing that runs:
// WRONG — Express loaded before SDK
import express from 'express'; // Express already loaded — won't be instrumented
import { NodeSDK } from '@opentelemetry/sdk-node';
const sdk = new NodeSDK({ /* ... */ });
sdk.start();
const app = express(); // Too late
// CORRECT — SDK starts in a separate file, loaded first
// otel.js — SDK initialization ONLY
import { NodeSDK } from '@opentelemetry/sdk-node';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { Resource } from '@opentelemetry/resources';
import { SEMRESATTRS_SERVICE_NAME, SEMRESATTRS_SERVICE_VERSION } from '@opentelemetry/semantic-conventions';
const sdk = new NodeSDK({
resource: new Resource({
[SEMRESATTRS_SERVICE_NAME]: 'my-service',
[SEMRESATTRS_SERVICE_VERSION]: '1.0.0',
}),
traceExporter: new OTLPTraceExporter({
url: 'http://localhost:4318/v1/traces',
}),
instrumentations: [
getNodeAutoInstrumentations({
'@opentelemetry/instrumentation-fs': { enabled: false }, // Disable noisy fs traces
}),
],
});
sdk.start();
// Graceful shutdown
process.on('SIGTERM', () => sdk.shutdown().finally(() => process.exit(0)));

// package.json — use --require to load otel.js first
// (in an ESM project with "type": "module", use --import instead of --require)
{
  "scripts": {
    "start": "node --require ./otel.js src/index.js",
    "dev": "nodemon --require ./otel.js src/index.ts"
  }
}

TypeScript with ts-node:
# Load otel.ts before anything else
node --require ts-node/register --require ./otel.ts src/index.ts
# Or with environment variable
OTEL_NODE_RESOURCE_DETECTORS=env,host NODE_OPTIONS="--require ./otel.js" node src/index.js

Fix 2: Install the Right Packages
Auto-instrumentation requires specific packages:
# Core SDK
npm install @opentelemetry/sdk-node @opentelemetry/api
# All-in-one auto-instrumentation (recommended for getting started)
npm install @opentelemetry/auto-instrumentations-node
# Or install specific instrumentations you need
npm install \
@opentelemetry/instrumentation-http \
@opentelemetry/instrumentation-express \
@opentelemetry/instrumentation-pg \ # PostgreSQL
@opentelemetry/instrumentation-redis-4 \ # Redis
@opentelemetry/instrumentation-mongoose # MongoDB
# OTLP exporter (gRPC — port 4317)
npm install @opentelemetry/exporter-trace-otlp-grpc
# OTLP exporter (HTTP/protobuf — port 4318)
npm install @opentelemetry/exporter-trace-otlp-http
# Console exporter (for debugging — prints to stdout)
npm install @opentelemetry/sdk-trace-node
# The ConsoleSpanExporter is included in sdk-trace-node

Complete working setup:
// otel.ts
import { NodeSDK } from '@opentelemetry/sdk-node';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { ConsoleSpanExporter, SimpleSpanProcessor } from '@opentelemetry/sdk-trace-node';
import { Resource } from '@opentelemetry/resources';
import { SEMRESATTRS_SERVICE_NAME } from '@opentelemetry/semantic-conventions';
const isDev = process.env.NODE_ENV !== 'production';
const sdk = new NodeSDK({
resource: new Resource({
[SEMRESATTRS_SERVICE_NAME]: process.env.OTEL_SERVICE_NAME || 'my-service',
}),
traceExporter: isDev
? new ConsoleSpanExporter() // Print spans to console during development
: new OTLPTraceExporter({
url: process.env.OTEL_EXPORTER_OTLP_ENDPOINT || 'http://localhost:4318/v1/traces',
headers: {
// Add auth headers if required by your collector
authorization: `Bearer ${process.env.OTEL_EXPORTER_API_KEY}`,
},
}),
instrumentations: [getNodeAutoInstrumentations()],
});
sdk.start();
console.log('OpenTelemetry initialized');

Fix 3: Fix Exporter Connection Issues
Match the exporter protocol to your collector’s listener:
// gRPC exporter — default port 4317
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-grpc';
const grpcExporter = new OTLPTraceExporter({
url: 'http://localhost:4317', // No path for gRPC
});
// HTTP/protobuf exporter — default port 4318
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
const httpExporter = new OTLPTraceExporter({
url: 'http://localhost:4318/v1/traces', // Path required for HTTP
});
// Verify the collector is running
// docker run -p 4317:4317 -p 4318:4318 otel/opentelemetry-collector-contrib

Environment variable configuration (recommended for production):
# Set via environment variables — no code changes needed
export OTEL_SERVICE_NAME=my-service
export OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4318
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf # or grpc
export OTEL_TRACES_SAMPLER=parentbased_traceidratio
export OTEL_TRACES_SAMPLER_ARG=0.1 # Sample 10% of traces
# For Datadog — send OTLP to the Datadog Agent (with its OTLP receiver
# enabled), not directly to the trace intake
export DD_SITE=datadoghq.com
export DD_API_KEY=your-api-key
export OTEL_EXPORTER_OTLP_ENDPOINT=http://datadog-agent:4318
# For Grafana Cloud
export OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp-gateway-prod-us-central-0.grafana.net/otlp
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic <base64 of instanceID:apiKey>"

Local collector config (otel-collector-config.yaml):
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
exporters:
  debug:
    verbosity: detailed # Log all received spans
  # Recent collector releases removed the legacy jaeger exporter;
  # send OTLP to Jaeger instead (Jaeger accepts OTLP natively since v1.35)
  otlp/jaeger:
    endpoint: jaeger:4317
    tls:
      insecure: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug, otlp/jaeger]

Fix 4: Create Manual Spans
For operations not covered by auto-instrumentation, add manual spans:
import { trace, context, SpanStatusCode, SpanKind } from '@opentelemetry/api';
const tracer = trace.getTracer('my-service', '1.0.0');
// Basic span
async function processOrder(orderId: string) {
const span = tracer.startSpan('processOrder');
try {
span.setAttribute('order.id', orderId);
span.setAttribute('order.source', 'api');
const order = await fetchOrder(orderId);
span.setAttribute('order.total', order.total);
await chargePayment(order);
await sendConfirmation(order);
span.setStatus({ code: SpanStatusCode.OK });
return order;
} catch (err) {
span.setStatus({
code: SpanStatusCode.ERROR,
message: err instanceof Error ? err.message : 'Unknown error',
});
span.recordException(err as Error);
throw err;
} finally {
span.end(); // Always end the span
}
}
// With context propagation (parent-child relationship)
async function handleRequest(req: Request) {
return tracer.startActiveSpan('handleRequest', async (span) => {
try {
span.setAttribute('http.method', req.method);
span.setAttribute('http.url', req.url);
// Child spans automatically become children of 'handleRequest'
const user = await tracer.startActiveSpan('authenticate', async (authSpan) => {
const user = await verifyToken(req.headers.get('authorization'));
authSpan.setAttribute('user.id', user.id);
authSpan.end();
return user;
});
const result = await processData(user);
span.setStatus({ code: SpanStatusCode.OK });
return result;
} finally {
span.end();
}
});
}
// Add span events (point-in-time annotations within a span)
span.addEvent('cache_miss', { 'cache.key': cacheKey });
span.addEvent('retry_attempt', { 'retry.count': retryCount });

Fix 5: Context Propagation for Distributed Tracing
Distributed tracing requires passing trace context between services:
// Sender — inject trace context into outgoing HTTP requests
import { context, propagation } from '@opentelemetry/api';
async function callDownstreamService(url: string, body: object) {
const headers: Record<string, string> = {
'content-type': 'application/json',
};
// Inject current trace context into headers (adds traceparent, tracestate)
propagation.inject(context.active(), headers);
const response = await fetch(url, {
method: 'POST',
headers,
body: JSON.stringify(body),
});
return response.json();
}
// Receiver — extract trace context from incoming request headers
import { propagation, context, trace } from '@opentelemetry/api';
function extractContext(headers: Record<string, string>) {
return propagation.extract(context.active(), headers);
}
// Express middleware — auto-instrumentation handles this automatically
// But for manual setup:
app.use((req, res, next) => {
const extractedContext = propagation.extract(
context.active(),
req.headers as Record<string, string>
);
context.with(extractedContext, () => {
next();
});
});

W3C Trace Context headers:
traceparent: 00-0af7651916cd43dd8448eb211c80319c-b9c7c989f97918e1-01
version-traceId-parentSpanId-flags
tracestate: vendor1=value1,vendor2=value2

Configure propagators:
import { W3CTraceContextPropagator } from '@opentelemetry/core';
import { B3Propagator } from '@opentelemetry/propagator-b3'; // For Zipkin compat
const sdk = new NodeSDK({
// W3C is the default and recommended propagator
textMapPropagator: new W3CTraceContextPropagator(),
// Or for compatibility with Zipkin/older systems:
// textMapPropagator: new B3Propagator(),
// ...
});

Fix 6: Python and Java Setup
Python (opentelemetry-python):
# Install
# pip install opentelemetry-sdk opentelemetry-exporter-otlp \
# opentelemetry-instrumentation-fastapi \
# opentelemetry-instrumentation-requests \
# opentelemetry-instrumentation-sqlalchemy
# otel_setup.py — import this before your app
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource, SERVICE_NAME
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
from opentelemetry.instrumentation.requests import RequestsInstrumentor
from opentelemetry.instrumentation.sqlalchemy import SQLAlchemyInstrumentor
resource = Resource.create({SERVICE_NAME: "my-python-service"})
provider = TracerProvider(resource=resource)
exporter = OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces")
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
# Auto-instrument frameworks — instrument() is an instance method,
# so instantiate the instrumentor first
FastAPIInstrumentor().instrument()
RequestsInstrumentor().instrument()
SQLAlchemyInstrumentor().instrument()  # instruments engines created after this call
# main.py
from otel_setup import * # Import before FastAPI
from fastapi import FastAPI
app = FastAPI()
@app.get("/users/{user_id}")
async def get_user(user_id: str):
    tracer = trace.get_tracer(__name__)
    with tracer.start_as_current_span("get_user") as span:
        span.set_attribute("user.id", user_id)
        return {"id": user_id, "name": "Alice"}

Java (Spring Boot with OpenTelemetry Java agent):
# Download the Java agent (no code changes required)
wget https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar
# Run with agent — auto-instruments Spring Boot, JDBC, HTTP clients, etc.
java \
-javaagent:opentelemetry-javaagent.jar \
-Dotel.service.name=my-spring-service \
-Dotel.exporter.otlp.endpoint=http://localhost:4318 \
-Dotel.exporter.otlp.protocol=http/protobuf \
-jar my-service.jar

Still Not Working?
Console exporter shows spans but Jaeger shows nothing — the spans are being created and exported, but the connection to Jaeger is failing silently. Check that Jaeger is accepting OTLP (not just its native Thrift protocol): use otel/opentelemetry-collector as an intermediary, or start Jaeger with the OTLP receiver enabled (--collector.otlp.enabled=true).
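For local debugging, one common way to run Jaeger with its OTLP receiver on (the image tag and port mappings here are the usual defaults, adjust for your setup):

```shell
# Jaeger all-in-one with OTLP enabled — UI on 16686, OTLP on 4317/4318
docker run --rm \
  -e COLLECTOR_OTLP_ENABLED=true \
  -p 16686:16686 -p 4317:4317 -p 4318:4318 \
  jaegertracing/all-in-one:latest
```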
Span sampling drops too many traces — the default sampler in production setups often uses probability sampling. If you’re debugging and need to see all traces, set OTEL_TRACES_SAMPLER=always_on temporarily. For production, use parentbased_traceidratio with a ratio appropriate for your traffic volume.
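For intuition about why ratio sampling stays consistent across services, here is a hypothetical sketch (not the SDK's actual algorithm): derive the decision from the trace ID itself, so every participant that sees the same trace ID makes the same call.

```javascript
// Hypothetical sketch of a trace-ID-ratio sampling decision. Because the
// decision is a pure function of the trace ID, it is deterministic: any
// service sampling the same trace at the same ratio agrees.
function shouldSample(traceId, ratio) {
  // use the lower 8 bytes (16 hex chars) of the 128-bit trace ID
  const lower = parseInt(traceId.slice(16, 32), 16);
  return lower / 0xffffffffffffffff < ratio;
}

const hot = '0af7651916cd43dd00000000000000ff';  // tiny lower half
const cold = '0af7651916cd43ddffffffffffffffff'; // maximal lower half

console.log(shouldSample(hot, 0.1));  // true — falls below the 10% cutoff
console.log(shouldSample(cold, 0.1)); // false — dropped
```

The parentbased_ variant adds one rule on top: if a parent span already made a decision, inherit it instead of re-deciding.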
BatchSpanProcessor delays vs SimpleSpanProcessor — BatchSpanProcessor (the default) buffers spans and exports them in batches, so spans can take up to ~5 seconds (the default scheduled delay) to appear. Use SimpleSpanProcessor in development for immediate export. In production, stick with BatchSpanProcessor to avoid overwhelming the collector with one request per span.
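A toy model makes the trade-off concrete. The class names and flush-on-size-only behavior are simplifications (the real BatchSpanProcessor also flushes on a timer), but the export-call pattern is the point:

```javascript
// Toy model of span processors. CountingExporter records how many spans
// arrive per export() call — Simple exports per span, Batch per flush.
class CountingExporter {
  constructor() { this.exports = []; }
  export(spans) { this.exports.push(spans.length); }
}

class SimpleProcessor {
  constructor(exporter) { this.exporter = exporter; }
  onEnd(span) { this.exporter.export([span]); } // one call per span
}

class BatchProcessor {
  constructor(exporter, maxBatch = 3) {
    this.exporter = exporter;
    this.buffer = [];
    this.maxBatch = maxBatch;
  }
  onEnd(span) {
    this.buffer.push(span);
    if (this.buffer.length >= this.maxBatch) this.forceFlush();
  }
  forceFlush() {
    if (this.buffer.length) {
      this.exporter.export(this.buffer);
      this.buffer = [];
    }
  }
}

const simpleBackend = new CountingExporter();
const simple = new SimpleProcessor(simpleBackend);
for (let i = 0; i < 5; i++) simple.onEnd({ id: i });
console.log(simpleBackend.exports); // [1, 1, 1, 1, 1] — five calls

const batchBackend = new CountingExporter();
const batch = new BatchProcessor(batchBackend);
for (let i = 0; i < 5; i++) batch.onEnd({ id: i });
batch.forceFlush(); // like sdk.shutdown() — flush so the tail isn't lost
console.log(batchBackend.exports); // [3, 2] — two calls
```

The final forceFlush is also why calling sdk.shutdown() on SIGTERM matters: without it, buffered spans from the last few seconds are silently dropped.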
Missing database spans despite SQLAlchemy/pg instrumentation — verify the instrumentation version matches your library version. @opentelemetry/instrumentation-pg v0.40+ requires pg v8.x. Check the package’s README for version compatibility.
For related observability issues, see Fix: AWS CloudWatch Logs Not Appearing and Fix: GitHub Actions Process Completed with Exit Code 1.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.