OpenTelemetry Security Observability Tutorial [2026]
Bottom Line
Treat security events as first-class telemetry. With structured auth-failure logs and spans, the OpenTelemetry Collector can redact secrets, route hot events, and turn them into counters fast enough for alerting.
Key Takeaways
- Use OTLP/HTTP on port 4318 to ship traces, metrics, and logs from one service
- Route auth.failure logs into a dedicated pipeline with the routing connector
- Redact leaked headers or tokens in the Collector before data leaves the node
- Count failed logins with the count connector to drive near-real-time alerts
Real-time security observability works best when detection data is emitted by the application, not reconstructed hours later from fragmented logs. In this walkthrough, you’ll build a small OpenTelemetry pipeline that captures failed login attempts as traces, logs, and metrics; redacts sensitive fields in the Collector; and turns those events into a counter you can alert on immediately. The stack is current as of May 6, 2026 and uses official OpenTelemetry Go and Collector components.
Prerequisites
Before you start
- Go 1.23+, which matches the current OpenTelemetry Go getting-started guidance.
- Docker, so you can run the Collector locally without building a custom binary.
- A basic HTTP service you can modify. This tutorial uses a minimal Go login endpoint.
- The OpenTelemetry Collector Contrib image, because this setup uses the routing, count, and redaction components.
- One caution: OpenTelemetry Go logs are currently Beta, and the Collector routing connector is still alpha, so validate config changes in staging before promoting them.
Bottom Line
Emit security events as structured telemetry at the request boundary, then let the Collector redact, route, and aggregate them. That gives you faster detection without binding your application to a single SIEM or vendor pipeline.
Step 1: Start the Collector
The Collector is the control plane for this pattern. It receives OTLP data, masks risky attributes, routes high-value security logs into a dedicated pipeline, and emits a metric every time an auth failure appears. That metric becomes your alert primitive.
Create the Collector config
extensions:
  health_check: {}

receivers:
  otlp:
    protocols:
      http:

processors:
  batch: {}
  resource/security:
    attributes:
      - key: security.pipeline
        value: realtime
        action: upsert
  redaction/security:
    allow_all_keys: true
    blocked_key_patterns:
      - ".*header.*"
      - ".*token.*"
      - ".*password.*"
    summary: debug

connectors:
  routing/security:
    default_pipelines: [logs/default]
    table:
      - context: log
        condition: 'attributes["security.event"] == "auth.failure"'
        pipelines: [logs/security]
  count/security:
    logs:
      security_auth_failures_total:
        description: Count failed authentication log records.
        conditions:
          - 'attributes["security.event"] == "auth.failure"'

exporters:
  debug/default:
    verbosity: basic
  debug/security:
    verbosity: detailed

service:
  extensions: [health_check]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [resource/security, batch]
      exporters: [debug/default]
    logs/in:
      receivers: [otlp]
      processors: [resource/security, redaction/security, batch]
      exporters: [routing/security, count/security]
    logs/default:
      receivers: [routing/security]
      exporters: [debug/default]
    logs/security:
      receivers: [routing/security]
      exporters: [debug/security]
    metrics/security:
      receivers: [count/security]
      exporters: [debug/default]
Run the Collector
docker run --rm \
-p 4318:4318 \
-p 13133:13133 \
-v "$(pwd)/otelcol.yaml:/etc/otelcol-contrib/config.yaml" \
otel/opentelemetry-collector-contrib:0.150.0
This uses the official Contrib image and the documented config path for Docker-based Collector runs. If you already have a central Collector, keep the same logical pipeline but move the exporters from debug to your real backend.
Step 2: Instrument the Go Service
Now wire a Go login service to emit traces, metrics, and logs over OTLP/HTTP. The official Go docs support this shape directly: traces via otlptracehttp, metrics via otlpmetrichttp, and logs via otlploghttp.
Install the dependencies
go get go.opentelemetry.io/contrib/bridges/otelslog \
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp \
go.opentelemetry.io/otel \
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp \
go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp \
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp \
go.opentelemetry.io/otel/sdk/log \
go.opentelemetry.io/otel/sdk/metric \
go.opentelemetry.io/otel/sdk/resource \
go.opentelemetry.io/otel/sdk/trace
Add OpenTelemetry setup and the login handler
package main

import (
	"context"
	"errors"
	"log/slog"
	"net/http"
	"time"

	"go.opentelemetry.io/contrib/bridges/otelslog"
	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/codes"
	"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp"
	"go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
	logglobal "go.opentelemetry.io/otel/log/global"
	"go.opentelemetry.io/otel/metric"
	sdklog "go.opentelemetry.io/otel/sdk/log"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

var (
	tracer       = otel.Tracer("auth-api")
	meter        = otel.Meter("auth-api")
	logger       *slog.Logger
	failedLogins metric.Int64Counter
)
func setupOTel(ctx context.Context) (func(context.Context) error, *sdklog.LoggerProvider, error) {
	res := resource.NewWithAttributes("",
		attribute.String("service.name", "auth-api"),
		attribute.String("deployment.environment.name", "prod"),
	)

	traceExp, err := otlptracehttp.New(ctx,
		otlptracehttp.WithEndpoint("localhost:4318"),
		otlptracehttp.WithInsecure(),
	)
	if err != nil {
		return nil, nil, err
	}

	metricExp, err := otlpmetrichttp.New(ctx,
		otlpmetrichttp.WithEndpoint("localhost:4318"),
		otlpmetrichttp.WithInsecure(),
	)
	if err != nil {
		return nil, nil, err
	}

	logExp, err := otlploghttp.New(ctx,
		otlploghttp.WithEndpoint("localhost:4318"),
		otlploghttp.WithInsecure(),
	)
	if err != nil {
		return nil, nil, err
	}

	tp := sdktrace.NewTracerProvider(
		sdktrace.WithBatcher(traceExp),
		sdktrace.WithResource(res),
	)
	mp := sdkmetric.NewMeterProvider(
		sdkmetric.WithResource(res),
		sdkmetric.WithReader(sdkmetric.NewPeriodicReader(metricExp, sdkmetric.WithInterval(5*time.Second))),
	)
	lp := sdklog.NewLoggerProvider(
		sdklog.WithResource(res),
		sdklog.WithProcessor(sdklog.NewBatchProcessor(logExp)),
	)

	otel.SetTracerProvider(tp)
	otel.SetMeterProvider(mp)
	logglobal.SetLoggerProvider(lp)

	return func(ctx context.Context) error {
		return errors.Join(tp.Shutdown(ctx), mp.Shutdown(ctx), lp.Shutdown(ctx))
	}, lp, nil
}
func loginHandler(w http.ResponseWriter, r *http.Request) {
	ctx, span := tracer.Start(r.Context(), "login.attempt")
	defer span.End()

	user := r.URL.Query().Get("user")
	otp := r.Header.Get("X-OTP")
	authHeader := r.Header.Get("Authorization")

	if user != "alice" || otp != "246810" {
		span.SetAttributes(
			attribute.String("security.event", "auth.failure"),
			attribute.String("enduser.id", user),
			attribute.String("client.address", r.RemoteAddr),
			attribute.Int("http.response.status_code", http.StatusUnauthorized),
		)
		span.SetStatus(codes.Error, "invalid_credentials")
		failedLogins.Add(ctx, 1, metric.WithAttributes(
			attribute.String("auth.factor", "password+otp"),
		))
		logger.WarnContext(ctx, "authentication failed",
			slog.String("security.event", "auth.failure"),
			slog.String("security.severity", "high"),
			slog.String("enduser.id", user),
			slog.String("client.address", r.RemoteAddr),
			slog.String("auth.header", authHeader),
		)
		http.Error(w, "unauthorized", http.StatusUnauthorized)
		return
	}

	logger.InfoContext(ctx, "authentication succeeded",
		slog.String("security.event", "auth.success"),
		slog.String("enduser.id", user),
	)
	w.WriteHeader(http.StatusOK)
	_, _ = w.Write([]byte("ok"))
}
func main() {
	ctx := context.Background()
	shutdown, lp, err := setupOTel(ctx)
	if err != nil {
		panic(err)
	}
	defer func() { _ = shutdown(context.Background()) }()

	failedLogins, err = meter.Int64Counter("security.auth.failures")
	if err != nil {
		panic(err)
	}
	logger = otelslog.NewLogger("auth-api", otelslog.WithLoggerProvider(lp), otelslog.WithSource(true))

	mux := http.NewServeMux()
	mux.HandleFunc("/login", loginHandler)
	if err := http.ListenAndServe(":8080", otelhttp.NewHandler(mux, "auth-api")); err != nil {
		panic(err)
	}
}
The handler deliberately logs the raw auth.header so you can prove the Collector redaction path works. In a real service, avoid emitting secrets at all; use Collector masking as a safety net, not your primary control.
Step 3: Route, Redact, and Count Events
The important design choice is that you are not treating security telemetry as one generic log stream. You are classifying it at emission time and letting the Collector turn that classification into routing and metrics.
- Traces preserve request context and make it easy to pivot from a suspicious login to upstream latency, retries, or dependency errors.
- Logs capture the event itself with rich attributes such as enduser.id, client.address, and security.severity.
- Metrics provide cheap, fast alert signals. The count connector turns repeated auth failures into a counter without changing app logic again.
- Redaction happens centrally, which is critical when multiple teams and services emit telemetry inconsistently.
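As coverage grows, the routing table extends with one condition per classified event type. A sketch of that extension, where the password.reset value is a hypothetical example rather than a standard attribute value:

```yaml
connectors:
  routing/security:
    default_pipelines: [logs/default]
    table:
      - context: log
        condition: 'attributes["security.event"] == "auth.failure"'
        pipelines: [logs/security]
      # Hypothetical second event type, following the same convention.
      - context: log
        condition: 'attributes["security.event"] == "password.reset"'
        pipelines: [logs/security]
```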
Routing on a stable attribute such as security.event=auth.failure is safer than trying to route on usernames, paths with IDs, or free-form messages.
Step 4: Verify the Output
With the Collector running and the Go service started, send a failed login request. You want to confirm four things: the app returns 401, the trace is emitted, the log is routed to the security pipeline, and the auth header gets masked before export.
go run .
curl -i \
-H 'Authorization: Bearer demo-secret-token' \
-H 'X-OTP: wrong' \
'http://localhost:8080/login?user=bob'
Expected result
- The HTTP response should be 401 Unauthorized.
- The Collector should print a trace batch for the failed request.
- The detailed security log output should include security.event=auth.failure.
- The sensitive header should be masked or summarized by the redaction processor, not passed through intact.
- The metrics pipeline should emit security_auth_failures_total with a value of 1.
Logs: security.event=auth.failure
Logs: auth.header=****
Metrics: security_auth_failures_total = 1
You can also verify Collector liveness through the health extension:
curl http://localhost:13133/
If that endpoint is healthy but you still see no telemetry, the problem is usually an endpoint mismatch, missing log provider initialization, or a routing condition that never matches.
Troubleshooting and What's Next
Troubleshooting top 3
- No logs arrive, but traces do. In Go, traces and logs use separate SDK setup. If you forget to create a LoggerProvider and register it globally, the log path becomes a no-op even while traces look healthy.
- The routing connector never sends records to the security pipeline. Check the exact attribute key and value on the emitted log record. If your app emits security.event=auth_failure but the Collector condition expects auth.failure, nothing matches.
- Secrets still appear in exported data. The redaction processor only works on fields that reach the Collector. If a secret is embedded inside an unstructured message body, convert that message to structured attributes first, then block the risky keys or values.
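A low-tech guard against the second failure mode is to define event names once and reference the constants everywhere the service emits telemetry; a sketch, with hypothetical constant names:

```go
package main

import "fmt"

// Centralizing event names guards against the drift described above,
// where the app emits "auth_failure" while the Collector condition
// expects "auth.failure". These constants are an illustrative
// convention, not part of any OpenTelemetry API.
const (
	EventAuthFailure = "auth.failure"
	EventAuthSuccess = "auth.success"
)

func main() {
	// The Collector routing condition is an exact string comparison,
	// so a near-miss spelling silently routes nothing.
	fmt.Println(EventAuthFailure == "auth_failure") // false
	fmt.Println(EventAuthFailure == "auth.failure") // true
}
```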
What's next
- Replace the debug exporters with your SIEM, data lake, or managed observability backend once the local behavior is correct.
- Add more security events, such as password resets, MFA challenges, privilege changes, and API key failures, using the same security.event pattern.
- Layer alerting on top of security_auth_failures_total, or export it to a metrics backend for rate-based detections.
- Review the official docs for deeper production hardening: OpenTelemetry Go instrumentation, Collector configuration, transforming telemetry, and Collector installation.
The key architectural move is simple: security-relevant behavior must be emitted as telemetry at the moment it happens. Once you do that, OpenTelemetry stops being just observability plumbing and becomes a practical detection substrate.
Frequently Asked Questions
Can OpenTelemetry replace a SIEM for security monitoring?
Not by itself. OpenTelemetry standardizes how security events are emitted, redacted, and routed, but detection rules, correlation, and case management still belong to your SIEM or backend. The pattern in this tutorial feeds those systems faster without binding your application to any one of them.
Should security events be logs, spans, or metrics in OpenTelemetry?
Ideally all three, each in its role: spans preserve request context, logs carry the structured event attributes, and metrics give you cheap, fast alert signals such as a failed-login counter.
How do I redact tokens before OpenTelemetry exports them?
Use the Collector redaction processor with blocked_key_patterns or blocked_values for tokens, headers, and passwords so accidental leaks are masked before they leave the Collector.
Why are my OpenTelemetry logs not showing up in the Collector?
The Go logs path needs its own setup: a LoggerProvider, a log exporter such as otlploghttp, and either a bridge like otelslog or another supported log bridge. If traces work but logs do not, that missing provider setup is usually the cause.