Security Deep-Dive

eBPF for Kubernetes v1.34+: Real-Time Security Guide

Dillip Chowdary
Tech Entrepreneur & Innovator · April 17, 2026 · 10 min read

As of April 17, 2026, Kubernetes 1.34 is still actively supported, and the current Cilium compatibility matrix explicitly lists Kubernetes 1.34 as tested. That makes it a solid target for teams that want Kubernetes-native APIs with an eBPF data plane underneath.

This tutorial shows a practical pattern: use Cilium as the eBPF-powered CNI, apply a strict namespace default-deny posture, allow only the traffic you want, and verify every decision with Hubble flow telemetry. The result is real-time network security that is easier to audit than ad hoc iptables rules and far more observable than blind allowlists.

Key takeaway

The winning pattern in Kubernetes v1.34+ is not just writing NetworkPolicy objects. It is pairing Kubernetes-native policy with an eBPF enforcement plane and live flow visibility, so you can see allowed traffic, blocked traffic, DNS behavior, and policy regressions as they happen.

Prerequisites

Before you begin

  • A running Kubernetes v1.34+ cluster.
  • A Linux kernel that meets current Cilium requirements. The stable docs list 5.10+ as the minimum baseline.
  • kubectl, helm, and cluster-admin access.
  • An existing cluster where you can replace or install the CNI.
  • Time to validate policy behavior from both the data plane and the application layer.

Reference docs: Cilium system requirements, Kubernetes compatibility, and Kubernetes NetworkPolicy.

1. Validate the baseline

First, confirm that the cluster is on a supported Kubernetes release and that your nodes expose a recent enough kernel. This matters because eBPF features are kernel-backed, not just a Kubernetes API switch.

kubectl version
kubectl get nodes -o wide
uname -r

You want to see a server version in the 1.34.x or newer line and node kernels that satisfy Cilium's requirement. If you are migrating from another CNI, stop here and plan the cutover cleanly. Installing Cilium on top of a conflicting networking stack is the fastest route to ambiguous failures.
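If you are planning that cutover, a quick pre-flight sketch like the following can surface leftovers before Cilium goes in. The CNI config path is the conventional location and the daemonset names vary by distro, so treat these as inspection commands to adapt, not a checklist:

```shell
# Run on each node: /etc/cni/net.d is the conventional CNI config directory;
# any leftover config files from the old CNI belong to the cutover plan.
ls -l /etc/cni/net.d/

# Look for daemonsets from a previous CNI still running in kube-system
# (names like calico-node, kube-flannel, or aws-node vary by distribution).
kubectl -n kube-system get daemonsets
```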

2. Install Cilium and Hubble

The simplest defensible setup is Cilium with Hubble enabled from day one. Cilium handles enforcement through eBPF programs attached in the kernel; Hubble exposes flow and verdict data so you can prove your policy works.

helm repo add cilium https://helm.cilium.io/
helm repo update

helm install cilium cilium/cilium \
  --namespace kube-system \
  --create-namespace \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true

Then install the cilium CLI and check health. The official quick start also supports installing Cilium through the dedicated cilium CLI (cilium install) directly, if that is your standard operational path.

cilium status --wait
cilium version
cilium hubble port-forward &

If you are designing for maximum performance later, Cilium also supports kube-proxy replacement. For a first security rollout, keep the topology simple and get policy correctness and observability right before you optimize the service path.
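When you do reach that optimization stage, kube-proxy replacement is a Helm values change rather than a reinstall. A hedged sketch, assuming an existing Helm-managed install; the API server host and port are placeholders you must substitute, since Cilium can no longer rely on kube-proxy's in-cluster service routing to reach the API server:

```shell
# Sketch only: enable kube-proxy replacement on an existing install.
# API_SERVER_HOST and API_SERVER_PORT are placeholders for your cluster.
helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --reuse-values \
  --set kubeProxyReplacement=true \
  --set k8sServiceHost=${API_SERVER_HOST} \
  --set k8sServicePort=${API_SERVER_PORT}
```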

3. Deploy a demo workload

Use a small namespace with one client and one API. The client will be allowed to call only the API service, and anything else should be dropped or denied once policy lands.

kubectl create namespace security-demo

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: security-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: hashicorp/http-echo:1.0
        args:
        - -listen=:5678
        - -text=secure-api
        ports:
        - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: security-demo
spec:
  selector:
    app: api
  ports:
  - port: 8080
    targetPort: 5678
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
  namespace: security-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
      - name: client
        image: curlimages/curl:8.8.0
        command: ['sleep', '3600']
EOF

Wait until both pods are ready before you add restrictions.

kubectl -n security-demo get pods -w
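If you prefer a non-interactive gate over watching pod status, kubectl wait blocks until both deployments report the Available condition; the timeout value here is an arbitrary example:

```shell
kubectl -n security-demo wait --for=condition=Available \
  deployment/api deployment/client --timeout=120s
```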

4. Apply default-deny and allow rules

Start from a zero-trust posture. Kubernetes NetworkPolicy gives you a portable API, while Cilium enforces that policy with eBPF in the data path.

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: security-demo
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-to-api-and-dns
  namespace: security-demo
spec:
  podSelector:
    matchLabels:
      app: client
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: api
    ports:
    # NetworkPolicy ports match the backend pod port (targetPort 5678),
    # not the Service port 8080.
    - protocol: TCP
      port: 5678
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-from-client
  namespace: security-demo
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: client
    ports:
    # NetworkPolicy ports match the pod's containerPort (5678),
    # not the Service port 8080.
    - protocol: TCP
      port: 5678
EOF

This is the pattern most teams should normalize around. The namespace is closed by default, DNS is explicitly reopened, and the only application path left open is client -> api:8080.

5. Observe flows in real time

Now verify not just application behavior, but kernel-enforced verdicts. This is where eBPF becomes operationally useful instead of academically interesting.

kubectl -n security-demo exec deploy/client -- curl -s http://api:8080
kubectl -n security-demo exec deploy/client -- curl -sI --max-time 5 https://example.com

hubble observe --namespace security-demo --follow
hubble observe --namespace security-demo --verdict DROPPED

The first curl should succeed and return the demo payload. The second should fail because egress to the public internet was never allowed. In Hubble, you should see accepted flows for DNS and API traffic, and dropped flows for disallowed egress. That feedback loop is the difference between policy intent and verified policy reality.

Verification and expected output

Run both control-plane and data-plane checks before you call the rollout complete.

cilium status
cilium connectivity test
kubectl get networkpolicy -n security-demo
hubble observe --namespace security-demo --last 20

Expected results:

  • cilium status shows agents, operator, and Hubble components healthy.
  • cilium connectivity test completes with all tests successful; the exact test count in the final summary varies by Cilium version and which features are enabled.
  • The client pod can reach http://api:8080.
  • The client pod cannot reach arbitrary external destinations.
  • Hubble shows FORWARDED events for allowed traffic and DROPPED events for blocked flows.

If your team pastes Hubble output into tickets or chat, sanitize sensitive hostnames, bearer tokens, and internal IPs first with TechBytes' Data Masking Tool.
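Even without the tool, a minimal sed pass is better than pasting raw output. This sketch redacts IPv4 addresses only; it will not catch tokens or hostnames, and the sample line is hypothetical, so pipe real hubble observe output through it instead:

```shell
# Redact IPv4 addresses from flow output before sharing it.
# The echoed line is a hypothetical sample; pipe `hubble observe` output in practice.
echo "security-demo/client -> 10.0.1.23:5678 FORWARDED" \
  | sed -E 's/([0-9]{1,3}\.){3}[0-9]{1,3}/REDACTED/g'
# → security-demo/client -> REDACTED:5678 FORWARDED
```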

Troubleshooting top 3

1. Policies exist, but traffic still flows everywhere

The most common cause is that the cluster is not actually enforcing NetworkPolicy with the intended CNI. Confirm Cilium is the active data plane and that there is no conflicting legacy CNI path still attached on nodes.
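Two quick checks help here, assuming a standard Cilium install; the daemonset names of legacy CNIs vary by distribution:

```shell
# `cilium status` reports how many cluster pods Cilium actually manages;
# a count below the total suggests another data plane still handles some pods.
cilium status | grep -i "managed by cilium"

# Any surviving daemonset from the old CNI in kube-system is a red flag.
kubectl -n kube-system get daemonsets
```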

2. Everything broke after default-deny

That usually means DNS was blocked. The Kubernetes docs explicitly call out that deny-all egress also cuts DNS unless you add an allow rule. Verify the DNS namespace and labels used by your distro, then adjust the policy if your DNS pods are not in the expected shape.
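To confirm what the DNS allow rule must actually match, inspect the DNS pods and namespace labels directly. The k8s-app=kube-dns label is the convention even for CoreDNS, but verify it on your distro rather than assuming:

```shell
# Show the labels actually present on the cluster DNS pods.
kubectl -n kube-system get pods -l k8s-app=kube-dns --show-labels

# Confirm kube-system carries the namespace label the policy selects on.
kubectl get namespace kube-system --show-labels
```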

3. Hubble shows nothing useful

Check that Hubble Relay is enabled, that the CLI is connected through cilium hubble port-forward, and that you are querying the correct namespace. If flow volume is high, add filters for namespace, pod, verdict, or protocol instead of assuming telemetry is missing.
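Combining standard hubble observe filters usually surfaces the flows you care about without drowning in volume; this sketch narrows to drops, one pod, and TCP:

```shell
# Drops only, for a single pod, TCP only, most recent 20 flows.
hubble observe --namespace security-demo \
  --pod security-demo/client \
  --verdict DROPPED --protocol TCP --last 20
```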

What's next

Once this baseline is stable, move to higher-value controls: Cilium L7 HTTP policy for sensitive services, kube-proxy replacement where performance justifies the operational cost, and CI gates that run cilium connectivity test after every network policy change. Keep your manifests clean as they grow; TechBytes' Code Formatter is a simple way to normalize YAML and shell snippets before you publish them internally.
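As a starting point for the L7 step, a CiliumNetworkPolicy sketch like the following narrows the api workload to HTTP GET requests. The names mirror the demo above, but treat the rule shape as a template to adapt, not a drop-in production policy:

```shell
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-l7-allow-get
  namespace: security-demo
spec:
  endpointSelector:
    matchLabels:
      app: api
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: client
    toPorts:
    - ports:
      - port: "5678"   # pod port, same convention as NetworkPolicy
        protocol: TCP
      rules:
        http:
        - method: GET
EOF
```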

The main lesson is straightforward: on Kubernetes v1.34+, eBPF is most powerful when you treat it as a continuously verified security substrate. Standard policy objects define intent, Cilium enforces that intent in-kernel, and Hubble tells you in real time whether the cluster is actually behaving the way you designed it.
