Security

Sysdig Runtime Security: Monitoring Claude Code and Gemini Agents

Dillip Chowdary

March 24, 2026 • 10 min read

When AI agents start writing code and executing system commands, the traditional security perimeter disappears. Sysdig's eBPF engine is now watching the agents.

The rise of **Agentic AI**—exemplified by tools like **Claude Code**, **Gemini Agents**, and **OpenClaw**—has introduced a new category of risk to the enterprise. Unlike traditional chatbots, these agents have the authority to interact with file systems, execute shell commands, and even manage cloud infrastructure. As of March 2026, **Sysdig** has addressed this challenge by launching specialized runtime security primitives for the agentic era.

The "Agent-in-the-Middle" Risk

Traditional EDR (Endpoint Detection and Response) and CWPP (Cloud Workload Protection Platform) tools are designed to detect human-driven attacks or malicious scripts. An AI agent's actions, however, can be nearly indistinguishable from a legitimate developer's activity. If an agent is compromised via **Prompt Injection**, it could be tricked into exfiltrating sensitive data or creating backdoors, all while appearing to "help" the user.

This is the "Agent-in-the-Middle" risk: the gap between the agent's high-level intent and its low-level system execution. Sysdig's new module closes this gap by providing deep visibility into what the agent is *actually* doing at the kernel level.

Leveraging eBPF for Agent Integrity

Sysdig utilizes **eBPF (Extended Berkeley Packet Filter)** to intercept every system call made by an agent process. By mapping these syscalls back to the agent's "thought process" (its prompt and output logs), Sysdig can detect deviations in real time: an agent that was asked to refactor code but suddenly opens an outbound network connection stands out immediately at the kernel level.
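The core idea above — comparing observed syscalls against the profile implied by the agent's task — can be sketched in a few lines. This is an illustrative toy, not Sysdig's actual API: the event shape, the task names, and the per-task allowlists are all assumptions for the example.

```python
# Hypothetical per-task syscall profiles: which kernel-level actions
# are expected when an agent is doing a given kind of work.
EXPECTED_SYSCALLS = {
    "code_edit": {"openat", "read", "write", "close", "stat"},
    "run_tests": {"openat", "read", "write", "close", "execve", "clone"},
}

def is_expected(task: str, event: dict) -> bool:
    """Return True if the syscall fits the declared task's profile."""
    return event["syscall"] in EXPECTED_SYSCALLS.get(task, set())

def scan(task: str, events: list[dict]) -> list[dict]:
    """Return the events that deviate from the task's profile."""
    return [e for e in events if not is_expected(task, e)]

# An agent asked to edit code should not be opening outbound sockets.
events = [
    {"syscall": "openat", "args": "/src/app.py"},
    {"syscall": "connect", "args": "198.51.100.7:443"},
]
alerts = scan("code_edit", events)
```

In a real deployment this logic runs in the kernel via eBPF programs rather than in userspace Python, but the policy question is the same: does this syscall belong to the work the agent claimed it was doing?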

Integrating with the AI Stack

The beauty of the Sysdig approach is its integration with the major AI platforms. It provides native plugins for **Anthropic's MCP (Model Context Protocol)** and **Google Cloud's Vertex AI**, allowing security teams to correlate kernel events with specific LLM traces. This means you can see exactly which user prompt triggered a suspicious system call.
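The correlation step described above can be sketched as a simple join between kernel events and LLM trace spans, matching on session and a short time window. The field names (`session_id`, `ts`, `prompt`) and the five-second window are assumptions made for this illustration, not the actual plugin schema.

```python
def correlate(kernel_events: list[dict],
              llm_traces: list[dict],
              window_s: float = 5.0) -> list[tuple]:
    """For each kernel event, find the most recent prompt in the same
    session within window_s seconds before the event."""
    matches = []
    for ev in kernel_events:
        candidates = [
            t for t in llm_traces
            if t["session_id"] == ev["session_id"]
            and 0 <= ev["ts"] - t["ts"] <= window_s
        ]
        if candidates:
            trace = max(candidates, key=lambda t: t["ts"])
            matches.append((ev, trace["prompt"]))
    return matches

# A destructive syscall traced back to the user prompt that caused it.
traces = [{"session_id": "s1", "ts": 100.0, "prompt": "clean up temp files"}]
events = [{"session_id": "s1", "ts": 101.2, "syscall": "unlinkat"}]
pairs = correlate(events, traces)
```

The payoff is forensic: when an alert fires, the security team sees not just "process X called `unlinkat`" but the natural-language instruction that put the agent on that path.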

Secure Your Agents

Don't deploy AI agents blindly. Use **ByteNotes** to document your agent security policies and keep your runtime environment safe.

Conclusion: Trust But Verify

As AI agents become indispensable to the modern developer, we must move toward a model of "Trust But Verify." Sysdig's runtime security provides the "Verify" part of that equation. By monitoring the actual execution of agentic intent, enterprises can embrace the productivity of AI without sacrificing the integrity of their systems. For the CISO of 2026, eBPF-powered agent monitoring is no longer optional—it's a requirement.