Security Deep-Dive

Fine-Grained RBAC for AI Agents with OPA/Rego [2026]

Dillip Chowdary
Tech Entrepreneur & Innovator · May 03, 2026 · 9 min read

Bottom Line

Treat authorization for AI agents as data plus policy, not hard-coded conditionals. With OPA and Rego v1, you can enforce tenant isolation, action-level permissions, and test coverage without rewriting your application logic.

Key Takeaways

  • Use default allow := false so missing rules fail closed by design
  • Model access as agent-to-role bindings plus role-to-permission mappings
  • Evaluate decisions locally with opa eval before embedding OPA in a service
  • Ship Rego with tests, coverage, and formatting checks using opa test and opa fmt

AI agents rarely need broad, long-lived access. A retrieval agent may read research notes, a billing agent may write invoices, and neither should cross tenant boundaries or call tools outside its lane. Open Policy Agent gives you a clean way to separate those decisions from application code. In this walkthrough, you will build fine-grained RBAC for agents with Rego v1, verify decisions locally, and package the policy so it is easy to test and evolve.

Prerequisites

Before you start

  • Install the current OPA CLI from the official project docs.
  • Create an empty project folder with policy.rego, data.json, and a few input files.
  • Use Rego v1 syntax with import rego.v1.
  • Have one concrete authorization use case in mind, such as tool access, memory access, or API actions.
  • If your sample requests contain secrets or customer data, sanitize them first with TechBytes' Data Masking Tool.

Step 1: Model your RBAC data

For agent systems, classic user-role mappings are not enough. You usually need tenant awareness and a narrower permission shape than simple read/write. A practical model has two layers:

  1. Agent-to-role bindings
  2. Role-to-permission mappings

Store both in data.json. The example below scopes access by resource kind, action, and a named scope.

{
  "role_bindings": {
    "agent-writer": ["doc_writer"],
    "agent-router": ["router"],
    "agent-admin": ["admin"]
  },
  "roles": {
    "doc_writer": {
      "permissions": [
        {"resource": "memory", "action": "read", "scope": "research"},
        {"resource": "memory", "action": "write", "scope": "research"}
      ]
    },
    "router": {
      "permissions": [
        {"resource": "tool", "action": "invoke", "scope": "search"}
      ]
    },
    "admin": {
      "permissions": [
        {"resource": "memory", "action": "read", "scope": "*"},
        {"resource": "memory", "action": "write", "scope": "*"},
        {"resource": "tool", "action": "invoke", "scope": "*"}
      ]
    }
  }
}

This structure stays readable as the system grows. If you later add environment, model, or cost ceilings, they can become additional permission attributes instead of another round of hard-coded conditionals.
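
To make the two-layer lookup concrete, here is a small Python sketch of the check the Rego policy will perform. It is an illustration of the data model only, not part of the OPA workflow; the data is a trimmed copy of the data.json above.

```python
import json

# The two-layer RBAC model from data.json: agent -> roles -> permissions.
DATA = json.loads("""
{
  "role_bindings": {"agent-writer": ["doc_writer"]},
  "roles": {
    "doc_writer": {
      "permissions": [
        {"resource": "memory", "action": "read", "scope": "research"},
        {"resource": "memory", "action": "write", "scope": "research"}
      ]
    }
  }
}
""")

def scope_match(required: str, allowed: str) -> bool:
    # "*" grants every scope; otherwise the scopes must match exactly.
    return allowed == "*" or required == allowed

def is_allowed(actor: dict, action: str, resource: dict) -> bool:
    # Deny by default: anything without an explicit matching permission fails.
    if actor.get("type") != "agent" or actor.get("tenant") != resource.get("tenant"):
        return False
    for role in DATA["role_bindings"].get(actor.get("id"), []):
        for perm in DATA["roles"][role]["permissions"]:
            if (perm["resource"] == resource["kind"]
                    and perm["action"] == action
                    and scope_match(resource["scope"], perm["scope"])):
                return True
    return False
```

The nested loops are exactly what the `some role ... some perm ...` search in the Rego policy expresses declaratively.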

Step 2: Write the Rego policy

Now write a policy that answers one question: should this agent perform this action on this resource? Keep the default closed, then open only the paths you intend to support.

package agent.authz

import rego.v1

default allow := false

allow if {
  input.actor.type == "agent"
  input.actor.tenant == input.resource.tenant

  some role in data.role_bindings[input.actor.id]
  some perm in data.roles[role].permissions

  perm.resource == input.resource.kind
  perm.action == input.action
  scope_match(input.resource.scope, perm.scope)
}

scope_match(required, allowed) if {
  allowed == "*"
}

scope_match(required, allowed) if {
  required == allowed
}

reason := "allowed" if allow

reason := "cross-tenant access blocked" if {
  input.actor.tenant != input.resource.tenant
}

reason := "permission missing" if {
  not allow
  input.actor.tenant == input.resource.tenant
}

decision := {
  "allow": allow,
  "reason": reason,
  "roles": object.get(data.role_bindings, input.actor.id, [])
}

Why this works

  • default allow := false makes undefined cases deny automatically.
  • Tenant equality prevents lateral movement across customer boundaries.
  • some role and some perm let Rego search the binding and permission sets declaratively.
  • The decision object gives callers a stable result shape for logs, audits, and debugging.

Pro tip: Return a structured decision object, not just a boolean. Operations teams will eventually need the reason, matched roles, or policy metadata when an agent is blocked in production.

Step 3: Evaluate decisions locally

Create an allow case first. Save this as allow.json.

{
  "actor": {
    "id": "agent-writer",
    "type": "agent",
    "tenant": "acme"
  },
  "action": "write",
  "resource": {
    "kind": "memory",
    "scope": "research",
    "tenant": "acme"
  }
}

Run a local decision using opa eval.

opa eval -d policy.rego -d data.json -i allow.json "data.agent.authz.decision"

Expected decision value (opa eval wraps it in a JSON result envelope; the object below is the unwrapped value):

{
  "allow": true,
  "reason": "allowed",
  "roles": ["doc_writer"]
}

Now test a deny case with a tenant mismatch in deny.json.

{
  "actor": {
    "id": "agent-writer",
    "type": "agent",
    "tenant": "acme"
  },
  "action": "write",
  "resource": {
    "kind": "memory",
    "scope": "research",
    "tenant": "globex"
  }
}

opa eval -d policy.rego -d data.json -i deny.json "data.agent.authz.decision"

{
  "allow": false,
  "reason": "cross-tenant access blocked",
  "roles": ["doc_writer"]
}

If you only test boolean outcomes, you will miss explainability gaps. Checking the full decision object early avoids that trap.

Step 4: Test, format, and serve

Move from manual checks to repeatable tests. Create policy_test.rego.

package agent.authz_test

import rego.v1
import data.agent.authz

test_writer_can_write_research_memory if {
  authz.allow with input as {
    "actor": {"id": "agent-writer", "type": "agent", "tenant": "acme"},
    "action": "write",
    "resource": {"kind": "memory", "scope": "research", "tenant": "acme"}
  }
}

test_writer_cannot_cross_tenants if {
  not authz.allow with input as {
    "actor": {"id": "agent-writer", "type": "agent", "tenant": "acme"},
    "action": "write",
    "resource": {"kind": "memory", "scope": "research", "tenant": "globex"}
  }
}

test_router_cannot_write_memory if {
  not authz.allow with input as {
    "actor": {"id": "agent-router", "type": "agent", "tenant": "acme"},
    "action": "write",
    "resource": {"kind": "memory", "scope": "research", "tenant": "acme"}
  }
}

Run the test suite, then check coverage.

opa test policy.rego policy_test.rego data.json
opa test --coverage --format=json policy.rego policy_test.rego data.json

Expected output

PASS: 3/3

Before you commit, normalize formatting with opa fmt.

opa fmt -w policy.rego
opa fmt -w policy_test.rego

When you are ready to integrate, run OPA as a service with opa run --server and query its REST API from your application or a sidecar.

opa run --server policy.rego data.json

Watch out: Do not hide authorization inside agent orchestration code after introducing OPA. If some paths enforce policy in Rego and others use inline conditionals, your audit trail and threat model will drift fast.
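
A minimal Python client sketch against OPA's Data API follows. The endpoint path mirrors the package name agent.authz; the authorize helper assumes the server started by the command above is running on the default port 8181, and the helper names are illustrative.

```python
import json
from urllib import request

# Default local OPA server; the path follows the package name agent.authz.
OPA_URL = "http://localhost:8181/v1/data/agent/authz/decision"

def build_payload(actor: dict, action: str, resource: dict) -> bytes:
    # OPA's Data API expects the input document under a top-level "input" key.
    return json.dumps(
        {"input": {"actor": actor, "action": action, "resource": resource}}
    ).encode()

def extract_decision(response_body: bytes) -> dict:
    # The Data API wraps the policy result under a top-level "result" key.
    # If the rule is undefined for this input, "result" is absent: treat as deny.
    doc = json.loads(response_body)
    return doc.get("result", {"allow": False, "reason": "undefined decision"})

def authorize(actor: dict, action: str, resource: dict) -> dict:
    req = request.Request(
        OPA_URL,
        data=build_payload(actor, action, resource),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return extract_decision(resp.read())
```

Treating an absent result as a deny keeps the fail-closed property of the policy intact even when a client queries a rule path that does not exist.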

Troubleshooting and What's next

Top 3 issues

  • Everything denies: Check for a mismatch between input.resource.kind, input.action, and the permission objects in data.json. Rego comparisons are exact.
  • Rules look correct but tests fail: Confirm every module uses import rego.v1 and that your test package imports data.agent.authz rather than redefining data locally.
  • Policy becomes hard to maintain: Split large permission sets into separate data files and add more tests before adding exceptions. Complexity in authorization should move into policy data, not into nested rule branches.

What's next

  • Add attributes beyond roles, such as environment, model family, tool risk, or spend tier, to evolve from pure RBAC toward hybrid policy.
  • Push policy bundles through CI so every policy change runs opa test and coverage checks before deploy.
  • Emit decision logs with agent ID, tenant, action, and reason so blocked operations are easy to trace.
  • If your team edits JSON fixtures often, pair policy work with a formatter workflow such as the TechBytes Code Formatter to keep examples and payloads clean during review.
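
The decision-log bullet above can be sketched as a small helper that flattens the input and decision into one structured entry. The field names are illustrative, not an OPA log format.

```python
import json
import time

def decision_log(input_doc: dict, decision: dict) -> str:
    # Flatten the fields operators need when tracing a blocked agent operation.
    entry = {
        "ts": int(time.time()),
        "agent_id": input_doc["actor"]["id"],
        "tenant": input_doc["actor"]["tenant"],
        "action": input_doc["action"],
        "resource": input_doc["resource"]["kind"],
        "allow": decision["allow"],
        "reason": decision["reason"],
    }
    return json.dumps(entry, sort_keys=True)
```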

The practical pattern is simple: model permissions as data, deny by default, and keep the decision surface narrow. Once you have that baseline, OPA scales cleanly from one agent to dozens of specialized services without turning authorization into application glue code.

Frequently Asked Questions

How do I implement fine-grained RBAC for AI agents with OPA?
Model agent-to-role bindings in data, map roles to narrow permissions, and evaluate every request through a Rego policy. The core pattern is deny by default, then allow only when actor, tenant, action, and resource attributes all match.
Should AI agent authorization use RBAC or ABAC in OPA?
Start with RBAC when your permissions naturally group into stable roles such as writer, router, or admin. Add attributes like tenant, scope, environment, or model tier on top when role checks alone are too coarse; OPA handles that hybrid model well.
How do I test Rego policies before putting them in production?
Write unit tests in a separate _test.rego module and run them with opa test. Add --coverage --format=json to see which policy lines are exercised, then format the files with opa fmt so policy diffs stay readable.
Can OPA explain why an agent request was denied?
Yes. Instead of returning only true or false, expose a structured rule such as decision with fields like allow, reason, and matched roles. That makes audit logs and operator debugging much easier.
