Essay · Vincent Nijjar · April 2026

The Wrong Layer

Amazon Bedrock Guardrails governs language. Agent actions live below it.

Bedrock Guardrails is a language filter.

It evaluates inputs going into a model and outputs coming out. Topics you want blocked. PII you want redacted. Content categories you want removed. Hallucinations you want flagged. AWS’s own AI models do the evaluating.

That is what you are buying.


When a Bedrock agent calls a tool — writes to S3, queries a database, calls an API, sends a message — the action happens at the execution layer. Guardrails is above it. The language context gets evaluated. The tool call goes through.

By the time Guardrails sees anything, the action may already be in flight.

This is where Guardrails was designed to live. AWS built output filtering and agents on the same platform, and the filtering works at the language level. But enterprise buyers are purchasing it as governance, and language filtering and action governance are different products.

AWS shipped a second product in March 2026. Amazon Bedrock AgentCore Policy intercepts tool calls at the execution layer using Cedar rules — deterministic enforcement, not language filtering. That is the right interception point.

It enforces automatically. Policy decides. The human approves nothing. And it is still sold by the same entity running the agents it governs.
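What execution-layer enforcement means, in the smallest possible sketch: the check runs on the structured tool call itself, before dispatch, and involves no model. This is illustrative Python, not the AgentCore Policy API (which uses Cedar rules); all names here are hypothetical.

```python
# Illustrative only: a deterministic policy check at the tool-call
# boundary. The verdict is a set lookup on the structured call, so
# the same input always yields the same answer and there is no
# language surface for an injected prompt to attack.
FORBIDDEN = {
    ("s3", "PutObject"),    # hypothetical rule: no S3 writes
    ("ses", "SendEmail"),   # hypothetical rule: no outbound mail
}

def intercept(tool_call: dict) -> bool:
    """Return True iff the call may proceed to execution."""
    key = (tool_call["service"], tool_call["action"])
    return key not in FORBIDDEN

assert intercept({"service": "dynamodb", "action": "Query"})
assert not intercept({"service": "ses", "action": "SendEmail"})
```

The point of the sketch is the shape, not the rule set: the decision happens below the language layer, before the action is in flight.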


The second problem is the evaluator.

Guardrails uses an AI model to evaluate whether an AI model’s behavior is safe. The same language. The same failure modes. Prompt injection techniques that compromise the principal model have a meaningful probability of compromising the classifier.

The enforcement layer has to be a different kind of thing from the agent. ZLAR’s gate matches patterns against signed rules. It does not reason. An agent can argue with it for a thousand tokens and the gate will not respond, because the gate has no response capability. That is the security property.
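A toy model of that security property (the rule format and names are hypothetical, not ZLAR's actual implementation): the gate is a pattern match over the action string, so adversarial text in the prompt has nothing to talk to.

```python
import fnmatch

# Hypothetical deny rules. In ZLAR the rules are additionally
# signature-verified before loading; that step is not shown here.
DENY_PATTERNS = ["payments.*", "db.delete_*", "email.send"]

def gate(action: str) -> bool:
    """Allow unless the action matches a deny pattern. There is no
    model and no conversational surface in this function: a thousand
    tokens of argument never reach it, because it only ever sees
    the action string."""
    return not any(fnmatch.fnmatch(action, p) for p in DENY_PATTERNS)

assert gate("db.read_orders")
assert not gate("db.delete_orders")
```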


The third problem is structural.

Amazon sells Bedrock. Amazon sells Guardrails. Amazon sells AgentCore Policy. The entity with revenue from the agent is the entity certifying the agent’s behavior as safe.

Financial services solved this in the 1930s. Independent audit means organizational separation: the auditor's incentive structure must be orthogonal to the platform's. AWS's incentive structure runs directly through the platform being governed.


The evidence problem compounds all of this.

A Guardrails-governed action produces a log entry in a vendor-controlled account. A regulator, auditor, or opposing counsel who wants to verify what happened calls AWS.

A governed action needs to produce something portable. Something verifiable after the session ends, after the account closes, after the vendor pivots.

ZLAR produces a v1 Governed Action Receipt: Ed25519-signed, hash-chained. The public key is published. Anyone can verify a receipt without platform access, without credentials, without a support ticket.
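What hash-chaining buys, as a minimal sketch: each receipt commits to the hash of the one before it, so tampering with any entry breaks every later link. Field names are hypothetical, not ZLAR's actual receipt schema, and the Ed25519 signature check over each receipt (against the published public key) is omitted for brevity.

```python
import hashlib
import json

GENESIS = "0" * 64  # illustrative genesis value

def receipt_hash(receipt: dict) -> str:
    """Canonical hash of a receipt (sorted keys for determinism)."""
    return hashlib.sha256(
        json.dumps(receipt, sort_keys=True).encode()).hexdigest()

def verify_chain(receipts: list[dict]) -> bool:
    """Each receipt's 'prev' field must equal the hash of the
    receipt before it; editing any entry invalidates the chain."""
    prev = GENESIS
    for r in receipts:
        if r["prev"] != prev:
            return False
        prev = receipt_hash(r)
    return True

r1 = {"prev": GENESIS, "action": "wire_transfer", "approved_by": "alice"}
r2 = {"prev": receipt_hash(r1), "action": "close_account", "approved_by": "bob"}
assert verify_chain([r1, r2])

r1["approved_by"] = "mallory"   # tamper with the first receipt
assert not verify_chain([r1, r2])
```

Verification needs only the receipts and the public key, which is the portability claim: no platform access, no vendor in the loop.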


To AWS

You have the deployment footprint to do this correctly. The identity layer. The enterprise trust.

AgentCore Policy gets the interception layer right. What it still needs is human routing for consequential decisions and a governance function that is structurally independent of the platform revenue. The second requires a different business. The first is an engineering decision.


To enterprise buyers

Guardrails filters harmful language. Redacts PII. Flags content violations. Buy it for those things.

AgentCore Policy enforces rules at the tool call boundary. That is closer to governance.

For consequential agent actions — the ones that move money, modify records, send communications, make decisions that affect people — you need governance that stops the action before execution, routes it to a named human authority for approval, and produces verifiable evidence of what was authorized and by whom.
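Those three requirements compose into one wrapper: hold the action, route it to a named human, record what was authorized and by whom. A hedged sketch follows; every name is hypothetical, and a real system would also sign and chain the evidence record.

```python
import hashlib
import json
from typing import Callable

def governed_execute(action: dict, approver: str,
                     ask_human: Callable[[dict, str], bool]) -> dict:
    """Hold the action until the named approver answers, then emit
    an evidence record. Hypothetical shape, not ZLAR's API: here
    ask_human stands in for whatever channel routes the decision
    to a person."""
    approved = ask_human(action, approver)   # blocks on a human verdict
    record = {"action": action, "approver": approver, "approved": approved}
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

# Usage: a stand-in approver who declines transfers over $100k.
rec = governed_execute({"tool": "wire_transfer", "amount": 50_000},
                       "jane.cfo", lambda a, who: a["amount"] < 100_000)
assert rec["approved"] and rec["approver"] == "jane.cfo"
```

The order matters: the verdict is obtained before execution, not logged after it.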

Something will go wrong. The question is whether governance stopped it first, or whether you are explaining it afterward.

ZLAR is open source at github.com/ZLAR-AI/ZLAR. Reach me at vincent@zlar.ai.