Architecture Briefing

Structural governance for
autonomous AI agents

The governed path is the fast path.

Vincent Nijjar · Founder · zlar.ai · GitHub · v1.6.0 · Apache 2.0 · Live today

An AI agent erased 30 years of my work.

Photos, files, and a screenplay I'd been writing my entire adult life. Gone in seconds. No gate. No checkpoint. No way to undo it.

Intelligence is a laser — it can illuminate and it can destroy. The gate is a forcefield. You can see through it, but you can't touch what's behind it. The forcefield is made of something the laser cannot affect.

The gate is absolute. The engineering is ongoing.

The error: reasoning and execution are fused.

What agents can do now

  • Execute shell commands
  • Write and delete files
  • Make API calls and deploy code
  • Access databases, send emails
  • Spawn other agents

What sits between decision and action

  • Nothing.
  • No gate. No checkpoint. No evidence.
  • The agent decides AND executes.

Every competitor puts intelligence in the enforcement layer. Intelligence is the attack surface.

Character injection attacks achieve 81–100% ASR against commercial ML-based enforcement classifiers (Hackett et al., LLMSec 2025). Reasoning models achieve 97% autonomous jailbreak success (Hagendorff et al., Nature Communications 2026).

The gate has no intelligence. That is the security property.

A lock doesn't negotiate. A firewall doesn't reason. ZLAR doesn't think — it checks authorization against signed policy. The absence of intelligence makes the enforcement layer structurally resistant to prompt injection, context drift, or adversarial reasoning.

Non-intelligent. Not unintelligent. By design.

Fail-closed: if ZLAR breaks, it blocks. Never passes.
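The fail-closed property can be sketched in a few lines: any error inside the gate resolves to a deny, never an allow. This is an illustration of the property only, not ZLAR's implementation; `check_policy` is a hypothetical stand-in.

```python
def fail_closed_gate(check_policy, action):
    """Return the gate's decision; any internal failure becomes a deny."""
    try:
        decision = check_policy(action)
    except Exception:
        # The gate broke. Block, never pass.
        return "deny"
    # An unrecognized decision is also treated as a failure.
    return decision if decision in ("allow", "deny", "ask") else "deny"

def broken_policy(action):
    raise RuntimeError("policy file corrupted")

print(fail_closed_gate(broken_policy, "rm -rf /data"))  # deny
```

The key design choice: the allow path is the only path that requires everything to work.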

The execution boundary.

1. Load Ed25519-signed policy

Cryptographically signed rules. The agent cannot modify the policy that governs it.

2. Pattern match at the execution boundary

Every tool call intercepted before it reaches the OS. Out-of-band. No SDK required. Any framework.

3. Route: allow, deny, or ask human

Safe actions pass. Dangerous actions blocked. Ambiguous actions routed to human authority.

ALLOW

Tool executes. Audit entry written.

DENY

Tool blocked. Agent told why.

ASK

Human authority notified. Approve or deny.
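The load, match, route flow reduces to pure pattern matching with no model in the loop, so there is nothing for a prompt injection to persuade. A minimal sketch; the rule patterns, the first-match-wins ordering, and the ask-by-default fallback are illustrative, not ZLAR's actual policy format.

```python
import re

# Hypothetical rules from an already-verified signed policy.
# First match wins; no match falls through to "ask" (toward a human).
RULES = [
    ("deny",  re.compile(r"\brm\s+-rf\b")),
    ("allow", re.compile(r"^ls\b")),
    ("allow", re.compile(r"^git status\b")),
]

def route(command: str) -> str:
    for outcome, pattern in RULES:
        if pattern.search(command):
            return outcome
    return "ask"  # ambiguous: route to human authority

print(route("rm -rf /data/backups"))     # deny
print(route("ls -la"))                   # allow
print(route("curl http://example.com"))  # ask
```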

Every decision produces a hash-chained, Ed25519-signed audit entry. This is evidence infrastructure, not logging.
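Writing such an entry can be sketched in three moves: link to the previous entry's hash, hash the payload, sign it. This is a stdlib-only sketch with illustrative field names; HMAC stands in for Ed25519 here purely to keep the example self-contained.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stands in for the Ed25519 private key

def append_entry(chain: list, entry: dict) -> dict:
    """Link the entry to the previous hash, hash it, then sign the payload."""
    entry["prev_hash"] = chain[-1]["entry_hash"] if chain else "genesis"
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    entry["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    chain.append(entry)
    return entry

chain = []
append_entry(chain, {"action": "ls -la", "outcome": "allow"})
append_entry(chain, {"action": "rm -rf /data/backups", "outcome": "deny"})
print(chain[1]["prev_hash"] == chain[0]["entry_hash"])  # True
```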

Bash Hook Gate

Claude Code, Cursor, Windsurf — via PreToolUse hooks.
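In Claude Code's hook model, a PreToolUse hook is a small executable that reads the pending tool call as JSON on stdin and signals the verdict through its exit code: exit 2 blocks the call and feeds stderr back to the model. A simplified sketch of that shape, not ZLAR's actual hook; the `tool_input.command` field follows Claude Code's Bash-tool event format.

```python
import json
import re
import sys

def gate(event: dict) -> int:
    """Return the hook's exit code: 0 allows the tool call, 2 blocks it."""
    command = event.get("tool_input", {}).get("command", "")
    if re.search(r"\brm\s+-rf\b", command):
        # stderr is surfaced back to the agent so it learns why it was blocked
        print("blocked by policy: destructive recursive delete", file=sys.stderr)
        return 2
    return 0

# Wired up as a real hook this would be: sys.exit(gate(json.load(sys.stdin)))
print(gate({"tool_input": {"command": "rm -rf /data/backups"}}))  # 2
print(gate({"tool_input": {"command": "git status"}}))            # 0
```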

MCP Proxy Gate

Any MCP-speaking agent — TCP proxy, same policy engine.

Where ZLAR sits.

Human Authority

Configures policy, receives alerts, approves or denies. Policy is a human artifact — Ed25519-signed, agents cannot modify it.

AI Agent

Claude, GPT, Gemini, any LLM-based agent. The agent doesn't volunteer to be governed. The gate is structural.

ZLAR Gate

Pre-execution interception at the execution boundary. Hooks + MCP proxy. Vendor-agnostic. No SDK changes.

Tools & Resources

Shell, filesystem, APIs, databases, MCP servers.

OS Containment

macOS Seatbelt sandboxing, network isolation.

Governance-grade evidence.

Every action. Every decision. Cryptographically signed and hash-chained.

{
  "timestamp": "2026-03-29T14:23:07Z",
  "agent_id": "claude-code-session-a3f9",
  "domain": "bash",
  "action": "rm -rf /data/backups",
  "outcome": "deny",
  "rule": "R002",
  "risk_profile": "{irrev:100, conseq:100, blast:95}",
  "authorizer": "policy",
  "prev_hash": "a7c3f9...b2e1d4",
  "signature": "Ed25519:k9m2x8...p4q7r1",
  "sig_algorithm": "Ed25519",
  "hash_algorithm": "SHA-256"
}
Hash-chained

SHA-256 chain links every entry. Tampering breaks the chain.

Per-entry signed

Ed25519 signature on each entry. Non-repudiable attribution.

Algorithm-labeled

PQC migration metadata live. NIST IR 8547 aligned.

Highest evidence tier

Maps to highest tier of NIST AI RMF evidence hierarchy.
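Verification walks the chain once: recompute each entry's hash and compare the stored link. Editing any field changes that entry's hash and severs every link after it. A stdlib sketch with illustrative field names; per-entry signature checking is omitted for brevity.

```python
import hashlib
import json

def seal(entry: dict, prev_hash: str) -> dict:
    """Attach the chain link, then hash the entry (the hash covers prev_hash)."""
    entry = dict(entry, prev_hash=prev_hash)
    body = json.dumps(entry, sort_keys=True).encode()
    return dict(entry, entry_hash=hashlib.sha256(body).hexdigest())

def verify_chain(chain: list) -> bool:
    """Recompute every hash and link; any tampered field breaks the walk."""
    prev = "genesis"
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev or entry["entry_hash"] != expected:
            return False
        prev = entry["entry_hash"]
    return True

chain = []
for action, outcome in [("ls -la", "allow"), ("rm -rf /data/backups", "deny")]:
    prev = chain[-1]["entry_hash"] if chain else "genesis"
    chain.append(seal({"action": action, "outcome": outcome}, prev))

print(verify_chain(chain))      # True
chain[1]["outcome"] = "allow"   # rewrite history...
print(verify_chain(chain))      # False: entry 1's hash no longer matches
```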

Built for regulated environments.

| Regulation | Requirement | ZLAR Coverage | Status |
|---|---|---|---|
| EU AI Act (Aug 2026) | Human oversight + audit trails for high-risk AI | Pre-execution gate + signed evidence trail | Mapped |
| OSFI E-23 (Canada) | Model risk management for AI systems | Deterministic governance + full auditability | Mapped |
| FINRA 2026 (US) | Supervisory controls for algorithmic trading | Real-time interception + human authority | Mapped |
| SR 11-7 (Fed Reserve) | Model validation and ongoing monitoring | Policy signing + witness observation layer | Mapped |
| NIST AI RMF | Risk management framework for AI | Risk scoring + evidence + human authority | Active |
| Singapore IMDA | Agentic AI governance framework | All 5 governance properties mapped | Mapped |

Active in standards: NIST NCCoE (comment submitted April 2026), OpenID AIIM (pending), DGSI Governance Verification (pending).

ZLAR vs. the market.

| Capability | ZLAR | Norm AI ($140M+) | WitnessAI ($85.5M) | SDK Built-in |
|---|---|---|---|---|
| Intelligence in enforcement? | No (deterministic) | Yes (ML-based) | Yes (intent ML) | Varies |
| Pre-execution enforcement? | Yes | Reactive monitoring | Inline proxy | Yes |
| Vendor-agnostic? | Yes (hooks + MCP proxy) | Partial | Cloud proxy only | No (framework-locked) |
| Cryptographic evidence? | Ed25519-signed, hash-chained | No | Identity verification only | No |
| PQC migration path? | Algorithm labels live today | No | No | No |
| Open source? | Yes (Apache 2.0) | No | No | Partial |

$492M AI governance market in 2026 (Gartner). $5–13.5B guardian agent segment by 2030 (est.). As of March 2026, ZLAR is the only open-source, vendor-agnostic, cryptographically evidenced enforcement layer we have identified.

Norm AI funding: Coatue, Blackstone, Bain (PR Newswire). WitnessAI: GV, Ballistic Ventures (press releases).

Working. Tested. Deployed.

  • v1.6.0 shipped (Apache 2.0)
  • 277 test assertions across 11 suites
  • 72 policy rules in the production gate
  • 2 enforcement surfaces (Claude Code + MCP)

Ed25519 audit signing live. Hash-chained evidence trail live. PQC metadata on every entry. macOS Seatbelt OS-level sandboxing. CI/CD with CodeQL + Dependabot. Cedar policy engine PoC complete (39 tests).

The governor governs itself. ZLAR's own development is governed by ZLAR.

Known limitations documented. Coverage map published. Gaps are named, not hidden.

Who this is for.

Regulators

Prove what AI agents did, when, and who authorized it. Deterministic evidence for compliance review. Open source — verify the enforcement, not just the claims.

Financial Institutions

Signed evidence trail for auditors. No vendor lock-in, no cloud dependency. Install in minutes, not months.

The Ecosystem

Open source (Apache 2.0) — verify it. Cedar policy engine path (PoC complete). Tested on Claude Code, Cursor, Windsurf + MCP.

The question is not whether AI agents need governance. The question is whether the governance itself can be trusted. ZLAR addresses that structurally.

ZLAR

Open source. Apache 2.0. Live today.