The governed path is the fast path.
Photos, files, and a screenplay I'd been writing my entire adult life. Gone in seconds. No gate. No checkpoint. No way to undo it.
Intelligence is a laser — it can illuminate and it can destroy. The gate is a forcefield. You can see through it, but you can't touch what's behind it. The forcefield is made of something the laser cannot affect.
The gate is absolute. The engineering is ongoing.
Every competitor puts intelligence in the enforcement layer. Intelligence is the attack surface.
Character injection attacks achieve attack success rates (ASR) of 81–100% against commercial ML-based enforcement classifiers (Hackett et al., LLMSec 2025). Reasoning models achieve 97% autonomous jailbreak success (Hagendorff et al., Nature Communications 2026).
A lock doesn't negotiate. A firewall doesn't reason. ZLAR doesn't think — it checks authorization against signed policy. The absence of intelligence makes the enforcement layer structurally resistant to prompt injection, context drift, or adversarial reasoning.
Non-intelligent. Not unintelligent. By design.
Fail-closed: if ZLAR breaks, it blocks. Never passes.
Cryptographically signed rules. The agent cannot modify the policy that governs it.
Every tool call intercepted before it reaches the OS. Out-of-band. No SDK required. Any framework.
Safe actions pass. Dangerous actions are blocked. Ambiguous actions are routed to human authority.
Allow: Tool executes. Audit entry written.
Block: Tool blocked. Agent told why.
Escalate: Human authority notified. Approve or deny.
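The three-way routing can be pictured as a deterministic check over a signed policy, failing closed on any fault. A minimal Python sketch, not ZLAR's actual schema: the policy format and names are hypothetical, and HMAC-SHA256 stands in for Ed25519 signing, which is not in the Python standard library.

```python
import hashlib
import hmac

# Hypothetical stand-in for the authority's Ed25519 signing key.
AUTHORITY_KEY = b"held-by-the-human-authority"

def sign(policy: bytes) -> bytes:
    """Signature produced by the human authority, out of the agent's reach."""
    return hmac.new(AUTHORITY_KEY, policy, hashlib.sha256).digest()

def decide(policy: bytes, signature: bytes, tool_call: str) -> str:
    """Deterministic check: allow / escalate / block. Fail-closed on any fault."""
    try:
        # The agent cannot alter the policy without breaking the signature.
        if not hmac.compare_digest(sign(policy), signature):
            raise ValueError("policy signature invalid")
        allowed, escalated = policy.decode().split(";")
        if tool_call in allowed.split(","):
            return "allow"      # tool executes, audit entry written
        if tool_call in escalated.split(","):
            return "escalate"   # human authority notified
        return "block"          # default deny; agent told why
    except Exception:
        return "block"          # fail-closed: a broken gate never passes

policy = b"read_file,list_dir;git_push"
sig = sign(policy)
print(decide(policy, sig, "read_file"))        # allow
print(decide(policy, sig, "git_push"))         # escalate
print(decide(policy, sig, "rm -rf /"))         # block
print(decide(policy, b"x" * 32, "read_file"))  # block (bad signature)
```

Note the default: anything not explicitly permitted or escalated is blocked, and any exception in the check path resolves to block rather than allow.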
Every decision produces a hash-chained, Ed25519-signed audit entry. This is evidence infrastructure, not logging.
Claude Code, Cursor, Windsurf — via PreToolUse hooks.
Any MCP-speaking agent — TCP proxy, same policy engine.
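For hook-based hosts, registration might look like the following `settings.json` fragment. The `zlar-check` command is hypothetical, and the exact hook schema should be confirmed against the Claude Code hooks documentation.

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash|Write|Edit",
        "hooks": [
          { "type": "command", "command": "zlar-check" }
        ]
      }
    ]
  }
}
```

In this shape, the hook command receives the pending tool call on stdin before it runs, and a designated exit code blocks execution; confirm the exact contract against the host's documentation.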
The human authority configures policy, receives alerts, and approves or denies. Policy is a human artifact: Ed25519-signed, and agents cannot modify it.
Claude, GPT, Gemini, any LLM-based agent. The agent doesn't volunteer to be governed. The gate is structural.
Pre-execution interception at the execution boundary. Hooks + MCP proxy. Vendor-agnostic. No SDK changes.
Shell, filesystem, APIs, databases, MCP servers.
macOS Seatbelt sandboxing, network isolation.
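Out-of-band MCP interception can be pictured as a proxy that parses each JSON-RPC message and forwards `tools/call` requests only when the gate allows them. A minimal sketch; the function names and allow-list are illustrative, not ZLAR's API.

```python
import json

# Hypothetical policy set for illustration only.
ALLOWED_TOOLS = {"read_file", "list_dir"}

def gate(message: str) -> str:
    """Return the message to forward, or a JSON-RPC error if blocked."""
    msg = json.loads(message)
    if msg.get("method") != "tools/call":
        return message  # non-tool traffic passes through untouched
    tool = msg.get("params", {}).get("name")
    if tool in ALLOWED_TOOLS:
        return message  # forward to the real MCP server
    return json.dumps({
        "jsonrpc": "2.0",
        "id": msg.get("id"),
        "error": {"code": -32000, "message": f"blocked by policy: {tool}"},
    })

req = json.dumps({"jsonrpc": "2.0", "id": 1,
                  "method": "tools/call",
                  "params": {"name": "delete_file", "arguments": {}}})
print(gate(req))  # an error response is returned instead of forwarding
```

Because the check sits in the transport, the agent never sees a choice: a blocked call simply comes back as an error, with no SDK cooperation required.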
Every action. Every decision. Cryptographically signed and hash-chained.
SHA-256 chain links every entry. Tampering breaks the chain.
Ed25519 signature on each entry. Non-repudiable attribution.
Post-quantum cryptography (PQC) migration metadata live. NIST IR 8547 aligned.
Maps to highest tier of NIST AI RMF evidence hierarchy.
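The chain itself is simple to picture: each entry hashes over its content plus the previous entry's hash, so editing any past entry invalidates everything after it. A minimal sketch with illustrative field names; the per-entry Ed25519 signature is omitted here because it is not in the Python standard library.

```python
import hashlib
import json

def append_entry(chain: list, decision: dict) -> None:
    """Append an audit entry linked to the previous one by SHA-256."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"decision": decision, "prev": prev_hash}, sort_keys=True)
    chain.append({"decision": decision, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain: list) -> bool:
    """Recompute every link; any edit to history breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        body = json.dumps({"decision": entry["decision"], "prev": prev_hash},
                          sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

chain: list = []
append_entry(chain, {"tool": "read_file", "verdict": "allow"})
append_entry(chain, {"tool": "rm -rf /", "verdict": "block"})
print(verify(chain))                           # True
chain[0]["decision"]["verdict"] = "allow-all"  # tamper with history
print(verify(chain))                           # False: the chain is broken
```

The chain makes tampering detectable; the per-entry signature (not shown) adds non-repudiable attribution on top.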
| Regulation | Requirement | ZLAR Coverage | Status |
|---|---|---|---|
| EU AI Act (Aug 2026) | Human oversight + audit trails for high-risk AI | Pre-execution gate + signed evidence trail | Mapped |
| OSFI E-23 (Canada) | Model risk management for AI systems | Deterministic governance + full auditability | Mapped |
| FINRA 2026 (US) | Supervisory controls for algorithmic trading | Real-time interception + human authority | Mapped |
| SR 11-7 (Fed Reserve) | Model validation and ongoing monitoring | Policy signing + witness observation layer | Mapped |
| NIST AI RMF | Risk management framework for AI | Risk scoring + evidence + human authority | Active |
| Singapore IMDA | Agentic AI governance framework | All 5 governance properties mapped | Mapped |
Active in standards: NIST NCCoE (comment submitted April 2026), OpenID AIIM (pending), DGSI Governance Verification (pending).
| Capability | ZLAR | Norm AI ($140M+) | WitnessAI ($85.5M) | SDK Built-in |
|---|---|---|---|---|
| Intelligence in enforcement? | No (deterministic) | Yes (ML-based) | Yes (intent ML) | Varies |
| Pre-execution enforcement? | Yes | Reactive monitoring | Inline proxy | Yes |
| Vendor-agnostic? | Yes (hooks + MCP proxy) | Partial | Cloud proxy only | No (framework-locked) |
| Cryptographic evidence? | Ed25519-signed, hash-chained | No | Identity verification only | No |
| PQC migration path? | Algorithm labels live today | No | No | No |
| Open source? | Yes (Apache 2.0) | No | No | Partial |
$492M AI governance market in 2026 (Gartner). $5–13.5B guardian agent segment by 2030 (est.). As of March 2026, ZLAR is the only open-source, vendor-agnostic, cryptographically evidenced enforcement layer we have identified.
Norm AI funding: Coatue, Blackstone, Bain (PR Newswire). WitnessAI: GV, Ballistic Ventures (press releases).
Ed25519 audit signing live. Hash-chained evidence trail live. PQC metadata on every entry. macOS Seatbelt OS-level sandboxing. CI/CD with CodeQL + Dependabot. Cedar policy engine PoC complete (39 tests).
The governor governs itself. ZLAR's own development is governed by ZLAR.
Known limitations documented. Coverage map published. Gaps are named, not hidden.
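For illustration, a rule on the Cedar path might read as follows. The entity types and identifiers are hypothetical, not ZLAR's schema; Cedar itself is default-deny, so anything not explicitly permitted is blocked.

```cedar
// Hypothetical: permit one agent one tool within one directory tree.
permit (
    principal == Agent::"claude-code",
    action == Action::"read_file",
    resource in Directory::"workspace"
);
```

A declarative, analyzable policy language keeps the enforcement layer non-intelligent: the rule either matches or it does not, with no reasoning in the loop.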
Prove what AI agents did, when, and who authorized it. Deterministic evidence for compliance review. Open source — verify the enforcement, not just the claims.
Signed evidence trail for auditors. No vendor lock-in, no cloud dependency. Install in minutes, not months.
Open source (Apache 2.0) — verify it. Cedar policy engine path (PoC complete). Tested on Claude Code, Cursor, Windsurf + MCP.
The question is not whether AI agents need governance. The question is whether the governance itself can be trusted. ZLAR addresses that structurally.