Founder memo · May 2026 · edited web version

An Open Memo on Action-Level Assurance

Addressed to Canada's AI policy community. The architectural argument applies wherever AI governance frameworks meet runtime.

By Vincent Nijjar

High-impact agentic AI systems should produce verifiable action-level evidence.

ZLAR is an action-level assurance layer for agentic AI.

ZLAR places a deterministic governance boundary around AI-agent actions. Routine actions pass through. Actions of consequence route to a human on a separate device, on infrastructure held by the human and outside the agent's runtime.

For actions routed through ZLAR, the system produces signed, tamper-evident evidence built for independent verification.

ZLAR is Canadian-built and operator-held: policy, decisions, and records belong to the deploying institution.

Three Lenses For AI Governance

Present AI governance frameworks answer one question: should this AI system be deployed, and was it deployed responsibly? Call this the system lens. Its instruments ask about the AI system as the object of governance.

A second question is downstream of the first. Once an AI system is deployed, it takes actions. Should this action happen, and is there a human accountable for it at the moment it does? Call this the action lens. Its instruments are different: deterministic policy at the action boundary, off-device human authorization, tamper-evident audit, decision receipts.

A third question sits beneath both. The action lens depends on a real boundary. The boundary sits between the agent's cognition and the agent's act. The agent may think, process, and deliberate freely. The question for governance is whether a specific consequence is permitted to cross from thought into the world.

Where does the boundary sit? Does it hold against the agent whose action it governs? Where does the enforcement layer sit relative to the agent's compromise surface? Call this the boundary lens.

When the boundary is real, the action lens has something verifiable to point at. ZLAR is built to be a real boundary.

Protecting And Empowering People

Protecting people from agentic AI requires a runtime control that intercepts the action before it becomes real, evaluates it against signed human-authored policy, and either allows it, denies it, or routes it to a human on a separate device held outside the agent's runtime.
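
What that control looks like can be sketched in a few lines of Python. The names and the rule set below are illustrative, not ZLAR's API; the shape is the point: intercept the action, evaluate it against deterministic human-authored rules, and route anything consequential to a human who sits outside the agent's runtime.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable


class Verdict(Enum):
    ALLOW = "allow"
    DENY = "deny"
    ESCALATE = "escalate"   # requires an off-device human decision


@dataclass
class Action:
    agent_id: str
    kind: str                # e.g. "file.read", "record.update", "payment.issue"
    target: str


# A deterministic, human-authored rule set: the highest-risk kinds are denied
# outright, consequential kinds escalate to a human, everything else passes.
DENY_KINDS = {"credential.export"}
ESCALATE_KINDS = {"record.update", "payment.issue"}


def evaluate(action: Action) -> Verdict:
    """Pure function of the rules and the action; no model output enters here."""
    if action.kind in DENY_KINDS:
        return Verdict.DENY
    if action.kind in ESCALATE_KINDS:
        return Verdict.ESCALATE
    return Verdict.ALLOW


def govern(action: Action, ask_human: Callable[[Action], bool], audit: list) -> bool:
    """Intercept one action before it becomes real."""
    verdict = evaluate(action)
    if verdict is Verdict.ESCALATE:
        # ask_human stands in for an approval channel that terminates on a
        # separate device, held outside the agent's runtime.
        verdict = Verdict.ALLOW if ask_human(action) else Verdict.DENY
    audit.append({"action": action.kind, "target": action.target, "verdict": verdict.value})
    return verdict is Verdict.ALLOW
```

The approval call is injected rather than implemented here because, in the architecture this memo describes, it terminates on infrastructure the agent cannot reach.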

Empowering people in an agentic AI world means giving the affected human a record they can read, understand, and challenge.

When an AI agent changes a record, blocks a workflow, accesses a file, triggers a decision, or causes operational harm, the affected person deserves a transparent record. They should be able to ask: what happened, which system acted, was a human involved, who approved it, what policy allowed it, and how to challenge it.
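
A decision receipt can be small. The fields below are a hypothetical shape, not ZLAR's schema; the test is whether the affected person can read one record and answer every one of those questions.

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class DecisionReceipt:
    action_summary: str       # what happened
    acting_system: str        # which system acted
    occurred_at: str          # when, as an ISO 8601 timestamp
    human_involved: bool      # was a human in the decision
    approver_id: str | None   # who approved it, if anyone
    policy_id: str            # which signed policy version applied
    policy_rule: str          # the specific rule that produced the decision
    contest_path: str         # where to challenge it
    signature: str            # detached signature over the fields above


receipt = DecisionReceipt(
    action_summary="Updated record 4417 in the benefits register",
    acting_system="case-agent v2.3",
    occurred_at="2026-05-04T14:02:11Z",
    human_involved=True,
    approver_id="duty-officer-12",
    policy_id="benefits-policy-v7",
    policy_rule="record.update requires human approval",
    contest_path="https://example.org/contest",   # hypothetical endpoint
    signature="<detached signature>",
)
print(json.dumps(asdict(receipt), indent=2))
```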

A coverage report names where the governance boundary held and where it ended. It is honest about scope: it shows what the architecture covered and, just as plainly, what it did not.
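
In concrete terms, a coverage report can be as plain as a map from action surfaces to whether the boundary governed them. The surface names below are hypothetical:

```python
coverage_report = {
    "run_id": "2026-05-04-run-031",
    "governed": {
        "file.read": "routed through the gate",
        "record.update": "routed through the gate; human approval required",
    },
    "not_governed": {
        "outbound.email": "no interception configured on this surface",
    },
}
```

A surface listed under "not_governed" is not a failure of the report; it is the report doing its job.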

Where The Boundary Sits

ZLAR places signed, deterministic, human-authored policy at the action boundary. The policy evaluates each consequential action against rules written by humans and signed by a human authority. High-risk actions route to a human on a separate device, over a separate channel held outside the agent's runtime.
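
One way to picture "signed, human-authored policy" is a policy document whose signature is checked before a single rule is evaluated. The sketch below uses Ed25519 from the cryptography package purely as an illustration; ZLAR's actual signing scheme and policy format are not specified here, and in practice the key belongs to the human signing authority rather than being generated inline.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The human authority signs the policy document once, at authoring time.
authority_key = Ed25519PrivateKey.generate()   # illustrative only
policy = {
    "version": "benefits-policy-v7",
    "deny": ["credential.export"],
    "escalate": ["record.update", "payment.issue"],
}
policy_bytes = json.dumps(policy, sort_keys=True).encode()
policy_signature = authority_key.sign(policy_bytes)


def load_policy(raw: bytes, signature: bytes, public_key) -> dict:
    """Verify the signature before enforcing anything the document says."""
    try:
        public_key.verify(signature, raw)
    except InvalidSignature:
        raise RuntimeError("policy signature does not verify; refusing to enforce it")
    return json.loads(raw)


trusted_policy = load_policy(policy_bytes, policy_signature, authority_key.public_key())
```

Refusing to enforce an unverified policy is what keeps the authoring human, rather than the agent or its runtime, in charge of what the gate enforces.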

This places human authority structurally outside the agent's reach.

Many governance approaches take a different route: they put AI in the enforcement path, such as a trust score, a behavioural classifier, a model-grounded check, or a content scanner. The agent's actions are evaluated by another AI system, often inside the same runtime, on the same network, under the same compromise surface.

The boundary lens distinguishes the two. ZLAR sits on the side where the boundary is real.

Where ZLAR Is Today

Working today:

  • Deterministic policy enforcement for governed actions.
  • Signed policy evaluation.
  • Allow, deny, or human-approval decisions at the action boundary.
  • Off-device human approval over a channel held outside the agent's runtime.
  • Tamper-evident audit chain (sketched below, after these lists).
  • Decision receipts, signed and locally verifiable, operator- and auditor-facing.
  • Trust Lane attention-check mechanism in the current Claude Code deployment.
  • Software-rooted signing authority for policy and constitution.

Designed and on the near-term roadmap:

  • Packaged verifier kit for outside-party validation.
  • External witnessing and attestation.
  • Worker-facing decision receipt and explicit contestability surface for affected humans.
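
A minimal sketch of the tamper-evident chain listed under "Working today": each record carries the hash of the record before it, so any retroactive edit breaks every later link. This is a generic hash-chain construction, not ZLAR's implementation.

```python
import hashlib
import json


def append_record(chain: list, decision: dict) -> dict:
    """Append one governance decision to a hash-chained audit log."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev_hash": prev_hash, "decision": decision}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return body


def chain_intact(chain: list) -> bool:
    """Re-derive every hash; any retroactive edit makes this return False."""
    prev_hash = "0" * 64
    for record in chain:
        body = {"prev_hash": record["prev_hash"], "decision": record["decision"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev_hash"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True


log: list = []
append_record(log, {"action": "record.update", "verdict": "escalate", "approver": "duty-officer-12"})
append_record(log, {"action": "file.read", "verdict": "allow"})
assert chain_intact(log)
```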

The Evidence Bundle

The evidence bundle is the record produced by a governed AI-agent run. It answers, for one run: what ZLAR governed, where the governance boundary held, what the agent attempted, what policy applied, what was allowed, denied, or escalated to a human, who authorized high-risk actions, whether the audit record still matches what was recorded, and whether signatures still verify.
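
One hypothetical shape for that bundle, in code: it holds raw material rather than conclusions, so an outside party can re-run the checks rather than take the operator's word. The field names are illustrative, not ZLAR's export format, and the three check functions stand in for the verifications sketched earlier (a signature check on the policy, a re-derivation of the audit chain, a signature check on each receipt).

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class EvidenceBundle:
    run_id: str
    policy: bytes             # the signed policy document, exactly as enforced
    policy_signature: bytes   # signature by the human signing authority
    audit_chain: list         # hash-chained decision records for the run
    receipts: list            # signed decision receipts for governed actions
    approvals: list           # who authorized each high-risk action, and when
    coverage: dict            # which action surfaces the boundary governed


def verify_bundle(bundle: EvidenceBundle,
                  policy_signature_ok: Callable[[bytes, bytes], bool],
                  chain_intact: Callable[[list], bool],
                  receipts_ok: Callable[[list], bool]) -> dict:
    """Re-run, after the fact, the checks an outside party would run."""
    return {
        "policy_signature_verifies": policy_signature_ok(bundle.policy, bundle.policy_signature),
        "audit_chain_intact": chain_intact(bundle.audit_chain),
        "receipts_verify": receipts_ok(bundle.receipts),
        "high_risk_approvals_recorded": len(bundle.approvals),
        "coverage": bundle.coverage,
    }
```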

AI governance is not only policy and principle; it is also concrete evidence. Ask for the record.

The Bounded Claim

ZLAR's claim is precise. For actions routed through ZLAR, the system applies deterministic policy and produces evidence of the governance decision.

ZLAR addresses the action layer. Model intent, bias evaluation, privacy law, and broader AI safety belong to their own frameworks and instruments. Action-level assurance complements them; the layers compose.

ZLAR's strength depends on deployment quality: the agent must be routed through ZLAR, bypass paths closed, the policy signed and appropriate, approval flows configured correctly, the audit chain preserved, and external witnesses and verifiers used.

The evidence bundle exists to make these conditions verifiable.

A Policy Proposal

Minimum action-level assurance for high-impact agentic deployments should include:

  • A defined execution boundary.
  • Deny-first policy for high-risk actions.
  • Human authorization for consequential actions, on infrastructure held outside the agent's runtime.
  • Tamper-evident audit records.
  • Decision receipts.
  • Coverage reports showing where governance held and where it ended.
  • A contestability path for affected humans.

The proposal complements model evaluation and system-level AI governance. Those frameworks set direction at the system level. Action-level assurance carries that direction into runtime.

To Canada's AI Policy Community

This memo is a request for engagement.

The question for federal policy is whether high-impact agentic systems should be expected to produce verifiable action-level evidence: a defined execution boundary, decision receipts, human authorization records, audit-chain verification, coverage reporting, and a contestability path.

Test the proposal against Canada's emerging responsible-AI strategy. Test it against ZLAR. Test it against any vendor or framework. The architectural fit is the test: execution boundary, off-device authority, signed evidence.

Run the agent. Export the proof. Show what was governed, what happened, who authorized it, and whether the evidence still verifies.

Disclosure

This memo is policy argument and founder voice. ZLAR's public claim remains bounded to actions that are routed through, or intercepted by, ZLAR gate surfaces.