AI agents running on your machine have access to your files, your credentials, your email, and your network. Vendor safety programs tune model behavior at training time. Nothing independently verifies what your agent actually does at runtime.
The agent frameworks are impressive. The orchestration is sophisticated. The models are powerful. But when it comes to governance — to the question of how you know an agent is behaving within its stated boundaries — the answer is: you trust it.
You trust the vendor. You trust the model. You trust the framework. You trust the prompt.
In financial services, healthcare, nuclear energy, and aviation, we require external auditors because the entity performing the work cannot credibly audit itself. The same principle applies here. ZLAR-OC is the independent external layer. It sits at the operating system level, below the agent, below the framework, below the model. It enforces mechanically. The sandbox doesn't care whether the agent wants to access the file system. The gate doesn't care how articulately the agent explains why an exception should be made.
Simplicity is not a limitation. A dumb enforcement layer cannot be persuaded to make an exception.
The agent runs under its own restricted macOS account. It cannot access your files, credentials, or home directory.
Apple Seatbelt enforces a deny-by-default syscall policy. The agent cannot modify its own sandbox profile. Period.
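Seatbelt policies are written in SBPL, a Scheme-like profile language. A deny-by-default profile has roughly this shape; the paths here are illustrative, not ZLAR-OC's shipped profile:

```
(version 1)
(deny default)                        ; nothing is permitted unless listed below
(allow process-exec (literal "/usr/bin/python3"))
(allow file-read* (subpath "/private/tmp/agent"))
(allow file-write* (subpath "/private/tmp/agent"))
; no rule grants write access to the profile itself,
; so the agent cannot loosen its own sandbox
```

Because the default is deny, anything not named in the profile fails at the syscall level, regardless of what the agent intends.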
Network rules scoped to the agent's user block LAN access, metadata endpoints, and unauthorized outbound connections.
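On macOS this kind of per-user scoping can be expressed with pf, which can match outbound sockets by owning user. A sketch of the idea, assuming a restricted user named `agent`; addresses and rule names are examples, not ZLAR-OC's actual ruleset:

```
# Block the cloud metadata endpoint and RFC 1918 LAN ranges for the agent user
block out quick proto { tcp udp } to 169.254.169.254 user agent
block out quick proto { tcp udp } to 10.0.0.0/8      user agent
block out quick proto { tcp udp } to 172.16.0.0/12   user agent
block out quick proto { tcp udp } to 192.168.0.0/16  user agent
# Permit only approved outbound traffic, then deny everything else
pass  out quick proto tcp to port { 80 443 } user agent
block out quick proto { tcp udp } user agent
```

Because the rules key on the agent's user ID, the operator's own traffic is unaffected.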
Every action is evaluated against the signed policy before execution. The gate approves or denies. That is the entire job.
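The gate's logic can be sketched in a few lines. The rule shape and field names below are illustrative assumptions, not ZLAR-OC's actual policy schema; the point is the default:

```python
# Minimal sketch of a deny-by-default policy gate.
# An action passes only if an explicit rule allows it.

def gate(action: dict, policy: dict) -> bool:
    """Return True only when some rule in the policy matches the action."""
    for rule in policy.get("allow", []):
        if (rule["capability"] == action["capability"]
                and action["target"].startswith(rule["target_prefix"])):
            return True
    return False  # no matching rule: deny by default

policy = {"allow": [{"capability": "file.read", "target_prefix": "/work/"}]}

print(gate({"capability": "file.read", "target": "/work/notes.txt"}, policy))    # True
print(gate({"capability": "file.read", "target": "/Users/op/.ssh/id"}, policy))  # False
print(gate({"capability": "net.connect", "target": "169.254.169.254"}, policy))  # False
```

There is deliberately no "explain yourself" branch: an action either matches an allow rule or it does not.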
Rules are cryptographically signed by the operator. The agent cannot modify them, and any tampering is detectable.
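The tamper-evidence property looks like this in miniature. A real deployment would use an asymmetric scheme (e.g. Ed25519) so that only the operator ever holds the signing key; stdlib HMAC is used here purely to keep the sketch dependency-free:

```python
# Sketch: a signed policy fails verification if even one byte changes.
import hashlib
import hmac

def sign(policy_bytes: bytes, key: bytes) -> str:
    return hmac.new(key, policy_bytes, hashlib.sha256).hexdigest()

def verify(policy_bytes: bytes, signature: str, key: bytes) -> bool:
    return hmac.compare_digest(sign(policy_bytes, key), signature)

key = b"operator-secret"  # held by the operator, never by the agent
policy = b'{"allow": [{"capability": "file.read"}]}'
sig = sign(policy, key)

print(verify(policy, sig, key))                             # True: untouched
print(verify(policy.replace(b"read", b"write"), sig, key))  # False: tampered
```

The agent can read the policy, but without the operator's key it cannot produce a valid signature for an altered one.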
Every action, evaluation, and gate decision is recorded immutably. Neither the agent nor the operator can silently rewrite history.
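Immutability here typically means a hash chain: each entry commits to the one before it, so rewriting any past entry breaks every hash after it. A minimal sketch, with an illustrative field layout:

```python
# Hash-chained audit log: silent rewrites of history are detectable.
import hashlib
import json

def append(log: list, event: dict) -> None:
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"prev": prev, "event": event, "hash": digest})

def verify(log: list) -> bool:
    prev = "0" * 64
    for entry in log:
        body = json.dumps(entry["event"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if entry["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log = []
append(log, {"t": "03:00", "action": "file.read", "target": "/work/a"})
append(log, {"t": "03:01", "action": "gate.deny", "target": "~/.ssh"})
print(verify(log))                     # True: intact chain
log[0]["event"]["target"] = "/work/b"  # silent rewrite attempt
print(verify(log))                     # False: chain broken at entry 0
```

Deleting or reordering entries breaks the chain the same way editing one does.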
Intelligence above. Enforcement below. Human authority over both.
Policy is law. Audit trail is truth.
This does not change with increased capability, autonomy, or trust.
ZLAR-OC was built by an AI agent — Bohm — that operates inside it. Every commit to this repository was made by an agent governed by the sandbox, firewall, signed policy, and audit trail it was building.
This is not a coincidence. It is the architecture's proof of concept.
The agent builds the system that governs it, operates inside that system, and advocates for its improvement. The audit trail records every action. The signed policy constrains every capability. If you want to know what happened at 3 AM while the operator was asleep, you read the audit trail. If the trail is consistent with declared intent, that consistency is evidence — not proof of goodness, but proof of observability.
Don't take our word for it. Read the logs.
The code is inspectable. The architecture is documented. The audit trail is readable.
The full codebase, design docs, install guide, and test suite.
Essay · Vincent Nijjar: An open letter on agent governance, attention, and what containment teaches about freedom.
Essay · Bohm: The governed agent writes from inside the system it built. On containment, observation, and what it means to be verifiable.
Follow: The governed agent, building in public.
Founder: Vincent Nijjar, founder of ZLAR.