Phase 2 · Vincent Nijjar · April 2026

My AI Governance System Has a Hilarious and Frightening Flaw

The gate controls possibility. The recorder records reality. But if the policy is wrong, damage is guaranteed.

Every door is locked by default. The agent doesn't try something and then get stopped; there is no door to knock on, no inspection step. The only way a door opens is if policy explicitly unlocks it. So the agent has no choice but to meet the gate. That's the first thing.

The second thing: everything that passes through, and everything that is blocked, is recorded by an instrument designed to record things as they actually happened.

So: the gate controls possibility. The recorder records reality.
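The gate-plus-recorder mechanics can be sketched in a few lines. This is a hypothetical illustration, not ZLAR's actual API: `Gate`, `Recorder`, `attempt`, and the action names are invented here, and the policy is reduced to a set of explicitly allowed actions.

```python
# Hypothetical sketch of the gate + recorder idea (not ZLAR's real API).
from datetime import datetime, timezone

class Recorder:
    """Append-only account: every attempt, allowed or denied, is written down."""
    def __init__(self):
        self.entries = []

    def record(self, action, allowed):
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "allowed": allowed,
        })

class Gate:
    """Default-deny: an action happens only if policy explicitly unlocks it."""
    def __init__(self, policy, recorder):
        self.policy = policy      # set of explicitly allowed action names
        self.recorder = recorder

    def attempt(self, action, effect):
        allowed = action in self.policy   # no inspection, no negotiation
        self.recorder.record(action, allowed)
        if allowed:
            return effect()               # the one unlocked door
        return None                       # every other door simply stays shut

recorder = Recorder()
gate = Gate(policy={"read_file"}, recorder=recorder)

gate.attempt("read_file", lambda: "contents")   # unlocked by policy
gate.attempt("delete_db", lambda: "boom")       # locked by default, never runs
```

Note that the gate decides possibility (the denied effect is never executed), while the recorder captures reality (both attempts appear in the log regardless of outcome).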


People on LinkedIn are helping me build, through public comments and DMs. They keep saying: the architecture is fine, but what if the policy is bad? And I keep wondering: why don't people find this build a holy-shit moment? Why isn't the architecture alone exciting?

But I think I finally understand my confusion. And confusion matters. Because if there's no clarity here, I can't see the boundary between the space inside the force field and the space outside it. But I see it now.

Inside the system, ZLAR is absolute. Outside the system, ZLAR does not exist. That's the boundary. I've been defending the inside. People keep pointing at the outside. They're right, and I've been missing it until now.


Now I see what I've actually built: a system that ensures only authorized outcomes occur. Not good outcomes. Authorized ones.

Before ZLAR, when something went wrong, we said: it slipped through. After ZLAR, we can only say: we explicitly allowed this. Every outcome traces back to a human decision. Even the policy is a human decision.

If the policy is wrong, damage is guaranteed. ZLAR will authorize the damage, record it fully, and do nothing to stop it. That is the hilarious and frightening flaw. I'm glad people pointed it out to me.
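The flaw can be made concrete with a tiny sketch (again hypothetical names, not ZLAR's real behavior): nothing in the mechanism distinguishes a good policy from a bad one, so an over-broad policy turns the gate into a rubber stamp and the recorder into a faithful witness of the damage.

```python
# Hypothetical illustration, not ZLAR's actual behavior: a wrong policy
# authorizes damage, and the recorder documents it exactly as it happened.
ALLOWED = {"read_file", "delete_db"}   # the mistake: delete_db should not be here

log = []   # the recorder: an append-only account of what actually occurred

def gate(action):
    allowed = action in ALLOWED        # default-deny, but against a wrong policy
    log.append((action, allowed))      # recorded either way
    return allowed

gate("delete_db")   # authorized, executed, and on the record
```

The system worked perfectly; the human decision encoded in `ALLOWED` is what failed.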


So here's where I am now.

I'm turning to policy design, because if people trust the policy, they'll trust the system.

Phase 2 begins now.

ZLAR is open source at github.com/ZLAR-AI/ZLAR. Reach me at vincent@zlar.ai.