AI agents are starting to do real things.
They can act in the world, not just answer questions.

Most people try to keep that safe by judging behavior after the fact: does this look safe, does this seem risky, do we trust this model?
That is too late, and not strong enough.
Once the act has happened, the harm may already be done.
And if you ask one intelligence to judge another intelligence, both can be fooled.
So the real question is not: how do we judge the act after it happens?

The real question is: how do we stop it before it crosses the line? How do we let it work, while keeping it inside what a human actually allowed?
That is the problem.
The answer is ZLAR.
1. Give the agent a permission card.
Every agent gets a simple permission card. That card says: who it belongs to, what it may do, what it may never do, what happens when it reaches something unclear, and when its permission runs out.
That card is the manifest.
Think of it as an authority card — or the circle of what I allow.
Not a brain. Not a personality. Not a script for how to think. Just the boundary.
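A permission card like this can be made concrete as a small data structure. The sketch below is a minimal illustration, not the actual ZLAR manifest format; every field name (owner, allowed, forbidden, on_unclear, expires) is an assumption chosen to mirror the five things the card says.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Manifest:
    owner: str                  # who the agent belongs to
    allowed: frozenset[str]     # what it may do
    forbidden: frozenset[str]   # what it may never do
    on_unclear: str             # what happens at something unclear
    expires: datetime           # when its permission runs out

# A hypothetical card for one agent. The boundary, nothing more:
# no brain, no personality, no script for how to think.
manifest = Manifest(
    owner="alice",
    allowed=frozenset({"read_calendar", "draft_email"}),
    forbidden=frozenset({"send_money"}),
    on_unclear="ask_human",
    expires=datetime(2026, 12, 31, tzinfo=timezone.utc),
)
```

Note that the card is frozen (immutable): the agent carries it, but cannot rewrite it.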
2. Put a gate in front of every real act.
The agent can think all it wants.
It cannot act without crossing the gate.
The gate asks: is this allowed? Is this forbidden? Is this outside the boundary? Does a human need to be asked?
If the act is not clearly allowed, it does not happen.
That is the whole point. Do not wait for the act, then study it. Stop it before it becomes real.
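The gate's four questions can be asked in order as a single check. This is a sketch under assumed manifest field names (allowed, forbidden, expires), not the real ZLAR gate; the point is only that the decision is made before the act, and the default at the edge is to ask a human.

```python
from datetime import datetime, timezone

# Hypothetical manifest shape, for illustration only.
manifest = {
    "allowed": {"read_calendar", "draft_email"},
    "forbidden": {"send_money"},
    "expires": datetime(2027, 1, 1, tzinfo=timezone.utc),
}

def gate(manifest, action, now):
    """Decide before the act: 'allow', 'deny', or 'ask'."""
    if now >= manifest["expires"]:
        return "deny"   # permission has run out
    if action in manifest["forbidden"]:
        return "deny"   # explicitly never allowed
    if action in manifest["allowed"]:
        return "allow"  # inside the boundary
    return "ask"        # outside or unclear: a human must decide

now = datetime(2026, 6, 1, tzinfo=timezone.utc)
print(gate(manifest, "draft_email", now))  # allow
print(gate(manifest, "send_money", now))   # deny
print(gate(manifest, "delete_repo", now))  # ask
```

The ordering matters: expiry and forbidden acts are checked before allowed ones, so a forbidden act can never slip through by also appearing in the allowed set.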
3. Keep a record that cannot be quietly changed.
Every important attempt is written down in a tamper-evident record. So later you can prove: what the agent tried to do, what was allowed, what was blocked, who approved it, and whether the rules were followed.
That matters for trust. It matters for responsibility. It matters for law.
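One common way to make a record tamper-evident is a hash chain: each entry commits to the hash of the entry before it, so quietly editing history breaks every later link. The sketch below is an assumed minimal implementation, not the ZLAR log format.

```python
import hashlib
import json

def append(log, entry):
    """Add an entry that commits to the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"entry": entry, "prev": prev, "hash": digest})

def verify(log):
    """Recompute every link; any quiet change breaks the chain."""
    prev = "0" * 64
    for rec in log:
        body = json.dumps(rec["entry"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

log = []
append(log, {"action": "draft_email", "decision": "allow"})
append(log, {"action": "send_money", "decision": "deny"})
print(verify(log))                            # True
log[0]["entry"]["decision"] = "allow"         # quiet tampering...
log[0]["entry"]["action"] = "send_money"
print(verify(log))                            # False: the chain no longer checks out
```

In practice entries would also carry signatures and timestamps, but even this bare chain is enough to prove later what was attempted, what was blocked, and whether the record was touched.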
4. Keep human judgment where it belongs.
Inside its boundary, the machine can move fast.
At the edge, it must ask a human.
So the human is not trapped in constant busywork. The human is called only when judgment is truly needed.
That is the model: the machine handles the routine. The human handles the point of consequence.
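That division of labor can be sketched as a small dispatch rule: allow and deny never interrupt anyone, and only an "ask" outcome reaches a human. The function names here are hypothetical stand-ins, assuming a gate that returns one of "allow", "deny", or "ask".

```python
def handle(decision, action, ask_human):
    """Route a gate decision: the human is called only at the edge."""
    if decision == "allow":
        return f"executed:{action}"   # routine: the machine moves fast
    if decision == "deny":
        return f"blocked:{action}"    # forbidden: no one is interrupted
    # decision == "ask": the one point where human judgment is needed
    approved = ask_human(action)
    return f"executed:{action}" if approved else f"blocked:{action}"

# ask_human is a stand-in for a real approval channel.
print(handle("allow", "draft_email", lambda a: False))  # executed:draft_email
print(handle("ask", "new_tool", lambda a: True))        # executed:new_tool
```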
My view is simple:
Do not ask if the machine seems good. Ask if this act was allowed.
That is the difference.
Most people focus on behavior: does this look safe? Does this seem risky? Do we trust this model?
I focus on authority: was this allowed? By whom? Within what boundary? What happens if it goes past that boundary?
That is not the language of vibes. That is the language of law, finance, access, and rule.
Why choose authority over behavior? Because authority is simpler where it matters.
I am not trying to make the judge smarter. I am making the boundary harder.
That matters because a boundary can be checked and enforced, while a judge, however smart, can still be fooled by another intelligence.
And the shape is now clear:
The manifest sets the boundary. The gate enforces it.
That keeps the manifest simple. That keeps the gate real. That stops the whole system from turning into one giant, muddy document.
The machine gets a signed permission slip.
Before it can do something real, it has to cross a gate.
The gate checks whether that act is allowed.
If it wants to go beyond what was allowed, it has to ask a human first.
That is the whole idea.
Vincent Nijjar — ZLAR
April 2026