Public AI needs a record at the moment it acts.
Before an AI system updates a record, sends a notice, opens a file, or starts a service workflow, the action should pass through a rule. If a person needs to decide, that person should be named. Afterward, there should be a receipt.
Preflight review does not answer what happened later.
A review can ask whether a system should be deployed. It cannot, by itself, show what a specific AI action did on a specific day.
ZLAR is built for that later moment. It checks the rule before the action happens, asks a real person when needed, and records the decision.
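As a minimal sketch of that sequence (rule first, named person when needed, receipt afterward), the example below uses illustrative rule IDs, field names, and an `ask_person` callback; none of these are ZLAR's actual API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

ALLOW, BLOCK, REVIEW = "allowed", "blocked", "sent_to_person"

@dataclass(frozen=True)
class Receipt:
    action: str           # what the AI tried to do
    rule_id: str          # which rule was applied
    rule_outcome: str     # allowed, blocked, or sent to a person
    approver: str | None  # the named person, if one decided
    decision: str         # the final decision that was carried out
    recorded_at: str      # when the decision was recorded (UTC)

def evaluate_rule(action: str) -> tuple[str, str]:
    """Illustrative rule: record updates go to a person; other actions pass."""
    if action.startswith("update_record"):
        return "rule-017", REVIEW
    return "rule-001", ALLOW

def gate(action: str, ask_person) -> Receipt:
    """Pass the action through the rule first; emit a receipt either way."""
    rule_id, outcome = evaluate_rule(action)
    approver, decision = None, outcome
    if outcome == REVIEW:
        approver, approved = ask_person(action)   # a real, named person decides
        decision = ALLOW if approved else BLOCK
    return Receipt(action, rule_id, outcome, approver, decision,
                   datetime.now(timezone.utc).isoformat())

# Example: a record update is routed to a named approver before it happens.
receipt = gate("update_record:case-4411", ask_person=lambda a: ("j.alvarez", True))
```

The shape matters more than the names: the rule runs before the action, the approver is a named person rather than a role, and the receipt is written whether the action was allowed or blocked.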
The public should not have to accept "the AI did it" as the record.
Plain questions need plain records.
What did the AI try to do?
The record should name the action, not only the system that produced it.
What rule was used?
The record should name the rule and show whether the action was allowed, blocked, or sent to a person.
What was outside the doorway?
The record should be honest about what ZLAR saw and what it did not see.
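Taken together, those three questions fix the shape of the record itself. A hypothetical receipt might look like the following; the field names are assumptions, not ZLAR's schema, and the `scope` block is where the record stays honest about the doorway.

```python
# Illustrative only: field names and values are assumptions, not ZLAR's schema.
receipt_record = {
    "action": "update_record:case-4411",    # the action, not only the system
    "system": "eligibility-assistant-v2",   # the system that produced it
    "rule_id": "rule-017",                  # the rule that was used
    "rule_outcome": "sent_to_person",       # allowed, blocked, or sent to a person
    "approver": "j.alvarez",                # named person; None if none was needed
    "decision": "allowed",
    "scope": {
        "observed": ["action request", "rule evaluation", "human approval"],
        "not_observed": ["actions that bypassed the doorway", "model internals"],
    },
}
```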
Ask for the receipt.
A serious public-sector AI conversation should ask one simple question: when the AI does something real, can the organization show what happened, which rule applied, who said yes (if anyone did), and whether the record still verifies?
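The last clause, whether the record still verifies, implies the receipts are tamper-evident. One common way to make that concrete is to hash-chain them, so that altering or deleting any receipt breaks every hash after it. This is a sketch under that assumption, not a claim about ZLAR's mechanism.

```python
import hashlib
import json

def chain_hash(prev_hash: str, record: dict) -> str:
    """Each receipt commits to the previous one, so later edits are detectable."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

def append(receipts: list[dict], record: dict) -> None:
    """Store the receipt together with its chained hash."""
    prev = receipts[-1]["hash"] if receipts else "genesis"
    receipts.append(dict(record, hash=chain_hash(prev, record)))

def verify(receipts: list[dict]) -> bool:
    """Recompute the whole chain; any altered or removed receipt fails."""
    prev = "genesis"
    for r in receipts:
        body = {k: v for k, v in r.items() if k != "hash"}
        if r["hash"] != chain_hash(prev, body):
            return False
        prev = r["hash"]
    return True
```

Running `verify` over the stored receipts answers the auditor directly: a single changed field anywhere in the history returns `False`.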
Next action
Request a briefing on rules before AI action, real human approval when needed, and receipts afterward.
Boundary
- ZLAR governs only the action surfaces that are routed through or intercepted by it; actions that bypass it leave no receipt.
- ZLAR complements pre-deployment assessment, privacy review, security review, and legal authority. It does not replace them.
- ZLAR does not claim external government approval or external verifier attestation.
- /contest is not implemented.