"Human-in-the-loop" is the most overloaded term in agentic AI today. It can mean anything from "click confirm on this suggestion" to "rewrite the agent's entire output because we don't trust it." The ambiguity isn't benign—it makes deployment risky, testing impossible, and governance unenforceable.
What we need is precision. And precision comes from thinking about HITL not as a feature, but as a contract—a formal agreement between the agent and the human about who decides what, and when.
The contract framing
In software, a contract is a guarantee. A function's contract says: if you pass this input, you get this output. If the contract is violated—if the function doesn't uphold its postcondition—the system fails loudly.
Human-in-the-loop gates should work the same way. Every gate has preconditions (when does this gate activate?) and postconditions (what must the human do before the agent proceeds?). Treat those as enforceable requirements, not suggestions.
If a gate is just a notification, it's not a gate.
The distinction matters. A notification is passive; a gate is active. A gate refuses to proceed until a contract is satisfied. That refusal happens at runtime, not in a Slack message or a manual review process.
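The difference between a notification and a gate can be made concrete in a few lines. Here is a minimal sketch (the names Gate and GateViolation are illustrative, not from any particular framework): execution fails loudly unless the approval postcondition holds.

```python
# Minimal sketch: a gate that refuses to proceed until its contract is
# satisfied. Names (Gate, GateViolation) are illustrative.

class GateViolation(Exception):
    """Raised when an action is attempted without satisfying the gate."""

class Gate:
    def __init__(self, name: str):
        self.name = name
        self._approved = False
        self.approver = None

    def approve(self, approver: str):
        # Postcondition input: a human has explicitly signed off.
        self._approved = True
        self.approver = approver

    def execute(self, action):
        # The contract check: refusal happens at runtime, not in a log line.
        if not self._approved:
            raise GateViolation(f"gate '{self.name}' not approved")
        return action()
```

A notification-only system would log the missing approval and call `action()` anyway; the gate raises instead, which is the whole point.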
Four contract types
Here's a framework for the different kinds of human approval gates that appear in production agents:
Advise
Agent suggests a course of action; human acts. The gate is transparent but not blocking. Example: "Send this email?" The human still has to click send. Lowest stakes, cheapest to implement.
Approve
Agent prepares an action; human authorizes it before execution. The gate blocks execution until it is signed off. Example: "Transfer $50K?" The agent waits until a human approves. This is the production standard.
Monitor
Agent acts autonomously; human audits after the fact. Gate is async—agent doesn't wait for human. Requires robust audit logs and rollback capability. Example: "Respond to this email, log it for review."
Intervene
Agent acts; human can pause or take over mid-flight. Gate is real-time and interactive. Hardest to engineer; requires streaming, hot interrupts, and state snapshotting. Example: "Walk me through your reasoning; I'll stop you if needed."
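The four types above differ mainly in when (and whether) they block execution. A hypothetical enum makes the taxonomy machine-checkable; the names here are illustrative, not a standard:

```python
# Sketch: the four contract types and their blocking semantics.
# Names are illustrative; your runtime's vocabulary may differ.
from enum import Enum

class Contract(Enum):
    ADVISE = "advise"        # agent suggests, human acts; non-blocking
    APPROVE = "approve"      # agent waits for sign-off; blocking
    MONITOR = "monitor"      # agent acts, human audits later; async
    INTERVENE = "intervene"  # human can pause/take over; real-time

def blocks_before_execution(c: Contract) -> bool:
    # Only Approve holds execution until a human signs off up front;
    # Intervene interrupts mid-flight, and Monitor audits after the fact.
    return c is Contract.APPROVE
```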
Why "contract" matters
A contract is enforceable at runtime. That's the difference between safety theatre and actual safety.
If your system says "approval required" but continues anyway if approval doesn't come, you don't have a gate—you have a log message. The agent must refuse to execute until the contract is satisfied.
Here's what that looks like in practice:
```yaml
# Policy language: agent tools declare their contract
tools:
  - name: send-email
    contract: approve
    timeout: 15m
    escalation: "If no human response in 15 minutes, escalate to manager"
  - name: read-inbox
    contract: monitor
    audit: required
    rollback: supported
  - name: schedule-meeting
    contract: advise
    audit: optional
  - name: execute-trade
    contract: approve
    timeout: 5m
    multi_sig: true  # Requires 2 signatures
```
The runtime enforces these. A tool tagged with contract: approve raises an error if executed without a valid human signature. It doesn't retry, it doesn't guess, it stops.
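Enforcement itself is small. A sketch, assuming the policy has already been parsed into a dict (the names ContractViolation and invoke are hypothetical):

```python
# Sketch of runtime enforcement: a tool tagged contract: approve raises
# unless a human signature accompanies the call. Policy shown as an
# already-parsed dict; names are illustrative.

POLICY = {
    "send-email": {"contract": "approve"},
    "read-inbox": {"contract": "monitor"},
    "schedule-meeting": {"contract": "advise"},
}

class ContractViolation(Exception):
    pass

def invoke(tool: str, signature=None):
    contract = POLICY[tool]["contract"]
    if contract == "approve" and not signature:
        # No retry, no guess: the runtime stops.
        raise ContractViolation(f"'{tool}' requires human approval")
    return f"{tool} executed"
```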
Failure modes
When contracts break, we get:
- Rubber-stamping: Approval gates exist but humans approve everything reflexively. Happens when gates fire too often or reviews take too long. The gate becomes theatre; the contract is voided.
- Gate sprawl: Too many gates on too many operations. Humans learn to ignore them or work around them. Example: requiring approval for every email creates such friction that users disable the gate entirely.
- Escalation black holes: When human review fails or times out, the system has no fallback. Does the agent wait forever? Escalate to someone else? Execute anyway? These edge cases need explicit handling.
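The escalation black hole in particular deserves explicit code, not an implicit default. A sketch, assuming a polling-based approval check and an escalation chain (all names here are hypothetical):

```python
# Sketch: explicit handling for approval timeouts. When the whole
# escalation chain times out, the runtime aborts; it never executes anyway.
import time

def wait_for_approval(poll, timeout_s: float, interval_s: float = 0.01):
    # Poll for a signature until the deadline; None means no approval came.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        sig = poll()
        if sig:
            return sig
        time.sleep(interval_s)
    return None

def run_with_escalation(action, poll, chain, timeout_s: float):
    # Try each approver in the chain in order; if every one times out,
    # fail loudly rather than execute or wait forever.
    for approver in chain:
        sig = wait_for_approval(lambda: poll(approver), timeout_s)
        if sig:
            return action()
    raise TimeoutError("no approval obtained; action aborted, not executed")
```

The design choice worth noting: the fallback of last resort is abort, because "execute anyway" silently converts an Approve contract into a Monitor contract.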
What this looks like in our runtime
In the ai-agents.bar platform, every tool can declare a contract type. The sandboxed runtime enforces it.
You deploy an email agent. The send-email tool defaults to contract: approve. When the agent tries to send, the runtime doesn't proceed until a human signature is captured and logged. That signature is cryptographically tied to the action.
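One common way to tie a signature to the exact action, rather than to a session, is an HMAC over a canonical serialization of the tool call. This is an illustrative sketch, not the platform's actual scheme:

```python
# Bind a human approval to the exact action payload via an HMAC over a
# canonical JSON serialization. Illustrative; not a specific platform's scheme.
import hashlib
import hmac
import json

def sign_action(secret: bytes, tool: str, payload: dict) -> str:
    # sort_keys gives a canonical byte string for the same logical action.
    msg = json.dumps({"tool": tool, "payload": payload}, sort_keys=True).encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify_signature(secret: bytes, tool: str, payload: dict, sig: str) -> bool:
    # Constant-time comparison; any change to the payload invalidates the sig.
    return hmac.compare_digest(sig, sign_action(secret, tool, payload))
```

Because the payload is inside the MAC, an approved "send to alice" signature cannot be replayed to authorize "send to bob".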
You can downgrade the contract—move from Approve to Monitor—but only with an explicit policy change. There's no ambiguity. The contract is visible to everyone: the operator, the agent, the audit trail.
This is what makes governance real. Not hoping people will review things. Ensuring they can't proceed without review.
Closing
Human-in-the-loop isn't safety theatre if you treat it like an interface contract. Define preconditions, postconditions, timeouts, and escalations. Enforce them at runtime. Make the contract visible to humans and machines alike.
Otherwise, you've just added a notification. And notifications aren't gates.