Human-in-the-loop is not a feature — it's a contract
A framework for thinking about approval gates as enforceable contracts between operators and the agents they deploy.
By the Head of AI Research. PhD in reinforcement learning from MIT; previously at OpenAI working on tool-use evaluations.
Engineering, research and product notes from the team building ai-agents.bar.