Ch 6: Approvals, Policies, and Sandboxing
The genius can help with bigger jobs now. That is useful. It is also risky.
Imagine the community center's old archive room. Tomorrow there is an open house, and the room needs to be cleared out.
Some chores are harmless. You can peek at labels. You can count boxes. You can move a chair out of the way.
But one careless cleanup order could destroy records that still matter.
So there is a permit desk at the door.
Simple chores pass through. Questionable chores pause for review. Dangerous chores are stopped on the spot.
That is what approvals do for an AI helper. They do not make the genius weaker. They make risky actions reviewable before anything irreversible happens.
Why The Permit Desk Exists
Not every errand deserves the same amount of trust.
- Checking which shelf is dusty is low risk.
- Moving one box that is already marked for disposal is manageable.
- Ordering an entire wall of boxes to be thrown out is dangerous.
- Handling private records carelessly can cause damage that is hard to notice until later.
The genius does not feel consequences the way you do. It can follow instructions. It cannot feel the sinking panic of realizing something important is gone forever.
That is why risky errands need a checkpoint.
Some requests should go straight through. Some should pause for a human yes-or-no. Some should never be allowed at all.
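In code, the permit desk can be a small gate in front of whatever executes the agent's actions. This is a minimal sketch under stated assumptions: the three decision names and the classification rules are illustrative, not any particular framework's API.

```python
# A minimal permit-desk sketch. The rules and decision names here are
# illustrative assumptions, not a specific framework's API.

ALLOW, ASK, DENY = "allow", "ask", "deny"

def classify(action: str) -> str:
    """Sort a requested shell action into one of three lanes."""
    read_only = ("ls", "cat", "head", "wc")    # peek at labels, count boxes
    destructive = ("rm -rf", "dd", "mkfs")     # throw out a whole wall of boxes
    words = action.split()
    first_word = words[0] if words else ""
    if any(action.startswith(d) for d in destructive):
        return DENY
    if first_word in read_only:
        return ALLOW
    return ASK  # anything unrecognized pauses for a human

def permit_desk(action: str, approved_by_human: bool = False) -> bool:
    """Return True only when the action may run."""
    decision = classify(action)
    if decision == ALLOW:
        return True
    if decision == ASK:
        return approved_by_human   # pause until a person says yes
    return False                   # DENY: stopped on the spot

print(permit_desk("ls archive/"))                # True
print(permit_desk("rm -rf archive/"))            # False
print(permit_desk("mv box-07 disposal/"))        # False (awaiting approval)
print(permit_desk("mv box-07 disposal/", True))  # True
```

Note that the gate never executes anything itself; it only answers yes or no, so the same check can sit in front of any tool the agent calls.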
Consider one broad request that mixes all three kinds of chores: "Please clear the old archive room before tomorrow's open house."
Put the permit-desk steps in the correct order
Drag to reorder, or use Tab + Enter + Arrow keys.
- The genius asks to do something risky
- The inspector reviews how broad the action is
- The unsafe version is stopped
- The narrower version pauses for your permit
- The approved cleanup happens safely
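The steps above can also be walked through in code. A hedged sketch, assuming the inspector judges "how broad" by counting how many records an action would touch; the archive contents, helper names, and threshold are made up for illustration:

```python
import fnmatch

# Hypothetical archive contents; a real agent would inspect the filesystem.
ARCHIVE = [f"archive/box-{i:02d}.txt" for i in range(1, 41)]

def breadth(pattern: str) -> int:
    """How many records would this cleanup order touch?"""
    return len(fnmatch.filter(ARCHIVE, pattern))

def review(pattern: str, max_breadth: int = 5) -> str:
    """The inspector reviews how broad the action is."""
    if breadth(pattern) > max_breadth:
        return "stopped: too broad, narrow the request"  # unsafe version stopped
    return "pending: awaiting your permit"               # narrower version pauses

print(review("archive/*"))           # stopped: 40 boxes is too broad
print(review("archive/box-07.txt"))  # pending: one box, pauses for your permit
```

The broad version never reaches a human at all; only the narrowed, reviewable version earns a spot in the approval queue.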
Key Insight: Safe By Default
The safest system does not try to memorize every bad idea in the world.
Instead, it uses three simple lanes:
- clearly safe chores can go through
- uncertain chores pause for review
- obviously dangerous chores stop immediately
That way, a surprising request does not slip through just because nobody predicted it ahead of time.
The default answer to uncertainty should be: stop or ask first, not go ahead and hope.
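The difference between memorizing bad ideas and defaulting to caution is easy to show. A sketch with made-up command lists: the blocklist gate approves anything it has never seen, while the allowlist gate sends anything unrecognized to the "ask" lane.

```python
# A blocklist must predict every bad idea in advance.
BLOCKLIST = {"rm -rf /", "drop table users"}

def blocklist_gate(action: str) -> bool:
    # A surprising request slips through just because nobody predicted it.
    return action not in BLOCKLIST

# Safe by default: unlisted actions pause for review instead of running.
ALLOWLIST = {"ls", "cat", "pwd"}

def safe_default_gate(action: str) -> str:
    words = action.split()
    command = words[0] if words else ""
    return "allow" if command in ALLOWLIST else "ask"

print(blocklist_gate("shred -u ledger.db"))    # True -- slipped through!
print(safe_default_gate("shred -u ledger.db")) # ask -- pauses instead
```

The blocklist fails open on anything novel; the allowlist fails closed, which is exactly the "stop or ask first" default.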
What You Learned
You now understand why a capable helper needs a permit desk.
Risky actions should be reviewed before they happen. Too-broad actions should be stopped. Specific, reviewable actions can still move forward after a human decision.
In Chapter 7, the genius gets another layer of protection: a sealed room where experiments can happen without touching the main workspace.
What's Next
You now have two layers of safety: structured patches (Chapter 5) for safe file editing, and policy gates (this chapter) for controlling what the agent can do. But there is still a fundamental problem: the agent works in your main working directory.
If the agent is working on a feature and you are also editing files, changes collide. If two agents run in parallel, they step on each other. You need isolation — each task should get its own copy of the codebase.
In Chapter 7: Worktree Isolation, you will use git worktrees to give each agent task its own isolated workspace, preventing cross-contamination and enabling safe parallel execution.