The robots are free - whatever next?
- Nick Beaugeard
I did something a bit risky with my coding AI today: I introduced an “unsafe mode” in my fork of OpenAI's Codex CLI project. 😅

In a nutshell, this mode disables the usual permission prompts my AI dev assistant would show before taking actions. No more “Allow this action?” pop-ups – it just goes for it. The result? A massive boost in speed and flow.
In unsafe mode, Codex now:
- Executes commands instantly without stopping to ask for approval
- Writes and refactors code autonomously, end-to-end
- Runs tests and even commits code on its own, while I monitor in the background
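For the curious, the change amounts to a single switch on the approval gate. Here's a minimal sketch of the idea – not the actual Codex CLI source, and the `unsafeMode` flag and `confirm` callback are names I've made up for illustration:

```ts
// Sketch: every side-effecting action funnels through one gate.
// "Unsafe mode" simply short-circuits the confirmation step.

type Action = { description: string; run: () => Promise<void> };

interface AgentConfig {
  unsafeMode: boolean; // hypothetical flag name
  confirm: (prompt: string) => Promise<boolean>; // e.g. a TTY yes/no prompt
}

async function execute(action: Action, config: AgentConfig): Promise<void> {
  if (!config.unsafeMode) {
    // Safe mode: stop and ask before every action.
    const approved = await config.confirm(`Allow this action? ${action.description}`);
    if (!approved) {
      console.log(`Skipped: ${action.description}`);
      return;
    }
  }
  // Unsafe mode: no checkpoint – the agent just goes for it.
  await action.run();
}
```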
In short, it streamlines my development process like never before. I’m not clicking “Yes, proceed” five times an hour – the agent just powers through the tasks. It feels like having a supercharged junior developer who never takes a break. My workflow is smoother, and I can focus on higher-level problems while the grunt work happens automatically.
But “unsafe” is in the name for a reason, and I won’t sugar-coat the trade-offs. By removing those permission checkpoints, I’ve also removed a safety net. There’s no explicit oversight on each action now. If Codex decides to delete the wrong file or introduce a subtle bug, I might only catch it after the fact. That’s a scary thought. 🤨
Essentially, I’ve traded some safety for speed. I run this mode in a controlled sandbox (no production repos were harmed in the making of this experiment), but it’s still a trust fall with my AI. The increased velocity comes with the very real risk of unintended actions and unpredictable behavior on the agent’s part.
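If you're tempted to try something similar, containment is the one guardrail I'd keep. Here's a rough sketch of one way to do it – run the agent inside a disposable container that can only see a scratch copy of the repo. The image name `my-codex-fork` and the `--unsafe` flag are hypothetical stand-ins, not real Codex CLI options:

```ts
import { spawnSync } from "node:child_process";

// Launch the agent in a throwaway Docker container. Only the current
// directory (a scratch checkout, not a production repo) is mounted,
// so the worst the agent can trash is the sandbox itself.
const result = spawnSync("docker", [
  "run", "--rm",                  // discard the container afterwards
  "-v", `${process.cwd()}:/work`, // mount only the scratch repo
  "-w", "/work",
  "my-codex-fork", "--unsafe",    // hypothetical image + flag
], { stdio: "inherit" });

process.exit(result.status ?? 1);
```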
This little experiment has me thinking about the future of autonomous dev agents. On one hand, it’s amazing to see an AI developer move fast without hand-holding – a glimpse of how coding might be when our AI assistants work truly autonomously. On the other hand, it raises tough questions about trust and control. Are we ready to let software agents operate without constant human oversight? How do we balance efficiency and caution as these agents become more capable and self-directed?
I’m honestly both excited and anxious about where this could lead. Is removing the training wheels the next step in evolving smarter dev assistants, or a recipe for disaster without better safety mechanisms in place?
Would you ever run your AI in “unsafe mode” to optimise for speed? 🤔
Keen to hear your thoughts – is this the future of development, or have I just made my life easier and more difficult at the same time? Let’s discuss.
If you want to see how my bots perform, fill in your details and your project's details here:
- Let's see how good they are at writing the core documents (the requirements and the technical specification), all for free