Agents change the physics of risk. As I’ve noted, an agent doesn’t just recommend code. It can run the migration, open the ticket, change the permission, send the email, or approve the refund. That shifts risk from what a model says to what a system does. If a large language model hallucinates, you get a bad paragraph. If an agent hallucinates, you get a bad SQL query running against production, or an overenthusiastic cloud provisioning event that costs tens of thousands of dollars. This isn’t theoretical. It’s already happening, and it’s exactly why the industry is suddenly obsessed with guardrails, boundaries, and human-in-the-loop controls.
I’ve been arguing for a while that the AI story developers should care about is not replacement but management. If AI is the intern, you are the manager. That is true for code generation, and it is even more true for autonomous systems that can take actions across your stack. The corollary is uncomfortable but unavoidable: If we are “hiring” synthetic employees, we need the equivalent of HR, identity and access management (IAM), and internal controls to keep them in check.
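To make the analogy concrete, here’s a minimal sketch of what that kind of control layer looks like in practice: every action an agent proposes passes through a permission check, and anything high-risk gets queued for human sign-off instead of executing automatically. All of the names here (ControlPlane, AgentAction, and so on) are hypothetical illustrations, not any vendor’s actual API.

```python
# A sketch of an "agent IAM" layer: per-agent permissions plus a
# human-in-the-loop queue for high-risk actions. Names are hypothetical.

from dataclasses import dataclass, field
from enum import Enum


class Risk(Enum):
    LOW = "low"    # read-only lookups, drafts
    HIGH = "high"  # migrations, refunds, permission changes


@dataclass
class AgentAction:
    agent_id: str
    action: str   # e.g. "run_migration", "approve_refund"
    target: str   # e.g. "prod-db", "order-4521"
    risk: Risk


@dataclass
class ControlPlane:
    # Per-agent allowlist: which actions this synthetic "employee" may even attempt.
    permissions: dict
    pending_approval: list = field(default_factory=list)

    def submit(self, act: AgentAction) -> str:
        allowed = self.permissions.get(act.agent_id, set())
        if act.action not in allowed:
            return f"DENIED: {act.agent_id} has no grant for {act.action}"
        if act.risk is Risk.HIGH:
            # Human-in-the-loop: park the action rather than execute it.
            self.pending_approval.append(act)
            return f"PENDING: {act.action} on {act.target} awaits human approval"
        return f"EXECUTED: {act.action} on {act.target}"


if __name__ == "__main__":
    cp = ControlPlane(permissions={"billing-agent": {"lookup_order", "approve_refund"}})
    print(cp.submit(AgentAction("billing-agent", "lookup_order", "order-4521", Risk.LOW)))
    print(cp.submit(AgentAction("billing-agent", "approve_refund", "order-4521", Risk.HIGH)))
    print(cp.submit(AgentAction("billing-agent", "run_migration", "prod-db", Risk.HIGH)))
```

The point of the sketch isn’t the code; it’s that the boundary lives outside the model, in a layer you can audit, just as it does for human employees.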
All hail the control plane
This shift explains this week’s biggest news. When OpenAI launched Frontier, the most interesting part wasn’t better agents. It was the framing. Frontier is explicitly about moving beyond one-off pilots to something enterprises can deploy, manage, and govern, with permissions and boundaries baked in.