The outcomes of installs and upgrades can be different each time, even with the exact same model, and the variability gets a lot worse if you upgrade or switch models. If you’re supporting infrastructure for five, 10, or 20 years, you will be upgrading models. It’s hard even to imagine what the world of generative AI will look like in 10 years, but I’m sure Gemini 3 and Claude Opus 4.5 will not be around then.
The dangers of AI agents increase with complexity
Enterprise “applications” are no longer single servers. Today they are constellations of systems—web front ends, application tiers, databases, caches, message brokers, and more—often deployed in multiple copies across multiple deployment models. Even with only a handful of service types and three basic footprints (packages on a traditional server, image‑based hosts, and containers), the combinations expand into dozens of permutations before anyone has written a line of business logic. That complexity makes it even more tempting to ask an agent to “just handle it”—and even more dangerous when it does.
In cloud‑native shops, Kubernetes only amplifies this pattern. A “simple” application might span multiple namespaces, deployments, stateful sets, ingress controllers, operators, and external managed services, all stitched together through YAML and Custom Resource Definitions (CRDs). The only sane way to run that at scale is to treat the cluster as a declarative system: GitOps, immutable images, and version‑controlled YAML stored somewhere outside the cluster. In that world, the job of an agentic AI is not to hot‑patch running pods, nor to edit the live Kubernetes YAML in place; it is to help humans design and test the manifests, Helm charts, and pipelines that live in Git.
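To make that division of labor concrete, here is a minimal sketch of the kind of pre‑merge check an AI assistant might help write, rather than touching the cluster itself. It renders a Helm chart with `helm template` and flags manifests that violate a couple of simple policies. The chart path, the specific rules, and the choice of the `helm` CLI plus PyYAML are illustrative assumptions, not a prescribed implementation.

```python
#!/usr/bin/env python3
"""Illustrative GitOps pre-merge check: render a Helm chart and flag risky
manifests before a human reviews the pull request. Chart path and policy
rules are hypothetical; adapt them to your own repository."""

import subprocess
import sys

import yaml  # PyYAML

CHART_PATH = "charts/example-app"  # hypothetical chart location in the Git repo


def render_chart(chart_path: str) -> str:
    """Render the chart to plain YAML with `helm template` (no cluster access needed)."""
    result = subprocess.run(
        ["helm", "template", chart_path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout


def check_manifests(rendered: str) -> list[str]:
    """Return a list of policy violations found in the rendered manifests."""
    problems = []
    for doc in yaml.safe_load_all(rendered):
        if not doc:
            continue
        kind = doc.get("kind", "")
        name = doc.get("metadata", {}).get("name", "<unnamed>")
        if kind in ("Deployment", "StatefulSet"):
            containers = (doc.get("spec", {})
                             .get("template", {})
                             .get("spec", {})
                             .get("containers", []))
            for c in containers:
                image = c.get("image", "")
                # Mutable tags defeat the point of immutable, versioned images.
                if image.endswith(":latest") or ":" not in image:
                    problems.append(
                        f"{kind}/{name}: container '{c.get('name')}' uses mutable image tag '{image}'"
                    )
                if "resources" not in c:
                    problems.append(
                        f"{kind}/{name}: container '{c.get('name')}' has no resource requests/limits"
                    )
    return problems


if __name__ == "__main__":
    violations = check_manifests(render_chart(CHART_PATH))
    for v in violations:
        print(f"POLICY: {v}")
    sys.exit(1 if violations else 0)
```

Wired into CI, a check like this runs on every pull request, so an agent’s proposed changes go through the same review and test gate as a human’s, and the cluster only ever converges on what Git has approved.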



