Whether you asked for it or not
For years, we have talked about cloud-first strategies, with the big hyperscalers competing on compute, storage, databases, and global reach. Generative AI changed the game. The center of gravity is shifting from generic infrastructure to AI-native platforms: GPUs, proprietary foundation models, vector databases, agent frameworks, copilots, and AI-integrated everything.
You can see the shift in how providers talk about themselves. Earnings calls now highlight GPU and AI accelerator spending as the new core investment. Homepages and conferences lead with AI platforms, copilots, and agentic AI, while traditional IaaS and PaaS take a back seat. Databases, developer tools, workflow engines, and integration services are all being refactored or wrapped with AI capabilities that are enabled by default or just a click away.
At first glance, this appears to be progress. You see more intelligent search, auto-generated code, anomaly detection, predictive insights, and AI assistants integrated into every console. However, behind the scenes, each of these conveniences typically relies on proprietary APIs, opinionated data formats, and a growing assumption that your workloads and data will stay within that cloud.
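One way to blunt that assumption is to keep a seam between application code and any proprietary AI API. The sketch below is purely illustrative: every class and function name is hypothetical, and no real provider SDK is referenced. It shows the general pattern of coding against a provider-neutral interface so that a cloud-specific embedding service can be swapped for a local or open alternative without touching application logic.

```python
from typing import Protocol


class EmbeddingClient(Protocol):
    """Provider-neutral interface the application codes against."""

    def embed(self, text: str) -> list[float]: ...


class CloudVendorEmbeddings:
    """Hypothetical adapter for a proprietary, cloud-specific embedding API.

    If the vendor changes terms or pricing, only this adapter needs
    rewriting -- the rest of the application is untouched.
    """

    def embed(self, text: str) -> list[float]:
        # A real adapter would call the vendor's SDK here; this stub
        # just returns a deterministic placeholder vector.
        return [float(len(text)), 0.0]


class LocalEmbeddings:
    """A local or open-model alternative behind the same interface."""

    def embed(self, text: str) -> list[float]:
        # Placeholder: derive a toy two-dimensional vector from the text.
        return [sum(map(ord, text)) % 1000 / 1000.0, 1.0]


def index_document(client: EmbeddingClient, text: str) -> list[float]:
    # Application logic depends only on the neutral interface,
    # never on a specific provider's SDK.
    return client.embed(text)
```

The adapter pattern here is deliberately boring: the point is not the toy vectors but that the dependency on any one cloud's AI service is confined to a single, replaceable class.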