Still, it’s worth understanding what Microsoft is doing, as the technologies it’s implementing will affect you and your virtual infrastructure.
Cooling CPUs with microfluidics
Russinovich’s first presentation took a layered approach to Azure, starting with how its data centers are evolving. The scale of the platform is certainly impressive: It now has more than 70 regions and over 400 data centers. They’re connected by more than 600,000 kilometers of fiber, with links spanning oceans and continents to tie major population centers into the same network.
As workloads evolve, so do data centers, and that means rethinking how Azure cools its hardware. Power and cooling demands, especially from AI workloads, are forcing server redesigns that bring cooling right onto the chip using microfluidics. This is the next step beyond current liquid-cooling designs, which place cold plates on top of a chip. Microfluidics goes several steps further, redesigning the chip packaging to bring coolant directly to the silicon die. By putting cooling right where the processing happens, it’s possible to increase hardware density, stacking cooling layers between memory, processing, and accelerators, all in the same package.
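To see why moving coolant closer to the die matters, it helps to think of cooling as a stack of thermal resistances between the silicon and the coolant. The sketch below is a minimal, back-of-the-envelope comparison, not Microsoft’s design data: the power figure and every resistance value are assumed, order-of-magnitude numbers chosen only to illustrate how removing layers from the stack lowers junction temperature.

```python
# Illustrative only: compare junction temperature for a conventional cold plate
# vs. die-level microfluidic cooling using a simple series thermal-resistance
# model, T_junction = T_coolant + power * sum(R_thermal).
# All numbers below are assumptions for illustration, not vendor figures.

POWER_W = 700          # assumed package power for an AI accelerator
T_COOLANT_C = 30.0     # assumed coolant inlet temperature

# Cold plate: heat crosses the die, a thermal interface material (TIM),
# the integrated heat spreader, a second TIM, then the cold plate itself.
cold_plate_stack_c_per_w = {
    "die": 0.02,
    "TIM1": 0.03,
    "heat_spreader": 0.02,
    "TIM2": 0.03,
    "cold_plate": 0.02,
}

# Microfluidics: coolant channels at the die itself remove most of the stack.
microfluidic_stack_c_per_w = {
    "die": 0.02,
    "channel_convection": 0.02,
}

def junction_temp(stack: dict[str, float]) -> float:
    """Junction temperature for a series thermal-resistance stack (°C)."""
    return T_COOLANT_C + POWER_W * sum(stack.values())

print(f"Cold plate:   {junction_temp(cold_plate_stack_c_per_w):.1f} C")
print(f"Microfluidic: {junction_temp(microfluidic_stack_c_per_w):.1f} C")
```

With these assumed values, the cold-plate stack lands around 114°C while the microfluidic stack stays near 58°C for the same power, which is the headroom that lets designers pack processing, memory, and cooling layers more densely in a single package.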



