“Enterprises relying heavily on high-speed SSDs often face steep costs, so they move infrequently accessed data to cheaper media. The trade-off has traditionally been complexity and latency, as accessing cold data can require switching systems and waiting through delays. What Google is now doing is eliminating those hurdles by making both hot and cold data accessible through the same database,” said Bradley Shimmin, lead of the data intelligence, analytics, and infrastructure practice at The Futurum Group.
The analyst was referring to storage products offered by nearly all hyperscalers: Google's Cloud Storage, Amazon's S3, and Azure's Blob Storage, all of which provide frequent-access (hot), infrequent-access (cold), and archive tiers and integrate with their respective database offerings.
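As a rough illustration of how those tiers are typically managed, the sketch below uses Google's Cloud Storage client library for Python to add lifecycle rules that demote objects to colder storage classes as they age. The bucket name and age thresholds are placeholder values chosen for the example, not details from the article.

```python
# Illustrative sketch: lifecycle rules that demote objects in a hypothetical
# Cloud Storage bucket to colder tiers as they age. Bucket name and day
# thresholds are placeholders, not values taken from the article.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("example-analytics-archive")  # hypothetical bucket

# Demote objects to Coldline 90 days after creation, then to Archive after 365.
bucket.add_lifecycle_set_storage_class_rule(storage_class="COLDLINE", age=90)
bucket.add_lifecycle_set_storage_class_rule(storage_class="ARCHIVE", age=365)
bucket.patch()  # persist the updated lifecycle configuration on the bucket
```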
“In these integrations, the database offloads cold data to this external system. Enterprises often have to manage two separate systems, deal with data movement pipelines, and potentially use different query methods for hot vs cold data,” Shimmin said.
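In practice, that split can look something like the following Python sketch, which assumes hot data sits in BigQuery while offloaded cold data sits in a Cloud Storage bucket. The dataset, table, bucket, and object names are illustrative placeholders, not a description of any particular Google integration.

```python
# Illustrative sketch of the two-system pattern Shimmin describes: hot data
# queried in the warehouse, cold data pulled back from object storage through
# a separate client and a separate parsing step. All names are hypothetical.
import csv
import io

from google.cloud import bigquery, storage

# Hot path: recent orders are queried directly in the database.
bq = bigquery.Client()
hot_rows = bq.query(
    "SELECT order_id, total FROM sales.orders "
    "WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 90 DAY)"
).result()

# Cold path: older orders were exported to object storage, so they come back
# through a different system, in a different format, via a different method.
gcs = storage.Client()
blob = gcs.bucket("example-sales-archive").blob("orders/2022.csv")
cold_rows = csv.DictReader(io.StringIO(blob.download_as_text()))

# The application (or a data movement pipeline) is left to stitch the two
# result sets back together, which is the overhead the quote refers to.
```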



