Data center infrastructure consumes a lot of power. According to some studies, data centers account for about 3% of all electricity generated on the planet. While it’s easy to imagine all that energy being hungrily gobbled up by rack after rack of servers, nearly half of it is actually consumed by cooling equipment that keeps chilled air flowing through the facility to prevent those servers from overheating.
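One common way to express this cooling overhead is Power Usage Effectiveness (PUE), the ratio of total facility power to IT equipment power. The sketch below shows the arithmetic with illustrative numbers only (they are not measurements from any specific facility): if cooling draws nearly as much power as the servers themselves, PUE approaches 2.0.

```python
# Illustrative sketch of how cooling overhead translates into PUE.
# All power figures below are assumptions for illustration.

def pue(it_power_kw: float, cooling_power_kw: float, other_power_kw: float = 0.0) -> float:
    """PUE = total facility power / IT equipment power."""
    total = it_power_kw + cooling_power_kw + other_power_kw
    return total / it_power_kw

# A hypothetical facility where cooling draws almost as much power
# as the IT load ("nearly half" of total power goes to cooling):
print(pue(it_power_kw=1000, cooling_power_kw=900))  # -> 1.9
```

An ideal facility, where every watt goes to compute, would have a PUE of exactly 1.0; the gap above 1.0 is the overhead that more efficient cooling can reclaim.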
Although air cooling technology has gone through many improvements over the years, it is still limited by a number of fundamental problems. Aside from the high energy costs, air conditioning systems take up a lot of valuable space, introduce moisture into sealed environments, and are rather notorious for mechanical failures. Until recently, however, data centers had no other options for meeting their cooling needs. With new developments in liquid cooling, many data centers are beginning to experiment with new methods for solving their ongoing heat problems.
While early incarnations of liquid cooling systems were complicated, messy, and very expensive, the latest generation of the technology provides a more efficient and effective cooling solution. Unlike air cooling, which requires a great deal of power and introduces both pollutants and condensation into the data center, liquid cooling is cleaner as well as more targeted and scalable. The two most common cooling designs are full immersion cooling and direct-to-chip cooling.
Immersion systems involve submerging the hardware itself into a bath of non-conductive, non-flammable dielectric fluid. Both the fluid and the hardware are contained inside a leak-proof case. Dielectric fluid absorbs heat far more efficiently than air, and as the heated fluid turns to vapor, it condenses and falls back into the bath to continue the cooling cycle. Direct-to-chip cooling uses pipes that deliver liquid coolant directly into a cold plate that sits atop a motherboard’s processors to draw off heat. The extracted heat is then fed into a chilled-water loop to be transported back to the facility’s cooling plant and expelled into the outside atmosphere. Both methods offer far more efficient cooling solutions for power-hungry data center deployments.
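The efficiency gap comes down to volumetric heat capacity: how much heat a cubic metre of coolant can carry away per degree of temperature rise. The sketch below uses ballpark property values (the dielectric fluid figures are rough assumptions typical of fluorocarbon coolants, not data for any particular product) to show the orders of magnitude involved.

```python
# Rough sketch of why liquids remove heat more effectively than air.
# Heat absorbed per unit volume of coolant: q = rho * c_p * dT.
# Property values are ballpark figures for illustration only.

def heat_per_m3_kj(density_kg_m3: float, cp_j_per_kg_k: float, delta_t_k: float) -> float:
    """kJ of heat absorbed by one cubic metre of coolant over a delta_t_k rise."""
    return density_kg_m3 * cp_j_per_kg_k * delta_t_k / 1000.0

dt = 10.0  # assume a 10 K allowable coolant temperature rise

air        = heat_per_m3_kj(1.2,    1005.0, dt)  # ~12 kJ/m^3
water      = heat_per_m3_kj(1000.0, 4186.0, dt)  # chilled-water / cold-plate loops
dielectric = heat_per_m3_kj(1600.0, 1100.0, dt)  # assumed immersion-bath fluid

print(f"water carries ~{water / air:.0f}x more heat per m^3 than air")
print(f"dielectric fluid carries ~{dielectric / air:.0f}x more than air")
```

In other words, moving the same amount of heat with air requires circulating thousands of times more volume, which is where the fan power and floor space of conventional air conditioning go.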
Efficiency will be a key concern for data centers in the future. A new generation of processors capable of running powerful machine learning, artificial intelligence, and analytics workloads brings with it massive energy demands and generates huge amounts of heat. From Google’s custom-built Tensor Processing Units (TPUs) to high-performance CPUs paired with graphics processing unit (GPU) accelerators, the processing muscle powering modern AI is already straining the power and cooling capacity of data center infrastructure. With more and more organizations implementing machine learning and even offering AI solutions as a service, these demands will surely increase.
Google found this out the hard way when it introduced its TPU 3.0 processors. While air cooling was sufficient for the first two generations, the latest iteration generated far too much heat to be cooled efficiently by anything short of direct-to-chip liquid cooling. Fortunately, the tech giant had been researching viable liquid cooling solutions for several years and was able to integrate the new system relatively quickly.
Even for data center infrastructures that aren’t facilitating AI-driven machine learning, server rack densities and storage densities are increasing rapidly. As workloads continue to grow and data centers look at replacing their older, less efficient cooling systems, they will have more of an incentive to consider liquid cooling, since adopting it as part of a full replacement won’t mean maintaining air and liquid systems side by side. Data centers are currently engaged in an arms race to increase rack density to provide more robust and comprehensive services to their clients. From smaller, modular facilities to massive hyperscale behemoths, efficiency and performance go hand in hand as each generation of server enables them to do more with less. If the power demands of these deployments continue to grow in the coming years, inefficient air cooling technology will no longer be a viable solution.
For many years, liquid cooling could not be used for storage drives because older hard disk drives (HDDs) utilized moving internal parts that could not be exposed to liquid and could not be sealed. With the rapid proliferation of solid state drives (SSDs) and the development of sealed HDDs filled with helium, immersion-based cooling solutions have become far more practical and reliable.
Liquid cooling systems can deliver comparable cooling performance to air-based systems with a much smaller footprint. This could potentially make them an ideal solution for edge data centers, which are typically smaller and will likely house more high-density hardware in the future. Designing these facilities from the ground up with liquid cooling solutions will allow them to pack more computing power in a smaller space, lending them the versatility companies are coming to expect from edge data centers.
While liquid cooling probably won’t replace air conditioning systems in every data center anytime soon, it is becoming an increasingly attractive solution for many facilities. For data centers with extremely high-density deployments, or those that provide powerful machine learning services, liquid cooling could allow them to continue to expand their services thanks to its ability to deliver more effective and efficient cooling.