What You Should Know About Data Center Cooling Technologies

By: Kaylie Gyarmathy on January 17, 2020

The average data center uses quite a lot of electricity. That should come as no surprise considering the amount of computing power these facilities pack onto a single data floor, not to mention the cooling infrastructure required to maintain the ideal operating environment for all that equipment. Taken together, data centers consume about three percent of the world’s electricity. With more energy-intensive hyperscale facilities on the way in the coming years, power usage is likely to keep climbing despite improvements in efficiency.


For colocation customers, understanding the power and cooling characteristics of their data center infrastructure is important because it helps them to better assess their potential costs and future computing needs. Data center power design, for instance, can have a major impact on how a company decides to grow its capacity. Fortunately, the power and cooling capabilities of a data center tend to be relatively easy to evaluate.

Data Center Power Demands

Assessing power requirements is one of the first tasks any organization must undertake when it decides to move assets into a data center. The power demands of equipment usually make up a sizable portion of colocation costs, and deploying powerful servers in high-density cabinets will cost more than a comparable number of less powerful units. Regardless of the type of servers being used, they will also need power distribution units (PDUs) rated for the amperage they draw while in use.
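As a rough illustration of that sizing exercise, the sketch below estimates the amperage a cabinet's PDU must handle from the nameplate wattage of its servers. The server wattages, circuit voltage, and the 80 percent continuous-load derating are illustrative assumptions, not figures for any specific deployment.

```python
# Minimal sketch: estimate the amperage a cabinet's PDU must handle.
# Server wattages and the 208 V circuit are illustrative assumptions.

server_watts = [450, 450, 750, 750, 1100]   # nameplate draw of each server (W)
circuit_voltage = 208                        # assumed circuit voltage (V)
derating = 0.8                               # common 80% continuous-load rule

total_watts = sum(server_watts)
required_amps = total_watts / circuit_voltage

def usable_amps(breaker_rating):
    """Continuous current a breaker can safely supply after derating."""
    return breaker_rating * derating

print(f"Total draw: {total_watts} W -> {required_amps:.1f} A at {circuit_voltage} V")
print(f"A 30 A PDU safely supplies about {usable_amps(30):.0f} A continuous")
```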

A data center’s electrical system should incorporate some level of redundancy, including uninterruptible power supply (UPS) battery systems and a backup generator that can provide enough power to keep the facility running if the main feed is disrupted for any length of time. Should the power ever go out, the UPS systems keep all computing equipment running long enough for the generator to come online. In many cases, data center power infrastructure incorporates more than one electrical feed into the facility, which provides additional redundancy.
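The sketch below shows one way to sanity-check that bridging arrangement: confirm the UPS runtime at the current load covers the generator start-up gap plus a safety margin. All of the figures are illustrative assumptions rather than vendor specifications.

```python
# Minimal sketch: check that UPS battery runtime covers the generator start-up gap.
# All figures below are illustrative assumptions, not vendor specifications.

generator_start_seconds = 60          # assumed time for the generator to start and accept load
transfer_margin_seconds = 120         # assumed margin for failed starts / retransfer
ups_runtime_seconds = 8 * 60          # runtime quoted by the UPS vendor at the current load

required = generator_start_seconds + transfer_margin_seconds
if ups_runtime_seconds >= required:
    print(f"OK: {ups_runtime_seconds}s of UPS runtime covers the {required}s gap")
else:
    print(f"Shortfall: need {required - ups_runtime_seconds}s more UPS runtime")
```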

Colocation facilities also have clearly defined power specifications that indicate how much power they can supply to each cabinet. For high-density deployments, colocation customers need to find a data center able to provide between 10 and 20 kW of power per cabinet. While a company with much lower power needs might not be concerned with these limits initially, it should keep in mind that its power requirements could increase over time as it grows. Scaling operations within a data center that has the power design to accommodate that growth is often preferable to the hassle of migrating to an entirely different facility.
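One simple way to think about that headroom is to project current cabinet density forward and compare it against the facility's per-cabinet limit. The starting density, growth rate, and 15 kW limit in this sketch are assumptions chosen only to illustrate the check.

```python
# Minimal sketch: project cabinet power density forward and compare it against
# an assumed per-cabinet limit. All figures are illustrative assumptions.

current_kw_per_cabinet = 6.5
annual_growth = 0.20          # assumed 20% year-over-year increase in draw
facility_limit_kw = 15.0      # assumed per-cabinet power the facility can deliver

kw = current_kw_per_cabinet
for year in range(1, 6):
    kw *= 1 + annual_growth
    status = "within limit" if kw <= facility_limit_kw else "EXCEEDS limit"
    print(f"Year {year}: {kw:.1f} kW per cabinet ({status})")
```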

Data Center Cooling Technology

While the power requirements of colocated equipment are a major factor in colocation costs, a data center’s cooling solutions are significant as well. The high costs of cooling infrastructure are often one of the leading reasons why companies abandon on-premises data solutions in favor of colocation services. Private data centers are often quite inefficient when it comes to their cooling systems. They also usually lack the site monitoring capabilities of colocation facilities, which makes it more difficult for them to fully optimize their infrastructure to reduce cooling demands.

With power densities increasing rapidly, many companies are investing heavily in new data center cooling technologies to ensure that they’ll be able to harness the computing power of the next generation of processors. Larger tech companies like Google are even leveraging the power of artificial intelligence to improve cooling efficiency. And previously farfetched solutions like liquid server cooling systems are quickly becoming commonplace as companies experiment with innovative ways to cool a new generation of high-performance processors.

One of the most important innovations in data center infrastructure management in recent years has been the application of predictive analytics. Today’s data centers generate massive amounts of information about their power and cooling demands. The most efficient facilities have harnessed that data to model trends and usage patterns, allowing them to better manage their power and cooling needs. Business intelligence tools like vXchnge’s award-winning in\site platform even allow colocation customers to monitor network and server performance in real time. By cycling servers down during low-traffic periods and anticipating when power and cooling needs will be highest, data centers have been able to significantly improve their efficiency metrics.
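To give a flavor of the kind of trend modelling described above, the sketch below smooths an assumed day of hourly IT load with a simple moving average and flags the hour when cooling demand is likely to peak. Real DCIM platforms use far richer models; the load series here is invented for the example.

```python
# Illustrative sketch: smooth hourly IT load with a moving average and flag the
# likely peak cooling hour. The load series is an assumption for the example.

hourly_kw = [310, 300, 295, 290, 305, 340, 420, 510, 560, 580,
             590, 585, 575, 570, 565, 555, 540, 520, 480, 430,
             390, 360, 340, 325]  # one day of assumed facility IT load (kW)

window = 3
smoothed = []
for h in range(len(hourly_kw)):
    chunk = hourly_kw[max(0, h - window + 1):h + 1]
    smoothed.append(sum(chunk) / len(chunk))

peak_hour = max(range(len(smoothed)), key=lambda h: smoothed[h])
print(f"Expected peak cooling demand around hour {peak_hour}:00 "
      f"(~{smoothed[peak_hour]:.0f} kW of IT load to reject)")
```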

Data Center Cooling Technologies Defined

Given the importance of data center cooling infrastructure, it’s worth taking a moment to examine some commonly used and new data center cooling technologies.

Calibrated Vectored Cooling (CVC) 

A form of data center cooling technology designed specifically for high-density servers. It optimizes the airflow path through equipment so the cooling system can manage heat more effectively, making it possible to pack more circuit boards into each server chassis while using fewer fans.

Chilled Water System 

A cooling system, common in mid-to-large data centers, that uses chilled water to cool air drawn in by computer room air handlers (CRAHs). The water is supplied by a chiller plant located elsewhere in the facility.

Cold Aisle/Hot Aisle Design 

A common server rack layout that uses alternating rows of “cold aisles” and “hot aisles.” The cold aisles face the cold-air intakes on the front of the racks, while the hot aisles face the hot-air exhausts on the back of the racks. Hot aisles expel heated air toward the air conditioning intakes, where it is chilled and then vented into the cold aisles. Empty rack slots are covered with blanking panels to prevent overheating and wasted cold air.

Computer Room Air Conditioner (CRAC)

One of the most common fixtures in any data center, a CRAC unit works much like a conventional air conditioner: a compressor drives a refrigeration cycle that cools a coil, and fans draw warm room air across it. CRAC units are relatively inefficient in terms of energy usage, but the equipment itself is comparatively inexpensive.

Computer Room Air Handler (CRAH) 

A CRAH unit functions as part of a broader system built around a chilled water plant (or chiller) somewhere in the facility. Chilled water flows through a cooling coil inside the unit, and modulating fans draw warm air from the data floor across that coil. Because the chiller plant can take advantage of cooler outdoor conditions, CRAH-based systems tend to be more efficient in locations with lower year-round temperatures.

Critical Cooling Load

This measurement represents the total usable cooling capacity (usually expressed in watts of power) on the data center floor for the purposes of cooling servers.
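Since cooling vendors often quote capacity in BTU/hr or refrigeration tons rather than watts, the short sketch below converts between the units. The 750 kW load is an illustrative assumption.

```python
# Minimal sketch: convert a critical cooling load in watts into the BTU/hr and
# refrigeration-ton figures cooling vendors commonly quote. 750 kW is assumed.

critical_load_watts = 750_000
btu_per_hour = critical_load_watts * 3.412      # 1 W ~= 3.412 BTU/hr
refrigeration_tons = btu_per_hour / 12_000      # 1 ton = 12,000 BTU/hr

print(f"{critical_load_watts / 1000:.0f} kW -> {btu_per_hour:,.0f} BTU/hr "
      f"-> {refrigeration_tons:.0f} tons of cooling")
```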

Direct-to-Chip Cooling

A data center liquid cooling method that uses pipes to deliver coolant directly to a cold plate mounted on a motherboard’s processors, drawing off their heat. The extracted heat is fed into a chilled-water loop and carried away to the facility’s chiller plant. Because this system cools processors directly, it is one of the most effective forms of server cooling.

Evaporative Cooling

Manages temperature by exposing hot air to water, which evaporates and draws heat out of the air. The water can be introduced as a mist or through a wet material such as a filter or mat. While this approach is very energy efficient because it avoids CRAC or CRAH units, it does require a lot of water. Data center cooling towers are often used to facilitate evaporation and transfer excess heat to the outside atmosphere.
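To see why the water requirement is significant, the rough sketch below estimates the minimum water evaporated per hour to reject a given heat load, using an approximate latent heat of vaporization. The 500 kW load is an assumption, and real systems consume more water due to drift and blowdown.

```python
# Rough sketch: minimum water evaporated per hour to reject an assumed heat load.
# Latent heat of vaporization is taken as roughly 2,400 kJ/kg.

heat_load_kw = 500                    # assumed heat load to reject
latent_heat_kj_per_kg = 2400          # approximate; varies with temperature

kg_per_second = heat_load_kw / latent_heat_kj_per_kg   # kW = kJ/s
litres_per_hour = kg_per_second * 3600                 # ~1 kg of water ~= 1 litre

print(f"Rejecting {heat_load_kw} kW evaporates roughly {litres_per_hour:,.0f} litres of water per hour")
```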

Free Cooling

Any data center cooling system that uses the outside atmosphere to supply cooler air to the servers rather than continually chilling the same recirculated air. While it can only be implemented in certain climates, it is a very energy-efficient form of server cooling.

Immersion System

An innovative new data center liquid cooling solution that submerges hardware into a bath of non-conductive, non-flammable dielectric fluid.

Liquid Cooling

Any cooling technology that uses liquid to carry heat away. Increasingly, data center liquid cooling refers specifically to direct cooling solutions that bring liquid to server components (such as processors) to cool them more efficiently.

Raised Floor

A frame that lifts the data center floor above the building’s concrete slab. The space between the two is used for water-cooling pipes or increased airflow. While power and network cables are sometimes run through this space as well, newer data center cooling designs and best practices place these cables overhead.

Data Center Cooling Market Growth Projections

Recent market analyses indicate active and ongoing interest in the data center cooling market. Projections from ResearchandMarkets.com suggest it will be worth $8 billion by 2023, an estimate that represents a six percent compound annual growth rate (CAGR).

Moreover, related findings from Market Study Report, published in December 2017, focus specifically on new data center cooling technologies and the market value associated with them. That study projects even stronger growth on the technology side, though it measures the increase through 2024 rather than 2023: a CAGR of 12 percent and a total market worth of $20 billion.
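For readers unfamiliar with how CAGR projections work, the short sketch below shows the underlying arithmetic. The starting value and horizon are assumptions for illustration, not figures taken from either cited study.

```python
# A quick sketch of compound annual growth rate (CAGR) arithmetic.
# Starting value and horizon below are illustrative assumptions.

def project(value, cagr, years):
    """Future value after compounding `cagr` annually for `years` years."""
    return value * (1 + cagr) ** years

def implied_cagr(start, end, years):
    """Growth rate implied by moving from `start` to `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

# Example: an assumed $10B market growing at 12% for 6 years
print(f"${project(10, 0.12, 6):.1f}B")               # ~ $19.7B
print(f"{implied_cagr(10, 20, 6) * 100:.1f}% CAGR")  # ~ 12.2%
```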

One reason cited for the uptick in the broader data center cooling market is the trend of data centers being built in developing markets and regions such as Singapore and Latin America. Analysts believe that as data centers begin operating in those places, there will be a continual emphasis on running the facilities as efficiently as possible. That emphasis makes it more likely that data center owners and managers will look for innovative options, increasing the value of new cooling technologies such as liquid cooling.

Data Centers and Liquid Cooling Technology

Although air cooling technology has improved considerably over the years, it is still limited by a number of fundamental problems. Aside from high energy costs, air conditioning systems take up valuable space, introduce moisture into sealed environments, and are notorious for mechanical failures. Until recently, data centers had few other options for meeting their cooling needs. With new developments in liquid cooling, many facilities are beginning to experiment with new methods for solving their ongoing heat problems.

What is Liquid Cooling?

While early incarnations of liquid cooling were complicated, messy, and very expensive, the latest generation of the technology provides a more efficient and effective cooling solution. Unlike air cooling, which requires a great deal of power and introduces both pollutants and condensation into the data center, liquid cooling is cleaner, more targeted, and more scalable. The two most common designs are full immersion cooling and direct-to-chip cooling.

Immersion systems submerge the hardware itself in a bath of non-conductive, non-flammable dielectric fluid. Both the fluid and the hardware are contained inside a leak-proof case. Dielectric fluid absorbs heat far more efficiently than air, and as the heated fluid turns to vapor, it condenses and falls back into the bath to continue the cooling cycle. Direct-to-chip cooling uses pipes that deliver liquid coolant to a cold plate sitting atop a motherboard’s processors to draw off heat. The extracted heat is then fed into a chilled-water loop, transported back to the facility’s cooling plant, and expelled into the outside atmosphere. Both methods offer far more efficient cooling for power-hungry data center deployments.
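A back-of-the-envelope comparison of volumetric heat capacity (density times specific heat) shows why liquids absorb heat so much more effectively than air. The property values below are rounded textbook figures, and dielectric fluids vary considerably by product.

```python
# Rough comparison of volumetric heat capacity for common coolants.
# Values are rounded approximations; engineered dielectric fluids vary by product.

coolants = {
    #                 density (kg/m^3), specific heat (kJ/kg*K)
    "air":              (1.2,  1.005),
    "water":            (997,  4.18),
    "dielectric fluid": (1800, 1.1),   # assumed typical immersion fluid
}

for name, (density, specific_heat) in coolants.items():
    volumetric = density * specific_heat  # kJ absorbed per cubic metre per kelvin
    print(f"{name:17s} ~{volumetric:7.0f} kJ/(m^3*K)")
```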

AI Energy Demands

Efficiency will be a key concern for data centers in the future. A new generation of processors capable of running powerful machine learning, artificial intelligence, and analytics workloads brings with it massive energy demands and generates huge amounts of heat. From Google’s custom-built Tensor Processing Units (TPUs) to high-performance CPUs paired with graphics processing unit (GPU) accelerators, the processing muscle powering modern AI is already straining the power and cooling capacity of data center infrastructure. With more and more organizations implementing machine learning and even offering AI solutions as a service, these demands will only increase.

Google found this out the hard way when the company introduced its TPU 3.0 processors. While air cooling technology was sufficient to cool the first two generations, the latest iteration generated far too much heat to be cooled efficiently with anything short of direct-to-chip liquid cooling. Fortunately, the tech giant had been researching viable liquid cooling solutions for several years and was able to integrate the new system relatively quickly.

High-Density Server Demands

Even for data center infrastructures that aren’t facilitating AI-driven machine learning, server rack densities and storage densities are increasing rapidly. As workloads continue to grow and providers look at replacing their older, less efficient cooling systems, they will have more of an incentive to consider liquid cooling, since adopting it at replacement time won’t mean maintaining two separate systems side by side. Data centers are currently engaged in an arms race to increase rack density and provide more robust and comprehensive services to their clients. From smaller, modular facilities to massive hyperscale behemoths, efficiency and performance go hand in hand as each generation of servers enables them to do more with less. If the power demands of these deployments continue to grow in the coming years, inefficient air cooling technology will no longer be a viable solution.

For many years, liquid cooling could not be used for storage drives because older hard disk drives (HDDs) relied on moving internal parts that could not be exposed to liquid or sealed off from it. With the rapid proliferation of solid-state drives (SSDs), the development of sealed, helium-filled HDDs, and innovative new storage technologies like 5D memory crystals, immersion-based cooling solutions have become far more practical and reliable.

Edge Computing Deployments

Liquid cooling systems can deliver comparable cooling performance to air-based systems with a much smaller footprint. This could potentially make them an ideal solution for edge data centers, which are typically smaller and will likely house more high-density hardware in the future. Designing these facilities from the ground up with liquid cooling solutions will allow them to pack more computing power in a smaller space, lending them the versatility companies are coming to expect from edge data centers.

While liquid cooling probably won’t replace air conditioning systems in every data center anytime soon, it is becoming an increasingly attractive option for many facilities. For data centers that host extremely high-density deployments or provide powerful machine learning services, liquid cooling could allow them to continue expanding their services thanks to its ability to deliver more effective and efficient cooling.

6 Big Mistakes in Data Center Cooling

Since efficiency is so critical to controlling data center cooling costs and delivering consistent uptime service, here are six big mistakes data center managers need to avoid.

1. Bad Cabinet Layout

A good cabinet layout uses a hot-aisle/cold-aisle design with computer room air handlers at the end of each row. An island configuration without a well-planned orientation is very inefficient.

2. Empty Cabinets

Empty cabinets can skew airflow, allowing hot exhaust air to leak back into your cold aisle. If you have empty cabinets, make sure your cold air is contained.

3. Empty Spaces Between Equipment

How many times have you seen cabinets with empty, uncovered spaces between the hardware? These empty spaces can ruin your airflow management. If the spaces in the cabinet are not sealed, hot air can leak back into your cold aisle. A conscientious operator will make sure the spaces are sealed.

4. Raised Floor Leaks

Raised floor leaks occur when cold air escapes under your raised floor into support columns or adjacent spaces. These leaks cause a loss of pressure, which can allow dust, humidity, and warm air to enter your cold aisle environment. Resolving them requires a full inspection of the support columns and perimeter, sealing any leaks found.

5. Leaks Around Cable Openings

There are many openings in floors and cabinets for cable management. While inspecting under the raised floor for leaks, operators should also look for unsealed cable openings and holes under remote power panels and power distribution units. If left open, these gaps let cold air escape.

6. Multiple Air Handlers Fighting to Control Humidity

What happens when one air handler tries to dehumidify the air while another unit tries to humidify the same air? The result can be a lot of wasted energy while the two units fight for control. By thoroughly planning your humidity control points, you can reduce the risk of this occurring.
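One common way to avoid this conflict is to have every air handler act on a shared setpoint with a deadband, so no unit humidifies while another dehumidifies. The sketch below illustrates the idea; the setpoint and deadband values are assumptions.

```python
# Minimal sketch: a shared setpoint plus deadband so air handlers never fight.
# Thresholds are illustrative assumptions.

SETPOINT_RH = 45.0   # assumed target relative humidity (%)
DEADBAND = 10.0      # no unit acts while RH stays within setpoint +/- 5%

def humidity_action(measured_rh):
    """Return the single action every air handler should agree on for this reading."""
    if measured_rh > SETPOINT_RH + DEADBAND / 2:
        return "dehumidify"
    if measured_rh < SETPOINT_RH - DEADBAND / 2:
        return "humidify"
    return "hold"

for rh in (38.0, 44.0, 52.0):
    print(f"RH {rh:.0f}% -> {humidity_action(rh)}")
```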

4 Keys to an Optimal Data Center Cooling System


For most businesses, the first step in optimizing their data center cooling system is understanding how data center cooling works – or, more importantly, how data center cooling should work for their specific business and technical requirements.

Here are four best practices for ensuring optimal data center cooling:

1. Use Hot Aisle/Cold Aisle Design

This data center cooling design lines server racks in alternating rows to create "hot aisles" (consisting of the hot air exhaust on the back of the racks) and "cold aisles" (consisting of cold air intakes on the front of the racks). The idea is for the hot aisle to expel hot air into the air conditioning intakes where it is chilled and pushed through the air conditioning vent to be recirculated into the cold aisle.

2. Implement Containment Measures

Since air has a stubborn tendency to move wherever it wants, modern data centers implement further containment measures by installing walls and doors to direct air flow, keeping the cold air in the cold aisles and hot air in the hot aisles. Efficient containment allows data centers to run higher rack densities while reducing energy consumption.

3. Inspect & Seal Leaks in Perimeters, Support Columns, and Cable Openings

Water damage presents a huge problem to data centers and is the second leading cause of data loss and downtime behind electrical fires. Since water damage is often not covered by business insurance policies (and even when it is, there's no way to replace lost data), data centers cannot afford to ignore the threat. Fortunately, most leaks are easily detected and preventable with a bit of forethought and caution. Tools such as fluid and chemical sensing cables, zone controllers, and humidity sensors can spot leaks before they become a serious problem.

4. Synchronize Humidity Control Points

Many data centers utilize air-side economizer systems, or "free air-side cooling." These systems introduce outside air into the data center to improve energy efficiency; unfortunately, they can also allow moisture inside. Too much moisture in the air can lead to condensation, which will eventually corrode and short out electrical systems. Adjusting the climate controls to reduce moisture may seem like a solution, but that can lead to problems as well. If the air becomes too dry, static electricity can build up, which can also cause equipment damage. It's imperative, therefore, that a data center's humidity controls account for moisture coming in with the outside air to maintain an ideal environment for the server rooms.
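One way to account for that incoming moisture is to estimate the dew point of the outside air and bypass the economizer when it approaches the supply-air temperature. The sketch below uses the Magnus approximation for the dew point; the cold-aisle supply temperature is an assumption for the example.

```python
# Hedged sketch: estimate the dew point of incoming outside air (Magnus
# approximation) and flag condensation risk for free air cooling.
# The supply-air temperature used in the comparison is an assumption.

import math

def dew_point_c(temp_c, rh_percent):
    """Approximate dew point in Celsius from temperature and relative humidity."""
    a, b = 17.27, 237.7
    gamma = (a * temp_c) / (b + temp_c) + math.log(rh_percent / 100.0)
    return (b * gamma) / (a - gamma)

outside_temp_c, outside_rh = 25.0, 60.0
supply_air_c = 18.0   # assumed cold-aisle supply temperature

dp = dew_point_c(outside_temp_c, outside_rh)
if dp >= supply_air_c:
    print(f"Dew point {dp:.1f}C >= {supply_air_c}C supply air: condensation risk, bypass economizer")
else:
    print(f"Dew point {dp:.1f}C is safely below the supply-air temperature")
```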

Getting the Most Out of Data Center Cooling

As power demands continue to increase, new data center cooling technologies will be needed to keep facilities operating at peak capacity. Evaluating a data center’s power and cooling capabilities is critical for colocation customers.

By identifying facilities with a solid data center infrastructure in place that can drive efficiencies and improve performance, colocation customers can make better long-term decisions about their own infrastructure. Given the difficulties associated with migrating assets and data, finding a data center partner with the power and cooling capacity to accommodate both present and future needs can provide a strategic advantage for a growing organization.
