
By: Kaylie Gyarmathy on July 9th, 2019


What You Need to Know About Power and Cooling in Your Data Center



The average data center uses a tremendous amount of electricity. That should come as no surprise considering the amount of computing power packed onto a single data floor, not to mention the cooling infrastructure required to maintain the ideal operating environment for all that equipment. Taken together, data centers consume about three percent of the world’s electricity. With more energy-intensive hyperscale facilities on the way in the coming years, power usage is likely to continue increasing despite improvements in efficiency.

For colocation customers, understanding the power and cooling characteristics of their data center infrastructure is important because it helps them better assess their potential costs and future computing needs. Data center power design, for instance, can have a major impact on how a company decides to grow its capacity. Fortunately, the power and cooling capabilities of a data center tend to be relatively easy to evaluate.

Data Center Power

Assessing power requirements is one of the first tasks any organization must undertake when it decides to move assets into a data center. The power demands of equipment usually make up a sizable portion of colocation costs, and deploying powerful servers in high-density cabinets will be more expensive than a comparable number of less powerful units. Regardless of the type of servers being used, they will also need power distribution units (PDUs) able to handle the amperage they draw under load.
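As a rough illustration of how that amperage requirement adds up, here is a minimal sketch of sizing a PDU from a cabinet’s total wattage, applying the common 80% continuous-load derating. The 208 V feed and 5 kW example load are assumptions rather than figures from any particular facility, and the estimate ignores power factor and three-phase feeds.

```python
def required_pdu_amps(total_watts, voltage=208, derate=0.8):
    """Estimate the PDU amperage needed for a cabinet.

    total_watts: combined draw of all equipment in the cabinet (assumed value)
    voltage:     supply voltage; 208 V is common in North American data centers
    derate:      continuous-load derating factor (80% rule of thumb)
    """
    raw_amps = total_watts / voltage      # I = P / V for a simple single-phase estimate
    return raw_amps / derate              # size the PDU so continuous load stays under 80%

# Example: a cabinet of servers drawing 5 kW on a 208 V feed
print(round(required_pdu_amps(5000), 1))  # ~30.0 A, so a 30 A PDU is the minimum sensible size
```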

A data center’s electrical system should incorporate some level of redundancy that includes uninterruptible power supply (UPS) battery systems and a backup generator that can provide enough megawatts of power to keep the facility running if the main power is disrupted for any length of time. Should the power ever go out, the UPS systems will keep all computing equipment up and running long enough for the generator to come online. In many cases, data center power infrastructure incorporates more than one electrical feed running into the facility, which provides additional redundancy.
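To make that bridging requirement concrete, here is a simple sketch of checking whether a UPS battery bank can carry a critical load long enough for a generator to start and accept load. All of the figures (50 kWh of usable battery, a 400 kW load, a 60-second generator start with a 3x safety margin) are illustrative assumptions.

```python
def ups_bridge_ok(battery_kwh, load_kw, generator_start_s=60, margin=3.0):
    """Check whether UPS battery capacity can bridge a generator start.

    battery_kwh:        usable energy stored in the UPS batteries
    load_kw:            critical IT load the UPS must carry
    generator_start_s:  time for the generator to start and accept load
    margin:             safety multiplier over the bare start time
    """
    runtime_s = (battery_kwh / load_kw) * 3600   # hours of runtime converted to seconds
    return runtime_s >= generator_start_s * margin

# Example: 50 kWh of usable battery carrying a 400 kW critical load
print(ups_bridge_ok(50, 400))   # True: ~450 s of runtime vs. a 180 s requirement
```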

Colocation facilities also have clearly defined power specifications that indicate how much power they can supply to each cabinet. For high-density deployments, colocation customers need to find a data center with infrastructure able to provide between 10 and 20 kW of power per cabinet. While a company with much lower power needs might not be concerned with these limits initially, it should keep in mind that its power requirements could increase over time as it grows. Scaling up within a facility whose power design can accommodate that growth is often preferable to the hassle of migrating to an entirely different facility.
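One way to see why per-cabinet limits matter for growth planning is a quick capacity calculation. The sketch below simply divides a projected IT load by an assumed per-cabinet limit; the 120 kW load and the 15 kW and 5 kW limits are hypothetical.

```python
import math

def cabinets_needed(total_it_kw, per_cabinet_kw_limit):
    """How many cabinets a deployment needs if the facility caps power per cabinet."""
    return math.ceil(total_it_kw / per_cabinet_kw_limit)

# Example: 120 kW of IT load in a facility rated for 15 kW per cabinet
print(cabinets_needed(120, 15))   # 8 cabinets

# The same load in a 5 kW-per-cabinet facility would sprawl across 24 cabinets,
# which is why per-cabinet power limits shape long-term capacity planning.
print(cabinets_needed(120, 5))    # 24 cabinets
```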

Data Center Cooling

While the power requirements of colocated equipment are a major factor in colocation costs, a data center’s cooling solutions are significant as well. The high cost of cooling infrastructure is one of the leading reasons why companies abandon on-premises data solutions in favor of colocation services. Private data centers are often quite inefficient when it comes to their cooling systems. They also usually lack the site monitoring capabilities of colocation facilities, which makes it more difficult for them to fully optimize their infrastructure to reduce cooling demands.
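A common yardstick for that inefficiency is Power Usage Effectiveness (PUE): total facility power divided by the power delivered to IT equipment. The sketch below computes it for two illustrative facilities; the wattage figures are assumptions chosen only to show how cooling overhead drives the ratio.

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power divided by IT power.

    A PUE of 1.0 would mean every watt goes to computing; cooling and power
    distribution overhead push real-world values higher.
    """
    return total_facility_kw / it_equipment_kw

# Example: a less efficient server room vs. a tighter-run facility (assumed numbers)
print(pue(180, 100))   # 1.8 -- 80% overhead, much of it cooling
print(pue(130, 100))   # 1.3 -- better cooling and monitoring cut the overhead
```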

There are a number of innovative cooling technologies being used in state-of-the-art data centers around the world, such as direct-to-chip liquid cooling and AI-managed infrastructure. Most facilities, however, are still using traditional data center cooling solutions, albeit more efficiently, to manage heat generated by computing equipment. Among these data center cooling methods, two approaches stand out as the most common.

  • Cold Aisle/Hot Aisle Design: A tried-and-true data center cooling solution, this form of server rack deployment alternates “cold aisles” and “hot aisles” on the data floor. The arrangement usually works in conjunction with a traditional computer room air conditioner (CRAC) system, which channels chilled air into the room. Vents expel cold air into the cold aisles, where the server racks’ intakes pull it in to keep the hardware running cool. The servers then expel hot air into the hot aisles, where it is drawn back through the air conditioning intakes to be chilled and vented into the cold aisles again.
  • Raised Floor Design: Another common form of data center cooling, raised floors use down-flow air distribution, pushing chilled air into an underfloor plenum and up through perforated tiles, to keep server temperatures in check. Like cold aisle/hot aisle design, this approach also relies on a CRAC system to provide cooling. It is more common in smaller facilities that don’t have to meet the intensive cooling demands of high-density cabinet deployments. Colocation customers should verify that the data center cooling infrastructure has at least N+1 redundancy to reduce the risk of a cooling malfunction leading to costly system downtime. A rough estimate of the airflow either layout must deliver is sketched below.
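For a sense of the airflow either layout must deliver, here is a rule-of-thumb sketch based on the standard sensible-heat relation for air at sea level. The 5 kW cabinet and the 20°F rise from cold aisle to hot aisle are assumptions, not specifications from any particular facility.

```python
def required_airflow_cfm(heat_watts, delta_t_f=20):
    """Rule-of-thumb airflow needed to carry away a given heat load.

    Uses the standard sensible-heat relation for air at sea level:
        BTU/hr = 1.08 * CFM * delta_T(F)
    heat_watts: IT load in the cabinet or aisle (essentially all of it becomes heat)
    delta_t_f:  temperature rise across the servers, in degrees Fahrenheit
    """
    btu_per_hr = heat_watts * 3.412          # convert watts to BTU/hr
    return btu_per_hr / (1.08 * delta_t_f)   # solve the sensible-heat equation for CFM

# Example: a 5 kW cabinet with a 20 F rise from cold aisle to hot aisle
print(round(required_airflow_cfm(5000)))     # ~790 CFM of chilled air for that one cabinet
```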

The Impact of Analytics on Data Center Power and Cooling

One of the most important innovations in data center infrastructure management in recent years has been the application of predictive analytics. Today’s data centers generate massive amounts of information about their power and cooling demands. The most efficient facilities have harnessed that data to model trends and usage patterns, allowing them to better manage their data center power and cooling needs. New software monitoring tools like vXchnge’s award-winning in\site platform even allow colocation customers to monitor network and server performance in real time. By cycling servers down during low-traffic periods and anticipating when power and cooling needs will be highest, data centers have been able to significantly improve their energy efficiency.
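As a purely illustrative sketch of the kind of analysis such monitoring data enables (not the in\site platform itself), the snippet below flags low-traffic hours in a day of hypothetical per-cabinet power readings, the sort of periods when non-critical servers might be cycled down or cooling setpoints relaxed.

```python
# Hypothetical readings: average kW draw per hour for one cabinet over a day
hourly_kw = [4.1, 3.8, 3.5, 3.4, 3.6, 4.0, 5.2, 6.8, 7.5, 7.9, 8.1, 8.0,
             7.8, 7.9, 8.2, 8.0, 7.6, 7.0, 6.2, 5.5, 5.0, 4.6, 4.3, 4.2]

daily_avg = sum(hourly_kw) / len(hourly_kw)

# Flag hours running well below the daily average as candidates for
# cycling non-critical servers down or relaxing cooling setpoints.
low_traffic_hours = [hour for hour, kw in enumerate(hourly_kw) if kw < 0.7 * daily_avg]

print(f"Daily average: {daily_avg:.1f} kW")
print("Low-traffic hours:", low_traffic_hours)
```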

Evaluating a data center’s power and cooling capabilities is critical for colocation customers. By identifying facilities with a solid data center infrastructure in place that can drive efficiencies and improve performance, colocation customers can make better long-term decisions about their own infrastructure. Given the difficulties associated with migrating assets and data, finding a data center partner with the power and cooling capacity to accommodate both present and future needs can provide a strategic advantage for a growing organization.

 
