The average data center uses a substantial amount of electricity. That should come as no surprise given the amount of computing power packed onto a single data floor, not to mention the cooling infrastructure required to maintain the ideal operating environment for all that equipment. Taken together, data centers consume about three percent of the world’s electricity. With more energy-intensive hyperscale facilities on the way in the coming years, power usage is likely to continue increasing despite improvements in efficiency.
For colocation customers, understanding the power and cooling characteristics of their data center infrastructure is important because it helps them to better assess their potential costs and future computing needs. Data center power design, for instance, can have a major impact on how a company decides to grow its capacity. Fortunately, the power and cooling capabilities of a data center tend to be relatively easy to evaluate.
Assessing power requirements is one of the first tasks any organization must undertake when it decides to move assets into a data center. The power demands of equipment usually make up a sizable portion of colocation costs, and deploying powerful servers in high-density cabinets will be more expensive than a comparable number of lower-powered units. Regardless of the type of servers being used, they will also need power distribution units (PDUs) able to handle the amount of amperage they’re pulling while in use.
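To make the PDU sizing question concrete, here is a minimal back-of-the-envelope sketch. The server wattage, voltage, and PDU rating below are illustrative assumptions, not vendor specifications; the 80% derating figure reflects the common electrical-code practice of limiting continuous loads to 80% of a circuit's rating.

```python
# Hypothetical sizing check: estimate the amperage a cabinet's PDU must
# handle from the total wattage of the installed servers. All figures
# below are illustrative assumptions, not real equipment specs.

def required_amps(total_watts: float, voltage: float = 208.0) -> float:
    """Current drawn by a load of total_watts at the given voltage (I = P / V)."""
    return total_watts / voltage

def pdu_is_sufficient(total_watts: float, pdu_rating_amps: float,
                      voltage: float = 208.0, derate: float = 0.8) -> bool:
    """Common practice derates continuous loads to 80% of the circuit rating."""
    return required_amps(total_watts, voltage) <= pdu_rating_amps * derate

# Example: 10 servers drawing ~500 W each on a 208 V, 30 A PDU.
load_watts = 10 * 500
print(round(required_amps(load_watts), 1))                # ~24.0 A
print(pdu_is_sufficient(load_watts, pdu_rating_amps=30))  # False: over the 24 A derated limit
```

A check like this is why high-density deployments cost more: the same cabinet footprint demands heavier-gauge circuits and higher-rated PDUs as server wattage climbs.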
A data center’s electrical system should incorporate some level of redundancy that includes uninterruptible power supply (UPS) battery systems and a backup generator that can provide enough megawatts of power to keep the facility running if the main power is disrupted for any length of time. Should the power ever go out, the UPS systems will keep all computing equipment up and running long enough for the generator to come online. In many cases, data center power infrastructure incorporates more than one electrical feed running into the facility, which provides additional redundancy.
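The UPS-to-generator handoff can be sanity-checked with simple arithmetic: battery runtime at the critical load must comfortably exceed the generator's start-up time. The capacities and start time in this sketch are assumed values for illustration only.

```python
# Illustrative check that a UPS battery bank can bridge the gap until a
# backup generator comes online. All numbers are assumptions, not specs.

def ups_runtime_minutes(battery_kwh: float, load_kw: float) -> float:
    """Minutes a battery bank of battery_kwh can carry a load of load_kw."""
    return battery_kwh / load_kw * 60

def can_bridge_outage(battery_kwh: float, load_kw: float,
                      generator_start_min: float, margin: float = 2.0) -> bool:
    """True if UPS runtime covers generator start-up time plus a safety margin."""
    return ups_runtime_minutes(battery_kwh, load_kw) >= generator_start_min * margin

# Example: 500 kWh of batteries, a 1,200 kW critical load, generator ready in 5 min.
print(round(ups_runtime_minutes(500, 1200), 1))             # 25.0 minutes
print(can_bridge_outage(500, 1200, generator_start_min=5))  # True
```

The safety margin matters because generators occasionally fail to start on the first attempt, which is also why facilities test them regularly.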
Colocation facilities also have clearly defined power specifications that indicate how much power they can supply to each cabinet. For high-density deployments, colocation customers need a data center infrastructure able to provide between 10 and 20 kW of power per cabinet. While a company with much lower power needs might not be concerned with these limits initially, it should keep in mind that its power requirements could increase over time as it grows. Scaling operations within a facility whose power design can accommodate that growth is often preferable to the hassle of migrating to an entirely different facility.
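The per-cabinet kW budget translates directly into capacity planning. The 10–20 kW range comes from the text above; the server wattage and fleet size in this sketch are illustrative assumptions.

```python
# Rough planning sketch for cabinet power density. Server draw and
# counts are illustrative assumptions, not recommendations.

def servers_per_cabinet(cabinet_kw: float, server_watts: float) -> int:
    """Whole servers that fit within a cabinet's power budget."""
    return int(cabinet_kw * 1000 // server_watts)

def cabinets_needed(total_servers: int, cabinet_kw: float, server_watts: float) -> int:
    """Cabinets required to house the fleet (rounded up)."""
    per_cab = servers_per_cabinet(cabinet_kw, server_watts)
    return -(-total_servers // per_cab)  # ceiling division

# Example: 120 servers at ~750 W each in 15 kW high-density cabinets.
print(servers_per_cabinet(15, 750))   # 20 servers per cabinet
print(cabinets_needed(120, 15, 750))  # 6 cabinets
```

Running the same numbers against a lower per-cabinet limit shows how quickly a growing fleet can outstrip a facility's power design, which is the scaling concern the paragraph above describes.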
While the power requirements of colocated equipment are a major factor in colocation costs, a data center’s cooling solutions are significant as well. The high cost of cooling infrastructure is one of the leading reasons why companies abandon on-premises data solutions in favor of colocation services. Private data centers are often quite inefficient when it comes to their cooling systems. They also usually lack the site monitoring capabilities of colocation facilities, which makes it more difficult for them to fully optimize their infrastructure to reduce cooling demands.
There are a number of innovative cooling technologies being used in state-of-the-art data centers around the world, such as direct-to-chip liquid cooling and AI-managed infrastructure. Most facilities, however, are still using traditional data center cooling solutions, albeit more efficiently, to manage heat generated by computing equipment. Among these data center cooling methods, two approaches stand out as the most common.
One of the most important innovations in data center infrastructure management in recent years has been the application of predictive analytics. Today’s data centers generate massive amounts of information about their power and cooling demands. The most efficient facilities have harnessed that data to model trends and usage patterns, allowing them to better manage their data center power and cooling needs. Exciting new software monitoring tools like vXchnge’s award-winning in\site platform even allow colocation customers to monitor network and server performance in real time. By cycling servers down during low-traffic periods and anticipating when power and cooling needs will be highest, data centers have been able to significantly improve their efficiency scores.
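As a toy illustration of the kind of trend modeling described above, a facility could forecast the next interval's power draw from recent readings and flag low-traffic periods when servers can be cycled down. Real DCIM platforms use far more sophisticated models; the simple moving average and the readings here are made up for the sketch.

```python
# Toy sketch of predictive analytics on power telemetry: forecast the
# next interval with a moving average and flag low-demand periods.
# Readings and thresholds are invented for illustration.

from statistics import mean

def forecast_next_kw(readings: list[float], window: int = 4) -> float:
    """Predict the next reading as the mean of the last `window` samples."""
    return mean(readings[-window:])

def is_low_traffic(readings: list[float], threshold_kw: float) -> bool:
    """Flag periods where forecast demand drops below a cycling threshold."""
    return forecast_next_kw(readings) < threshold_kw

# Hourly power readings (kW) trailing off overnight.
hourly_kw = [950, 940, 900, 860, 820, 790, 760, 740]
print(round(forecast_next_kw(hourly_kw), 1))        # 777.5
print(is_low_traffic(hourly_kw, threshold_kw=800))  # True
```

Even a crude forecast like this captures the core idea: once power and cooling demand is measured continuously, it can be anticipated rather than merely reacted to.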
Evaluating a data center’s power and cooling capabilities is critical for colocation customers. By identifying facilities with a solid data center infrastructure in place that can drive efficiencies and improve performance, colocation customers can make better long-term decisions about their own infrastructure. Given the difficulties associated with migrating assets and data, finding a data center partner with the power and cooling capacity to accommodate both present and future needs can provide a strategic advantage for a growing organization.
As the Marketing Manager for vXchnge, Kaylie handles the coordination and logistics of tradeshows and events. She is responsible for social media marketing and brand promotion through various outlets. She enjoys developing new ways and events to capture the attention of the vXchnge audience.