Data centers have become an indispensable part of modern computing infrastructures. With more and more organizations turning to them for colocation services, cloud solutions, and compliance assurances, it’s no surprise that the number of data centers is expected to grow significantly within the next two to five years. With so many new data centers on the horizon, it’s worth thinking about the energy demands of these facilities. The simple truth of the matter is that data centers consume a lot of power.
A whole lot of power.
In 2017, US-based data centers alone used more than 90 billion kilowatt-hours of electricity. To give some perspective on how much energy that amounts to, it would take 34 massive coal-fired plants generating 500 megawatts each to meet the power demands of those data centers. On a global scale, data centers consumed about 416 terawatt-hours of electricity, or roughly 3% of all electricity generated on the planet. For context, the energy consumption of all the world’s data centers amounted to 40% more than the total energy consumed by the United Kingdom, an industrialized country of over 65 million people.
That’s a lot of power. And it’s only going to increase in the future as more facilities are built each year. With 80% of the world’s energy still being generated by fossil fuels, those ever-increasing power demands could become a problem. Fortunately, data center providers are working tirelessly to meet the needs of consumers while keeping their energy usage at reasonable levels.
On the plus side, these massive energy consumption figures are much better than past projections. Between 2005 and 2010, US data center energy usage grew by 24 percent. The previous five years were even worse, with energy usage increasing by nearly 90% from 2000 to 2005. But from 2010 to 2014, total data center energy consumption grew by a comparatively tiny 4%. Researchers expect that growth rate to hold steady at least through 2020.
Many of these gains are the result of efficiency improvements. The economies of scale offered by hyperscale data centers have pushed their Power Usage Effectiveness (PUE) scores (the ratio of a facility’s total energy draw to the energy its IT equipment actually uses) lower than those of their smaller cousins, but smaller enterprise data centers also operate much more efficiently today than they did a decade ago. A 2005 Uptime Institute report found that many data centers were so badly organized that only 40% of the cold air intended for server racks actually reached them, despite the fact that the facilities had installed 2.6 times as much cooling capacity as they needed. Since then, enterprise data centers have improved their efficiency by as much as 80% through the use of low-power chips and solid-state drives rather than spinning hard drives.
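PUE itself is a simple ratio, which makes it easy to illustrate. The meter readings in this sketch are hypothetical, invented purely to show how the metric behaves:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by
    IT equipment energy. Lower is better; 1.0 is the theoretical floor,
    where every kilowatt-hour goes to computing rather than to cooling,
    lighting, or power-conversion losses."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly meter readings (not measurements from any real site):
legacy_room = pue(2_000_000, 1_050_000)   # inefficient enterprise server room
hyperscale = pue(1_120_000, 1_000_000)    # hyperscale-class facility
print(f"legacy: {legacy_room:.2f}, hyperscale: {hyperscale:.2f}")
# → legacy: 1.90, hyperscale: 1.12
```

A score near 1.9 means the facility spends almost as much energy on overhead as on computing, which is roughly where poorly organized rooms of the mid-2000s sat; 1.12 is the hyperscale figure Google reports for its fleet.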
Improvements in server technology, specifically server virtualization, have also delivered substantial efficiency gains. Today’s servers are not only more powerful and efficient, but better data management practices have made it possible to utilize more of each server’s total capacity. Where companies in the early 2000s were purchasing 15% more servers each year than the year before, annual server shipment growth has held steady at 3% since 2010 and is not expected to rise before 2020.
Consolidation has also played an important role in keeping power demands under relative control. With the rapid growth of cloud computing, organizations have increasingly abandoned private data centers and server closets in favor of colocation or on-demand services. Since most of those private facilities ran on inefficient, energy-hungry legacy hardware, moving their IT infrastructure into data centers actually proved to be a net positive in terms of efficiency.
Unfortunately, these efficiency improvements represent “low-hanging fruit” that has already been plucked. The easiest and most viable efficiency changes have long since been implemented, causing the overall efficiency trend to flatten in recent years. Google, for instance, boasts an impressive PUE of 1.12 across its data centers worldwide, which is only slightly off the theoretically perfect score of 1.0. While this score is an undeniably laudable achievement, it does little to address the overall energy consumption of data centers, which continues to increase every year.
It’s not yet clear what impact developments like Internet of Things devices and edge computing will have on power usage. Newly designed edge data centers will incorporate efficiency best practices, but since most IoT devices aren’t physically located in data centers, they often aren’t taken into consideration when measuring data center consumption.
Many data centers have made a commitment to sustainability by turning to renewable energy sources. Although the current state of renewable power in the US makes it difficult for data center providers to rely on it as a primary source of energy, there are a number of ways it can supplement a facility’s energy needs, such as the purchase of Renewable Energy Credits (RECs), improving the facility’s overall carbon footprint.
There’s also good reason to be hopeful that unexpected technological solutions wait just over the horizon. Despite all the developments of the 21st century, many core principles of computing architecture have gone largely unchanged since their invention many decades ago. Processors, for example, have become smaller and more powerful, but they still operate according to the same principles as their bulkier and slower ancestors. Where their transistors were once much slower than the wires connecting them, today the opposite is true. Many experts believe we’ve only scratched the surface of what’s possible.
Although data centers will continue to consume more energy in the future, the twin trends of consolidation and efficiency practices have greatly reduced the overall impact of these facilities. Where data centers were once expected to push energy demands to unsustainable levels, developments over the last decade have created an opportunity to research and implement longer-term solutions that will allow data centers to continue serving the needs of the companies and consumers who depend upon their services.
Ali is the Senior Vice President of Engineering and Chief Technology Officer for vXchnge. Ali is responsible for all engineering, construction, network, and information technology functions for the company. Ali brings over 20 years of experience in the development and support of engineering and IT systems. Prior to joining vXchnge, Ali served as Vice President of IBX Ops Engineering for Equinix, where he led the design of data center architecture and all critical infrastructure and control systems for North America. Before Equinix, Ali held executive positions at Switch & Data and Internap.