Data centers have become an indispensable part of modern computing infrastructures. With more and more organizations turning to them for colocation services, cloud solutions, and compliance assurances, it’s no surprise that the number of data centers is expected to grow significantly within the next two to five years.
With so many new data centers on the horizon, it’s worth thinking about the harsh realities of data center power consumption. Even with innovative developments in sustainable energy solutions, the truth of the matter is that both small and large data centers consume a LOT of power.
In 2017, US-based data centers alone used more than 90 billion kilowatt-hours of electricity. To put that in perspective, it would take 34 massive coal-fired plants generating 500 megawatts each to meet the power demands of those data centers. On a global scale, data center power consumption amounted to about 416 terawatt-hours, or roughly three percent of all electricity generated on the planet. For context, that is about 40 percent more than the total energy consumed by the United Kingdom, an industrialized country of over 65 million people.
That’s a lot of power. And it’s only going to increase in the future as more facilities are built each year. With 80 percent of the world’s energy still being generated by fossil fuels, those ever-increasing power demands could become a problem. Fortunately, data center providers are working tirelessly to meet the needs of consumers while keeping their energy usage at reasonable levels.
On the plus side, these massive data center energy consumption statistics are much better than past projections. Between 2005 and 2010, US data center energy usage grew by 24 percent. The previous five years were even worse, with energy usage increasing by nearly 90 percent from 2000 to 2005. But from 2010 to 2014, total data center energy consumption grew by a comparatively tiny four percent. Researchers expect that growth rate to hold steady at least through 2020.
Many of these gains are the result of efficiency improvements. The economies of scale offered by hyperscale data centers have pushed their Power Usage Effectiveness (PUE) scores lower than those of their smaller cousins, but smaller enterprise data centers also operate far more efficiently today than they did a decade ago. A 2005 Uptime Institute report found that many data centers were so badly organized that only 40 percent of the cold air intended for server racks actually reached them, even though the facilities had installed 2.6 times as much cooling capacity as they needed. Since then, data center power consumption has improved by as much as 80 percent through the use of low-power chips and solid-state drives in place of spinning hard drives.
Improvements in server technology, particularly server virtualization, have also delivered substantial reductions in data center power consumption while lowering data center costs. Today's servers are not only more powerful and efficient, but better data management practices have made it possible to utilize more of each server's total capacity. Considering that the move to large data centers capable of leveraging sustainable energy solutions has driven a massive spike in server spending, it's reassuring to know that facilities will be getting everything they can out of that hardware.
Consolidation has also played an important role in keeping data center power usage under relative control. With the rapid growth of cloud computing, organizations have increasingly abandoned private data centers and server closets in favor of colocation or on-demand services. Since most of those private facilities ran on inefficient, energy-hungry legacy hardware, migrating their IT infrastructure to data centers actually proved to be a net positive in terms of efficiency.
Unfortunately, these efficiency improvements represent “low-hanging fruit” that has already been plucked. The easiest and most viable efficiency changes have long since been implemented, causing the overall efficiency trend to flatten in recent years. Google, for instance, boasts an impressive PUE of 1.11 across its data centers worldwide, which is only slightly off the theoretically perfect score of 1.0. While this score is an undeniably laudable achievement, it does little to address overall data center power consumption, which continues to increase every year.
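PUE itself is a simple ratio: the total energy a facility draws divided by the energy that actually reaches the IT equipment. A minimal sketch, using illustrative figures chosen to reproduce the 1.11 fleet-wide average mentioned above:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by
    the energy delivered to IT equipment. 1.0 is the theoretical
    ideal, where every watt drawn goes to computing."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Illustrative numbers only: a facility drawing 1,110 MWh in total
# to deliver 1,000 MWh to IT load scores a PUE of 1.11.
print(round(pue(1110, 1000), 2))  # 1.11
```

The gap between the score and 1.0 represents overhead, chiefly cooling and power distribution, which is why the metric says nothing about how much total energy a facility consumes, only how efficiently it delivers it.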
It’s not yet clear what impact developments like Internet of Things (IoT) devices and edge computing will have on power usage. Newly designed edge data centers will incorporate efficiency best practices, but since most IoT devices aren’t physically located in data centers, they often aren’t taken into consideration when measuring data center consumption.
Many data centers have committed to sustainable energy by turning to renewable power sources. Although the current state of renewable power in the US makes it difficult for data center providers to rely on it as a primary energy source, it can still supplement a facility's energy needs and improve its overall carbon footprint in a number of ways, such as through the purchase of Renewable Energy Credits (RECs).
There’s also good reason to be hopeful that unexpected technological solutions wait just over the horizon. Despite all the developments of the 21st century, many core principles of computing architecture have gone largely unchanged since their invention many decades ago. Processors, for example, have become smaller and more powerful, but they still operate according to the same principles as their bulkier and slower ancestors. Where their transistors were once much slower than the wires connecting them, today the opposite is true. Many experts believe we’ve only scratched the surface of what’s possible.
By nature, data centers are tooled for maximum reliability, and that means optimizing for performance, power, and capability. It's a configuration that breeds excess; necessary excess, perhaps, but excess all the same.
Anything in excess is essentially waste: wasted resources, wasted capital, and wasted potential. That wastefulness is exacerbated further when finite resources such as power or physical space are factored into the equation.
What this means is that any organization or service provider can easily outgrow its current configuration, particularly the assets available within its local data center. That dynamic encourages constant growth and expansion, expansion that might not be necessary if operations were run efficiently.
As a way to cut down on data center power consumption, it makes sense to improve efficiencies across the board, and there are certainly ways to do that in the modern data center.
It’s no secret that data center equipment and servers give off a lot of heat, which means a large portion of expenditures goes toward cooling and air conditioning. The equipment must remain at a safe temperature, which calls for proper ventilation and cooling in the server room.
That power consumption can be lessened by optimizing not just the cooling operations, but also the space where the equipment is housed. Proper insulation, for example, can help maintain temperatures within the room. Strategic layouts of equipment and streamlined airflow can also improve cooling efficiencies.
Managers can take several additional measures as well.
Because cooling is so important, most data center managers are unwilling to experiment to find more efficient temperature setpoints. In reality, raising the maintained temperature by even a couple of degrees can save hundreds, if not thousands, of dollars in data center costs. It cuts down on data center power consumption while having a minimal effect on performance.
Spend some time monitoring temperature changes to find a level that works, but also allows for a boost in savings.
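To get a feel for the stakes before experimenting, it helps to run the arithmetic. A minimal sketch, assuming the commonly cited rule of thumb of roughly 4 percent cooling energy saved per degree Fahrenheit the setpoint is raised; the figure varies by facility, so treat it as a planning estimate, not a guarantee:

```python
def cooling_savings_estimate(annual_cooling_kwh: float,
                             degrees_raised_f: float,
                             savings_per_degree: float = 0.04) -> float:
    """Rough estimate of annual cooling energy saved by raising the
    temperature setpoint. The 4%-per-degree-F default is a widely
    cited rule of thumb, not a measured constant; validate it
    against your own monitoring data."""
    fraction_saved = min(1.0, savings_per_degree * degrees_raised_f)
    return annual_cooling_kwh * fraction_saved

# Illustrative: raising the setpoint 2 degrees F in a room that uses
# 500,000 kWh/year for cooling saves roughly 40,000 kWh.
print(cooling_savings_estimate(500_000, 2))  # 40000.0
```

Even under conservative assumptions, the savings compound year over year, which is exactly why the monitoring exercise described above is worth the effort.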
Even newer or refreshed configurations tend to waste power and resources when demand is low. It makes more sense to match server capacity, or at least the amount of active hardware, to demand in real time. With proper planning and the help of monitoring and management tools, it is possible to align these two elements and create a more streamlined system.
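The core of that matching logic is simple: size the active fleet to current load plus a safety margin, and power down the rest. A minimal sketch, where the function name, request-rate units, and 20 percent headroom are all illustrative assumptions:

```python
import math

def servers_needed(current_load_rps: float,
                   per_server_capacity_rps: float,
                   headroom: float = 0.2) -> int:
    """Match the active server count to real-time demand, keeping a
    safety headroom (20% by default) instead of running the whole
    fleet around the clock. Thresholds here are illustrative."""
    target = current_load_rps * (1 + headroom)
    return max(1, math.ceil(target / per_server_capacity_rps))

# At 4,500 requests/sec with servers rated for 1,000 rps each,
# a 20% headroom calls for 6 active servers; the remainder can be
# powered down or moved into a low-power state until load rises.
print(servers_needed(4500, 1000))  # 6
```

In practice a management tool would re-evaluate this continuously and add hysteresis so servers aren't cycled on and off with every brief spike, but the sizing calculation itself stays this simple.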
In a fast-moving business with constantly shifting operations, staff, and processes, certain assets are occasionally overlooked or forgotten. This leads to the phenomenon of the zombie server: a system that is no longer used yet remains powered on and consuming energy. Research suggests that as many as 25 percent of physical servers and 30 percent of virtual servers are comatose, or zombies.
Generally, these servers are not shut down because there is no paper trail showing what they contain or what they are used for, so managers are afraid to hit the kill switch.
To deal with the problem properly, everything must be documented, and monitoring tools must be put in place to provide direct visibility into which servers and configurations are mission-critical.
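Once utilization is being monitored, finding zombie candidates is mostly a filtering exercise. A minimal sketch, assuming a hypothetical log of daily average CPU utilization per server; the 2 percent threshold and 30-day window are illustrative choices, and a flagged server should still be investigated before it is powered off:

```python
def find_zombie_candidates(utilization_log: dict,
                           cpu_threshold: float = 0.02,
                           min_idle_days: int = 30) -> list:
    """Flag servers whose CPU utilization never rose above the
    threshold across the whole observation window. The log format
    and cutoffs are illustrative assumptions, not a standard."""
    candidates = []
    for server, daily_cpu in utilization_log.items():
        if len(daily_cpu) >= min_idle_days and max(daily_cpu) < cpu_threshold:
            candidates.append(server)
    return sorted(candidates)

log = {
    "app-01": [0.45, 0.52, 0.38] * 10,   # busy: clearly in use
    "legacy-07": [0.01] * 30,            # idle all month: candidate
    "batch-03": [0.0] * 29 + [0.65],     # woke up once: not flagged
}
print(find_zombie_candidates(log))  # ['legacy-07']
```

Note the third case: a server that wakes only occasionally, such as a monthly batch job, is exactly why the check uses the maximum over the window rather than the average, and why documentation should be consulted before anything is decommissioned.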
Before server virtualization, keeping up with power and load demands meant outfitting additional space with more servers. That is no longer the case; in fact, it may be more beneficial to downsize and make optimized use of the space already available.
Focusing on a modular design that can be scaled up or down to meet company needs is a great idea for maintaining proper efficiency levels and controlling data center power usage.
Either way, excess space can balloon costs, especially when it must be factored into cooling and air handling.
Every business or enterprise has suppliers of some kind. Data centers generally source energy from one or more suppliers, which is often a source of elevated costs. By striking up more beneficial relationships or partnerships, those costs can be mitigated.
Furthermore, simply finding a good energy supplier can net you additional savings through good communication. Such a supplier can help you improve energy usage, source fuel or energy at cost, and even cut down on the time you invest in such matters. The ideal energy supplier frees up your time for other important matters, such as budgets, inventory, and negotiations.
By optimizing certain operations and processes — temperature control and cooling primarily — it’s possible to reduce data center power usage and reap cost savings in the process. In some circles, this is almost unheard of, as data centers tend to be a powerhouse of consumption and excess. They demand huge loads of energy and must remain online at all times of the day and night, which requires incredible levels of reliability and performance.
Although data center power consumption will remain an issue in the future, the twin trends of consolidation and efficiency have greatly reduced the overall impact of these facilities. Where data centers were once expected to push energy demands to unsustainable levels, the past decade's gains in energy efficiency have created an opportunity to research and implement longer-term solutions that will allow data centers to keep serving the companies and consumers who depend on them.