
By: Tom Banta on September 20th, 2019


Why Hyperscale Data Centers Are Here to Stay


By the end of 2017, more than 390 of the world’s data centers were big enough to be classified as hyperscale. Operated primarily by the largest international cloud platforms, such as Google, Amazon, and Microsoft, these facilities are heavily concentrated in the United States, which hosts 44 percent of them, with China a distant second at eight percent. The total number of hyperscale facilities is expected to exceed 500 by the end of 2019, and Cisco estimates that 53 percent of all data center traffic will pass through them by 2020.

Hyperscale Data Center Definition

As with many new concepts in the IT world, there’s no agreed-upon definition for what makes a data center hyperscale. The International Data Corporation (IDC), which provides research and advisory services to the tech industry, classifies any data center with at least 5,000 servers and 10,000 square feet of available space as hyperscale, but Synergy Research Group focuses less on physical characteristics and more on “scale-of-business criteria” that assess a company’s cloud, e-commerce, and social media operations.
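To make those thresholds concrete, here is a minimal sketch in Python of the IDC criteria expressed as a simple check. The function name and the sample values are illustrative assumptions, not an official IDC tool.

```python
# Minimal sketch of the IDC thresholds quoted above: at least 5,000
# servers and 10,000 sq ft of available space. The function name and
# sample values are hypothetical, for illustration only.

def is_hyperscale_idc(servers: int, floor_space_sqft: float) -> bool:
    """Apply IDC's physical criteria for a hyperscale data center."""
    return servers >= 5_000 and floor_space_sqft >= 10_000

print(is_hyperscale_idc(servers=6_500, floor_space_sqft=12_000))  # True
print(is_hyperscale_idc(servers=2_000, floor_space_sqft=50_000))  # False: too few servers
```

Synergy’s “scale-of-business criteria,” by contrast, weigh a company’s cloud, e-commerce, and social media operations and don’t reduce to a simple threshold check like this.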

While there may be no consensus on the term, most people seem able to identify a hyperscale facility when they see one. Take, for instance, Chicago’s Lakeside Technology Center, a massive 1.1 million sq. ft. data center, Microsoft’s twin 470,000 sq. ft. complexes in Quincy, WA and San Antonio, TX, or the National Security Agency’s 100,000 sq. ft. “Bumblehive” data center facility in Bluffdale, UT. These are all fairly obvious examples of hyperscale facilities, but these data centers aren’t simply big for the sake of being big.

Benefits of Hyperscale Data Centers

While data storage is important for modern data centers, what really sets hyperscale facilities apart is their high-volume traffic and ability to handle heavy computing workloads. For organizations that run most of their operations in the cloud, power-intensive tasks like 3D rendering, genome processing, virtual reality workloads, cryptography, and big data analytics demand substantial computing resources.

Traditionally, data centers have had two methods of providing this extra processing power. The first method, horizontal scaling, has been partially responsible for the “bigger is better” image of hyperscale facilities. Horizontal scaling increases available processing power by adding more machines to the infrastructure.

This strategy, however, is not very energy efficient, especially for complex workloads. It also creates a secondary problem: every storage unit added to the infrastructure must be accompanied by the corresponding compute and network resources needed to utilize it.

Since the early 2000s, however, most data centers have opted to use vertical scaling to increase their computing capacity. Innovations in processor architecture, such as graphics processing units (GPUs), many integrated cores (MICs), and field-programmable gate arrays/data flow engines (FPGAs/DFEs), made it possible to build platforms that were both more efficient and more powerful. These newer servers had enough power to handle the high-capacity demands of modern cloud-based computing.
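To illustrate the trade-off, here is a minimal, purely illustrative sketch in Python that models the two strategies as simple capacity planning. All of the figures (node counts, power draws, and the sub-linear power growth assumed for upgraded hardware) are hypothetical assumptions made for the sake of the example, not measurements from any real facility.

```python
# Illustrative sketch only: horizontal vs. vertical scaling as toy
# capacity-planning models. All numbers below are hypothetical.

from dataclasses import dataclass

@dataclass
class Node:
    compute_units: float   # abstract measure of processing power
    power_draw_kw: float   # power consumed by this node

def horizontal_scale(nodes: list[Node], extra: int) -> list[Node]:
    """Add more identical machines; capacity and power both grow linearly."""
    return nodes + [Node(compute_units=1.0, power_draw_kw=0.5) for _ in range(extra)]

def vertical_scale(nodes: list[Node], factor: float) -> list[Node]:
    """Upgrade existing machines (e.g., add GPUs); the 0.7 exponent is a
    hypothetical assumption that power grows sub-linearly with capacity."""
    return [Node(n.compute_units * factor, n.power_draw_kw * (factor ** 0.7))
            for n in nodes]

def summarize(label: str, nodes: list[Node]) -> None:
    compute = sum(n.compute_units for n in nodes)
    power = sum(n.power_draw_kw for n in nodes)
    print(f"{label}: {len(nodes)} nodes, {compute:.1f} compute units, "
          f"{power:.1f} kW ({compute / power:.2f} units/kW)")

baseline = [Node(1.0, 0.5) for _ in range(100)]
summarize("baseline  ", baseline)
summarize("horizontal", horizontal_scale(baseline, 100))  # 2x capacity, 2x power
summarize("vertical  ", vertical_scale(baseline, 2.0))    # 2x capacity, <2x power
```

Under these assumptions, doubling capacity horizontally also doubles power draw, while the vertical upgrade delivers the same capacity at better performance-per-watt, which is the efficiency argument behind the shift described above.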

Hyperscale data centers have managed to provide the best of both worlds, expanding their physical footprint while equipping it with the very latest in high-performance computing infrastructure.

Challenges of Hyperscale Data Centers

Managing Hyperscale Computing Data

While hyperscale data centers provide serious advantages for organizations that need huge amounts of computing power from their cloud providers, these facilities present a major challenge in terms of transparency. The IT topology of a typical (if such a thing exists) hyperscale infrastructure is so massive that traditional methods for monitoring traffic flow fall short: they either break down outright or are not cost effective at that scale.

The sheer volume of data traffic can be overwhelming, especially as connection speeds continue to increase. Ensuring the resiliency of so many connections in the event of failures or performance anomalies is another major challenge. With more and more companies deploying Internet of Things (IoT) devices, the total amount of data flowing back to these facilities will certainly continue to grow.

Although edge computing architectures can relieve some of this IoT-induced pressure, companies are also doubling down on the kind of computing-heavy big data analytics hyperscale facilities make possible. Finding an effective strategy for managing these diverse resource demands will certainly be a challenge for IT professionals.

Power Demands of the Hyperscale Model

The combination of horizontal and vertical scaling results in a massive increase in the amount of power consumed by hyperscale data centers. Although these facilities are often very efficient in terms of energy use, their sheer size places enormous demands on global energy resources. Hyperscale energy consumption is expected to nearly triple between 2015 and 2023, by which point it is projected to represent the largest share of data center energy consumption in the world.

That impact will be mitigated somewhat by the decline of inefficient private data centers. Most hyperscale facilities are operated by major technology companies that are pushing the boundaries of what is possible with cooling infrastructure and AI-assisted climate controls.

What Hyperscale Data Centers Mean for the Future of IT

As organizations continue to make the shift from private server solutions to cloud-based applications, hyperscale data centers will play a major role in their future IT operations. The world’s largest cloud providers are already expanding their hyperscale infrastructure to meet the demands of these customers. While it remains to be seen if there is an upper limit on the size and scope of these facilities, for the time being, they show no signs of slowing.

The increasing demand for cloud computing infrastructure and IoT functionality will continue to drive the need for data centers of all kinds. Hyperscale facilities offer a unique combination of energy efficiency and functionality that will likely make them the preferred choice of facility for most companies.


About Tom Banta

Tom is the Senior Vice President of Product Management & Development at vXchnge, responsible for the company’s product strategy and development.
