Is Bigger Better? The Rise of Hyperscale Data Centers

By: Tom Banta on August 22nd, 2018


By the end of 2017, more than 390 of the world’s data centers were big enough to be classified as hyperscale data centers.

Operated primarily by the largest international cloud providers, such as Google, Amazon, and Microsoft, these facilities are heavily concentrated in the United States (44%), with China a distant second (8%).

By 2019, analysts predict that the total number of hyperscale facilities will exceed 500. Cisco even estimates that 53% of all data center traffic will pass through hyperscale facilities by 2020.

What Makes a Data Center Hyperscale?

As with many new concepts in the IT world, there's no universally agreed-upon definition of what makes a data center hyperscale. International Data Corporation (IDC), which provides research and advisory services to the tech industry, classifies any data center with at least 5,000 servers and 10,000 square feet of available space as hyperscale. Synergy Research Group, by contrast, focuses less on physical characteristics and more on "scale-of-business criteria" that assess a company's cloud, e-commerce, and social media operations.

While there may not be consensus on the term, most people can identify a hyperscale facility when they see one.

Take, for instance, Chicago's Lakeside Technology Center, a massive 1.1 million sq ft multi-tenant data center hub; Microsoft's twin 470,000 sq ft complexes in Quincy, WA and San Antonio, TX; or the National Security Agency's 100,000 sq ft "Bumblehive" facility in Bluffdale, UT. These are all fairly obvious examples of hyperscale facilities, but they aren't simply big for the sake of being big.

It’s Not (Just) About Size

While data storage is important for modern data centers, what really sets hyperscale facilities apart is their high-volume traffic and ability to handle heavy computing workloads. For organizations that run most of their operations in the cloud, compute-intensive tasks like 3D rendering, genome processing, cryptography, and "big data" analytics require enormous amounts of computing resources.

Traditionally, data centers have had two methods of providing this extra processing power. The first method, horizontal scaling, has been partially responsible for the “bigger is better” image of hyperscale facilities. Horizontal scaling increases available processing power by adding more machines to the infrastructure.

This strategy, however, is not very energy efficient, especially for complex workloads. It also creates an additional problem: every storage unit added must be accompanied by the corresponding compute and network resources needed to make use of it.

Since the early 2000s, however, most data centers have opted to use vertical scaling to increase their computing capacity, adding more processing power to existing machines rather than simply adding more machines. Innovations in processor architecture, such as graphics processing units (GPUs), many integrated cores (MICs), and field-programmable gate arrays/data flow engines (FPGAs/DFEs), made it possible to build platforms that are both more efficient and more capable. These newer servers have enough power to handle the high-capacity demands of modern cloud-based computing.
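
To make the trade-off concrete, here is a minimal back-of-the-envelope sketch in Python. The server classes, throughput units, and power figures are purely hypothetical illustrations, not measurements from any real facility.

from dataclasses import dataclass

@dataclass
class ServerClass:
    name: str
    throughput_units: float  # hypothetical relative compute throughput per server
    power_kw: float          # hypothetical power draw per server, in kilowatts

def capacity(server: ServerClass, count: int) -> tuple[float, float]:
    # Total throughput and total power for `count` identical servers.
    return server.throughput_units * count, server.power_kw * count

cpu_node = ServerClass("commodity CPU node", throughput_units=1.0, power_kw=0.5)
gpu_node = ServerClass("GPU-accelerated node", throughput_units=8.0, power_kw=1.5)

# Horizontal scaling: reach 800 throughput units by adding more commodity nodes.
h_units, h_power = capacity(cpu_node, 800)

# Vertical scaling: reach the same 800 units with fewer, denser nodes.
v_units, v_power = capacity(gpu_node, 100)

print(f"Horizontal: {h_units:.0f} units from 800 nodes, ~{h_power:.0f} kW")
print(f"Vertical:   {v_units:.0f} units from 100 nodes, ~{v_power:.0f} kW")

Even in this toy model, the denser nodes deliver the same capacity with a fraction of the machines and power draw, which is why advances in processor architecture made vertical scaling so attractive.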

Hyperscale data centers have managed to provide the best of both worlds, combining an expansive physical footprint with the very latest in high-performance computing infrastructure.

It’s Not Easy Being Big

While hyperscale data centers provide serious advantages for organizations that need huge amounts of computing power from their cloud providers, these facilities do present a major challenge in terms of transparency. The IT topology of a typical (if such a thing exists) hyperscale infrastructure is so massive that traditional methods for monitoring traffic flow are woefully insufficient: they either break down or are not cost effective at that scale.

The sheer volume of data traffic can be overwhelming, especially as connection speeds continue to increase. Ensuring the resiliency of so many connections in the event of failures or performance anomalies is another major challenge. And with more and more companies deploying Internet of Things (IoT) devices, the total amount of data flowing back to these facilities will only continue to grow.

Although edge computing architectures can relieve some of this IoT-induced pressure, companies are also doubling down on the kind of compute-heavy data analytics that hyperscale facilities make possible. Finding an effective strategy for managing these diverse resource demands will certainly be a challenge for IT professionals.

As organizations continue to make the shift from private server solutions to cloud-based applications, hyperscale data centers will play a major role in their future IT operations. The world’s largest cloud providers are already expanding their hyperscale infrastructure to meet the demands of these customers.

While it remains to be seen if there is an upper limit on the size and scope of these facilities, for the time being, they show no signs of slowing.

 

About Tom Banta

Tom is the Senior Vice President of Product Management & Development at vXchnge, responsible for the company's product strategy and development. He brings over 30 years of technology and software development experience to vXchnge, joining the company from BMC Software, where he held roles including Vice President of Global Operations and Vice President of Development. Prior to BMC, Tom spent several years in executive roles at ClickSafety, Harbinger Corporation, and JD Edwards.
