
By: Kaylie Gyarmathy on October 9th, 2014

Need For Speed: New Data Center Standards Hit the Ground Running

At the beginning of July, some of the world's leading tech companies – including Google, Microsoft and Broadcom – formed the 25 Gigabit Ethernet Consortium. The group announced plans to create both 25 gigabit and 50 gigabit standards, which will replace existing 10 and 40 gigabit links. But the Consortium isn't alone in trying to redefine data center standards; work by the Massachusetts Institute of Technology (MIT) also aims to improve data center speed.

Paving the Way

Last March, the Institute of Electrical and Electronics Engineers (IEEE) met in an effort to define new Ethernet specifications but couldn't reach a conclusion, thanks to what ZDNet describes as a “perceived lack of support.” As a result, the Consortium decided to create two initial specifications: a single-lane 25 Gbps link and a dual-lane 50 Gbps link. Both are royalty-free and, according to the group, the specification “maximizes the radix and bandwidth flexibility of the data center network while leveraging many of the same fundamental technologies and behaviors already defined by the IEEE 802.3 standard.”

In other words, these tech giants are hoping to future-proof data centers by increasing access speeds now, rather than after cloud-based workloads start to bog down servers and increase latency. Initial timelines have these new data center standards rolling out over the next 12 to 18 months.

Getting a Pass

Meanwhile, MIT is developing a way to speed up packet transmission in the data center, according to PC World. The new technology, called Fastpass, changes the way network switches decide how packets move across a network. Right now, packets are often jammed into massive lineups, each waiting to be analyzed and then sent on its way by a switch. Because switches operate independently and with little knowledge of the system as a whole, packets can take a significant amount of time to process. Fastpass eliminates this problem by using a central server, or “arbiter,” that manages a large portion of the data center's switches simultaneously. This permits the use of parallel computing to determine packet destinations and transmission order almost instantly.
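To make that idea more concrete, here is a minimal, hypothetical sketch in Python – not MIT's actual implementation, and the server names are invented – showing the core concept: a single arbiter that knows every endpoint's schedule can hand out conflict-free transmission slots, instead of letting packets pile up in independent switch queues. (The real Fastpass arbiter also picks a network path for each timeslot, which this toy version leaves out.)

```python
from collections import defaultdict

class Arbiter:
    """Toy centralized scheduler: endpoints ask permission to send a packet,
    and the arbiter assigns the earliest timeslot in which both the source
    and the destination are free."""

    def __init__(self):
        # Next free timeslot for each source and destination endpoint.
        self.src_free = defaultdict(int)
        self.dst_free = defaultdict(int)

    def schedule(self, src, dst):
        # Earliest slot where neither endpoint is already busy.
        slot = max(self.src_free[src], self.dst_free[dst])
        self.src_free[src] = slot + 1
        self.dst_free[dst] = slot + 1
        return slot

# Example: three servers contending for the same destination are serialized
# by the arbiter rather than queuing up inside a switch.
arbiter = Arbiter()
for src in ["server-A", "server-B", "server-C"]:
    print(src, "-> server-D, slot", arbiter.schedule(src, "server-D"))
```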

The end goal for Fastpass is to reduce the amount of bandwidth needed in data centers by increasing switch efficiency – MIT hopes to have this technology market-ready in the next two years.

Faster Together

Both the Consortium's work and MIT's innovation are responding to the same need: improved data center standards. It starts with better Ethernet connections, ready to handle cloud workloads and speed-of-access requirements. Combined with efficiency efforts like Fastpass, however, these standards may have the added benefit of reducing total bandwidth needs – creating data center standards that are both speedy and lightweight.

About Kaylie Gyarmathy

As the Marketing Manager for vXchnge, Kaylie handles the coordination and logistics of tradeshows and events. She is responsible for social media marketing and brand promotion through various outlets. She enjoys developing new ways and events to capture the attention of the vXchnge audience.
