Computer networks have become much less centralized in recent years, with many companies embracing the concept of edge computing to deliver services more effectively. Rather than conforming to the old hierarchical structure of traditional cloud computing, edge computing pushes key processing functions to the outskirts of the network where end users are located. Not only are companies finding new ways to collect data there, but they’re also putting data centers in the right location to make sure that data doesn’t have to travel far from the edge to provide benefits to users. One of the primary motivators behind this data center location strategy is the desire to minimize latency.
Latency is one of the enduring problems confronting computer networks. In general terms, latency refers to the interval of time between an action and a response in any system. It has a slightly more specific meaning in computer networking, however, where it measures how long it takes for a data packet to be sent from one point and processed at another. While a variety of factors influence how fast data can travel, few play a larger role than physical distance.
Data packets move very quickly through a network, but they are still limited by physical laws. Fiber-optic cable allows data to move at nearly the speed of light, but it can't quite get there and obviously can't exceed it. Even under ideal conditions, it takes time for data to travel from one server to another. The lag may not be noticeable most of the time, but the accumulated milliseconds make themselves known when streaming video or audio content buffers, or when an online multiplayer video game suffers lag at the least opportune moment.
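To see why distance alone matters, consider a back-of-the-envelope calculation of one-way propagation delay. This is a minimal sketch, assuming light travels at roughly two-thirds of its vacuum speed inside fiber (a commonly cited approximation); it ignores routing hops, queuing, and processing time, which only add to the total.

```python
# Best-case one-way propagation delay over fiber.
# Assumption: signal speed in fiber is about two-thirds of c.
SPEED_OF_LIGHT_VACUUM = 299_792_458  # meters per second
FIBER_FACTOR = 2 / 3                 # typical slowdown from the glass's refractive index

def propagation_delay_ms(distance_km: float) -> float:
    """Lower bound on one-way delay, ignoring routing and queuing."""
    speed = SPEED_OF_LIGHT_VACUUM * FIBER_FACTOR  # meters per second
    return (distance_km * 1000) / speed * 1000    # convert to milliseconds

# A nearby edge facility (~100 km) versus a distant hyperscale site (~4,000 km):
print(round(propagation_delay_ms(100), 2))    # well under a millisecond
print(round(propagation_delay_ms(4000), 2))   # tens of milliseconds, before any other overhead
```

Round trips double these figures, and real paths are rarely straight lines, so shaving thousands of kilometers off the route is often the single biggest latency win available.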
Over the last decade, companies have increasingly incorporated internet functionality into all kinds of devices. The research firm Gartner predicts that there could be more than 20 billion of these Internet of Things (IoT) devices connected to the internet by the end of 2020. Many of these devices are located on the outer edge of networks, where they gather data and deliver critical services to users. The edge computing framework, which locates key processing functions on the network edge closer to end users, has allowed organizations to use IoT devices while minimizing latency. The less IoT edge devices have to rely on centralized cloud servers, the faster they can respond to user needs.
But IoT has limitations. The onboard processing capabilities of these devices may be improving rapidly, but they will not be able to match the power of cloud computing platforms or dedicated servers anytime soon. That means that in order to reach their full potential, IoT edge devices must still transmit data elsewhere for processing, which brings companies right back to the latency problem. For some edge computing applications, such as those involving autonomous vehicles, latency presents an unacceptable risk to users. In other cases, the mere inconvenience of latency could cause a company to lose customers to other, faster networks.
Fortunately, edge data centers provide an ideal solution for combating latency and empowering IoT devices. Smaller and more versatile than their enterprise-level cousins, edge facilities play a unique role in a sophisticated edge computing framework. Positioned geographically close to end users, they play a dual role by providing services to local markets while also connecting back to larger data center deployments that feature powerful cloud computing resources. For organizations feeding data into powerful machine learning analytics, edge data centers can serve as a relay station that evaluates what data needs to be passed on to the cloud and what needs to remain closer to end users near the network edge. They can also provide additional processing punch for IoT devices with minimal latency since data doesn’t have to travel nearly as far to reach them.
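The relay role described above can be sketched as a simple partitioning step: routine data is handled at the edge, while only noteworthy data is forwarded upstream for heavier analysis. The function name, threshold, and record fields below are hypothetical, chosen purely for illustration; they are not any particular product's API.

```python
# Illustrative sketch of an edge relay deciding what stays local
# and what gets forwarded to central cloud analytics.
# The threshold and record fields ("sensor", "value") are assumptions.
from typing import Dict, List, Tuple

def partition_readings(
    readings: List[Dict], threshold: float
) -> Tuple[List[Dict], List[Dict]]:
    """Split sensor readings into edge-handled and cloud-forwarded sets."""
    local, to_cloud = [], []
    for reading in readings:
        if reading["value"] >= threshold:
            to_cloud.append(reading)   # unusual reading: send upstream for analysis
        else:
            local.append(reading)      # routine reading: process at the edge
    return local, to_cloud

readings = [{"sensor": "a", "value": 0.2}, {"sensor": "b", "value": 0.9}]
local, to_cloud = partition_readings(readings, threshold=0.8)
```

The design point is that the filtering decision itself runs close to the devices, so most traffic never crosses the long, high-latency path to a centralized cloud.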
In edge computing, data center location is often more important than the services the facility offers. Many edge facilities can be found in smaller, emerging markets that are far away from the hyperscale data centers located near major urban areas. This is especially important for streaming service providers because they can cache high-demand content in these facilities to both reduce latency and relieve bandwidth pressure on their networks. While many organizations think of multi-data center strategies in terms of data backup, having the right data centers in the right locations allows them to deliver faster, more reliable services to their customers.
As 5G networks continue to roll out over the next several years, mobile edge computing will be a key element of many organizations’ strategies. Edge data centers and micro data centers will be crucial to the success of 5G networks due to their small footprint and ability to serve as connectivity hubs. In fact, mobile edge computing architectures could consolidate the role of data centers and cellular towers, greatly enhancing the functionality of each hub in the network.
Data center location is more important than ever as more organizations embrace the potential of edge computing. While centralized cloud networks will continue to play a major role in network strategies, the demands of IoT devices and geographically distributed users will make data center location strategy every bit as important as bandwidth and processing power. Having the right data center in the right place could mean the difference between success and failure in the years to come.