Network performance is impacted by a variety of factors. While most people have a good idea of how bandwidth affects network speeds, they often don’t take into account how the physical characteristics of network infrastructure can introduce significant latency. Nowhere is this more apparent than with the topic of last mile connectivity.
The term “last mile” is commonly used in a network connectivity context, but it actually originated as a way of describing transportation infrastructure. In a very literal sense, it refers to the “last mile” of a route. Thinking of last mile connectivity in terms of transportation is a useful analogy for understanding the concept.
Imagine that someone is making a journey from Minneapolis to Philadelphia. The majority of that journey will involve using interstate highways that allow vehicles to travel at high speeds. But reaching their actual destination within Philadelphia will require them to leave those highways and follow a more complicated path into the heart of the city, which will involve turning down a variety of smaller roads with less capacity and lower speed limits.
The amount of time it takes to travel the last few miles of that journey is much greater than the time needed to cover an equivalent distance on the interstate highway. To make matters worse, the final stretch will seem even longer because streets take various twists and turns due to the way the city is laid out, turning a distance that might be only one mile if traced in a straight line into two or three miles by road.
The complex and often inefficient route a traveler must take before finally reaching their destination is known as the last mile problem. Thinking of that same journey in terms of network connections and data packets instead of highways and cars shows how last mile connectivity can present serious problems for overall network performance. Although a network can achieve high average transfer speeds thanks to fast fiber-optic cabling, data still needs to traverse several different connections before reaching its destination. In most cases, these connections have lower bandwidth and pass through routers with lower throughput, which can significantly reduce overall data transfer speeds.
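To make the multi-hop effect concrete, here is a minimal sketch of how per-hop delays add up along a path. The hop counts and per-hop delay values are illustrative assumptions, not measurements from the article; they simply show how a few slow last mile hops can dominate the total even when the backbone is fast.

```python
# Sketch: one-way latency accumulates across every hop in the path.
# All numbers below are illustrative assumptions.

def path_latency_ms(hops: list[float]) -> float:
    """Total one-way latency as the sum of per-hop delays (in ms)."""
    return sum(hops)

backbone = [0.5, 0.5, 0.5]    # fast fiber backbone routers, ms per hop
last_mile = [5.0, 8.0, 12.0]  # slower, lower-throughput edge links, ms per hop

total = path_latency_ms(backbone + last_mile)
print(f"total one-way latency: {total} ms")  # last mile hops dominate
```

Even in this toy model, the three last mile hops contribute far more delay than the entire backbone segment, which is the crux of the last mile problem.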
The amount of time it takes for a data packet to travel from one point of a network to another is known as latency. A high-latency connection results in poor performance, such as fragmented, jittery video or long download times. Since signals cannot travel faster than the speed of light (and in optical fiber they move at roughly two-thirds of that speed), there is only so much network engineers can physically do to improve speeds beyond shortening the path itself.
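That physical limit can be estimated with simple arithmetic. The sketch below computes a lower bound on round-trip time from propagation delay alone, assuming light in single-mode fiber travels at about c/1.47 and that real cable routes run roughly 1.5 times the straight-line distance; both figures are typical assumptions, not values from the article.

```python
# Lower bound on round-trip latency from physics alone.
# Assumptions: fiber refractive index ~1.47, route ~1.5x straight-line distance.

SPEED_OF_LIGHT_KM_S = 299_792                      # c in vacuum, km/s
FIBER_REFRACTIVE_INDEX = 1.47                      # typical single-mode fiber
SPEED_IN_FIBER_KM_S = SPEED_OF_LIGHT_KM_S / FIBER_REFRACTIVE_INDEX

def min_rtt_ms(straight_line_km: float, route_factor: float = 1.5) -> float:
    """Minimum round-trip time in ms for a fiber path; ignores router delays."""
    path_km = straight_line_km * route_factor
    one_way_s = path_km / SPEED_IN_FIBER_KM_S
    return 2 * one_way_s * 1000

# Minneapolis to Philadelphia is roughly 1,600 km in a straight line.
print(f"{min_rtt_ms(1600):.1f} ms minimum round trip")
```

Note that this is only the floor: every router hop, queue, and lower-bandwidth last mile link adds delay on top of it.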
As major hubs in most company networks, data centers play an important role in determining latency. A poorly optimized facility with limited bandwidth and poorly configured routers can make the latency problems imposed by distance even worse. In a worst-case scenario, data might even have to travel farther away from its destination in order to reach a server located in a distant data center. For colocation customers, not being able to connect directly to cloud service providers can expose them to a variety of last mile connectivity challenges, since they must rely on connections to distant servers to access the cloud resources they need.
Fortunately, data centers also provide a number of solutions to the last mile problem. The first and simplest solution is location. An edge data center positioned close to end users would still face many of the infrastructure problems associated with last mile connectivity, but since data is traveling a much shorter distance, the latency impact isn’t nearly as significant. Data center location strategy is becoming increasingly important as organizations think about ways to incorporate Internet of Things (IoT) devices into their network functionality. Having an edge data center nearby allows IoT devices to have easy and close access to greater computing and storage resources, which helps to improve performance.
Many colocation facilities also offer last mile technology like direct cross-connections, which involve running a cable from a customer’s server to a cloud provider’s. Usually deployed as part of a hybrid cloud solution, cross-connect cabling can help companies bypass the last mile problem entirely. Rather than data having to travel a long distance to reach the cloud provider’s servers, it only has to travel a distance of several yards, which might as well be instantaneous in terms of latency. Well-configured routers that use MPLS (Multiprotocol Label Switching) can also help data centers route incoming data more quickly, which can further reduce latency figures.
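The scale of the cross-connect advantage is easy to see in propagation delay alone. The comparison below assumes a roughly 10-meter patch cable within the facility versus a roughly 500 km path to a distant cloud region, with signals moving at about 200,000 km/s in fiber; all of these figures are illustrative assumptions rather than measurements from the article.

```python
# Compare raw propagation delay: in-facility cross-connect vs. long-haul path.
# Distances and fiber speed are illustrative assumptions.

SPEED_IN_FIBER_M_S = 2.0e8  # roughly two-thirds of c, typical for fiber

def one_way_delay_us(distance_m: float) -> float:
    """One-way propagation delay in microseconds over fiber."""
    return distance_m / SPEED_IN_FIBER_M_S * 1e6

cross_connect = one_way_delay_us(10)       # ~10 m patch cable in the facility
remote_cloud = one_way_delay_us(500_000)   # ~500 km to a distant cloud region

print(f"cross-connect: {cross_connect:.3f} µs one way")
print(f"remote cloud:  {remote_cloud:.0f} µs one way")
```

The cross-connect path is tens of thousands of times shorter, which is why its latency is effectively negligible compared with reaching a distant region over the wide-area network.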
Last mile connectivity has become a much greater challenge as network speeds have improved. Few people noticed how much these connections affected data transfer when the infrastructure couldn’t accommodate enough bandwidth to make latency an issue. As higher-bandwidth connections make it possible to transfer more data than ever, however, latency has become one of the most critical aspects of network performance. By implementing strategies to overcome the last mile problem, organizations can improve their network speeds and keep up with customer demands.
As the Marketing Manager for vXchnge, Kaylie handles the coordination and logistics of tradeshows and events. She is responsible for social media marketing and brand promotion through various outlets. She enjoys developing new ways to capture the attention of the vXchnge audience.