By: Alan Seal on July 31st, 2019

Tips and Tools for Troubleshooting Network Latency

Today’s companies often live or die by their network performance. Facing pressure from customers and SLA uptime commitments to clients, organizations are constantly looking for ways to improve network efficiency and deliver better, faster, and more reliable services. One of the major challenges every network faces is latency, and troubleshooting it effectively can mean the difference between losing customers and providing the high-speed, responsive services that meet their needs.

Bandwidth vs Latency

No discussion of troubleshooting network latency would be complete without a brief overview of the difference between latency and bandwidth. Although the two terms are often used interchangeably, they refer to very different things. Bandwidth measures how much data can travel over a connection in a given amount of time; the greater the bandwidth, the more data that can be delivered. Generally speaking, increased bandwidth contributes to better network speed because more data can travel across connections, but network performance is still constrained by throughput, which measures how much data each point in the network can actually process. Increasing bandwidth to a low-throughput server, then, won’t do anything to improve performance, because the data will simply bottleneck as the server tries to process it.
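To make that bottleneck concrete, here is a quick back-of-the-envelope calculation as a minimal Python sketch; all of the figures are hypothetical:

```python
# Effective transfer rate is capped by the slowest point in the path,
# so raising bandwidth past that point gains nothing.
# All figures below are hypothetical.
link_bandwidth_mbps = 1000    # provisioned link capacity
server_throughput_mbps = 200  # what the server can actually process

effective_rate = min(link_bandwidth_mbps, server_throughput_mbps)

file_size_mb = 500
transfer_seconds = (file_size_mb * 8) / effective_rate  # MB -> megabits

print(f"Effective rate: {effective_rate} Mbps")  # 200 Mbps
print(f"Transfer time:  {transfer_seconds:.0f} s")  # 20 s
# Doubling link_bandwidth_mbps to 2000 changes nothing here,
# because the server's throughput remains the bottleneck.
```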

Latency, on the other hand, is a measurement of how long it takes for a data packet to travel from its origin point to its destination. While the type of connection matters (fiber-optic cables transmit data much faster than conventional copper, for example), distance remains one of the biggest factors in determining latency. That’s because data is still constrained by the laws of physics and cannot exceed the speed of light (although some connections come close). No matter how fast a connection may be, the data must still physically travel that distance, and that takes time.
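To put a rough number on that physical floor, here is a minimal sketch; the fiber slowdown factor is an approximation, and the route distance is a hypothetical example:

```python
# Lower bound on latency imposed by physics alone. Light in fiber
# travels at roughly two-thirds of its speed in a vacuum.
SPEED_OF_LIGHT_KM_S = 299_792  # vacuum speed of light, km/s
FIBER_FACTOR = 0.67            # approximate slowdown in glass fiber

route_km = 5_600               # hypothetical route, roughly New York-London

one_way_ms = route_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR) * 1000
round_trip_ms = 2 * one_way_ms

print(f"Best-case one-way delay: {one_way_ms:.1f} ms")    # ~27.9 ms
print(f"Best-case round trip:    {round_trip_ms:.1f} ms") # ~55.8 ms
# Measured round trips run higher than this floor: routing detours,
# queuing, and per-hop processing all add time.
```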

Network Latency Test

How much time? There are a few easy ways to run a network latency test and determine just how great an impact latency is having on performance. Microsoft Windows, macOS, and Linux all include a traceroute utility (tracert on Windows). The command sends probes to each router along the path to a destination and reports how long every hop takes to respond, measured in milliseconds. The round-trip time reported for the final hop provides a good estimate of overall latency to that destination.

Executing a traceroute command not only shows how long it takes data to travel from one IP address to another, but it also reveals how complex networking can be. Two otherwise identical requests might have significant differences in latency due to the path the data took to reach its destination. This is a byproduct of the way routers prioritize and direct different types of data. The shortest route may not always be available, which can cause unexpected latency in a network.
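For repeated or automated tests, the same measurement can be scripted. Below is a minimal sketch that shells out to the system utility; it assumes a Unix-like host with traceroute installed (on Windows, substitute tracert), and example.com is just a placeholder target:

```python
# Run the system traceroute and capture its per-hop latency report.
import subprocess
import sys

def run_traceroute(host: str) -> str:
    """Return traceroute's raw per-hop output for the given host."""
    result = subprocess.run(
        ["traceroute", host],  # use ["tracert", host] on Windows
        capture_output=True,
        text=True,
        timeout=120,           # unresponsive hops can stall the run
    )
    return result.stdout

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "example.com"
    print(run_traceroute(target))
```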

How to Reduce Network Latency

Latency is certainly easy to notice: too much of it causes slow loading times, jittery video or audio, and timed-out requests. Fixing the problem, however, can be more complicated, since the causes often lie outside a company’s own infrastructure. Fortunately, there are a few steps organizations can take to reduce network latency.

1. Edge Computing

Since distance is one of the primary culprits when it comes to latency, simply reducing the distance data has to travel can often result in much better network performance. Edge computing architecture pushes key processing functions and content delivery away from the core of traditional cloud networks and closer to the outer edge, where end users are located. By using edge data centers that are physically located closer to customers, companies can reduce the latency of their service offerings, whether that’s through better response times for Internet of Things (IoT) devices or by caching popular media content for seamless streaming.

2. Multiprotocol Label Switching (MPLS)

Effective router optimization can also help to reduce latency. Multiprotocol label switching (MPLS) improves network speed by tagging data packets and quickly routing them to their next destination. Each router along the path simply reads the label rather than performing a more expensive lookup against its own routing table to determine where the packet needs to go next. While not applicable to every network, MPLS can greatly reduce latency by streamlining each router’s forwarding decision.
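The gain comes from replacing a longest-prefix match against the routing table with an exact-match lookup on a short label. Here is a toy Python sketch of the difference, with entirely hypothetical tables:

```python
# Toy contrast between MPLS-style label switching and IP forwarding.
import ipaddress

# MPLS forwarding table: incoming label -> (outgoing interface, new label)
label_table = {
    100: ("eth1", 200),
    101: ("eth2", 201),
}

# IP routing table: prefix -> outgoing interface
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"): "eth1",
    ipaddress.ip_network("10.1.0.0/16"): "eth2",  # more specific route
}

def forward_mpls(label: int):
    # Single exact-match lookup on the label
    return label_table[label]

def forward_ip(dst: str):
    # Longest-prefix match: test every candidate prefix, keep the longest
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(forward_mpls(100))       # ('eth1', 200)
print(forward_ip("10.1.2.3"))  # 'eth2' -- the /16 wins over the /8
```

The label lookup is a single exact match, while the prefix search has to consider every matching route and pick the most specific one.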

3. Cross-Connect Cabling

In a carrier-neutral colocation data center, customers often need to connect their hybrid and multi-cloud networks to a variety of cloud service providers. Under normal circumstances, they connect to these services through an ISP, which forces their traffic over the public internet. Colocation facilities, however, offer cross-connect cabling: a dedicated cable run directly from a customer’s server to a cloud provider’s server. With the distance between the servers often measured in mere feet, latency is greatly reduced, enabling much faster response times and better overall network performance.

4. Direct Interconnect Cabling

When cross-connect cabling in a colocation environment isn’t possible, there are other ways to streamline connections and reduce latency. Direct connections to cloud providers, such as Microsoft Azure ExpressRoute, may not always resolve the challenges posed by distance, but point-to-point interconnect cabling means that data always travels directly from the customer’s server to the cloud server. Unlike a conventional internet connection, there’s no public internet routing to consider, so packets aren’t rerouted along different paths each time they’re sent through the network.

Colocation data centers offer a number of valuable tools for troubleshooting network latency. Although the technology may not exist (yet) to send and receive data through a network instantaneously, strategies like edge computing and cross-connect cabling provide colocation customers with effective options for combating latency to deliver faster, more reliable services.

 
About Alan Seal

Alan Seal is the VP of Engineering at vXchnge. Alan is responsible for managing teams in IT support and infrastructure, app development, QA, and ERP business systems.
