When it comes to network performance, one factor that consistently comes up for both consumers and businesses is speed. Everyone wants faster connections and better performance. Internet service providers are often eager to swoop in and offer higher bandwidth as a self-evident cure for every problem. After all, more bandwidth obviously translates to more speed, right?
Well...not necessarily. Network speed is impacted by a series of factors in addition to bandwidth. Throughput and latency often play just as much, if not more, of a role in performance. Here’s what everyone needs to know about bandwidth vs latency.
Although people often equate bandwidth with network speed, the term doesn’t actually relate to speed directly. Bandwidth is a measure of how much data can be transferred over a communication band over a fixed period of time (usually one second). As the name suggests, it describes the “width” of the communication “band.” The earliest measurements were expressed in bits per second (bps), but modern networks have a much greater capacity. These connections are measured in megabits per second (Mbps) or even gigabits per second (Gbps).
Part of the confusion over network speed comes from the similarity between a bit, which is used to measure data transfer speed, and a byte, which is used to measure data storage. Bits are abbreviated with a lower case “b” while bytes are abbreviated with an upper case “B.” There are eight bits contained in every byte. A network connection with a bandwidth of 20 Mbps, then, will not be able to download a 20 MB file in one second. It will take eight seconds because the 20 megabyte file contains 160 megabits.
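The arithmetic above can be sketched in a few lines of code. This is a minimal illustration of the bits-versus-bytes conversion, not a real network calculation; the function name and the assumption of an ideal, fully utilized link are hypothetical.

```python
BITS_PER_BYTE = 8

def download_time_seconds(file_size_mb: float, bandwidth_mbps: float) -> float:
    """Ideal transfer time for a file of file_size_mb megabytes (MB)
    over a link of bandwidth_mbps megabits per second (Mbps).

    Converts the file size from bytes to bits before dividing,
    which is exactly the step people tend to forget.
    """
    file_size_megabits = file_size_mb * BITS_PER_BYTE
    return file_size_megabits / bandwidth_mbps

# The example from the text: a 20 MB file is 160 megabits,
# so a 20 Mbps connection needs 8 seconds, not 1.
print(download_time_seconds(20, 20))  # 8.0
```

Real downloads take longer than this ideal figure because of protocol overhead and latency, which the following sections cover.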
Bandwidth is often confused with two other key data transfer terms: monthly data transfer and throughput. Monthly data transfer refers to the amount of data that travels through a network over the course of a month. A useful example for demonstrating this relationship is a length of water pipe. Bandwidth is analogous to the width of the actual pipe because it determines how much water can pass through it at one time. Monthly data transfer, by contrast, is the total amount of water that flows through the pipe over the course of the month.
Throughput, on the other hand, measures how much data can be processed by a computer system in a network. If a device has a low throughput, no amount of bandwidth will be able to help it send and receive data any faster. This is the reason why some consumers become frustrated when their high bandwidth connections don’t deliver better performance. The problem isn’t with their network connection, but rather with hardware that lacks the processing power to take advantage of it.
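The bottleneck relationship described above can be expressed as a simple rule: the effective data rate is capped by the slower of the link and the device. This is a hedged sketch; the function name and the figures are hypothetical, chosen only to mirror the scenario in the text.

```python
def effective_rate_mbps(link_bandwidth_mbps: float,
                        device_throughput_mbps: float) -> float:
    """End-to-end performance is limited by whichever is slower:
    the network link or the device processing the data."""
    return min(link_bandwidth_mbps, device_throughput_mbps)

# A gigabit connection can't help a device that can only
# process 100 Mbps of traffic.
print(effective_rate_mbps(1000, 100))  # 100
```

This is why upgrading a plan without upgrading hardware often changes nothing: the minimum in the chain stays the same.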
While bandwidth can certainly impact network speed, when most people complain about lag or buffering, they’re usually describing problems associated with latency. When a data packet is sent from one location to another, it still needs to physically travel that distance over cables or some form of wireless frequency. Even over the fastest fiber-optic cables, however, data is limited by the laws of physics and will never be able to exceed the speed of light. That means there will always be an upper limit to how fast data can move through a network, even under ideal conditions. And conditions are rarely ideal. While the bulk of the transmission may utilize fast network infrastructure, traversing the “last mile” at either end of the connection often involves a hodgepodge of slower links that reduce speed significantly.
Latency measures the lag between the moment a data packet is sent and the moment it is received and processed. In the early days of internet connections, latency was rarely an issue because bandwidth limitations masked how slowly data traveled through networks. The round-trip delay of a request was dwarfed by the time it took to push data through those narrow connections, so latency was all but invisible. As higher bandwidth connections have greatly increased download speeds, however, latency has become much more noticeable. For example, an image may take only 5 milliseconds to download, but latency may force users to wait 100 milliseconds before they receive the first byte of data in response to their download request.
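The example above can be made concrete by splitting the perceived wait into two parts: the latency before the first byte arrives, and the transfer time determined by bandwidth. A minimal sketch, with a hypothetical function name and an image size chosen to reproduce the 5 ms transfer figure from the text:

```python
BITS_PER_BYTE = 8

def perceived_wait_ms(latency_ms: float, size_mb: float,
                      bandwidth_mbps: float) -> float:
    """Total wait = time to first byte (latency) + transfer time.

    Transfer time is computed from the file size (in MB, converted
    to megabits) and the link bandwidth (in Mbps), then scaled to ms.
    """
    transfer_ms = (size_mb * BITS_PER_BYTE / bandwidth_mbps) * 1000
    return latency_ms + transfer_ms

# A ~64 KB (0.0625 MB) image on a 100 Mbps link transfers in 5 ms,
# but 100 ms of latency pushes the total wait to 105 ms.
print(perceived_wait_ms(100, 0.0625, 100))  # 105.0
```

Note that doubling the bandwidth here would shave at most 2.5 ms off the wait, while halving the latency would save 50 ms, which is why latency, not bandwidth, dominates small transfers.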
Edge computing offers a unique solution to the challenges presented by latency and can help companies maximize the investments they’ve made in increasing their network bandwidth. By using a combination of Internet of Things (IoT) devices and edge data centers geographically positioned in key emerging markets, companies can push more of their processing load to the edge of their network, where their end users are located, and also offer direct connections within a data center environment. While IoT devices can process a great deal of data locally, edge data centers can service more demanding processing needs without having to pass data on to larger hyperscale facilities located far away. This greatly reduces latency by minimizing how far data has to travel. When implemented in conjunction with a high bandwidth network, edge computing architecture has the potential to greatly improve performance and help companies provide much better services to their customers.
Bandwidth may not be the primary determinant of speed for many customers, but it’s important to understand how it impacts network performance along with factors like latency and throughput. The relationship may not be direct, but their interaction has an important influence on speed. If a network is plagued by high latency connections, no amount of bandwidth is going to help it transfer data any faster. Similarly, driving down latency with edge computing deployments may not deliver improved performance if bandwidth and throughput remain low. By working to improve all of these factors, companies can deliver better, faster services to their customers.