With massive amounts of cable running through the facility and a dizzying array of ports and plugs to manage, data center network infrastructure can be a confusing topic for anyone who isn’t used to building or managing those systems. Fortunately, the basic principles of a data center networking architecture are relatively simple, which is a big plus for customers looking to colocate their valuable IT assets with a facility.
With so much focus on different types of connectivity and how companies actually put their networks to use, it’s easy to forget about the physical infrastructure that makes any data center networking architecture possible. Cabling is a hugely important aspect of data center design. Poor cable deployment is more than just messy to look at—it can restrict airflow, preventing hot air from being expelled properly and blocking cool air from coming in. Over time, cable-related air damming can cause equipment to overheat and fail, resulting in costly downtime.
Traditionally, data center cabling was installed beneath an elevated floor. In recent years, however, designs have shifted to use overhead cabling in at least some capacity, which often reduces cooling needs and energy costs. Well-managed facilities also use structured cabling practices to ensure consistent performance and easier maintenance. Unstructured, point-to-point cabling may not take as long to install initially, but it often leads to higher operational costs and significant maintenance problems. Proper cable management is a good first step in data center networking 101 practices.
One of the chief advantages of a carrier-neutral data center is the wealth of ISP connectivity options. Fundamentally, a data center connects to the internet like any other user: through a dedicated service provider’s line. Unlike a typical building, however, data centers have multiple connections available from different providers, allowing them to offer a range of options to their customers.
Having multiple connectivity options also provides a great deal of redundancy, ensuring that the facility will almost always have access to the outside internet. Blended connectivity options also provide substantial protection against distributed denial of service (DDoS) attacks.
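The redundancy benefit of blended connectivity can be sketched with some simple probability. Assuming each carrier link fails independently (a simplification, since real-world failures such as a fiber cut can affect multiple links at once) and a hypothetical 99.9% availability per link:

```python
# Illustrative estimate of why blended connectivity improves uptime.
# Assumes each carrier link fails independently -- a simplification,
# since real-world failures (e.g., a shared fiber cut) can be correlated.
# The 99.9% figure is a hypothetical example, not a quoted SLA.

def combined_availability(link_availabilities):
    """Probability that at least one independent link is up."""
    all_down = 1.0
    for a in link_availabilities:
        all_down *= (1.0 - a)
    return 1.0 - all_down

single = combined_availability([0.999])          # one carrier
blended = combined_availability([0.999, 0.999])  # two blended carriers

print(f"single carrier: {single:.6f}")   # 0.999000
print(f"two carriers:   {blended:.6f}")  # 0.999999
```

Under these assumptions, adding a second independent carrier cuts expected downtime from roughly nine hours a year to about half a minute.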
Data center cabling is complicated enough as it is, but it would reach nightmarish levels of complexity without routers and switches to direct data traffic flowing into and through the facility. These devices serve as centralized nodes that make it possible for data to travel from one point to another along the most efficient route possible. Properly configured, they can manage huge amounts of traffic without compromising performance and form a crucial element of data center topology.
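The "most efficient route" idea can be illustrated with a toy shortest-path computation over a tiny, made-up switch topology. The node names and link costs below are purely hypothetical; real routing gear runs protocols such as OSPF, which are built on the same shortest-path principle:

```python
# Toy shortest-path routing over a hypothetical switch topology.
# Node names and link costs are invented for illustration only.
import heapq

LINKS = {
    "edge":   {"core-a": 1, "core-b": 1},
    "core-a": {"edge": 1, "pod-1": 2, "pod-2": 4},
    "core-b": {"edge": 1, "pod-2": 1},
    "pod-1":  {"core-a": 2},
    "pod-2":  {"core-a": 4, "core-b": 1},
}

def cheapest_path_cost(src, dst):
    """Dijkstra's algorithm: lowest total link cost from src to dst."""
    heap = [(0, src)]
    visited = set()
    while heap:
        cost, node = heapq.heappop(heap)
        if node == dst:
            return cost
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in LINKS[node].items():
            if neighbor not in visited:
                heapq.heappush(heap, (cost + weight, neighbor))
    return None  # no route exists

print(cheapest_path_cost("edge", "pod-2"))  # 2, via core-b rather than core-a
```

Even in this five-node sketch, the network routes around the more expensive core-a path automatically; at data center scale, that same logic keeps huge volumes of traffic on efficient paths.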
Incoming data packets from the public internet first encounter the data center’s edge routers, which analyze where each packet is coming from and where it needs to go. From there, the edge routers hand the packets off to the core routers, which form an aggregation layer at the facility level. Since these devices manage traffic inside the data center networking architecture, they’re more accurately described as switches.
This collection of core switches is called the aggregation layer because it directs all traffic within the data center environment. When data needs to travel between servers that aren’t physically connected, it must be relayed through the core switches. If every individual server communicated through the core directly, the core switches would have to manage an enormous list of addresses, compromising speed. Data center networks avoid this problem by connecting batches of servers to a second layer of switches. These groups, sometimes called pods, encode data packets in such a way that the core only needs to know which pod to direct traffic toward rather than handling individual server requests.
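The aggregation idea above can be sketched in a few lines: instead of one routing entry per server, the core keeps one entry per pod. The pod names and IP prefixes below are hypothetical, chosen only to make the lookup concrete:

```python
# Sketch of how pod-level aggregation shrinks the core's address table.
# Pod names and prefixes are hypothetical, for illustration only.
import ipaddress

# One entry per pod -- regardless of how many servers each pod holds.
PODS = {
    "pod-1": ipaddress.ip_network("10.1.0.0/16"),
    "pod-2": ipaddress.ip_network("10.2.0.0/16"),
}

def core_next_hop(dst_ip):
    """The core matches the destination against pod prefixes,
    not individual server addresses."""
    addr = ipaddress.ip_address(dst_ip)
    for pod, prefix in PODS.items():
        if addr in prefix:
            return pod
    return "default"  # unknown destinations head back to the edge routers

# Two different servers in the same pod resolve to the same core entry:
print(core_next_hop("10.1.4.20"))   # pod-1
print(core_next_hop("10.1.200.9"))  # pod-1
print(core_next_hop("10.2.0.5"))    # pod-2
```

With this scheme, a facility with thousands of servers still presents the core with only a handful of prefixes, which is what keeps lookup tables small and forwarding fast.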
In many ways, servers are the engines of data center networking architecture. They store valuable data, provide the processing power for computing workloads, and host various applications and services. While they may appear to take up a relatively small space on a typical data center topology map, it’s important to remember that the entire network infrastructure is set up to facilitate server performance.
High-density server deployments tend to have higher requirements in terms of cabling, cooling, and power supplies. Many colocation customers want to place their equipment in racks with easy access to direct connections and single cross-connects, which offer improved performance and speed with minimal risk of downtime.
Sometimes the data center’s internal network just isn’t fast enough to meet a customer’s needs, especially when that customer can’t afford the possibility of latency or downtime when connecting to cloud service providers. In these cases, data centers can connect the customer’s servers directly to the provider’s servers with a single cross-connect: a dedicated cable run between them that delivers the best possible performance while minimizing latency and downtime.
Some facilities even go a step beyond this by offering direct outbound connections. Microsoft’s Azure ExpressRoute, for example, allows Azure customers to connect their servers directly to Microsoft’s cloud infrastructure over a dedicated connection that bypasses the public internet entirely. For companies that need the best possible connection in terms of speed and security, services like Azure ExpressRoute are difficult to beat.
While data center networks are complex environments that must be managed carefully to ensure high-level performance, the basic structure found in any facility operates on roughly the same principles. By optimizing these networks to offer their customers a variety of innovative services, data centers can continue to make themselves an attractive option for companies looking to colocate their IT infrastructure with a third-party provider.