Why IT Teams Should Care About Data Center Tier Classification

By: Kaylie Gyarmathy on September 4th, 2018


Data centers live or die on their reputations for reliability. The whole idea behind colocating with a data center partner is that its facility and services will keep business operations up and running more efficiently and cost-effectively than a private data center could. When organizations make the decision to entrust their IT infrastructure to a data center, they want to know they're getting the very best in reliability.

Choosing the right data center can be daunting. Every business has different needs in terms of data storage and computing capacity, which could lead them to partner with smaller, more versatile edge data centers or massive hyperscale facilities. No matter their needs, however, the first step should include a close look at the data center tier classification. This rating contains a great deal of information about the facility, and it’s worth taking a closer look at the factors that influence it.

What Are Tier Classifications?

Data centers are typically classified according to the Tier Certification system developed by the Uptime Institute in the 1990s. The system consists of four tiers, and each successive level represents higher projected availability, so a Tier III facility offers better reliability than a Tier II facility. The tiers don't necessarily reflect size, power, or storage capacity, but Tier III and IV data centers often have larger footprints due to hardware and power redundancy requirements.
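For a rough sense of what those availability differences mean in practice, the short Python sketch below converts per-tier availability targets into annual downtime allowances. The percentages used are the figures commonly attributed to the Uptime Institute rather than numbers taken from this article (only the Tier III allowance of roughly 1.6 hours appears later), so treat them as illustrative.

    # Commonly cited availability targets per tier (illustrative figures;
    # confirm against the Uptime Institute's current standard before relying on them).
    TIER_AVAILABILITY = {
        "Tier I": 99.671,
        "Tier II": 99.741,
        "Tier III": 99.982,
        "Tier IV": 99.995,
    }

    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

    for tier, pct in TIER_AVAILABILITY.items():
        downtime = MINUTES_PER_YEAR * (1 - pct / 100)
        print(f"{tier}: {pct}% uptime allows ~{downtime:.0f} minutes of downtime per year")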

Although not every data center is officially certified under this system, tier language often finds its way into marketing and promotional materials. Facilities are quick to point out that they meet certain standards, but that doesn't always tell the whole story about their capabilities.

Calculating Uptime

Broadly speaking, uptime is the amount of time a server remains up and running. In most cases, people think about uptime over the course of a year. Keeping the math simple, if a company claims to provide 99% uptime, then customers can expect its servers to be down 1% of the time, or about 3.65 days over the course of a year.

Working backwards, IT professionals can calculate server uptime over any period of time using the following formula:

uptime (%) = (total time over a given period − total downtime) / total time over the given period × 100

Using this formula, if a data center experienced an hour of downtime throughout the course of a year, the equation to calculate its uptime rate would be:

(525,600 minutes − 60 minutes) / 525,600 minutes × 100 ≈ 99.989% uptime

That figure would be well within the threshold for a Tier III data center, which typically experiences no more than about 1.6 hours of downtime annually.
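For anyone who wants to script the calculation, here is a minimal Python sketch of the same formula. The 60-minute outage is just the example above, and the helper name is ours, not part of any standard tooling.

    # uptime (%) = (total time - downtime) / total time * 100
    def uptime_percent(total_minutes, downtime_minutes):
        return (total_minutes - downtime_minutes) / total_minutes * 100

    MINUTES_PER_YEAR = 525_600  # 365 days * 24 hours * 60 minutes

    # One hour (60 minutes) of downtime over a year, as in the example above.
    print(f"{uptime_percent(MINUTES_PER_YEAR, 60):.3f}% uptime")  # ~99.989%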

Looking Behind the Numbers

While many customers are only interested in how much downtime they can expect for their servers, IT professionals tend to have a deeper understanding of what that number actually means.

Reliable uptime doesn’t just happen on its own. Data centers must take a variety of steps to ensure their systems are available as much as possible throughout the year. The most common method is establishing redundant systems that safeguard against equipment failure and power outages.

Data centers measure their infrastructure needs in terms of the value N. For any facility, N equals the capacity required to power, cool, and back up the facility when it is running at full load. Tier I data centers have no redundant systems, making them vulnerable to service outages and failures. Tier II facilities may have some limited backup systems, but they lack the ability to maintain operations across the entire IT infrastructure in the event of a problem. These facilities are therefore considered to have an N rating.

Tier III data centers are typically N+1 facilities. This means that redundancies exist to support both limited failures and maintenance. While an N+1 data center is still vulnerable to operational error and unexpected failures, it is considered concurrently maintainable or parallel redundant. It can supply backup power for short periods of time and accommodate some system outages without sacrificing uptime. But since everything is on the same power system, widespread failures can still take the entire data center offline.

Tier IV facilities, however, usually have ratings of at least 2N, meaning they have twice the equipment, power, and cooling capacity needed to keep their services running. Because a 2N system is a true, fully independent backup, it is far more reliable: even in the event of a complete outage on the primary system, the duplicate system can keep the facility online. For even further redundancy, 2N+1 data centers provide an additional backup for critical systems. For businesses that can't afford even brief periods of downtime, 2N facilities offer the highest level of reliability.
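One simple way to picture N, N+1, and 2N is to count units of critical equipment. The sketch below assumes a hypothetical facility that needs four UPS units to carry its full load (N = 4); the facility and the numbers are invented for illustration, and the labels follow the redundancy designs described above.

    # Hypothetical facility: N = 4 UPS units are required to carry the full load.
    N = 4

    redundancy_designs = {
        "N (no redundancy)": N,   # any single unit failure interrupts service
        "N+1": N + 1,             # one spare unit; survives a single failure or maintenance window
        "2N": 2 * N,              # a complete, independent second set of equipment
        "2N+1": 2 * N + 1,        # duplicate set plus one additional spare
    }

    for design, units in redundancy_designs.items():
        print(f"{design}: {units} units installed ({units - N} beyond the base load)")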

Finding a data center that fits a company's needs can be a major challenge, but it is one that careful research can overcome. For IT professionals evaluating potential partners, understanding the meaning behind data center tier classifications provides a good surface-level picture of a facility's capabilities. Armed with this knowledge, they can help their organizations make the best possible decisions when it comes to colocation.

 

About Kaylie Gyarmathy

As the Marketing Manager for vXchnge, Kaylie handles the coordination and logistics of tradeshows and events. She is responsible for social media marketing and brand promotion through various outlets. She enjoys developing new ways and events to capture the attention of the vXchnge audience.
