Today’s organizations rely on their IT infrastructure to deliver products and services to customers. Whether it involves collecting and safeguarding personally identifiable information or utilizing powerful cloud computing services, companies need to place data availability and system uptime near the top of their concerns when developing long-term strategies for sustainable growth.
Downtime, or the unexpected loss of access to network services, can occur for a variety of reasons. Whether due to human error, equipment failure, or cyberattack, unplanned downtime can be an expensive problem for even the largest companies, dragging down their bottom line for years after the event and even resulting in catastrophic data loss.
Get Your “9s” In a Row
When it comes to uptime statistics, nothing is more important to the bottom line than evaluating how close a service provider gets to perfection. A typical service level agreement (SLA) measures uptime as a percentage, indicating how often the provider’s servers are up and running during a specific period of time (usually on a monthly or yearly basis). This is incredibly important for any business that relies upon online services such as cloud development platforms, e-commerce tools, or sales CRMs. Losing access to those systems for even a short period of time can have a major financial impact on a company.
If a cloud services provider promises to deliver 99.99% monthly uptime (as Amazon’s AWS platform does), then customers can expect those services to be unavailable for up to 0.01% of the month. That might not sound like much, but it amounts to a little over four minutes of downtime. While this outage might be distributed over the course of the month, with a few seconds occurring here and there, it’s equally likely to happen all at once at the worst possible time, resulting in substantial revenue loss.
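A quick back-of-the-envelope conversion shows where these numbers come from. The sketch below (in Python, assuming a 30-day month for simplicity; SLAs that average over a calendar year give slightly different figures) converts an SLA uptime percentage into the downtime it permits:

```python
# Convert an SLA uptime percentage into the maximum downtime it
# permits over a given period. Assumes a 30-day month for simplicity.

def max_downtime_seconds(uptime_percent: float, period_seconds: int) -> float:
    """Seconds of downtime an SLA allows over the given period."""
    return period_seconds * (1 - uptime_percent / 100)

MONTH = 30 * 24 * 60 * 60  # 2,592,000 seconds in a 30-day month

for sla in (99.9, 99.99, 99.99999):
    allowed = max_downtime_seconds(sla, MONTH)
    print(f"{sla}% uptime allows {allowed:.3f} seconds of downtime per month")
```

At 99.99%, that works out to roughly 259 seconds, or a little over four minutes, per month; at seven “9s,” it drops to about a quarter of a second.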
Consider, for example, what might happen if a company’s website hosted on AWS is unavailable for just two minutes during a period of high traffic. Customers may try to access the website only to have it fail to load. If they’re like most internet users, they’ll grow impatient and move on within five seconds. And once they’re gone, they may never come back at all, even if they’ve been a customer previously. If that sounds like an exaggeration, it’s actually an understatement: according to Salesforce’s research, 74 percent of customers report being likely to switch brands if they find the purchasing process too difficult.
Even seemingly acceptable uptime statistics, then, can have a major impact on a business when things (inevitably) go wrong.
The Bottom Line on Unplanned Downtime
While Gartner places the average cost of unplanned downtime at $5,600 per minute, the dollar value doesn’t tell the entire story. For many organizations, losing access to data and systems disrupts operations well beyond the outage itself. Productivity obviously takes an immediate hit, but the effects linger even after service is restored. A UC Irvine study on workplace interruptions found that it took workers an average of 23 minutes and 15 seconds to resume a task at the same level of engagement and productivity they had prior to the disruption.
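Combining Gartner’s figure with the SLA math gives a rough sense of the stakes. A minimal sketch, assuming the $5,600-per-minute average applies uniformly and using the roughly 4.3 minutes of downtime a 99.99% monthly SLA permits over a 30-day month:

```python
# Rough cost estimate for the monthly downtime a 99.99% SLA permits.
# Assumptions: Gartner's $5,600/minute average applies uniformly,
# and a 30-day month (which allows about 4.32 minutes of downtime).

COST_PER_MINUTE = 5_600                    # USD, Gartner average
DOWNTIME_MINUTES = 0.0001 * 30 * 24 * 60   # 4.32 minutes at 99.99% uptime

estimated_cost = COST_PER_MINUTE * DOWNTIME_MINUTES
print(f"Estimated monthly exposure: ${estimated_cost:,.0f}")  # roughly $24,000
```

This is only the direct per-minute cost; it leaves out the lingering productivity loss and customer attrition described above, so the true figure skews higher.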
The impact of lost opportunities also extends beyond customers who were negatively affected by an outage. Not only are they unlikely to return, but they could very well convince other potential customers to avoid the company. The average dissatisfied customer tells between 9 and 15 people about their experience, and 13 percent of them tell more than 20 people. It’s safe to assume that they’re not painting a positive image of a company when they talk about how they couldn’t access the services they needed. Those people may not be customers today, but after hearing about systems being unavailable, they probably won’t become customers in the future, which can impact a company’s long-term growth potential.
For companies looking to build their brand and plot a course for sustainable business growth, relying on anything less than 99.99999% SLA uptime is an unnecessary risk. With less than 0.263 seconds of unplanned downtime each month, seven “9s” of uptime reliability helps ensure that data and mission-critical systems are available when they’re needed most while minimizing the risk of data loss.
Data Centers to the Rescue
Unfortunately, AWS’s less-than-reassuring 99.99% SLA uptime is typical of most public cloud providers. The sheer scale of their operations makes it difficult for them to offer more robust services. Fortunately, organizations that need greater levels of reliability can turn to colocation data centers to implement solutions that deliver vastly superior results.
Apart from their ability to manage their infrastructure more closely to provide more reliable uptime, carrier-neutral colocation facilities are also able to build redundant networks that allow companies to keep their critical systems up and running even if their cloud provider goes down. For businesses that host their own services on their own equipment, colocation data centers can provide all the backup infrastructure necessary to ensure that servers stay powered on and connected even in the midst of a natural disaster. Data centers can also build multi-cloud and hybrid cloud solutions that protect companies from the unplanned downtime and data loss risks associated with some vendors and cloud partners.
With all the threats posed by even a brief instance of unplanned downtime, it’s no wonder that so many companies are turning away from the public cloud and entrusting their critical infrastructure to third-party data centers. These facilities have the resources and expertise to deliver superior uptime statistics that allow their customers to stay focused on growing their businesses.
About Ernest Sampera
Ernie Sampera is the Chief Marketing Officer at vXchnge. Ernie is responsible for product marketing, external & corporate communications and business development.