
By: Blair Felter on October 23rd, 2018


10 Best Practices You Need to Know to Minimize Server Downtime


Whether they’re using smaller edge data centers or massive hyperscale facilities, today’s companies are placing a lot of trust in their data center partners. With even a few moments of downtime inflicting enormous costs on a business, these organizations need to know that their data centers are doing everything in their power to minimize the threat of server downtime. Fortunately, there are a number of common-sense best practices data center providers can implement to ensure they’re delivering the highest levels of uptime possible to their customers.

1: Collect Performance Data

The very first step to improving data center efficiency and reducing server downtime is understanding what’s actually happening in the facility. Gathering performance data and efficiency metrics, both at the data center and the server level, can reveal trends, areas of concern, and opportunities for improvement. With a variety of management software tools to choose from, there’s really no excuse for a data center not to have current metrics on the facility’s performance. This data should serve as the starting point for any changes in ongoing practices to avoid service outages.
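To make this concrete, below is a minimal sketch of server-level metric collection in Python using the psutil library. The metrics chosen, the 60-second polling interval, and printing to the console are illustrative assumptions; a production deployment would ship these samples to a time-series store or DCIM platform instead.

```python
# A minimal sketch of server-level metric collection using psutil.
# The metric set and interval are illustrative, not a vendor schema.
import time

import psutil

def sample_metrics():
    """Take one snapshot of basic host health indicators."""
    return {
        "timestamp": time.time(),
        "cpu_percent": psutil.cpu_percent(interval=1),  # 1-second average
        "memory_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
    }

if __name__ == "__main__":
    # Poll every 60 seconds; a real deployment would forward these
    # samples to a monitoring backend rather than print them.
    while True:
        print(sample_metrics())
        time.sleep(60)
```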

2: Share Information

Gathering data is important, but it doesn’t do anyone much good if it isn’t made available. Facility managers and support personnel should obviously have access to performance metrics, but that access can also benefit sales teams and even customers. Transparency is a key selling point for successful data centers with a reputation for minimal system downtime. For internal personnel, it’s important that all goals and key performance indicators (KPIs) are based on the same data, which can be easily monitored.

3: Implement Predictive Analytics

Today’s data centers are incredibly complex IT environments. Even minor adjustments to that environment can send ripples through the entire ecosystem and result in system downtime, making it difficult for even trained personnel to anticipate the full consequences of any change. Using statistical algorithms and AI-driven machine learning, analytics software can gather and analyze data over time and take over critical functions to improve efficiencies, eliminate waste, reduce costs, and minimize server downtime. While the truly unmanned data center may not quite be a reality yet, advanced analytics programs are becoming a vital aspect of facility management.
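As a simplified illustration of the statistical side of this approach, the sketch below flags a metric reading as anomalous when it drifts several standard deviations from its recent rolling average. The window size and threshold are assumptions, and real predictive-analytics platforms use far richer models; this only shows the basic shape of the idea.

```python
# A toy rolling z-score detector: flag a reading as anomalous when it
# drifts more than `threshold` standard deviations from the recent mean.
# Window size and threshold are assumptions, not tuned values.
from collections import deque
import statistics

class RollingAnomalyDetector:
    def __init__(self, window=60, threshold=3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        """Return True if `value` looks anomalous versus recent history."""
        anomalous = False
        if len(self.history) >= 10:  # wait for some history first
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9  # avoid /0
            anomalous = abs(value - mean) / stdev > self.threshold
        self.history.append(value)
        return anomalous

detector = RollingAnomalyDetector()
for temp in [22.0, 22.5] * 15 + [31.0]:  # a sudden jump at the end
    if detector.observe(temp):
        print(f"Anomalous reading: {temp}")
```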

4: Identify and Eliminate Inefficient Servers

This may sound obvious, but simply removing outdated and inefficient servers through equipment upgrades or virtualization can deliver tremendous efficiency benefits and help avoid service outages. Older servers are also more prone to failure, which can have cascading effects throughout a network. Since these servers consume more power and occupy more space without delivering commensurate processing and storage benefits, eliminating them can greatly improve the facility’s capacity and performance, helping it to minimize server downtime without requiring major changes to physical infrastructure.
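One hypothetical way to surface candidates for consolidation is to score each server by useful work per watt and flag the laggards, as in the sketch below. The Server fields, the cutoff value, and the sample fleet are all illustrative, not a standard industry metric.

```python
# A hypothetical scoring pass that flags consolidation candidates:
# servers doing the least useful work per watt.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    avg_requests_per_sec: float
    avg_power_watts: float

    @property
    def requests_per_watt(self):
        return self.avg_requests_per_sec / self.avg_power_watts

def consolidation_candidates(servers, min_requests_per_watt=0.5):
    """Return servers below the efficiency cutoff, worst first."""
    flagged = [s for s in servers if s.requests_per_watt < min_requests_per_watt]
    return sorted(flagged, key=lambda s: s.requests_per_watt)

fleet = [
    Server("rack3-old01", 40.0, 450.0),   # ~0.09 req/s per watt -> flagged
    Server("rack1-new07", 900.0, 300.0),  # 3.0 req/s per watt -> kept
]
for s in consolidation_candidates(fleet):
    print(f"{s.name}: candidate for upgrade or virtualization")
```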

5: Optimize Equipment Distribution

Location matters on a data center floor. Air and power aren’t always evenly distributed throughout a facility, so grouping equipment with similar heat load densities and temperature requirements makes it easier to manage cooling needs effectively. As equipment is moved or replaced, data center personnel should routinely assess how these changes affect air distribution and the potential for airflow short-circuiting (cooled air bypassing equipment or hot exhaust recirculating) that could result in server downtime.
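As a toy example of this grouping, the sketch below buckets equipment into low-, medium-, and high-density zones by per-rack heat load. The band boundaries and the sample inventory are assumptions and would need to reflect a facility’s actual cooling design.

```python
# A toy pass that buckets equipment by heat-load density so items with
# similar cooling profiles can be placed together. The band boundaries
# are assumptions, not cooling-design guidance.
def density_band(kw_per_rack):
    if kw_per_rack < 5.0:
        return "low"
    if kw_per_rack < 10.0:
        return "medium"
    return "high"

inventory = [("web-01", 3.2), ("gpu-04", 14.0), ("db-02", 7.5), ("gpu-05", 12.8)]

placement = {}
for name, kw in inventory:
    placement.setdefault(density_band(kw), []).append(name)

for band, racks in sorted(placement.items()):
    print(f"{band}-density zone: {', '.join(racks)}")
```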

6: Optimize Cooling

Cooling is one of the most important aspects of good data center management. If the temperature isn’t quite right, equipment can quickly overheat and fail, leading to a service outage. Too much cooling, however, can create excess moisture and lead to failures from short circuits and corrosion. Either outcome means more system downtime, so data center personnel must continuously optimize cooling to meet the facility’s needs, keeping costs down and preventing the kinds of failures that cause crippling downtime.
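A simple software watchdog can express this “not too hot, not too damp” balance. In the sketch below, the inlet-temperature and humidity bounds are placeholders loosely informed by common guidance such as ASHRAE’s recommended envelope; actual thresholds should come from your equipment specifications.

```python
# A minimal cooling watchdog. The bounds below are placeholders, not
# authoritative limits; use your equipment's actual specs.
RECOMMENDED_RANGES = {
    "inlet_temp_c": (18.0, 27.0),
    "relative_humidity_pct": (20.0, 80.0),
}

def cooling_alerts(reading):
    """Yield a message for every metric outside its recommended band."""
    for metric, (low, high) in RECOMMENDED_RANGES.items():
        value = reading.get(metric)
        if value is None:
            continue
        if value < low:
            yield f"{metric}={value} below recommended range ({low}-{high})"
        elif value > high:
            yield f"{metric}={value} above recommended range ({low}-{high})"

for alert in cooling_alerts({"inlet_temp_c": 29.5, "relative_humidity_pct": 45.0}):
    print(alert)
```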

7: Overestimate Capacity Needs

As the old saying goes, it’s best to “hope for the best, but prepare for the worst.” However much traffic and usage data centers expect their customers to need, they should always provision for considerably more to account for unexpected spikes. A data center pushed to its computing and storage capacity limits will have a hard time delivering high levels of uptime, because any surge in usage will push the facility to the brink of a service outage. With so many companies working through data centers to provide “X-as-a-service” platforms to their customers, data centers need to consider how a customer’s business might place additional strain on their networks and equipment, leading to costly server downtime.
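The arithmetic behind this advice is simple enough to sketch. Below, a hypothetical headroom factor is applied to a forecast peak load, and a warning fires well before utilization approaches the limit; every figure is illustrative, and real capacity planning is far more involved.

```python
# A back-of-the-envelope sketch of "plan for more than you expect".
# The headroom factor, warning threshold, and loads are hypothetical.
def provisioned_capacity(forecast_peak_kw, headroom_factor=1.5):
    """Size capacity comfortably above the forecast peak load."""
    return forecast_peak_kw * headroom_factor

def utilization_warning(current_kw, capacity_kw, warn_at=0.7):
    """Flag rising utilization long before the facility hits its limits."""
    ratio = current_kw / capacity_kw
    if ratio >= warn_at:
        return f"Utilization at {ratio:.0%}: expand before a spike forces an outage"
    return None

capacity_kw = provisioned_capacity(800.0)       # 1200 kW for an 800 kW forecast
print(utilization_warning(950.0, capacity_kw))  # ~79% -> warning fires
```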

8: Monitor Alerts

Every moment of downtime has the potential to inflict significant losses on a company. While not every alert in a data center will necessarily lead to wider system downtime, IT personnel cannot afford to take that chance. Unattended issues can grow into more widespread problems, so data centers need to implement a zero-tolerance approach to network challenges. Every system in the facility should be under constant observation so that alerts can be issued and addressed within moments of developing, minimizing server downtime.
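In practice, a zero-tolerance approach means every alert is routed the moment it arrives and nothing sits unattended. The sketch below shows the general shape of such a loop; the queue, severity levels, and the paging/ticketing hooks are stand-ins for a real monitoring stack.

```python
# A skeletal zero-tolerance alert loop: every alert is acknowledged and
# routed immediately, never left unattended.
import queue

alerts = queue.Queue()

def page_on_call(alert):
    """Placeholder for a real paging integration."""
    print(f"PAGE on-call: {alert['message']}")

def open_ticket(alert):
    """Placeholder for a real ticketing integration."""
    print(f"TICKET opened: {alert['message']}")

def drain_alerts():
    """Route every pending alert right away; nothing is dropped."""
    while not alerts.empty():
        alert = alerts.get()
        if alert["severity"] == "critical":
            page_on_call(alert)   # a human responds within moments
        else:
            open_ticket(alert)    # still tracked, never ignored
        alerts.task_done()

alerts.put({"severity": "critical", "message": "UPS-2 on battery"})
alerts.put({"severity": "warning", "message": "High fan speed, rack 12"})
drain_alerts()
```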

9: Conduct Load Testing

How long can a facility weather power outages or surges while maintaining availability? Without consistent, well-implemented load testing, there’s no way of knowing. Data centers should conduct routine load tests to identify faults and vulnerabilities in their power infrastructure that could lead to server downtime and service outages. Regular testing also keeps backup systems up to date and ready to use, rather than letting them sit dormant and leaving open the possibility that they might fail when they’re needed most. These tests should also function as drills for personnel, helping ensure that everyone knows what to do in the event of a failure.

10: Keep Under-Floor and Overhead Spaces Clean

Another point that may seem obvious: a little housekeeping goes a long way toward helping data centers minimize server downtime. Accumulated materials like dust can restrict airflow over time, limiting cooling efficiency and creating a risk of overheated servers and service outages. Those same materials can also build up static electricity, which can contribute to short circuits and other overloads that could take systems offline. Regular cleaning above and below cabinets helps ensure the server room remains an optimal environment for the complex power and cooling systems that deliver reliable uptime to customers.

While keeping a data center operating at peak efficiency is a challenge, these common-sense steps can help minimize server downtime. Since the costs of downtime can be crippling for many small to medium-sized companies (and quite embarrassing for larger ones), data centers have an obligation to do everything they can to deliver on the uptime promises laid out in their SLAs.
 

About Blair Felter

As the Marketing Director at vXchnge, Blair is responsible for managing every aspect of the growth marketing objective and inbound strategy to grow the brand. Her passion is to find the topics that generate the most conversations.
