
By: Kayla Matthews on April 22nd, 2019


The Challenges of Data Center Optimization


Back in 2016, the Office of Management and Budget released the Data Center Optimization Initiative. It called for the improvement of federal data centers against five metrics: server utilization, energy metering, facility utilization, power usage effectiveness and virtualization.

For each metric, the initiative set targets meant to ensure government-owned facilities are used more effectively. The general idea was to rein in ballooning costs, improve operations and strengthen security.

Unfortunately, nearly every federal agency fell short. Only two of the 22 agencies covered were projected to hit their targets by the end of 2018, and a GAO report reveals the reality is worse still: not a single agency met the goals for all five metrics, and ten of the 22 failed to meet even one.

This highlights just how difficult data center optimization is. How can these facilities achieve more with less money? It’s a question for the ages.

Here are some of the bigger challenges every data center faces, let alone government-owned facilities:

1. Efficient Use of Physical Space and Power Capacity

Whether it’s an innovative edge data center or an in-house server room, every data facility must optimize capacity in multiple forms. There’s only so much space in a server room and only so much capacity on a physical server. Proper monitoring and metrics can help manage this constraint, along with any power consumption challenges that arise.

More importantly, advanced analytics tools can be used to discover zombie servers and old, unneeded equipment that’s still drawing valuable power. If and when there’s a call to expand, the metrics can help identify not just where to place the added capacity, but also how much growth is actually necessary.
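As a rough sketch of how such a check might work (not any particular vendor’s tool), the following flags servers whose utilization stays persistently low. The sample data, server names and threshold are all assumptions made for illustration:

```python
from statistics import mean

# Hypothetical utilization samples: server name -> recent CPU utilization (%).
# In practice these would come from a DCIM or monitoring platform's API.
cpu_samples = {
    "rack04-srv01": [2.1, 1.8, 2.5, 1.9],
    "rack04-srv02": [61.0, 58.3, 72.4, 66.1],
    "rack07-srv09": [0.4, 0.6, 0.3, 0.5],
}

ZOMBIE_THRESHOLD = 5.0  # assumed cutoff: sustained CPU below 5% suggests a zombie

def find_zombie_candidates(samples, threshold=ZOMBIE_THRESHOLD):
    """Return servers whose average utilization falls below the threshold."""
    return [name for name, values in samples.items() if mean(values) < threshold]

for server in find_zombie_candidates(cpu_samples):
    print(f"{server}: candidate for decommissioning or consolidation")
```

A real tool would weigh more signals than CPU alone (network traffic, disk I/O, login activity), but the principle is the same: let the metrics surface the equipment nobody is using.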

Constant monitoring also helps curb the aggressive expansion most data centers default to, which by its nature eats up a lot of resources, including energy and money.

2. Enforcing Physical and Digital Security

When talking about digital technologies, such as cloud computing, it’s easy to focus solely on cybersecurity. For data centers, however, there’s also a physical component to security, particularly when it comes to authorization and access. Give the wrong person access to a physical server and they can wreak havoc.

This means any data center company must focus on both aspects of its security, when just one of those channels is challenging enough. Combine that with the fact that most data facilities are managed remotely, and things get even more complicated. That explains why many organizations have started folding both forms of security under the heading of MIM, or Major Incident Management. In other words, they’re starting to treat both security channels equally.

Again, monitoring and analytics tools can really help here. Security surveillance and access logs, user reporting and regular audits can cut down on physical security problems as well as digital vulnerabilities. Server areas should only be accessed for maintenance or emergencies, and only then by authorized personnel who have been screened for potential security risks.
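To make the audit idea concrete, here is a minimal sketch that cross-checks badge access records against an authorization roster and an expected maintenance window. The log format, badge IDs and hours are hypothetical:

```python
from datetime import datetime

# Assumed record format: (timestamp, badge ID, door). A real deployment would
# pull these from badge readers or a physical access control system.
access_log = [
    ("2019-04-20 02:14", "B-1041", "server-room-a"),
    ("2019-04-20 09:30", "B-2210", "server-room-a"),
]

authorized_badges = {"B-2210", "B-3377"}  # personnel cleared for server areas
maintenance_hours = range(8, 18)          # assumed window when access is expected

for timestamp, badge, door in access_log:
    hour = datetime.strptime(timestamp, "%Y-%m-%d %H:%M").hour
    if badge not in authorized_badges:
        print(f"ALERT: unauthorized badge {badge} at {door} ({timestamp})")
    elif hour not in maintenance_hours:
        print(f"REVIEW: off-hours access by {badge} at {door} ({timestamp})")
```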

In the event of a breach or security incident, those same monitoring tools can support forensic analysis and help identify the perpetrator quickly.

3. Tracking Virtualized Resources

When eliminating old equipment and unused servers, or backing up existing systems, virtualization is a godsend. It allows capacity to be added almost instantly, even if only temporarily.

But it poses another challenge: resources stored on virtualized servers are much harder to track. The problem is exacerbated when content is stored and then goes undocumented or is simply forgotten. The result mirrors the zombie-server problem on physical hardware: unused platforms continue to draw power and resources because no one is quite certain what they contain.

This highlights the need to monitor and document everything going on within the data center. It’s not just about tracking security and maintenance; it’s also about detailing how systems are being used and for what. With a clear picture of which virtualization platforms are in use, those that are underutilized can be restructured or disabled altogether.
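A minimal sketch of that kind of bookkeeping, assuming an inventory where every virtual machine records an owner, a stated purpose and recent utilization (all names and thresholds here are invented for illustration):

```python
# Hypothetical inventory records: every VM carries an owner, a stated purpose
# and a recent average CPU figure. The documentation itself is the point.
vms = [
    {"name": "build-agent-03", "owner": "ci-team", "purpose": "nightly builds", "avg_cpu": 41.0},
    {"name": "legacy-report", "owner": None, "purpose": None, "avg_cpu": 0.2},
]

IDLE_THRESHOLD = 1.0  # assumed cutoff for "underutilized" in this sketch

for vm in vms:
    if vm["owner"] is None or vm["purpose"] is None:
        print(f"{vm['name']}: undocumented -- investigate before it becomes a zombie")
    elif vm["avg_cpu"] < IDLE_THRESHOLD:
        print(f"{vm['name']}: documented but idle -- candidate to restructure or disable")
```

The check is trivial; what matters is that the inventory exists at all, so that every virtual resource can be traced back to someone who knows why it’s running.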

4. Implementing Automation to Deal with Scale

Every data company, large and small, is amassing stores of data that grow day after day. It’s reached the point where manual management is completely cost-prohibitive, yet that doesn’t stop many facilities from trying.

IDC survey data shows that 45 percent of IT staff time is spent on routine operations like provisioning, configuration, monitoring, maintenance, troubleshooting and remediation. All of those activities could be automated in some way to reclaim much of that time.

Providers must start, or continue, to invest heavily in automation and artificial intelligence, specifically for use in data center operations. Data stores have grown too immense to handle manually, and that covers just about every aspect, from monitoring and health to more effective organization. Automated systems can ensure data is placed correctly, labeled, tracked and maintained.
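As one small example of taking a routine task off staff plates, the sketch below checks service health and attempts an automated restart before anyone is paged. The service names are hypothetical, and a production version would add logging, retry limits and escalation:

```python
import subprocess

# Hypothetical services to watch; a real list would come from inventory.
services = ["db-replica", "metrics-collector", "log-shipper"]

def is_healthy(service):
    """Assumed health check via systemd; returns True when the unit is active."""
    result = subprocess.run(["systemctl", "is-active", "--quiet", service])
    return result.returncode == 0

for service in services:
    if not is_healthy(service):
        print(f"{service} unhealthy, attempting automated restart")
        subprocess.run(["systemctl", "restart", service])
```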

Challenges Abound

The challenges discussed here are but a small sample of what most major facilities deal with. Managing existing capacity while balancing it against incoming demand is no easy task, and society relies more and more on these big data operations every day, a dependence that will only grow.

However difficult, data center operators and providers should strive relentlessly for efficiency: it means lower operating costs, higher capacity and better use of existing resources.

 

 

About Kayla Matthews

Kayla Matthews writes about data centers and big data for several industry publications, including The Data Center Journal, Data Center Frontier and insideBIGDATA. To read more posts from Kayla, you can follow her personal tech blog at ProductivityBytes.com.
