Most people in the tech industry understand the impact virtualization technologies have on improving scalability, utilization rates, server flexibility, cloud computing, and data center power density.
Consolidating hardware by running multiple virtual servers on a single physical server achieves higher CPU utilization rates. Where stand-alone servers typically run at 1% to 5% utilization, a virtualization server can average over 50%.
By taking 10 to 20 low-utilization servers and consolidating them onto one high-density virtualization server, the total cooling load for your entire data center should decrease. However, the cooling capacity directed at the high-density machines themselves will need to increase, because heat that was once spread across many cabinets is now concentrated in one place.
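The consolidation arithmetic can be sketched in a few lines. The server count and wattages below are illustrative assumptions, not measurements from the text:

```python
# Rough consolidation math: 15 standalone servers at low utilization
# replaced by one loaded virtualization host. All wattages are assumed.
STANDALONE_SERVERS = 15   # assumed count of low-utilization servers
STANDALONE_WATTS = 300    # assumed draw per idle-heavy standalone box (W)
HOST_WATTS = 900          # assumed draw of one loaded virtualization host (W)

before_watts = STANDALONE_SERVERS * STANDALONE_WATTS
after_watts = HOST_WATTS
reduction_pct = 100 * (1 - after_watts / before_watts)

print(f"Total power before: {before_watts} W")        # 4500 W
print(f"Total power after:  {after_watts} W")         # 900 W
print(f"Reduction: {reduction_pct:.0f}%")             # 80%
```

Even with generous assumptions, the total facility draw drops sharply, which is why the overall cooling load decreases even as the per-cabinet load rises.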
These virtualization machines use more power and produce more heat. The question is: what effect does this virtualization have on your data center?
When physical servers are loaded with more virtual machines, CPU utilization increases. As a machine goes from roughly 5% utilization to 50%, the additional power draw can average around 20%. Additional processor and memory usage can raise this consumption further.
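To see what that 20% uplift means at the cabinet level, here is a minimal sketch. The 20% figure comes from the text; the server count and base wattage are assumptions for illustration:

```python
# Estimate per-cabinet draw when CPU utilization rises from ~5% to ~50%.
SERVERS_PER_CABINET = 10   # assumed hosts per cabinet
BASE_WATTS = 500           # assumed draw per server at low utilization (W)
UTILIZATION_UPLIFT = 0.20  # ~20% extra draw at high utilization (from text)

low_util_kw = SERVERS_PER_CABINET * BASE_WATTS / 1000
high_util_kw = low_util_kw * (1 + UTILIZATION_UPLIFT)

print(f"Cabinet draw at low utilization:  {low_util_kw:.1f} kW")   # 5.0 kW
print(f"Cabinet draw at high utilization: {high_util_kw:.1f} kW")  # 6.0 kW
```

An extra kilowatt per cabinet may not sound like much, but multiplied across rows of cabinets it is what drives the rewiring and cooling changes discussed next.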
Virtualization has introduced new problems into the data center. The increased density can require more power per cabinet than older, lower-density layouts. This increased power consumption may require electricians to rewire cabinets to handle the higher load, or you may need to move to a data center specifically designed for higher-density loads.
Higher data center power density resulting from virtualization can also cause cooling challenges. High-density machines can produce greater quantities of heat within a small area, which is more difficult to disperse. This could lead to hotspots in your data center.
One possible solution is to spread out the load by separating the high-density machines. This method has certain disadvantages like half-filled cabinets, increased cabling costs, and uncontained air paths. However, if space is not a problem, spreading out servers in this manner can be relatively inexpensive.
A second solution is to isolate your high-density machines on their own island. This requires an improved cooling strategy: a high-density island must be able to handle power requirements of approximately 20 kW per cabinet or more.
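When planning such an island, a quick budget check is useful. This is a minimal sketch against the ~20 kW-per-cabinet figure from the text; the server wattage and count are hypothetical:

```python
# Quick check of whether a planned high-density cabinet fits the
# ~20 kW-per-cabinet budget mentioned above.
CABINET_BUDGET_KW = 20.0   # from the text: ~20 kW per cabinet or more

def cabinet_fits(server_watts, count, budget_kw=CABINET_BUDGET_KW):
    """Return (total_kw, fits) for `count` servers drawing `server_watts` W each."""
    total_kw = server_watts * count / 1000
    return total_kw, total_kw <= budget_kw

# Hypothetical example: 24 hosts drawing 750 W each under load.
total, ok = cabinet_fits(server_watts=750, count=24)
print(f"{total:.1f} kW -> {'fits' if ok else 'over budget'}")  # 18.0 kW -> fits
```

The same check, run with the breakers and power distribution units you actually have, tells you whether the island needs rewiring before the first server is racked.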
This island must also be thermally neutral to the rest of the room, so that the additional heat created around the island does not affect the temperature elsewhere. This requires hot and cold air containment to remove the heat, and careful attention must be paid to keep the hot air from circulating into the low-density sections of the data center.
Hot aisle / cold aisle designs for data centers conserve energy and lower overall cooling costs by properly managing airflow. Containment systems isolate the cold aisles from the hot aisles to keep hot and cold air from mixing. The two main containment approaches compare as follows.
Cold aisle systems fill the aisle between racks with cool air, forcing hot air out into the surrounding room, where it is returned to the chiller. Hot aisle systems draw cool air in from outside the aisle and contain the hot exhaust within the aisle, where it is then routed to the chiller. The main difference between these systems is how efficiently they dissipate the large amounts of heat generated by virtualization servers. Hot aisle systems can save 40% in annual energy costs and are considered a best practice. While cold aisle containment does allow for compartmentalized cooling, letting you seal off hot spots in the data center, it isn't as efficient year-round.
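The 40% savings figure translates directly into the annual budget. Here is a one-line sketch; the baseline cooling cost is an assumed figure, while the 40% savings comes from the text:

```python
# Annual cooling-cost comparison implied above.
BASELINE_ANNUAL_COOLING = 100_000  # assumed yearly cooling spend, USD
HOT_AISLE_SAVINGS = 0.40           # ~40% savings figure from the text

hot_aisle_cost = BASELINE_ANNUAL_COOLING * (1 - HOT_AISLE_SAVINGS)
print(f"With hot aisle containment: ${hot_aisle_cost:,.0f}/yr")  # $60,000/yr
```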
Virtualization can be a powerful way of consolidating servers and improving capacity for cloud and other types of computing. If properly planned, the effects of virtualization on data center power density can be mitigated. Using a data center designed for high-density servers will allow your company to scale its server configurations in the future.