Dealing With Legacy Cooling Systems in Data Centres: Small Changes Can Make BIG Differences!
Energy efficiency is a major concern for most data centres at the moment, as energy prices continue on their steep upward trajectory.
We’re seeing many companies undertake reviews to address legacy energy inefficiencies, particularly in older facilities, looking at ways to reduce waste and increase productivity.
But the problem isn’t confined to older data centre buildings; IT systems and electronic equipment are rapidly evolving. Rising processing power and shrinking footprints are increasing the heat densities within individual racks. This issue needs to be addressed at every IT system upgrade, along with an appraisal of the existing cooling system to confirm it is still fit for purpose.
Adding new servers without upgrading cooling equipment typically means that the existing cooling units have to work much harder. Inevitably, this leads to a rise in their energy consumption, and it can mean a reduction in the lifetime of the whole IT/cooling system. By contrast, a cooling upgrade will not only save money, it will also help maintain optimal working conditions for the sensitive IT equipment that each rack houses.
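To put indicative numbers on this, here is a minimal Python sketch of how the running cost of cooling scales with the plant’s coefficient of performance (COP). Virtually all electrical power drawn by IT equipment ends up as heat that the cooling system must remove; every load, COP, and tariff figure below is an assumption chosen purely for illustration:

```python
# Rough, illustrative estimate of the cooling energy cost of added IT load.
# All figures are assumptions for the sake of the example, not measurements
# from any particular facility.

it_load_kw = 10.0        # assumed extra IT load from new servers
cop_legacy = 2.5         # assumed COP of a struggling legacy cooling system
cop_upgraded = 4.0       # assumed COP after a cooling upgrade
hours_per_year = 8760    # continuous, year-round operation
tariff_per_kwh = 0.30    # assumed electricity price (GBP per kWh)

# Nearly all IT power becomes heat, so the plant must remove ~it_load_kw of
# heat; the electricity it draws to do so is heat removed divided by COP.
for label, cop in (("legacy", cop_legacy), ("upgraded", cop_upgraded)):
    cooling_kw = it_load_kw / cop
    annual_cost = cooling_kw * hours_per_year * tariff_per_kwh
    print(f"{label}: {cooling_kw:.1f} kW of cooling power, "
          f"~£{annual_cost:,.0f} per year")
```

Even with these purely illustrative figures, the gap between the two scenarios shows why cooling efficiency deserves a line in any upgrade business case.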
Even if your existing climate control meets the needs of your equipment, there are still small improvements that you may be able to make to increase the efficiency of your cooling system, reduce energy consumption, and lower your costs.
Evolving Cooling Systems
Back in the 1990s, most data centres used perimeter cooling systems, and server racks were typically cooled by air that circulated under the floor before emerging from perforated floor tiles immediately in front of the racks.
The racks were fitted with glazed front doors with perimeter apertures and perforated rear doors, all of which were perfectly adequate for the relatively low rack loads (c. 2-3kW) of the time.
Within a few years, as IT systems became more powerful and rack densities increased, these cooling systems started to struggle. On top of this, poor cable management created airflow constraints that made it harder to deliver cool air from the plenum into the racks.
As a result, the electronic components within the racks began to run hotter.
Recognising the magnitude of the issue, data centre operators adopted new cooling schemes with fully vented front and rear doors and stricter room layouts based on cold and hot aisles.
Cold & Hot Aisle Containment
Dividing the room into hot and cold regions means cool air can be drawn in through the front of the racks while waste hot air is exhausted through the back of the cabinets into a hot aisle.
Correctly separating the cool and hot air zones means higher air temperatures are returned to the cooling units, which increases the efficiency of the cooling system.
Cold Aisle Containment (CAC) systems contain the cold air within an aisle, forcing it to pass through the IT equipment, rather than escaping around the cabinets.
The system relies on Computer Room Air Conditioning (CRAC) units, which fill the plenum below a raised floor with cold air. This air is then directed up through perforated floor tiles in front of the racks and drawn across their internal electronic components.
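The physics behind this arrangement can be shown with the standard sensible-heat relationship for air, P = ρ · cp · Q · ΔT. The short Python sketch below uses assumed values (an 8kW rack and typical room-air properties) to show how the airflow a rack demands falls as the supply-to-return temperature difference rises; preserving that difference is exactly what containment is for:

```python
# Sensible heat removed by an airstream: P = rho * cp * Q * dT, so the
# airflow needed for a given heat load is Q = P / (rho * cp * dT).
# The rack load below is an assumed figure chosen for illustration.

RHO = 1.2      # density of air at room conditions, kg/m^3
CP = 1005.0    # specific heat capacity of air, J/(kg*K)

rack_load_w = 8000.0   # assumed 8 kW rack

def required_airflow(load_w: float, delta_t_k: float) -> float:
    """Volumetric airflow (m^3/s) needed to remove load_w at a given dT."""
    return load_w / (RHO * CP * delta_t_k)

# With good containment the equipment sees the full supply-to-return dT;
# when cold and hot air mix, the effective dT shrinks and airflow soars.
for delta_t in (12.0, 8.0, 4.0):
    q = required_airflow(rack_load_w, delta_t)
    print(f"dT = {delta_t:>4.1f} K -> {q:.2f} m^3/s ({q * 3600:.0f} m^3/h)")
```

In other words, whenever cold and hot air are allowed to mix, the effective temperature difference across the equipment shrinks and the CRAC fans must move far more air, and burn far more energy, to shift the same heat load.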
CAC can form part of an in-row cooling arrangement, with roof panels and sliding doors installed at either end of the rows. It can be easily retrofitted to existing sites and can accommodate fire suppression systems that are already in place. Because of its design, it can also be built underneath existing cable trays and other structures.
Another option, which works on the same basic principle as CAC, is to isolate the hot exhaust air returning to the CRAC units; a scheme known as Hot Aisle Containment (HAC). However, retrofitting these systems in legacy sites can be challenging, because the hot air must be routed back to the CRAC units through a hot-air plenum (typically in the ceiling void), which may not be feasible in some locations. Furthermore, because the hot air is confined, engineers working at the back of the racks may be subjected to extremely hot conditions.
So, when new projects arise, it’s worth installing cabinets that are already set up to maximise airflow. Good practice includes blanking off unoccupied ‘U’ spaces with blanking panels, as well as filling in the often-overlooked gaps under the cabinets.
In Summary
A meticulous approach to layout, cable management, and airflow control is essential to maximise the energy efficiency of your data centre.
To find out more, please contact Rittal today.