Sustainability and energy efficiency should be the goal for managing every data center. Here’s how two historic adversaries, facility management and information technology, can collaborate on several green initiatives.
From water conservation strategies to implementing smart building technology, facility managers are making big strides toward sustainability. But your success can be slowed by the spaces that house your IT equipment – an area that’s usually not under your direct supervision.
That’s not to say your IT department isn’t doing their part. In fact, migrating your company’s compute workload to the public cloud is a huge step toward a net-zero carbon future. Accenture reports that cloud migrations can reduce CO2 emissions by nearly 60 million tons a year, and that a company’s initial cloud migration alone can reduce carbon emissions by more than 84 percent without any major redesign.
Reductions can be pushed even higher if you improve the efficiency of what remains on the premises. That small server room or network closet that houses telecommunications equipment or backup power systems is still using lots of energy and generating significant heat. The easiest and most common method used to remove that heat and ensure IT uptime is to overcool the space. While effective, it’s inefficient and wasteful – both in terms of money and carbon emissions. But the good news is that your cloud provider has developed cooling optimization best practices and invested in technology to reduce power consumption, and you can employ quite a bit of it in your smaller critical environment.
Managing What You Monitor
While data center cooling is a science in its own right and can be complex, it can be boiled down to a simple rule: what goes in as power must come out as heat. In other words, every kilowatt of power consumed in a computer room becomes a kilowatt of heat that needs to be removed from the computer room and, ultimately, the building. Therefore, it’s important to monitor the power being demanded by IT equipment (the IT load) to properly match it with the cooling being supplied by cooling units (cooling capacity).
This type of monitoring is commonplace in large data centers, but all too often the manager of a small server room doesn’t have that insight into server temperatures and power consumption. To compensate, the admin overcools their systems to prevent infrastructure failure. While effective, this wastes huge amounts of energy.
So how do you know how much power is being consumed by your IT load? And how do you know how much cooling is required to properly cool that load? Answer: with solutions that can monitor both the power and cooling infrastructure in your IT environment.
At the rack, use smart power strips with power meters that can be aggregated and network-attached so more sophisticated software can manage and look at their deployment as a whole. At the room level, utilize a monitoring solution with environmental sensors placed at the tops and bottoms of all cabinets. This will provide a sitewide view of your server room’s thermal performance and humidity. Some monitoring solutions have even taken this a step further with 3D visualizations that will display a digital twin of the room and its real-time thermal characteristics.
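As a simple sketch of what that aggregation looks like in practice, the snippet below rolls hypothetical per-rack power and temperature readings into a room-level view. All rack names, readings, and thresholds here are illustrative placeholders, not any vendor’s API or real deployment data:

```python
# Sketch: aggregate hypothetical per-rack PDU power readings and cabinet
# temperature sensors into a room-level summary. The core idea from the
# article: total IT load (kW in) is the heat (kW out) the cooling must remove.

racks = {
    "rack-01": {"power_kw": 3.2, "temp_top_c": 27.5, "temp_bottom_c": 21.0},
    "rack-02": {"power_kw": 4.8, "temp_top_c": 29.1, "temp_bottom_c": 22.3},
    "rack-03": {"power_kw": 2.1, "temp_top_c": 25.0, "temp_bottom_c": 20.4},
}

COOLING_CAPACITY_KW = 12.0   # hypothetical rated capacity of the room's cooling units
INTAKE_LIMIT_C = 27.0        # hypothetical maximum allowable IT intake temperature

def room_summary(racks, capacity_kw, intake_limit_c):
    """Compare total IT load (which becomes heat) with available cooling capacity."""
    it_load_kw = sum(r["power_kw"] for r in racks.values())
    hot_racks = [name for name, r in racks.items()
                 if r["temp_top_c"] > intake_limit_c]
    return {
        "it_load_kw": round(it_load_kw, 1),
        "headroom_kw": round(capacity_kw - it_load_kw, 1),
        "racks_over_limit": hot_racks,
    }

summary = room_summary(racks, COOLING_CAPACITY_KW, INTAKE_LIMIT_C)
print(summary)
```

With numbers like these in hand, the admin can see cooling headroom directly instead of guessing, which is what makes the optimization steps in the next section safe to attempt.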
Cooling Infrastructure Optimization
Thanks to monitoring, you’ve got the data you need to properly match cooling capacity with IT load (i.e., optimize the cooling infrastructure). Now it’s time to confidently make some cooling infrastructure changes that will translate into cooling energy savings, improved cooling capacity, improved IT equipment reliability, and deferred capital expenditure.
Reduce Fan Speeds: Adjusting fan speeds to meet the changing load requirements is the easiest way to manage operating cost. Variable Frequency Drives (VFDs) and Variable Speed Drives (VSDs) can be retrofitted onto fixed-speed fans, enabling the fan speed to be adjusted based on operating conditions. This small piece of technology can yield BIG results: A 20 percent reduction in fan speed results in a 20 percent reduction in airflow but almost a 50 percent savings in fan energy.
Raise the Temperature: Raise cooling unit temperature set points as high as possible without exceeding the maximum allowable IT equipment intake air temperature. Higher return air temperatures result in increased cooling capacity and improved efficiency.
Manage Humidification: Expand the allowable relative humidity (Rh) band to prevent cooling units from “fighting” with each other (wasting energy because one unit is trying to dehumidify while another is trying to humidify). Humidification is best done at a room level, not by individual cooling units.
Turn Off Excess Units: As you progressively eliminate the need for excessive cooling capacity, cooling units that aren’t equipped with VFDs can be placed in standby mode.
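The fan-speed claim above follows from the fan affinity laws: airflow scales linearly with fan speed, while fan power scales with the cube of speed. A quick check of the numbers (a generic calculation, not tied to any particular VFD or fan):

```python
def fan_savings(speed_fraction):
    """Fan affinity laws: airflow scales with speed, power with speed**3.

    Returns (airflow_reduction, energy_savings) as fractions for a fan
    slowed to `speed_fraction` of full speed.
    """
    airflow_reduction = 1.0 - speed_fraction
    energy_savings = 1.0 - speed_fraction ** 3
    return airflow_reduction, energy_savings

airflow_cut, energy_saved = fan_savings(0.80)  # fan slowed by 20 percent
print(f"Airflow reduced {airflow_cut:.0%}, fan energy saved {energy_saved:.1%}")
```

A 20 percent speed cut trades 20 percent of the airflow for roughly half the fan energy (1 − 0.8³ ≈ 48.8 percent), which is why variable-speed retrofits pay off so quickly.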
There’s no question that these cooling optimization best practices will be a manual and iterative process, but their energy-saving and sustainability potential makes them well worth it.
Lithium-ion Batteries for Backup Power Systems
As your facility heads toward sustainability, using equipment and technology that lasts longer and produces less waste is just as important as reducing your energy usage. Many data centers recognize and address this by using lithium-ion batteries to power their Uninterruptible Power Supplies (UPS).
Admittedly, the initial setup and commissioning costs associated with lithium-ion UPSs are higher than those that use valve-regulated lead acid (VRLA) batteries. But greater environmental responsibility, lower lifetime costs, and greater power density make a li-ion UPS the smart choice for your smaller IT space.
- The life of a VRLA battery is typically 3 to 6 years, while a li-ion battery boasts a service life of 10+ years. That delivers up to 50 percent or more in TCO savings, chiefly because li-ion UPSs require fewer – if any – battery replacements over their lifespan. You save not only on the cost of batteries but on the labor required to replace them.
- Li-ion batteries are 70 percent smaller and 60 percent lighter than VRLA yet have the same energy potential.
- Li-ion batteries charge up to four times faster than VRLA.
- Reduced cooling costs: some li-ion batteries can operate at higher ambient temperatures than VRLA, thereby reducing battery cooling costs by as much as 70 percent.
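A rough way to see where the TCO savings come from is to count battery replacements over the UPS’s life. The service lives below are the figures cited above; the dollar amounts are hypothetical placeholders purely for illustration:

```python
import math

def battery_tco(service_life_yrs, horizon_yrs, battery_cost, labor_cost):
    """Total battery spend over `horizon_yrs`: initial set plus replacements.

    Replacements needed = ceil(horizon / service_life) - 1.
    All costs are hypothetical placeholders for illustration.
    """
    replacements = math.ceil(horizon_yrs / service_life_yrs) - 1
    return battery_cost + replacements * (battery_cost + labor_cost)

HORIZON = 10  # years, matching a li-ion battery's 10+ year service life

vrla_total = battery_tco(service_life_yrs=4, horizon_yrs=HORIZON,
                         battery_cost=2_000, labor_cost=500)   # 2 replacements
liion_total = battery_tco(service_life_yrs=10, horizon_yrs=HORIZON,
                          battery_cost=4_000, labor_cost=500)  # 0 replacements

print(f"VRLA: ${vrla_total:,}, li-ion: ${liion_total:,}")
```

Even with the li-ion battery costing twice as much up front in this sketch, avoiding two replacement cycles (batteries plus labor) leaves it well ahead over the 10-year horizon.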
Making the Case for Micro Data Centers
Many buildings in the U.S. were constructed in the last century, before climate change became a top business priority. While most buildings in metropolitan areas have been remodeled for energy efficiency, the HVAC systems in many smaller-market buildings have not been upgraded. As a result, even after implementing all the previously mentioned IT infrastructure improvements, those edge facilities may not be able to achieve noticeable efficiency improvements. In this circumstance, it may be worth considering a micro data center to meet energy use and carbon reduction requirements.
Not only can a micro data center always match your cooling and energy efficiencies with your IT load, but it will reduce the cooled area from an entire room to an enclosure the size of a refrigerator. Micro data centers combine power, cooling, physical security, and monitoring software in a self-contained rack enclosure that can be up and running in just a few days. If you’re facing space and/or power limitations, a micro data center may end up being the most cost-efficient solution for achieving IT sustainability goals.
Teamwork Makes the Green Work
Businesses of all sizes are making great strides toward efficiency and sustainability. But that progress will eventually stall if facility management and IT management don’t work together to address the power and cooling requirements of IT infrastructure. Now, more than ever, is the time for IT staff and facilities administrators to turn to data center management solutions to gain the requisite clarity into power consumption, significantly reduce energy use, slash mounting energy bills, and mitigate their carbon footprint.
Mario Figurelle ([email protected]) is the Senior Vice President of Critical Environments Group. With two decades of experience, Figurelle is CEG’s expert regarding remote access and monitoring, power monitoring, and all facets of Data Center Infrastructure Management.