The most recent Commercial Buildings Energy Consumption Survey (CBECS) shows that office buildings with data centers have significantly higher computing, cooling, and total electricity intensity (consumption per square foot) than office buildings without data centers. That is not a big surprise, given the large amounts of electricity needed to power the servers and cooling equipment that typically operate around the clock in data centers.
The scale of energy consumption may be more of a surprise, though: estimates show that data center spaces can consume 100 to 200 times as much electricity as standard office spaces. With that much power being used by a growing number of data centers, the pressure is on to implement energy-efficient design measures that can save money and reduce electricity use.
Given the increasing focus on reducing energy use, ASHRAE recently created Standard 90.4-2016, “Energy Standard for Data Centers.”
“Energy use has probably not been the primary concern for data centers in the past, because many were built with greater attention given to the management of risk due to their mission-critical statuses,” said Ron Jarnagin, chair of the 90.4 committee.
Standard 90.4 is a code-intended performance standard designed to work in concert with Standard 90.1, “Energy Standard for Buildings Except Low-Rise Residential Buildings,” which will still provide criteria for other building components of data centers, such as the envelope, lighting, and water heating, said Jarnagin. “The heart of Standard 90.4 is contained in the mechanical and electrical sections and offers a performance-based compliance approach, which focuses on meeting targets for the mechanical and electrical equipment’s energy use. There is also an alternative compliance method that allows tradeoffs between the mechanical and electrical sections as long as the overall system’s design value is met.”
While the new standard applies to both new and existing data centers, it does not specify how to design or retrofit them. As a performance standard, it avoids prescriptive requirements, like airflow rates or types of equipment, in favor of a performance-based approach that is more flexible and less constraining, said Jarnagin. “We worked very hard to craft this standard in a manner that does not stifle innovation in the data center industry while simultaneously offering criteria to help ensure energy savings.”
Innovation is definitely afoot in the data center industry, with manufacturers offering many different types of cooling equipment that consume much less energy. “Computer room evaporative cooling [CREC] is seeing the highest adoption rates today,” said David Roden, cooling product marketing manager, Schneider Electric. “The use of CREC systems helps lower the power usage effectiveness [PUE] by shifting some of the electrical consumption of a typical data center from the cooling infrastructure to IT [information technology] power consumption, which helps to make the data center more efficient.”
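To illustrate the PUE metric Roden refers to, the sketch below computes it for hypothetical loads. PUE is total facility power divided by IT power, so shrinking cooling overhead (or shifting consumption toward the IT load) pulls the ratio toward the ideal of 1.0. The function name and load figures here are illustrative assumptions, not numbers from the article.

```python
# Illustrative PUE (power usage effectiveness) calculation.
# PUE = total facility power / IT equipment power; all values are hypothetical.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Return PUE; 1.0 is the theoretical ideal (all power goes to IT)."""
    return total_facility_kw / it_load_kw

# Hypothetical legacy facility: 500 kW IT load plus 400 kW cooling/power overhead.
legacy = pue(total_facility_kw=900.0, it_load_kw=500.0)    # 1.8

# Same IT load after cutting cooling overhead to 150 kW.
improved = pue(total_facility_kw=650.0, it_load_kw=500.0)  # 1.3

print(f"Legacy PUE: {legacy:.2f}, improved PUE: {improved:.2f}")
```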
High-efficiency direct expansion [DX] systems with economizer modes of operation are also readily available, enabling data centers to be built with very low PUEs while consuming no water, said Jack Pouchet, vice president of market development, Vertiv, formerly known as Emerson Network Power. “We also have new multi-mode chiller plants with economizer modes of operation that enable us to deploy chilled-water systems that are extremely efficient and use much less water than traditional systems with cooling towers.”
While server cooling technology is being driven by location, need, and reliability, there is a definite trend toward using free cooling whenever possible, including evaporative and outside-air approaches, said Scot Seifert, director of sales – data centers, Alfa Laval. “There are also new technologies and ways to implement free cooling, like server immersion cooling or siting data centers underground or in the ocean, but these are not practical for most data centers.”
The new Arctigo low-speed ventilation (LSV) system from Alfa Laval is another way to reduce energy use, as it uses lower air speed to fully saturate data center servers with temperature-controlled air at all times. “This results in a non-pressurized server room without hot spots and with about 30 percent less energy consumption,” said Seifert. “Plus, the equipment is located outside the server room, which makes service access more convenient and secure.”
Even though there are a lot of new cooling technologies available, computer room air conditioners (CRACs) will continue to be important parts of data centers’ infrastructures, noted Roden, but optimization of these units will be key. “Older CRAC units can use a great deal of energy to control humidity, but with ASHRAE’s loosened guidelines for humidity control, facility managers can save energy by simply updating control parameters. With new software and management tools, this can be done automatically by configuring CRAC units to adjust temperature settings and airflow across the data center to provide greater cooling efficiency.”
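The deadband idea behind Roden's point can be sketched as a simple controller check: with a wider allowable humidity range, readings that once triggered energy-hungry humidification or dehumidification now require no action. The band limits and function name below are illustrative assumptions, not ASHRAE figures.

```python
# Sketch: relaxed humidity-control logic for a CRAC unit. The relative-humidity
# band is hypothetical; the point is that a wider deadband means the unit
# spends less energy on humidification and dehumidification.

RH_LOW, RH_HIGH = 20.0, 70.0  # hypothetical widened band, % relative humidity

def humidity_action(rh_percent: float) -> str:
    """Return the action a CRAC controller would take for a given % RH reading."""
    if rh_percent < RH_LOW:
        return "humidify"
    if rh_percent > RH_HIGH:
        return "dehumidify"
    return "none"  # inside the deadband: no energy spent on humidity control

print(humidity_action(45.0))  # none
```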
In addition to replacing — or optimizing — the cooling equipment, there are a number of other ways to improve overall energy efficiency in an existing data center. Many of these strategies are inexpensive, require very little disruption to daily operations, and can address approximately 80 percent of the energy waste in a data center, said Roden. “These include implementing a data center infrastructure management [DCIM] solution to improve power utilization, cooling, and rack capacity by monitoring energy consumption; installing a high-efficiency UPS system; using variable-frequency drives [VFDs] to improve control over HVAC functions; and powering off unused equipment, which is a pretty basic step that often gets overlooked.”
Other energy-saving strategies include managing airflow with blanking plates in the racks and installing various forms of containment to ensure proper delivery of cold air while capturing hot air for return as quickly as possible, said Pouchet. “These are especially effective when coupled with new temperature monitoring systems installed within the IT space; establishing cooling system control based upon delivered air temperature, which enables the cooling units to work together in unison as a complete system; adding or enabling variable speed to the air delivery system; and adjusting the temperature closer to the higher end of the ASHRAE thermal guidelines, such as a nominal 72°-75°F.”
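The setpoint adjustment Pouchet describes can be sketched as a check of delivered air temperature against the nominal 72°-75°F band the article cites: supply air colder than the band indicates overcooling, so the setpoint can be nudged up to save energy. The function and variable names are illustrative assumptions.

```python
# Sketch: comparing delivered (supply) air temperature to the nominal
# 72-75 deg F band mentioned in the article, near the high end of ASHRAE's
# thermal guidelines. Names and logic are illustrative only.

TARGET_LOW_F = 72.0
TARGET_HIGH_F = 75.0

def setpoint_adjustment(supply_temp_f: float) -> float:
    """Suggested setpoint change in deg F: positive = raise, negative = lower."""
    if supply_temp_f < TARGET_LOW_F:
        # Overcooling wastes energy; raise the setpoint toward the band.
        return TARGET_LOW_F - supply_temp_f
    if supply_temp_f > TARGET_HIGH_F:
        return TARGET_HIGH_F - supply_temp_f
    return 0.0  # already within the target band

print(setpoint_adjustment(68.0))  # 4.0 -> raise setpoint by 4 deg F
print(setpoint_adjustment(73.5))  # 0.0 -> no change needed
```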
Payback on these items is typically very fast, said Pouchet, often as quick as a few months for blanking plates and containment to as long as 18-24 months for new control systems with VFDs. “In addition, many regions’ local utilities have incentive programs available to help offset these costs.”
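A simple-payback calculation makes the comparison above concrete: divide installed cost by monthly savings to get months to break even. The cost and savings figures below are hypothetical; only the payback ranges (a few months for blanking plates and containment, 18-24 months for new controls with VFDs) come from the article.

```python
# Simple-payback sketch for the retrofit measures discussed above.
# All dollar figures are hypothetical examples.

def payback_months(install_cost_usd: float, monthly_savings_usd: float) -> float:
    """Months until cumulative energy savings cover the installed cost."""
    return install_cost_usd / monthly_savings_usd

# e.g. blanking plates/containment: low cost, quick payback.
print(payback_months(3_000, 1_000))   # 3.0 months

# e.g. new control system with VFDs: higher cost, longer payback.
print(payback_months(40_000, 2_000))  # 20.0 months
```

Utility incentive programs, where available, would reduce the installed cost and shorten these paybacks further.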
As more data center owners look to reduce energy usage, contractors who are well-versed in all these energy-saving strategies are going to be very much in demand.
“Contractors should understand the various technologies, products, and tools and know how to apply them to existing data centers looking to become more efficient,” said Roden. “Knowing how and when to recommend upgrades, such as containment systems, VFDs, and air economizers, is an important capability for contractors to have. In addition, professional service offerings, such as data center efficiency assessments or cooling optimization services, can lead into follow-on offerings, such as product installation and maintenance services.”
In other words, it is a good time to get involved in providing energy solutions for data centers.
Publication date: 2/6/2017