NREL’s data center is arranged with hot aisle containment. As the server equipment is cooled, it pulls air from the room through the front of the cabinet and blows warmed air out the back of the equipment into the containment aisle. (Credit: Dennis Schroeder)

Data is something computer users often take for granted. All of those ones and zeros have to live somewhere, and their “home” is often a business data center. Like any home, data centers use energy - and lots of it. In 2007, the U.S. Environmental Protection Agency (EPA) published a report forecasting that by 2011, “national energy consumption by servers and data centers could nearly double … to more than 100 billion kWh, representing a $7.4 billion annual electricity cost.”

The data center for the U.S. Department of Energy’s (DOE’s) National Renewable Energy Laboratory (NREL) recently benefited from an “extreme home makeover” as it moved from leased office space to the lab’s ultra energy efficient Research Support Facility (RSF). The lessons learned are expected to help data centers across the nation green up their ones and zeros.

HOW GOOD IS YOUR PUE?

Power usage effectiveness (PUE) is a key metric for determining how green a data center is: it shows how effectively the facility delivers power to its computing. PUE is the total power used by the data center divided by the power delivered to the computer equipment. The best score a data center can earn is 1.0.
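
Because PUE is a simple ratio, the calculation can be sketched in a few lines of Python. The meter readings below are hypothetical values chosen only to reproduce the PUE figures quoted later in this article, not actual NREL measurements.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical meter readings for illustration only
legacy = pue(total_facility_kw=330.0, it_equipment_kw=100.0)  # 3.30
rsf = pue(total_facility_kw=115.0, it_equipment_kw=100.0)     # 1.15

print(f"Legacy data center PUE: {legacy:.2f}")
print(f"RSF data center PUE:    {rsf:.2f}")
```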

“Our PUE has gone from 3.3 in our old location to 1.15 at the RSF; last month our average was 1.12,” said Chuck Powers, manager of NREL’s IT Infrastructure and Operations Group. “A data center is typically considered world class when the PUE reaches 1.3 - we’ve redefined world class. We had a lot to live up to and we’ve been successful.”

Like many of the projects related to NREL’s RSF, the beauty of the energy solutions is in their simplicity. While NREL had the advantage of building the data center from the ground up, the practices applied can be used in data centers new and old, including:

• Managing airflow - optimizing it and reusing it;

• Using energy-efficient cooling techniques;

• Increasing operating temperatures; and

• Upgrading power back-ups.

Unorganized cables impede airflow and keep hot air from being blown past the cables. As part of its energy efficiency practices, NREL organizes the cables in back of the cabinets. (Credit: Dennis Schroeder)

MANAGING AIR AND REUSING IT

Airflow management is very important to greening a data center. “The back of a server rack often looks like a bowl full of spaghetti noodles,” Powers said. “All of those cords everywhere impede airflow and keep the hot air from being blown past the cables. We are very careful to organize the cables in back of the cabinets.”

NREL has taken aggressive measures to manage airflow, including arranging the aisles so that cool air and warm air do not mingle.

The data center is arranged with hot aisle containment. As the server equipment is cooled, it pulls air from the room through the front of the cabinet and blows warmed air out the back of the equipment into the warm aisle. In the RSF data center, the backs of the server rows face each other, and the contained aisle between them is capped by a ceiling with vents that capture the warm air.

“Because of the way we’ve contained the heat, we are able to use the heat from the data center to heat the RSF,” said Powers. For the first Colorado winter in the RSF, the data center provided a significant amount of heating for the 222,000-square-foot facility.

“The hot air from the hot aisle in the data center is 80°F all winter,” NREL Senior Research Engineer Shanti Pless said. “In the winter, we used this waste heat for heating of the RSF’s outdoor air during the day, and at night when the RSF ventilation system is off, it goes to heating the thermal mass in the labyrinth, so that this otherwise wasted heat is available the next day. It is a simple, yet elegant solution that utilizes the building’s concrete structure as a thermal battery.”
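A simplified sketch of that day/night routing decision is shown below. The function names, the ventilation flag, and the exhaust threshold are illustrative assumptions for this article, not NREL’s actual control sequence.

```python
def route_waste_heat(ventilation_on: bool, outdoor_temp_f: float,
                     hot_aisle_temp_f: float = 80.0) -> str:
    """Decide where the data center's ~80 F hot-aisle air goes in winter.

    Illustrative logic only: during occupied hours the waste heat preheats
    the RSF's incoming outdoor air; at night it charges the concrete
    labyrinth, which acts as a thermal battery for the next day.
    """
    if outdoor_temp_f >= hot_aisle_temp_f:
        return "exhaust"  # no heating value; reject the heat outdoors
    if ventilation_on:
        return "preheat_outdoor_air"
    return "charge_labyrinth"

print(route_waste_heat(ventilation_on=True, outdoor_temp_f=30.0))   # preheat_outdoor_air
print(route_waste_heat(ventilation_on=False, outdoor_temp_f=20.0))  # charge_labyrinth
```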

Hot air used to be the bane of data centers, but not anymore. Walking into a 1980s server room often required adding a sweater because the facilities were kept cool to keep the machines from overheating. “Today’s servers can tolerate more heat, with a recommended temperature range of 60-80°F with less than 60 percent humidity,” Powers said. “Increasing the operating temperature to 80°F means you need significantly less cooling and energy. We are continually monitoring our data center temperature to get that number up as high as possible, without impacting our servers.”
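
A minimal monitoring check along those lines might look like the following. The sensor readings are placeholders; the temperature and humidity limits are simply the recommended window Powers cites above.

```python
RECOMMENDED_TEMP_F = (60.0, 80.0)  # recommended server inlet range cited above
MAX_HUMIDITY_PCT = 60.0

def inlet_ok(temp_f: float, humidity_pct: float) -> bool:
    """Return True if a server inlet reading is inside the recommended window."""
    low, high = RECOMMENDED_TEMP_F
    return low <= temp_f <= high and humidity_pct < MAX_HUMIDITY_PCT

# Placeholder readings: raising the setpoint toward 80 F cuts cooling energy,
# so the goal is to run near the top of the window without exceeding it.
for temp, rh in [(75.0, 40.0), (79.5, 55.0), (82.0, 45.0)]:
    status = "OK" if inlet_ok(temp, rh) else "out of range - add cooling"
    print(f"{temp:5.1f} F, {rh:4.1f}% RH -> {status}")
```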

MOTHER NATURE HAS A COOL ROLE

A warmer data center means that in cooler climates like Colorado, Mother Nature can provide natural cooling for most of the year.

“You really have to take an inventory of the natural resources available and leverage those resources to help reduce the cooling, or the power load, for your data center,” Powers said. “In Colorado, 70 percent of the year we can just use direct, filtered air to cool the data center. Roughly 30 percent of the year, we can use energy efficient evaporative cooling to cool the air a little further. There are only an average of 33 hours a year where we see a combination of high heat and humidity that require chilled water to cool the air.”
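The selection logic Powers describes can be sketched as a simple decision based on outdoor conditions. The supply-air target and the wet-bulb approach margin below are illustrative assumptions, not NREL’s actual setpoints.

```python
def cooling_mode(outdoor_db_f: float, outdoor_wb_f: float,
                 supply_target_f: float = 75.0) -> str:
    """Pick the lowest-energy cooling mode for a given outdoor condition.

    Illustrative thresholds only:
      - Outdoor air already cool enough: direct filtered outside air
        (~70 percent of the year in Colorado).
      - Evaporation can reach the target: evaporative cooling
        (~30 percent of the year).
      - Otherwise: chilled water (~33 hours a year on average).
    """
    if outdoor_db_f <= supply_target_f:
        return "direct_outside_air"
    if outdoor_wb_f <= supply_target_f - 5.0:  # assumed approach margin
        return "evaporative"
    return "chilled_water"

print(cooling_mode(outdoor_db_f=65.0, outdoor_wb_f=50.0))  # direct_outside_air
print(cooling_mode(outdoor_db_f=90.0, outdoor_wb_f=60.0))  # evaporative
print(cooling_mode(outdoor_db_f=95.0, outdoor_wb_f=78.0))  # chilled_water
```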

Colorado also has the advantage of an average of 300 days a year of sun. Photovoltaics (PV) are installed on the roof of the RSF and related parking structures. The combined 2.5-megawatt system will offset the annual energy usage for the entire RSF, the parking areas, and the data center.

Because of Colorado’s cool and dry climate, NREL can just use direct, filtered air, drawn in via this intake, to cool the data center 70 percent of the year. (Credit: Dennis Schroeder)

BOOSTING EQUIPMENT ENERGY EFFICIENCY

Moving the data center provided NREL with the opportunity to replace equipment to increase its energy efficiency.

“I had two years to begin the whole replacement cycle before our data center moved to RSF,” Powers said. “We replaced traditional servers with blade servers - that saved us 30 percent on our power. We also were able to virtualize, or take the workload that used to run on 20 or more servers and put it on one energy efficient blade. We went from 302 watts per server to 10.75 watts per server at a 20:1 ratio, a significant reduction in power requirements for servers.”
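The per-server numbers Powers cites work out as shown in the back-of-the-envelope sketch below; the blade’s total power draw is inferred from the quoted figures rather than stated in the article.

```python
legacy_watts_per_server = 302.0  # traditional server, as quoted
consolidation_ratio = 20         # workloads per blade (20:1)
watts_per_workload = 10.75       # as quoted after virtualization

# Implied blade power draw: 20 workloads x 10.75 W = 215 W per blade
blade_watts = consolidation_ratio * watts_per_workload

# Power to run 20 workloads before vs. after consolidation
before = consolidation_ratio * legacy_watts_per_server  # 6,040 W
after = blade_watts                                     # 215 W

print(f"Implied blade draw:  {blade_watts:.0f} W")
print(f"20 workloads before: {before:,.0f} W")
print(f"20 workloads after:  {after:.0f} W")
print(f"Reduction:           {(1 - after / before):.1%}")
```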

Another major energy savings for NREL was found in the data center’s UPS, or uninterruptible power supply. NREL’s old UPS was 80 percent energy efficient; the new one is 97 percent efficient.

“On a 100 kilowatt (kW) load, right off the top we saved 17 kW, or a 17 percent reduction in energy consumption,” Powers said. “The old UPS produced an additional 20 kW of heat that needed to be cooled. We are experiencing a 19 percent reduction in our total data center power requirements by replacing the UPS with one that is ultra energy efficient.”
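Powers’ figures follow directly from the efficiency difference. The sketch below uses the same framing as the quote, reading the 100 kW as the power drawn through the UPS; that interpretation is an assumption made here for the arithmetic to line up.

```python
input_kw = 100.0       # power drawn through the UPS in the quoted example
old_efficiency = 0.80  # legacy UPS
new_efficiency = 0.97  # replacement UPS

old_loss_kw = input_kw * (1 - old_efficiency)  # 20 kW rejected as heat
new_loss_kw = input_kw * (1 - new_efficiency)  #  3 kW rejected as heat
savings_kw = old_loss_kw - new_loss_kw         # 17 kW, "right off the top"

print(f"Old UPS losses: {old_loss_kw:.0f} kW (heat the cooling plant must remove)")
print(f"New UPS losses: {new_loss_kw:.0f} kW")
print(f"Direct savings: {savings_kw:.0f} kW ({savings_kw / input_kw:.0%} of the load)")
```

The total reduction Powers reports (19 percent) is larger than the direct 17 percent because the data center no longer has to spend cooling energy removing the old UPS’s waste heat.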

The new data center has reduced NREL’s data-related carbon emissions by almost 5 million pounds per year and cut operating costs by $200,000 per year.

Much of what was done at NREL can be repeated in any data center. “There is tremendous opportunity here for retrofits as well,” said Powers. “NREL is now being asked to help other organizations optimize their data centers. A lot of what we have done can significantly improve the energy efficiency in existing data centers, and many of the practices can be implemented at low cost.”

Publication date: 07/18/2011