Not so many years ago, it was fairly simple: You were called on to install a couple of computer room air conditioning (CRAC) units up against the wall of the data center. You connected the piping, controls, and electrical service; performed a routine startup of the new units; and, unless you sold a preventive maintenance agreement, you were done.
At that time, engineers and data center operators worked on the premise of how many watts per square foot the information technology (IT) equipment would consume and, therefore, reject into the room in the form of waste heat.
The calculations were simple enough; the conversion from watts to Btuh was made, and the appropriate tonnage CRAC units were ordered and installed. This process worked pretty well for many years, and even though the CRAC unit placement was not ideal, usually there was enough tonnage to totally saturate the room with cold air. The mindset concerning electrical costs was, “They are what they are.”
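That watts-to-tonnage arithmetic is worth seeing once. A minimal sketch, using the standard conversion factors (1 W = 3.412 Btuh; 1 ton of refrigeration = 12,000 Btuh) and a made-up room as the example:

```python
def watts_to_btuh(watts):
    """Convert an electrical load in watts to Btu per hour (1 W = 3.412 Btuh)."""
    return watts * 3.412

def btuh_to_tons(btuh):
    """Convert Btuh to tons of refrigeration (1 ton = 12,000 Btuh)."""
    return btuh / 12_000

# Hypothetical example: a 10,000-sq-ft room planned at 50 W/sq ft.
load_watts = 10_000 * 50              # 500 kW of IT load
load_btuh = watts_to_btuh(load_watts)
tons = btuh_to_tons(load_btuh)
print(round(load_btuh), round(tons, 1))  # ~1,706,000 Btuh, ~142 tons
```

Round the result up to the next available CRAC unit sizes, add redundancy, and the old-school cooling plant was specified.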
FASTER AND HOTTER
In most businesses, as in life, rules change. Moore's Law, in its popular paraphrase, holds that computing capacity, meaning the work done by a computer, doubles roughly every 18 months; therefore, the amount of heat given off by the computing equipment rises as well.
That law drives many decisions in the IT field, and as a result, cooling those very same data centers has taken on a whole new complexity. We can no longer work from watts-per-square-foot calculations; today it is all about kW per rack of IT gear. The introduction of larger-capacity, faster, and therefore hotter computers (especially blade-style servers) has forced many engineers to reexamine the physics of cooling the data room.
The focus now, and going forward, is heat removal and, in particular, capturing the heat as close to the source as possible. Today's CRAC units do not sit against a wall at the perimeter of the data center; they are strategically located within the rows of IT equipment, or otherwise positioned to ingest as much hot air as possible - the higher the return air temperature, the more efficient the overall operation.
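The payoff of a hotter return shows up in the common sensible-heat rule of thumb for standard air, Btuh ≈ 1.08 × cfm × ΔT: for the same heat load, a wider supply-to-return temperature difference means less airflow, and therefore less fan energy. A rough sketch with hypothetical numbers:

```python
def cfm_required(btuh, delta_t):
    """Airflow (cfm) needed to remove a sensible load at a given
    supply-to-return temperature difference (deg F), using the
    standard-air rule of thumb: Btuh = 1.08 * cfm * delta_t."""
    return btuh / (1.08 * delta_t)

load = 340_000  # Btuh; a hypothetical row of racks (~100 kW)
for dt in (10, 20):
    print(f"dT={dt} F -> {round(cfm_required(load, dt)):,} cfm")
```

Doubling the temperature difference halves the required airflow, which is exactly why capturing undiluted hot air at the rack beats mixing it with cold supply air on its way back to the unit.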
The units themselves no longer resemble air conditioners. More often than not they look just like a rack of IT equipment, with the same dimensions and, in many cases, the same name brand.
The current mindset regarding electrical cost seems to be, “We must be as efficient as possible in order to pay as little for power as possible.”
Many existing data centers have an abundance of heat-removal capacity, read as tons. The problem is the distribution of the supply air and the return air (which should be as hot as you can get it). Think about this: It is highly inefficient to mix the supply and return air of any mechanical system; you wind up extracting only a portion of the heat from the room into the airstream, where it can then be transferred into the chilled water or refrigerant and finally expelled from the room.
SAVING MONEY
So how do you, as a service or installation partner for your IT-savvy clients, help them save money? As always, start by asking questions. What kind of input may the client need from you? Could you possibly save the client operating dollars by studying the layout of the IT gear and making placement suggestions based upon heat load? Would it make sense to duct or reduct either the supply or return air in an existing facility? Is this the only site they have in operation, or is there a disaster recovery (DR) site elsewhere?
Start looking around on the jobsites you go to; what is the IT staff installing and working on? Are the rows of equipment placed in a hot aisle-cold aisle design? Do they have to really turn the set points down low to keep the cooling on for longer cycles? Are the CRAC units fighting each other, meaning some are cooling, some are reheating, and others are fighting the humidity set point?
All of these are signs of inefficiency and waste. What happens if you lose a compressor due to short cycling of air caused by poor perforated-tile placement? Will they drop the critical load? What are their expectations of uptime? Is it 99.999 percent of the time? If so, that still allows about 5¼ minutes a year of outage; is that OK? Can you commit to responding to those requirements? What tier level does their company commit to with their customers - tier one, two, three, or maybe even four?
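The five-nines figure above is simple arithmetic: allowed downtime is (1 − availability) × minutes in a year. A quick sketch so you can quote the numbers for any availability target:

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

def downtime_minutes(availability_pct):
    """Allowed downtime per year, in minutes, at a given availability
    percentage (e.g. 99.999 for 'five nines')."""
    return (1 - availability_pct / 100) * MINUTES_PER_YEAR

for pct in (99.9, 99.99, 99.999):
    print(f"{pct}% -> {downtime_minutes(pct):.2f} min/yr")
```

At 99.999 percent that works out to roughly 5.26 minutes a year, which is where the figure in the text comes from; each added nine cuts the budget by a factor of ten.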
BE A RESOURCE
HVAC is only a slice of the pie that critical facilities managers are responsible for; make it easy for them to communicate their needs. By learning what drives their business, maybe you can figure out a way to help them stay online, so they come to view you as a resource, not just the heating and air guy.
You must also be highly cognizant of any issues associated with the green movement - in particular, are any of the data centers you are involved with connected in any way to a LEED-certified building? If so, what impact may that have on operations, operating procedures, chemicals for coil cleaning, etc.? Are the CRAC units draining their condensate into a reservoir for irrigation water or cooling tower makeup water?
These are just a few of the interwoven complexities of operating a green building and an efficient data center, while ensuring that a balance can be struck with all the requirements listed by an ever-growing list of certification agencies (and oh, by the way, keeping the equipment online and removing heat from the critical space).
By showing your willingness to step outside of the normal HVAC contractor stereotype, you may be brought in on some planning for future projects and become an ever-more-important partner to your clients. Isn’t that where we all want to be?