New Technologies, Consumer Demand Drive Data Center Growth
Cutting-edge equipment and approaches emphasize speed, energy efficiency, and cost savings
Not that long ago, data servers were shoved into closets or empty offices as building owners and managers pressed every available space into service to house an ever-expanding amount of computer equipment. While some of these data closets probably still exist, many companies have turned to dedicated data centers to satisfy their never-ending appetites for more data storage and management.
These data centers take varying forms, from enterprise-owned facilities to co-location data centers to cloud computing to edge data centers. Regardless of the type, data centers are being designed with innovative techniques that speed up deployment while increasing energy efficiency.
The data center market is going strong, and there are no signs it is slowing down, said Joe Reele, vice president, datacenter solution architects, Schneider Electric.
“Certainly, there are areas, such as enterprise data centers, that may not be growing as fast, but there are other areas, like the edge, that are poised for prolific growth,” he said. “A lot of this growth can be attributed to the continued evolution of the Internet of Things [IoT] and the proliferation of smart devices.”
Indeed, this is one of the biggest changes that has taken place in the data center market. Instead of businesses driving the demand for data centers, consumer technologies, such as smartphones and tablets, are driving growth.
“The more tablets consumers use, the more powerful smartphones become and the more social media that’s being consumed — all of these elements are playing into data center growth,” Reele said. “When combined with technologies that have not been fully developed and deployed on a large scale, like autonomous cars and same-day delivery drones, there is potential to drive even more growth.”
While cloud computing and storage will likely account for some of this growth, companies continue to maintain some assets in their own data centers or co-location facilities. In fact, 70 percent of organizations still maintain their own data centers, said Edward Henigin, CTO, Data Foundry. “For companies wanting to move away from an on-premises data center, we’ve noticed that a regional co-location facility is the most desirable choice. Companies are looking for a premier facility near their offices that provides resilient infrastructure, redundant utilities, and 24/7 security. That’s why we’ve seen significant growth in the Austin and Houston markets.”
They also usually want the co-location facility to provide an on-ramp to the cloud, noted Henigin. “This hybrid model, which combines private and public cloud resources, allows companies to maintain control of their critical infrastructure while dynamically using resources in the cloud.”
Edge data centers are also becoming a larger trend as companies find that, to provide the service their customers require, they need to keep data as close to the end user as possible.
“This phenomenon is driving the need for new data centers to be constructed in nontraditional locations, such as Minneapolis; Nashville, Tennessee; Raleigh, North Carolina; and a host of others,” said Jose Ruiz, vice president of operations, Compass Datacenters. “Of course, data center space continues to be coveted in major metro areas like Chicago, Dallas, Silicon Valley, and northern Virginia as well.”
SPEED AND EFFICIENCY
With the growing demand for data centers, speed of deployment has become a key requirement for end users.
“The accelerated delivery time frames for new data centers have placed greater attention on the methods used to add capacity. As such, we are seeing the use of items like pre-cast walls and power rooms being built off-site and being delivered on a just-in-time schedule,” said Ruiz. “In today’s hyperscale environments, customers are looking to be able to rapidly add capacity as they need it to existing facilities.”
In addition to utilizing prefabricated components, organizations are looking for designs that are more efficient than traditional brick-and-mortar solutions, said David Johnson, senior vice president, DAMAC.
“This includes the design of the racks and power distribution, the placement and the quantity of CRAC [computer room air conditioning] units, and what will be used for redundancy,” Johnson said. “There are also demands for greater energy efficiency and sustainability.”
This demand for greater energy efficiency is causing an evolution in cooling system design for data centers. While CRAC units are still the most accepted method of cooling, data center operators are constantly looking for alternatives, said Johnson.
“For example, there are data centers in the high desert that, instead of utilizing CRAC units, use the ambient air to cool the data center,” Johnson continued. “They’re simply moving air from the outside, filtering it, pushing it through the center, and evacuating the hot air, so there is virtually no mechanical cooling required.”
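The decision behind this free-air (airside economizer) approach can be sketched as a simple control check on outside conditions. This is a hypothetical illustration, not any facility’s actual control logic; the function name and thresholds are invented for the example.

```python
# Hypothetical airside-economizer decision sketch. Thresholds are
# illustrative only -- real facilities tune these to local climate
# and equipment tolerances.

def cooling_mode(outside_temp_c: float, outside_rh_pct: float,
                 supply_setpoint_c: float = 24.0) -> str:
    """Choose between free-air cooling and mechanical (CRAC) cooling."""
    if outside_temp_c <= supply_setpoint_c - 2.0 and outside_rh_pct <= 60.0:
        return "economizer"   # filter outside air and push it through the hall
    if outside_temp_c <= supply_setpoint_c + 5.0:
        return "mixed"        # blend outside air with mechanical cooling
    return "mechanical"       # fall back to CRAC units

print(cooling_mode(18.0, 40.0))  # cool, dry desert air -> "economizer"
print(cooling_mode(35.0, 20.0))  # hot afternoon -> "mechanical"
```

The appeal is that for much of the year in a cool, dry climate, the first branch applies and the compressors never run, which is where the energy savings Johnson describes come from.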
Regional regulations are also driving cooling systems to be more energy efficient. For example, the Pacific Northwest restricts the use of refrigeration, so data centers there must have economizer designs built into their cooling systems to drive efficiency, said Reele.
“Of course, IT infrastructure, architecture, deployment, and design are the main drivers behind new cooling technologies,” he said. “We are figuring out ways to increase compute without necessarily increasing the cooling need. Still, we see server and rack density driving a more point-of-use [POU] approach instead of traditional perimeter cooling.”
Other significant changes in data center cooling infrastructure include the advent of pumped refrigerant cooling systems, which displace traditional systems, said JP Valiulis, vice president of North America thermal management product strategy and marketing, Vertiv. “Another is the introduction of new types of heat exchangers, such as aluminum, epoxy-coated heat exchangers for large air handling systems, which are far superior in efficiency to heat wheels and legacy heat exchangers. Packaged, contained cooling systems and modular designs are also experiencing growing demand.”
Perhaps more important, noted Valiulis, are the advances in thermal system controls, which use hundreds of sensor inputs to provide a depth and richness of monitoring unheard of in the past.
“Machine learning can anticipate and tightly manage thermal system controls while automated routines can avoid thermal system problems and reduce the likelihood of human error,” he said.
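The core idea behind this kind of automated routine can be sketched in a few lines: watch each sensor against its own recent history and flag readings that deviate sharply, before a human would notice. This is a minimal sketch using a rolling z-score; production thermal-control systems use far richer models, and the class and parameter names here are invented for the example.

```python
# Minimal sketch of automated thermal monitoring: flag a sensor whose
# latest reading deviates sharply from its own recent history.
# Window size and threshold are illustrative assumptions.

from collections import deque
from statistics import mean, stdev

class SensorMonitor:
    def __init__(self, window: int = 30, threshold: float = 3.0):
        self.threshold = threshold
        self.history = deque(maxlen=window)  # recent readings only

    def check(self, reading: float) -> bool:
        """Return True if the reading looks anomalous vs recent history."""
        anomalous = False
        if len(self.history) >= 5:  # need some history before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(reading - mu) / sigma > self.threshold:
                anomalous = True
        self.history.append(reading)
        return anomalous

monitor = SensorMonitor()
readings = [22.0, 22.1, 21.9, 22.0, 22.1, 22.0, 22.1, 29.5]  # sudden hot spot
flags = [monitor.check(r) for r in readings]
print(flags[-1])  # the spike is flagged: True
```

Chaining hundreds of such monitors to rack-level setpoints is one way an automated routine can act on a developing hot spot without waiting for an operator, which is the human-error reduction the quote describes.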
Reducing human error is of paramount importance in a data center, because if the right people and processes are not in place to take manual control of the infrastructure when something happens (and something will happen), the data center cannot be considered reliable, said Henigin.
“A data center can have top-of-the-line equipment with no single point of failure, but if they don’t have the right people and procedures, it is all for naught,” he said. “Human error is one of the primary causes of downtime.”
While human error is always a concern, the move toward standardizing data centers with the same design and general requirements is well underway and should help minimize some of these issues.
“The core components will remain the same — power, cooling, security, etc. — but they will just be deployed in a scalable, prefab modular architecture to optimize space, efficiency, and reliability,” said Reele. “And there will be lots of them.”
Publication date: 8/14/2017