Given the false starts in the 1980s and the largely unfulfilled promises of the 1990s, being fashionably tardy for the 2000s is refreshing. It would be far too easy to point fingers at those institutions and individuals who delayed the implementation of meaningful technology in our industry. Instead, let’s talk of how things have come together and the resultant benefits to building owners, occupants, and the enterprises that need comfortable, efficient, and productive environments.
Computerized building automation control systems have run a parallel, if compressed, path. In the early 1970s, we struggled with developing automated alternatives to the electrical and mechanical devices relied upon for comfort and safety. Then we sold them aggressively, but found that many building owners and operators were unaware of the benefits that could be derived from automation. So marketing became a requirement, with the focus on saving energy dollars, increasing productivity, and reducing requirements for facilities staff.
In the mid-1990s, we began to get serious about providing communication links between systems (connectivity and interoperability became common terms of desire), using open protocols and standard networks — and even a little bit of access via the Internet. With a few notable exceptions, though, information on the building and conditions within it was only delivered to the facilities department, and was generally ignored by upper management as being of little importance to the health and welfare of the enterprise. Or so it seemed.
First, we found that simply delivering building systems data to management was not enough. For it to be meaningful to business managers, that data must be condensed into smaller pieces connected to outcomes.
In other words, we had to translate data into consequences. So, telling a senior accountant that the vibration sensor on the motor side bearing of a 1,000-ton chiller now reads 0.29 inches per second at the primary frequency of the device and nearly half that at the first harmonic is like telling your golden retriever that your feet are cold when what you really want is for the dog to get your slippers. The information provided should support the decision that funds are necessary to repair the chiller now, as opposed to a more costly and extensive rebuild that would result from ignoring the situation. (See example in sidebar below.)
The choice of action may not always be this obvious, but it certainly bears a close resemblance to “get the slippers.”
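The "get the slippers" translation can be sketched in code. This is a minimal, hypothetical example: the threshold values below are invented for illustration, not drawn from any vibration standard or chiller manufacturer's documentation.

```python
# Illustrative sketch: turning a raw vibration reading into a plain-language
# maintenance recommendation a manager can act on. Thresholds are hypothetical,
# for illustration only; a real system would use the limits published by the
# equipment manufacturer.

def recommend_action(velocity_in_per_sec: float) -> str:
    """Map a bearing vibration reading (inches per second) to a recommendation."""
    if velocity_in_per_sec < 0.10:
        return "Normal: no action required."
    if velocity_in_per_sec < 0.25:
        return "Elevated: schedule inspection at the next planned shutdown."
    if velocity_in_per_sec < 0.40:
        return "High: replace the bearing this weekend to avoid an unplanned rebuild."
    return "Severe: shut down the chiller and repair immediately."

print(recommend_action(0.29))  # the reading from the example above
```

The accountant never sees 0.29 in/s; the system delivers only the consequence and the recommended action.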
Second, we found that delivering the information to a dedicated workstation, even if information from all building systems was consolidated there, is not enough. Information has to be available to management on an anywhere, anytime basis. At the very least this means on every computer on every desktop and away from the office via a virtual private network. Even facilities supervisors are ignoring the computer systems in their control rooms in favor of having critical alarms forwarded directly to their pager or wireless phone.
Third, we found that information must be easy to access, with a user interface tailored to the needs of the particular class or function of the user. Static HTML pages accessed over the Web are fine for looking up the start time of a movie at the local cineplex, but a richer, more intelligent user interface must be delivered if third parties are to take advantage of the free flow of information from building systems.
Fourth, we found that the delivery of information to individuals or groups of individuals is not enough. While management decision-making is aided by the quick, easy, and intuitive delivery of appropriate information to individual users, the future of information flow is much more dependent upon delivery from computer to computer. This provides further opportunity for consolidation and analysis, as well as setting the foundation for intelligent systems applications (which were promised by a number of manufacturers in the marketing phase of our industry).
Finally, we learned that whatever the technological platform chosen to deliver the benefits derived from these wants and needs, it must be compatible and fully integrable with the information technology (IT) infrastructure that exists in the enterprise today and tomorrow. The IT department is looked upon as a provider of services for the safe and efficient transportation of digital information.
Attacking the problem in reverse order makes sense. To facilitate the use of the IT infrastructure, it is necessary for the building systems to communicate over the Internet Protocol (IP) networks that have become ubiquitous in our business enterprises. This means complete compliance with the transport protocol so that all hubs, switches, and routers are compatible and no special equipment is necessary. It also probably means that any devices sharing the network will work best if based upon a standard operating system and hardware platform. This would certainly include, and largely be limited to, standard operating systems from Microsoft, Apple, or Linux, and hardware that matches the current state of the art for PDA-, PC-, and server-class machines.
Delivering information across the network to people or other computers is accomplished through Web services. In the past, the building controls industry developed several standard protocols with the promise of system integration and interoperability: LonTalk, BACnet, and N2. Each protocol has strengths and weaknesses in terms of what it can do, but each requires hours of programming and individually customized solutions to deliver the connections it promises.
Web services allow two or more applications to share information and work cooperatively over the Internet, using a common language called XML. Several companies provide toolsets for creating Web services — IBM with WebSphere, Sun Microsystems with Java (J2EE), and Microsoft with .Net. Johnson Controls chose to work with Microsoft and incorporate the .Net framework into the Metasys platform. A toolset provided with .Net is designed to help develop applications that have Web services built right into them, so you don’t have to understand all the details of the code, or physically write all the code. It is designed to allow someone to come up to speed fairly quickly to develop applications with Web services built in.
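The "common language" point can be made concrete with a small sketch of the kind of XML payload two applications might exchange. The element and attribute names below are invented for illustration — they are not from any actual Metasys or .Net schema.

```python
# Minimal sketch of XML as a shared language between two applications.
# One side serializes a reading into XML; the other parses it with no
# custom protocol code. Element names (chillerStatus, vibration) are
# hypothetical, chosen only for this example.
import xml.etree.ElementTree as ET

# The building system side builds the message...
status = ET.Element("chillerStatus", id="CH-1")
ET.SubElement(status, "vibration", units="in/s").text = "0.29"
payload = ET.tostring(status, encoding="unicode")

# ...and the receiving application reads it using any standard XML parser.
received = ET.fromstring(payload)
reading = float(received.find("vibration").text)
print(reading)  # 0.29
```

Because both ends agree only on the XML structure, neither needs to know anything about the other's internal protocol — which is exactly what the older point-to-point integrations lacked.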
We believe the key to delivering information across the Web to a distributed user interface is the ability to leverage a Web-based system as opposed to a Web-enabled system. The difference lies in how the system is constructed. Web-enabled means a system to which browser access has been added: a Web server has been bolted on to give the user a view of some of the information in connected devices.
Web-based means that at its core, in the way the system is built, the Web is a central component of how the system works and communicates. Web access is not an add-on. It’s inherent to the system’s design.
There are some things you cannot do in a Web-enabled environment. When you pull up a Web page, there may be only a couple of things you can do. Often, you’re limited to a certain framework, and you can’t add or delete a lot of views or information. You’re basically looking at an interface for viewing information, but you are not necessarily able to take action.
With a Web-based system, not only can you view that information within a browser, you’re able to take action: to respond and acknowledge alarms, to command points, and to do tasks.
This is a fundamental difference between the building automation systems (BAS) of today and the new systems we are developing. New capabilities allow a user to bring up multiple screens in a browser. You can detach different pieces of a screen within a browser framework, providing a better way of assembling information and of maximizing the value of your screen real estate.
A Web-enabled system does not permit that. Similarly, a Web-based system can provide command and control capability for all points connected to the network, while a Web-enabled alternative will be limited by Web page design and server capabilities. And you don’t have to load a bunch of software to get access to a system. At most, all you need is a plug-in to your browser and away you go.
The workstation is dead — long live the distributed, Web-based user interface. By delivering a Web-based system with a truly flexible and complete distributed user interface, there is no need for dedicated workstations except for the most critical systems, where life safety or validation requirements are best served by such an implementation. In all other cases, it is more cost effective and more user friendly to hop on the Web and get the information required, anywhere, anytime.
In health care, if I run a hospital and my mission and goal is quality patient care, does my facility impact that mission? Definitely. It has an impact in terms of the overall environment, integrating information about environmental conditions such as temperature and humidity into a comprehensive patient care picture.
For example, it means being able to link room scheduling with patient medical records, so you can make sure the room meets the requirements of that patient. That might mean assuring negative pressure for infectious disease control, or providing a nurse the information necessary to determine whether a patient who is complaining about being cold is in a room with a below-normal temperature, or has some underlying medical problem. Instant access to information about all aspects of the environment allows changes to be made that keep staff and patients satisfied.
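A check like the one described above can be sketched in a few lines. The record layout and field names here are invented for illustration; a real system would draw them from the scheduling and medical-record databases.

```python
# Hedged sketch: does a scheduled room satisfy a patient's environmental
# requirements? Field names and values are hypothetical examples.

room = {"number": "412", "pressure": "negative", "temp_f": 71.5}
patient_requirements = {"pressure": "negative", "temp_min_f": 70.0, "temp_max_f": 75.0}

def room_suits_patient(room, req):
    """Return True when the room's conditions meet the patient's requirements."""
    return (room["pressure"] == req["pressure"]
            and req["temp_min_f"] <= room["temp_f"] <= req["temp_max_f"])

print(room_suits_patient(room, patient_requirements))  # True
```

The value is not in the check itself but in the linkage: the environmental data and the patient requirements live in different systems, and Web services let one system ask the other the question.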
Another opportunity for delivering value to the enterprise reflects the bi-directional capabilities of Web services. Computerized maintenance management systems (CMMS) and providers of remote monitoring systems have traditionally relied on the BAS to push data to them based upon alarms and predefined limits for total run hours or the number of operations. A Web services-based solution would allow the BAS to push data to these systems as usual, but would also allow the BAS to be interrogated by the CMMS or remote monitoring computers for other vital systems data.
The result of this push-pull strategy would be better maintenance procedures based upon analysis of key performance indicators as opposed to simple work order preparation based only upon fixed events. Expansion of these principles to energy analysis, emergency response evaluation, and occupant comfort analysis as it applies to productivity would be easy to implement.
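The push-pull idea can be sketched as follows. The class and method names are invented for illustration; an actual integration would run over Web service calls rather than in-process callbacks.

```python
# Sketch of push-pull: the BAS pushes alarms to subscribers as before,
# while the CMMS can also pull ("interrogate") run-time data on demand.
# All names and figures here are hypothetical examples.

class BAS:
    def __init__(self):
        self.run_hours = {"CH-1": 8760, "AHU-3": 4300}
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def raise_alarm(self, point, message):
        # Push: the BAS notifies every subscriber when an alarm occurs.
        for notify in self.subscribers:
            notify(point, message)

    def query_run_hours(self, point):
        # Pull: another computer interrogates the BAS for vital data.
        return self.run_hours[point]

class CMMS:
    def __init__(self):
        self.work_orders = []

    def on_alarm(self, point, message):
        self.work_orders.append((point, message))

bas = BAS()
cmms = CMMS()
bas.subscribe(cmms.on_alarm)

bas.raise_alarm("CH-1", "High bearing vibration")  # push, as today
hours = bas.query_run_hours("CH-1")                # pull, the new capability
print(cmms.work_orders, hours)
```

The pull side is what makes key-performance-indicator analysis possible: the CMMS can ask for data on its own schedule instead of waiting for a fixed event.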
Another example is an airport, where flights come in throughout the day and night. If the BAS can automatically tap into the flight information system, it can turn on the lights in the right gate area, turn on the power, and bring the space to a comfortable temperature in time for the flight to arrive, or travelers to gather to wait to depart. Then the energy consumption information can be sent back to the plant management system in order to bill that particular airline for the energy costs. You also have the ability to take that data and pump it back to a hotel. Airlines could provide flight information as a service to the hotels with which they have joint marketing agreements.
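The airport scenario can be sketched as two small functions: one selects gates whose arrivals fall inside a pre-conditioning window, the other prices the metered energy for billing. The lead time, the utility rate, and the flight records are all assumptions made for this example.

```python
# Illustrative sketch of the airport scenario: flight data drives gate
# pre-conditioning, and metered energy is billed back to the airline.
# The 45-minute lead time and $0.09/kWh rate are assumed values.
from datetime import datetime, timedelta

PRECONDITION_LEAD = timedelta(minutes=45)  # assumed time to reach setpoint

def gates_to_precondition(flights, now):
    """Return gates whose arrival falls within the pre-conditioning window."""
    return [f["gate"] for f in flights
            if now <= f["arrival"] <= now + PRECONDITION_LEAD]

def energy_charge(kwh, rate_per_kwh=0.09):
    """Price the metered consumption for billing the airline."""
    return round(kwh * rate_per_kwh, 2)

now = datetime(2003, 10, 6, 14, 0)
flights = [
    {"gate": "B12", "airline": "XYZ", "arrival": now + timedelta(minutes=30)},
    {"gate": "C04", "airline": "ABC", "arrival": now + timedelta(hours=3)},
]
print(gates_to_precondition(flights, now))  # ['B12']
print(energy_charge(150))                   # 13.5
```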
Welcome to the 21st century.
Hoffman is with Johnson Controls Inc., Milwaukee.
Sidebar example: the chiller bearing decision.

1. Have the bearing replaced this weekend or next:
Unplanned maintenance cost = $3,300.
Office downtime = 0 hours.

2. Chiller failure during productive office hours:
Chiller rebuild cost = $29,000.
Office downtime = 8 to 24 hours, depending upon time of failure.
Resultant productivity impact = $72,000.
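The arithmetic behind the sidebar is worth making explicit, using the figures given above:

```python
# Planned bearing replacement versus waiting for an in-hours failure,
# using the sidebar's figures.
planned_cost = 3_300            # unplanned-maintenance bearing replacement
failure_cost = 29_000 + 72_000  # rebuild plus lost productivity
print(failure_cost - planned_cost)  # 97700 in avoided cost
```

Spending $3,300 now avoids a potential $101,000 outcome — the kind of consequence-level framing a business manager can act on.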
Publication date: 10/06/2003