Best Practices for Building Analytics
by Jon Schoenfeld, September 2017
This article was originally published in Automated Buildings.
The term “analytics” is thrown around a lot these days. From enhanced visualization to fault detection and diagnostics, analytics can be used to describe it all. And it’s all progress in our quest to make use of the data we collect from modern buildings. However, our ultimate goal is to generate the correct response from the data we’ve analyzed. Analytics that either inform an action of trained operators or automatically improve a system’s operation are called Responsive Analytics, and they’re the end game we’re all working toward. Responsive Analytics are those solutions that drive positive, productive change in our facilities. We use the following best practices to make our analytics more responsive.
Find what matters most to you, quickly and easily
The needs of each company, institution, or municipality vary. Even individual stakeholders may require different information or consider certain findings more important than others. While some focus entirely on energy usage, others are driven by occupant comfort or equipment reliability. Analytics can certainly provide insight for all, but weeding through what doesn't interest you should not be a chore. The first step in a successful analytics approach is a carefully thought-out organization of the rules and reports that are actually needed, so each company or stakeholder can quickly and easily find what matters most to them.
Beyond a classification of rules, there must also be a way to prioritize the findings of your analytics engine. Ideally, priority would be a proxy for the financial impact of the issue, but that's not always possible. What's the cost of uncomfortable tenants? And what is more important, the financial impact of a past event or the future impact if it isn't resolved? However you evaluate priority, simply assigning a High, Medium, or Low value to your rules will not suffice.
Consider this: You log in to your alarm console and look back at the past seven days. You see that a “Medium” priority issue has been happening 24 hours a day for the whole week. During that same week, your “High” priority issue occurred for only one hour. Which issue from that week would be your real highest priority?
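One simple improvement is to weight a rule's severity by how long the fault has actually been active over the reporting window. Here is a minimal sketch in Python; the severity weights and issue names are illustrative, not taken from any particular product:

```python
# Rank issues by severity weighted by active duration, rather than by a
# static High/Medium/Low label alone. The weights here are illustrative.
SEVERITY_WEIGHT = {"high": 10, "medium": 3, "low": 1}

def impact_score(severity: str, active_hours: float) -> float:
    """Duration-weighted impact: a rough proxy for cumulative cost."""
    return SEVERITY_WEIGHT[severity] * active_hours

# The scenario from the text: a week-long "Medium" issue vs. a one-hour "High" issue.
issues = [
    {"name": "AHU-1 simultaneous heat/cool", "severity": "medium", "active_hours": 168},
    {"name": "Chiller surge event",          "severity": "high",   "active_hours": 1},
]

ranked = sorted(issues,
                key=lambda i: impact_score(i["severity"], i["active_hours"]),
                reverse=True)
# The persistent "Medium" issue (3 x 168 = 504) now outranks the brief "High" issue (10 x 1 = 10).
```

Any weighting scheme will be imperfect, but even this crude duration-weighted score answers the question above correctly: the issue that ran all week rises to the top.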
Limit nuisance and false alarms
It is better to identify five issues conclusively than to report 10 issues where two are false. Why? Because it is difficult to regain the trust of your facility staff once they’ve spent time investigating a false alarm.
The job of limiting nuisance and false alarms is not an easy one. In some cases, it's a larger task than identifying the issue in the first place. Poor data quality, communication problems, faulty sensors, data modeling errors, and controller failure can easily manifest themselves as some unrelated issue, sending your facility staff off on a wild goose chase. If your analytics provider hasn't explained how they are going to address these concerns, it's time to ask a few key questions:
- Do you have the mechanical and controls expertise required to develop FDD (fault detection and diagnostic) rules?
- Do you employ adequate delays to prevent momentary anomalies from popping up as real issues?
- How do you prevent sensor and communication issues from causing unrelated false alarms?
- How do you prevent central plant and air handler issues from causing nuisance alarms at the zones?
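On the question of delays, the simplest mechanism is a persistence filter: a fault condition must hold continuously for some minimum time before it is reported at all. A minimal sketch in Python, with an illustrative 15-sample threshold (the right value depends on your sampling interval and the rule in question):

```python
# A minimal persistence filter: the raw fault flag must be true for a
# minimum number of consecutive samples before the fault is reported,
# which suppresses momentary anomalies. The threshold is illustrative.
def debounce(fault_flags, min_consecutive=15):
    """Yield True only once the raw fault flag has been continuously
    true for at least min_consecutive samples."""
    run = 0
    for flag in fault_flags:
        run = run + 1 if flag else 0
        yield run >= min_consecutive

# A 3-sample blip never reaches the threshold; a sustained fault does.
raw = [False] * 5 + [True] * 3 + [False] * 5 + [True] * 20
filtered = list(debounce(raw, min_consecutive=15))
```

Real products layer more on top of this (sensor-health checks, suppression of zone alarms when an upstream plant fault is active), but a persistence filter alone eliminates a surprising share of nuisance alarms.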
Send the right message
A crucial step towards responsive analytics is providing a clear and useful message to the user. This starts with a succinct description of the observed conditions which resulted in the fault detection, followed by a list of possible causes and the recommended next steps to solve the problem. These messages should be as detailed as possible without drawing false conclusions. Here are some examples of the full spectrum:
Bad messaging: The AHU is cooling when it shouldn’t be.
Better messaging: The cooling command is on while the Space Temperature is less than the setpoint for two hours. The cooling may be in override, the effective setpoint may be incorrect, or the control of the cooling command may need to be tuned. Check to see if cooling is overridden and that the effective setpoint is correct for the occupancy status of the AHU.
Best messaging: The cooling command is on while the Space Temperature is less than the setpoint for two hours. The effective setpoint does not match the occupied cooling setpoint. Check to see if the occupied cooling setpoint and effective setpoint have been mapped to the correct point for the AHU.
The analytics community strives to create rules and algorithms that pinpoint the root cause of every issue, but in practice, the cause is not always so clear-cut. Our last example is an illustration of how two analytics – one for unnecessary cooling and one for an incorrect setpoint – can be combined to get us closer to a root cause. But even with those two analytic points, we require confirmation of the real issue before a solution can be complete. Those who claim their analytics can nail the root cause in all circumstances probably do not have the experience to know why that's not possible. Instead, your analytics should lead people toward the solution by telling them what additional information must be confirmed.
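As a sketch of how the two checks above might be combined, here is one way to structure it in Python. The point names, the two-hour threshold, and the message text are illustrative; the point is that a second check lets the engine escalate from a list of possible causes to a specific one:

```python
# Combine two fault checks so the message escalates from "possible
# causes" to a specific suspected cause. All thresholds and point
# names are illustrative.
def cooling_fault_message(cooling_cmd_on, space_temp, effective_setpoint,
                          occupied_cooling_setpoint, hours_active):
    # First check: unnecessary cooling, persisting for at least two hours.
    if not (cooling_cmd_on and space_temp < effective_setpoint and hours_active >= 2):
        return None  # no fault to report
    # Second check: does the effective setpoint match the occupied setpoint?
    if effective_setpoint != occupied_cooling_setpoint:
        return ("The cooling command is on while the Space Temperature is less "
                "than the setpoint for two hours. The effective setpoint does not "
                "match the occupied cooling setpoint. Check the setpoint mapping "
                "for the AHU.")
    # Without that confirmation, fall back to possible causes and next steps.
    return ("The cooling command is on while the Space Temperature is less than "
            "the setpoint for two hours. Possible causes: a cooling override, an "
            "incorrect effective setpoint, or a loop in need of tuning. Check for "
            "overrides and verify the effective setpoint.")
```

Note that even the "best" branch asks the operator to confirm the mapping rather than declaring the root cause outright, which is exactly the posture the paragraph above recommends.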
Don’t try to drink from the firehose
If your analytics engine is worth its salt, it is likely going to produce a lot of findings, especially when you first turn it on. It is easy to be overwhelmed by a seemingly never-ending list of action items and problems to investigate. A good system should throttle what is provided to the end user. Whether this comes in the form of targeted reports, advanced issue filtering, or user-defined views, the action items for your team should be easy to find and limited to what can realistically be accomplished. The prioritization mentioned above is a prerequisite for this type of reporting. It's important to remember that our goal is to solve problems, not just to identify them, so a to-do list of 20 items when you only have time for five isn't helpful, especially when the list changes every week.
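In code, that throttling can be as simple as capping the weekly worklist at what the team can actually act on, assuming each issue already carries the kind of impact score the prioritization above produces. A minimal sketch (the capacity of five and the `impact` field are illustrative):

```python
# Throttle the to-do list: surface only the top-impact issues, capped
# at the team's weekly capacity. Capacity and field names are illustrative.
def weekly_worklist(issues, capacity=5):
    """Return at most `capacity` issues, highest impact first."""
    return sorted(issues, key=lambda i: i["impact"], reverse=True)[:capacity]

# Twenty open findings, but only five make this week's list.
backlog = [{"name": f"issue-{n}", "impact": n} for n in range(20)]
todo = weekly_worklist(backlog, capacity=5)
```

The filtering logic in a real product is richer than this, but the principle is the same: the user sees a short, stable, high-impact list instead of the full firehose.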
Beware of cookie-cutter analytics
To be cost effective, analytics are developed in broad strokes so they can apply to many different systems, equipment types, and buildings. That doesn’t mean there isn’t a need to develop algorithms and rules that address the unique issues of your facility. A flexible platform that allows for customization provides the most assurance that you will get what you need from your analytics engine.
Be ready for what’s next
Most analytics packages on the market today are passive, rule-based engines that identify issues and propose solutions. There is significant momentum now behind both real-time optimization and machine learning that will change the way we think about analytics and FDD in the coming years. We've all got our eyes on the Responsive Analytics prize, but to get there, we need to establish a strong foundation for analytic understanding with a focus on the usability of the products as they stand now. The information needs to be relevant and readily available. Operators need to see the information improving their workflow while managers need to see it impacting the bottom line, all without overloading anyone with information that's not crucial to meeting those objectives. If your analytics engine is built correctly now, it will enhance workflows and support cost savings while being able to expand quickly as Responsive Analytics takes over in the not-so-distant future.
About the Author
Jon Schoenfeld is Director of Energy and Analytics at Kodaro. He applies his deep technical knowledge of building system design, engineering and operation to Kodaro's growing list of analytics products and services. Schoenfeld has been an engineer in the renewable energy and energy efficiency fields for more than 15 years.