Part 2 - The Cost of Poor Data Quality
This post is the second in a series on data quality and builds on the information presented previously. If you have not yet read Part 1, it may be found here.
By eliminating many of the root causes of poor-quality data collection in the Acuity edge computing platform, a breadth of processing and analysis efficiencies becomes possible.
First, alarming.
The concept of alarming is rather simple at first glance. When something goes out of range or is signaled as abnormal, tell me about it via a notification. In practice, however, alarming can be rather complicated. At a renewables facility with hundreds of inverters subject to shading, uneven irradiance, and other environmental factors, thousands of nuisance alarms from every device and alarm condition can quickly overwhelm an operations team. Each of these alarms needs to be evaluated to determine whether the fault is persistent, correlates with actual underperformance or availability concerns, or requires a maintenance response.
By having Acuity operate at the edge, with access to high-quality, real-time data from these devices, alarms can be intelligently filtered, grouped, and classified in a manner that simplifies Operators' ability to manage multiple complex facilities while improving overall situational awareness.
Specifically, Acuity allows the configuration of alarming delays for both alarm-high and alarm-low conditions to help mitigate alarms that trigger on changing conditions. A common example is a DC under-voltage alarm as inverters wake up and fall asleep at sunrise and sunset each day. The alarm triggers briefly when the inverter first senses array voltage rising in the morning, but there is not yet enough current available to sustain inverter operation without the DC voltage collapsing. This cycle can repeat several times until there is enough irradiance for the PV array to provide suitable power. By setting an alarm-high delay of one minute, for example, Acuity delays sending an alarm notification unless the alarm is sustained for one minute or more without resetting. Similarly, the alarm-low delay can be set to one minute to prevent the same alarm from resetting unless it remains off for one minute or more. These settings allow alarms to be logged in the system as they occur for audit purposes, while controlling the flow of notifications to Operators for non-persistent faults.
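The alarm-high/alarm-low delay behavior described above amounts to a debounce on the fault signal. The following is a minimal sketch of that idea; the names (`AlarmDebouncer`, `raise_delay_s`, `clear_delay_s`) are illustrative assumptions, not Acuity's actual API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AlarmDebouncer:
    """Suppress non-persistent alarms: raise only after a fault is
    sustained for raise_delay_s, clear only after it stays off for
    clear_delay_s (illustrative sketch, not Acuity's actual API)."""
    raise_delay_s: float = 60.0          # alarm-high delay
    clear_delay_s: float = 60.0          # alarm-low delay
    notified: bool = False
    _fault_since: Optional[float] = None
    _clear_since: Optional[float] = None

    def update(self, fault_active: bool, now_s: float) -> Optional[str]:
        """Feed the latest fault state; return 'raise', 'clear', or None."""
        if fault_active:
            self._clear_since = None
            if self._fault_since is None:
                self._fault_since = now_s
            if not self.notified and now_s - self._fault_since >= self.raise_delay_s:
                self.notified = True
                return "raise"        # fault persisted: notify the Operator
        else:
            self._fault_since = None
            if self._clear_since is None:
                self._clear_since = now_s
            if self.notified and now_s - self._clear_since >= self.clear_delay_s:
                self.notified = False
                return "clear"        # fault stayed off: reset the alarm
        return None

# Morning wake-up cycling: two short DC under-voltage blips, then a real fault.
deb = AlarmDebouncer(raise_delay_s=60.0, clear_delay_s=60.0)
events = [(t, ev) for t, active in
          [(0, True), (20, False), (100, True), (120, False),
           (200, True), (260, True), (330, False), (400, False)]
          if (ev := deb.update(active, t))]
# Only the sustained fault produces notifications: [(260, 'raise'), (400, 'clear')]
```

The short blips reset the internal timer before the delay elapses, so the Operator is never paged for them, while every state change could still be written to an audit log upstream of this filter.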
In addition, alarms can be grouped. When a recloser trips, or a bank of inverters loses AC power, Operators can receive multiple alarms from each inverter. "Loss of grid", "Asymmetric AC voltage", "Underfrequency", and "Abnormal AC contactor operation" alarms could be raised by each inverter impacted by an upstream outage. With even a small quantity of 10 inverters, an Operator could receive a total of 40 alarms from a single upstream fault.
Acuity addresses this common phenomenon by grouping alarm events based on their alarm-raised timestamps, generating a single alarm notification that provides a digest of correlated alarms triggered within a short timeframe of each other. The result: an Operator now receives a single alarm notification covering all of the affected devices, allowing them to better understand the upstream source of the fault, rather than receiving 40 individual faults that could take hours to decipher.
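Timestamp-based grouping can be sketched as chaining alarms whose raised timestamps fall close together. This is an illustrative simplification; the 5-second window and the names used here are assumptions, not Acuity's actual configuration:

```python
from collections import namedtuple

Alarm = namedtuple("Alarm", "device name raised_ts")

def group_alarms(alarms, window_s=5.0):
    """Chain alarms whose raised timestamps fall within window_s of the
    previous alarm in the group, so one upstream fault yields one digest
    notification (illustrative sketch; window_s is an assumed parameter)."""
    groups = []
    for alarm in sorted(alarms, key=lambda a: a.raised_ts):
        if groups and alarm.raised_ts - groups[-1][-1].raised_ts <= window_s:
            groups[-1].append(alarm)   # correlated with the current group
        else:
            groups.append([alarm])     # start a new, independent group
    return groups

# 10 inverters each raising 4 alarms within half a second of an upstream trip,
# plus one unrelated alarm much later.
alarms = [Alarm(f"INV-{i:02d}", name, 1000.0 + 0.05 * i)
          for i in range(10)
          for name in ("Loss of grid", "Asymmetric AC voltage",
                       "Underfrequency", "Abnormal AC contactor operation")]
alarms.append(Alarm("INV-03", "Fan failure", 5000.0))
groups = group_alarms(alarms)
# The 40 correlated alarms collapse into one digest; the fan alarm stands alone.
```

A single digest notification can then summarize each group by device and alarm type, pointing the Operator toward the common upstream cause instead of 40 separate pages.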
In the next post, I’ll discuss reporting and the concept of normalizing units and pre-calculated metrics … KPIs.