US-12626584-B2 - Systems and methods for mitigating false alarms in a building management system
Abstract
An alarm is raised. One of a plurality of Generative Artificial Intelligence (AI) Large Language Model-based autonomous primary agents is activated based at least in part on an alarm type of the alarm and performs an initial analysis of the alarm and creates plausible causes for the alarm. The corresponding Generative AI Large Language Model-based autonomous primary agent autonomously assigns each of the plausible causes to one or more Generative AI Large Language Model-based autonomous subagents that perform an analysis of the assigned plausible cause and return a result back to the Generative AI Large Language Model-based autonomous primary agent, which classifies the alarm as a false alarm or a true alarm.
Inventors
- Abdhul Ahadh
Assignees
- HONEYWELL INTERNATIONAL INC.
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2024-08-26
Claims (20)
- 1 . A method for alarm processing of alarms of a Building Management System (BMS) of a facility, wherein the BMS includes a plurality of BMS components placed at known locations about the facility where the plurality of BMS components include a plurality of sensors, the method comprising: receiving a plurality of alarms from the BMS; normalizing each of the plurality of alarms into a normalized alarm format, wherein the normalized alarm format includes at least an alarm type and an alarm timestamp; for at least some of the plurality of alarms, activating a corresponding one of a plurality of Generative Artificial Intelligence (AI) Large Language Model-based autonomous primary agents based at least in part on the alarm type of the respective alarm, where the corresponding Generative AI Large Language Model-based autonomous primary agent is trained using domain knowledge that corresponds to the alarm type of the respective alarm; the corresponding Generative AI Large Language Model-based autonomous primary agent performing an initial analysis of the respective alarm and creating one or more initial scenarios for determining whether the respective alarm is a false alarm or a true alarm, wherein each of the one or more initial scenarios relates to one or more scenario domains; the corresponding Generative AI Large Language Model-based autonomous primary agent autonomously assigning each of one or more of the initial scenarios to one or more of a plurality of Generative AI Large Language Model-based autonomous subagents based at least in part on the one or more scenario domains of the respective initial scenario, where each of the corresponding Generative AI Large Language Model-based autonomous subagents is trained using domain knowledge that corresponds to the respective scenario domain; each of the Generative AI Large Language Model-based autonomous subagents performing an analysis of the assigned initial scenario and returning a result back to the Generative AI Large Language Model-based autonomous primary agent that assigned the initial scenario to the respective Generative AI Large Language Model-based autonomous subagent; and the Generative AI Large Language Model-based autonomous primary agent receiving the result from each of the Generative AI Large Language Model-based autonomous subagents that were assigned a respective initial scenario from the Generative AI Large Language Model-based autonomous primary agent, and based at least in part on the received results, classifying the alarm as a false alarm or a true alarm.
- 2 . The method of claim 1 , wherein one or more of the Generative AI Large Language Model-based autonomous subagents, when performing the analysis of the assigned initial scenario, autonomously assigns one or more sub-tasks to one or more of the plurality of Generative AI Large Language Model-based autonomous subagents based at least in part on the domain of the sub-task and the domain knowledge on which the corresponding Generative AI Large Language Model-based autonomous subagent was trained.
- 3 . The method of claim 1 , wherein the Generative AI Large Language Model-based autonomous primary agent reports a confidence score in the classification of the alarm as a false alarm or a true alarm.
- 4 . The method of claim 1 , wherein the Generative AI Large Language Model-based autonomous primary agent reports a reasoning behind the classification of the alarm as a false alarm or a true alarm.
- 5 . The method of claim 1 , wherein one or more of the plurality of Generative Artificial Intelligence (AI) Large Language Model-based autonomous primary agents and/or one or more of the Generative AI Large Language Model-based autonomous subagents is configured to gather data from one or more of the plurality of BMS components of the BMS.
- 6 . The method of claim 5 , wherein the Generative AI Large Language Model-based autonomous primary agent reports gathered data that support the classification of the alarm as a false alarm or a true alarm.
- 7 . The method of claim 1 , wherein one or more of the plurality of Generative Artificial Intelligence (AI) Large Language Model-based autonomous primary agents and/or one or more of the Generative AI Large Language Model-based autonomous subagents changes an operation of one or more of the plurality of BMS components in response to classifying the alarm as a true alarm, and does not change the operation of one or more of the plurality of BMS components in response to classifying the alarm as a false alarm.
- 8 . The method of claim 1 , wherein one or more of the plurality of BMS components includes one or more Video Management System (VMS) components of a Video Management System (VMS) of the BMS, and one of the Generative AI Large Language Model-based autonomous subagents is a Video Management System (VMS) Analyzer subagent that is trained using domain knowledge related to the Video Management System (VMS) of the BMS.
- 9 . The method of claim 1 , wherein one or more of the plurality of BMS components includes one or more Fire Detection components of a Fire Detection System of the BMS, and one of the Generative AI Large Language Model-based autonomous subagents is a Fire Detection Analyzer subagent that is trained using domain knowledge related to the Fire Detection System of the BMS.
- 10 . The method of claim 1 , wherein one or more of the plurality of BMS components includes one or more Security System components of a Security System of the BMS, and one of the Generative AI Large Language Model-based autonomous subagents is a Security System Analyzer subagent that is trained using domain knowledge related to the Security System of the BMS.
- 11 . The method of claim 1 , wherein one or more of the plurality of BMS components includes one or more Heating, Ventilation and/or Air Conditioning (HVAC) components of an HVAC system of the BMS, and one of the Generative AI Large Language Model-based autonomous subagents is an HVAC Analyzer subagent that is trained using domain knowledge related to the HVAC system of the BMS.
- 12 . A system for alarm processing of alarms of a Building Management System (BMS) of a facility, wherein the BMS includes a plurality of BMS components placed at known locations about the facility where the plurality of BMS components include a plurality of sensors, the system comprising: an input/output; a controller operatively coupled to the input/output, the controller configured to: receive a plurality of alarms from the BMS via the input/output, wherein each of the plurality of alarms has an alarm type; for at least some of the plurality of alarms, activate a corresponding one of a plurality of Generative Artificial Intelligence (AI) Large Language Model-based autonomous primary agents based at least in part on the alarm type of the respective alarm, where the corresponding Generative AI Large Language Model-based autonomous primary agent is trained using domain knowledge that corresponds to the alarm type of the respective alarm; the corresponding Generative AI Large Language Model-based autonomous primary agent performs an initial analysis of the respective alarm and creates one or more plausible causes for the respective alarm, wherein each of the one or more plausible causes relates to one or more corresponding domains of a plurality of domains; the corresponding Generative AI Large Language Model-based autonomous primary agent autonomously assigns each of one or more of the plausible causes to one or more of a plurality of Generative AI Large Language Model-based autonomous subagents based at least in part on the one or more domains of the respective plausible cause, where each of the corresponding Generative AI Large Language Model-based autonomous subagents is trained using domain knowledge that corresponds to the respective domain of the plausible cause; each of the Generative AI Large Language Model-based autonomous subagents performs an analysis of the assigned plausible cause and returns a result back to the Generative AI Large Language Model-based autonomous primary agent that assigned the plausible cause to the respective Generative AI Large Language Model-based autonomous subagent; and the Generative AI Large Language Model-based autonomous primary agent receives the result from each of the Generative AI Large Language Model-based autonomous subagents that were assigned a respective plausible cause from the Generative AI Large Language Model-based autonomous primary agent, and based at least in part on the received results, classifies the alarm as a false alarm or a true alarm.
- 13 . The system of claim 12 , wherein one or more of the Generative AI Large Language Model-based autonomous subagents, when performing the analysis of the assigned plausible cause, autonomously assigns one or more sub-tasks to one or more of the plurality of Generative AI Large Language Model-based autonomous subagents based at least in part on the domain of the sub-task and the domain knowledge on which the corresponding Generative AI Large Language Model-based autonomous subagent was trained.
- 14 . The system of claim 12 , wherein the Generative AI Large Language Model-based autonomous primary agent reports a confidence score in the classification of the alarm as a false alarm or a true alarm.
- 15 . The system of claim 12 , wherein the Generative AI Large Language Model-based autonomous primary agent reports a reasoning behind the classification of the alarm as a false alarm or a true alarm.
- 16 . The system of claim 12 , wherein one or more of the plurality of Generative Artificial Intelligence (AI) Large Language Model-based autonomous primary agents and/or one or more of the Generative AI Large Language Model-based autonomous subagents gathers data from one or more of the plurality of BMS components of the BMS, and the Generative AI Large Language Model-based autonomous primary agent reports gathered data that support the classification of the alarm as a false alarm or a true alarm.
- 17 . A non-transitory computer readable medium storing instructions that when executed by one or more processors cause the one or more processors to: receive a plurality of alarms from a BMS, wherein each of the plurality of alarms has an alarm type; for at least some of the plurality of alarms, activate a corresponding one of a plurality of Generative Artificial Intelligence (AI) Large Language Model-based autonomous primary agents based at least in part on the alarm type of the respective alarm, where the corresponding Generative AI Large Language Model-based autonomous primary agent is trained using domain knowledge that corresponds to the alarm type of the respective alarm; the corresponding Generative AI Large Language Model-based autonomous primary agent performs an initial analysis of the respective alarm and creates one or more plausible causes for the respective alarm, wherein each of the one or more plausible causes relates to one or more corresponding domains of a plurality of domains; the corresponding Generative AI Large Language Model-based autonomous primary agent autonomously assigns each of one or more of the plausible causes to one or more of a plurality of Generative AI Large Language Model-based autonomous subagents based at least in part on the one or more domains of the respective plausible cause, where each of the corresponding Generative AI Large Language Model-based autonomous subagents is trained using domain knowledge that corresponds to the respective domain of the plausible cause; each of the Generative AI Large Language Model-based autonomous subagents performs an analysis of the assigned plausible cause and returns a result back to the Generative AI Large Language Model-based autonomous primary agent that assigned the plausible cause to the respective Generative AI Large Language Model-based autonomous subagent; and the Generative AI Large Language Model-based autonomous primary agent receives the result from each of the Generative AI Large Language Model-based autonomous subagents that were assigned a respective plausible cause from the Generative AI Large Language Model-based autonomous primary agent, and based at least in part on the received results, classifies the alarm as a false alarm or a true alarm.
- 18 . The non-transitory computer readable medium of claim 17 , wherein one or more of the Generative AI Large Language Model-based autonomous subagents, when performing the analysis of the assigned plausible cause, autonomously assigns one or more sub-tasks to one or more of the plurality of Generative AI Large Language Model-based autonomous subagents based at least in part on the domain of the sub-task and the domain knowledge on which the corresponding Generative AI Large Language Model-based autonomous subagent was trained.
- 19 . The non-transitory computer readable medium of claim 17 , wherein the Generative AI Large Language Model-based autonomous primary agent reports a confidence score in the classification of the alarm as a false alarm or a true alarm.
- 20 . The non-transitory computer readable medium of claim 17 , wherein the Generative AI Large Language Model-based autonomous primary agent reports a reasoning behind the classification of the alarm as a false alarm or a true alarm.
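The agent hierarchy recited in the claims above can be illustrated with a minimal sketch. All class and function names here are hypothetical, and the canned heuristics stand in for what the disclosure describes as LLM-based reasoning over domain knowledge and gathered BMS data; a real implementation would call a Generative AI model service at each step.

```python
from dataclasses import dataclass, field

@dataclass
class Alarm:
    alarm_type: str      # e.g. "fire", "security", "hvac"
    timestamp: str       # normalized alarm timestamp
    details: dict = field(default_factory=dict)

class SubAgent:
    """Domain-specific subagent (stub standing in for an LLM-backed analyzer)."""
    def __init__(self, domain):
        self.domain = domain

    def analyze(self, cause, alarm):
        # Stub: a real subagent would gather data from BMS components
        # and reason over its domain knowledge; here we just check the
        # alarm's recorded evidence for the hypothesized cause.
        return {"cause": cause, "domain": self.domain,
                "supported": cause in alarm.details.get("evidence", [])}

class PrimaryAgent:
    """Primary agent for one alarm type; fans plausible causes out to subagents."""
    def __init__(self, alarm_type, subagents):
        self.alarm_type = alarm_type
        self.subagents = subagents   # domain -> SubAgent

    def plausible_causes(self, alarm):
        # Stub for the LLM's initial analysis: (cause, domain) pairs.
        return [("sensor fault", "hvac"), ("smoke detected", "fire")]

    def classify(self, alarm):
        # Assign each plausible cause to the subagent trained on its domain,
        # then classify from the returned results, with a confidence score
        # and the per-cause reasoning (claims 3 and 4).
        results = [self.subagents[d].analyze(c, alarm)
                   for c, d in self.plausible_causes(alarm)
                   if d in self.subagents]
        supported = [r for r in results if r["supported"]]
        verdict = "true alarm" if supported else "false alarm"
        confidence = len(supported) / max(len(results), 1)
        return {"verdict": verdict, "confidence": confidence,
                "reasoning": results}

# Dispatch: one primary agent per alarm type, as in claim 1.
subagents = {d: SubAgent(d) for d in ("fire", "hvac", "security", "vms")}
primary = {"fire": PrimaryAgent("fire", subagents)}
alarm = Alarm("fire", "2024-08-26T10:00:00Z", {"evidence": ["smoke detected"]})
report = primary[alarm.alarm_type].classify(alarm)
```

One cause is corroborated by the recorded evidence and one is not, so the sketch classifies the alarm as a true alarm with a 0.5 confidence score and returns the per-subagent results as the reasoning.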
Description
TECHNICAL FIELD

The present disclosure relates generally to building management systems and more particularly to mitigating false alarms in a building management system.

BACKGROUND

Building Management Systems are systems that control and/or monitor a building or other facility. Building Management Systems may include, for example, an HVAC system, a security system, a video management system, an access control system, a fire system, and/or any other suitable Building Control System. In many cases, a Building Management System raises an alarm when an abnormality is detected in the building and/or an abnormality is detected in the operation of the Building Management System. The alarms must typically be acknowledged and/or otherwise addressed by an operator or other personnel of the building. In some cases, the Building Management System may issue an alarm indicating a potential issue or problem is occurring even though no such issue or problem is actually occurring in the building. These alarms can be referred to as false alarms. When a false alarm occurs, an operator typically needs to respond to the false alarm, which can waste considerable time of the operator and can pull the operator's attention away from actual true alarms. What would be desirable are methods and systems for automatically determining whether an alarm is a false alarm or a true alarm.

SUMMARY

The present disclosure relates generally to building management systems and more particularly to mitigating false alarms in a building management system. An example may be found in a method for processing of alarms of a Building Management System (BMS) of a facility, wherein the BMS includes a plurality of BMS components placed at known locations about the facility and the plurality of BMS components include a plurality of sensors.
The illustrative method includes receiving a plurality of alarms from the BMS and normalizing each of the plurality of alarms into a normalized alarm format, wherein the normalized alarm format includes at least an alarm type and an alarm timestamp. For at least some of the plurality of alarms, a corresponding one of a plurality of Generative Artificial Intelligence (AI) Large Language Model-based autonomous primary agents is activated based at least in part on the alarm type of the respective alarm, where the corresponding Generative AI Large Language Model-based autonomous primary agent is trained using domain knowledge that corresponds to the alarm type of the respective alarm. The corresponding Generative AI Large Language Model-based autonomous primary agent performs an initial analysis of the respective alarm and creates one or more initial scenarios for determining whether the respective alarm is a false alarm or a true alarm, wherein each of the one or more initial scenarios relates to one or more scenario domains. The corresponding Generative AI Large Language Model-based autonomous primary agent autonomously assigns each of one or more of the initial scenarios to one or more of a plurality of Generative AI Large Language Model-based autonomous subagents based at least in part on the one or more scenario domains of the respective initial scenario, where each of the corresponding Generative AI Large Language Model-based autonomous subagents is trained using domain knowledge that corresponds to the respective scenario domain. Each of the Generative AI Large Language Model-based autonomous subagents performs an analysis of the assigned initial scenario and returns a result back to the Generative AI Large Language Model-based autonomous primary agent that assigned the initial scenario to the respective Generative AI Large Language Model-based autonomous subagent.
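The normalization step described above can be sketched as follows. The raw-event field names (`type`, `category`, `time`, `ts`) are illustrative assumptions, since actual BMS event shapes vary by subsystem; only the two fields the disclosure requires of the normalized format, an alarm type and a timestamp, are taken from the text.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class NormalizedAlarm:
    # The normalized alarm format includes at least these two fields.
    alarm_type: str
    timestamp: datetime

def normalize(raw: dict) -> NormalizedAlarm:
    """Map a raw BMS event (shape varies per subsystem) into the
    normalized alarm format used to select a primary agent."""
    alarm_type = (raw.get("type") or raw.get("category", "unknown")).lower()
    ts = raw.get("time") or raw.get("ts")
    if isinstance(ts, (int, float)):
        # Epoch seconds from e.g. a fire panel.
        when = datetime.fromtimestamp(ts, tz=timezone.utc)
    else:
        # ISO-8601 string from e.g. an HVAC controller.
        when = datetime.fromisoformat(ts)
    return NormalizedAlarm(alarm_type, when)

alarms = [normalize(r) for r in (
    {"type": "Fire", "time": 1724660000},
    {"category": "HVAC", "ts": "2024-08-26T09:15:00+00:00"},
)]
```

The normalized `alarm_type` is what the dispatch step keys on when activating the corresponding primary agent.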
The Generative AI Large Language Model-based autonomous primary agent receives the result from each of the Generative AI Large Language Model-based autonomous subagents that were assigned a respective initial scenario from the Generative AI Large Language Model-based autonomous primary agent, and based at least in part on the received results, classifies the alarm as a false alarm or a true alarm. Another example may be found in a system for alarm processing of alarms of a Building Management System (BMS) of a facility, wherein the BMS includes a plurality of BMS components placed at known locations about the facility and the plurality of BMS components include a plurality of sensors. The system includes an input/output and a controller that is operatively coupled to the input/output. The controller is configured to receive a plurality of alarms from the BMS via the input/output, wherein each of the plurality of alarms has an alarm type. For at least some of the plurality of alarms, the controller is configured to activate a corresponding one of a plurality of Generative Artificial Intelligence (AI) Large Language Model-based autonomous primary agents based at least in part on the alarm type of the respective alarm, where the corresponding Generative AI Large Language Model-based autonomous primary agent is trained using domain knowledge that corresponds to the alarm type of the respective alarm.