
US-12619226-B2 - Anomaly detection based on normal behavior modeling

US12619226B2

Abstract

A method of behavior monitoring includes determining, by one or more trained behavior models associated with a monitored asset, output data indicative of operation of the monitored asset. The method also includes determining a risk score based on the output data and determining feature importance data based on the output data. The method further includes determining whether to generate an alert based on the risk score and the feature importance data.
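The abstract's decision flow — derive a risk score and per-feature importance from model output, then gate the alert on both — can be sketched as follows. This is a minimal illustration, not the patented method: the scoring formula, thresholds, and function name are all assumptions for demonstration.

```python
import math

def decide_alert(residuals, risk_threshold=0.8, importance_threshold=0.5):
    """Hypothetical sketch: risk score from total residual magnitude,
    feature importance as each feature's share of that total."""
    total = sum(abs(r) for r in residuals)
    risk_score = 1.0 - math.exp(-total)  # squash into [0, 1)
    importances = [abs(r) / total for r in residuals] if total else [0.0] * len(residuals)
    # Alert only when overall risk is high AND one feature clearly dominates.
    return risk_score > risk_threshold and max(importances) > importance_threshold
```

Requiring both signals, rather than the risk score alone, mirrors the claim language in which the alert decision is "based on the risk score and the feature importance data."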

Inventors

  • Kevin Gullikson
  • James Robert Eskew
  • Uche Ohafia

Assignees

  • SparkCognition, Inc.

Dates

Publication Date
2026-05-05
Application Date
2022-10-12

Claims (20)

  1. A method of behavior monitoring, the method comprising: determining, by one or more trained behavior models associated with a monitored asset, output data indicative of operation of the monitored asset; determining one or more residual values based on a comparison of a predicted data sample of the output data to a corresponding input data value of input data; selectively masking out at least one residual value of the one or more residual values to generate masked residual data, wherein the at least one residual value is masked out responsive to the corresponding input data value being generated by a particular pre-processing operation; determining a risk score based on the masked residual data; determining feature importance data based on the output data; and determining whether to generate an alert based on the risk score and the feature importance data.
  2. The method of claim 1, wherein the output data is determined based on input data including or based on sensor data from one or more sensors associated with the monitored asset.
  3. The method of claim 1, wherein the one or more trained behavior models are configured to generate one or more predicted values based on sensor data from one or more sensors associated with the monitored asset.
  4. The method of claim 3, wherein the one or more residual values are based on the one or more predicted values and the sensor data.
  5. The method of claim 1, further comprising: obtaining sensor data for one or more sensors associated with the monitored asset, wherein the sensor data includes multiple time series of data samples, each time series representing output of a single sensor; and performing one or more pre-processing operations to generate, based on the sensor data, input data for the one or more trained behavior models, wherein the one or more trained behavior models determine the output data based on the input data, the one or more pre-processing operations including the particular pre-processing operation.
  6. The method of claim 5, wherein the masked residual data includes at least one second residual value associated with a second particular pre-processing operation distinct from the particular pre-processing operation.
  7. The method of claim 1, wherein the at least one residual value is selectively masked out further based on a user configuration setting associated with a tolerance for false positive alerts.
  8. The method of claim 5, wherein the one or more pre-processing operations include one or more of: removing outlying data samples; removing data associated with particular events; denoising; imputation of one or more values; resampling data values; scaling data values; normalizing data values; determining one or more data values based on one or more other data values; or performing one or more domain transformations.
  9. The method of claim 1, further comprising: concatenating the risk score for a particular feature and time step and the feature importance data for the particular feature and time step to generate concatenated data; and providing the concatenated data as input to an alert generation model to determine whether to generate the alert.
  10. The method of claim 9, further comprising performing, by the alert generation model, a sequential probability ratio test based on a set of anomaly scores and a set of reference anomaly scores, wherein an anomaly score of the set of anomaly scores includes the concatenated data.
  11. The method of claim 1, further comprising, responsive to a determination to generate the alert, outputting an alert indication that includes the feature importance data.
  12. The method of claim 1, wherein the one or more trained behavior models include one or more dimensional reduction models, one or more autoencoders, one or more time series predictors, one or more feature predictors, or a combination thereof.
  13. The method of claim 1, further comprising: obtaining sensor data for one or more sensors associated with the monitored asset, wherein the sensor data indicates measurements of one or more physical characteristics, one or more electromagnetic characteristics, one or more radiologic characteristics, or a combination thereof, of the monitored asset; and providing input data based on the sensor data as input to the one or more trained behavior models, wherein the one or more trained behavior models determine the output data based on the input data.
  14. A computing device comprising: one or more memory devices storing instructions and one or more trained behavior models associated with a monitored asset; and one or more processors configured to execute the instructions to perform operations comprising: determining, using the one or more trained behavior models, output data indicative of operation of the monitored asset; determining one or more residual values based on a comparison of a predicted data sample of the output data to a corresponding input data value of input data; selectively masking out at least one residual value of the one or more residual values to generate masked residual data, wherein the at least one residual value is masked out responsive to the corresponding input data value being generated by a particular pre-processing operation; determining a risk score based on the masked residual data; determining feature importance data based on the output data; and determining whether to generate an alert based on the risk score and the feature importance data.
  15. The computing device of claim 14, wherein the operations further comprise: obtaining sensor data for one or more sensors associated with the monitored asset, wherein the sensor data includes multiple time series of data samples, each time series representing output of a single sensor; and performing one or more pre-processing operations to generate, based on the sensor data, input data for the one or more trained behavior models, wherein the one or more trained behavior models determine the output data based on the input data, the one or more pre-processing operations including the particular pre-processing operation.
  16. The computing device of claim 15, wherein the at least one residual value is selectively masked out based on a user configuration setting associated with a tolerance for false positive alerts.
  17. A computer-readable storage device storing instructions that are executable by one or more processors to cause the one or more processors to: determine, using one or more trained behavior models associated with a monitored asset, output data indicative of operation of the monitored asset; determine a risk score based on the output data; determine feature importance data based on the output data, wherein the feature importance data comprises a plurality of feature importance values, and wherein each feature importance value corresponds to a different feature of the output data; and determine whether to generate an alert based on the risk score and the feature importance data.
  18. The computer-readable storage device of claim 17, wherein the feature importance data indicates a relative importance of each particular feature of the output data.
  19. The computer-readable storage device of claim 17, wherein the feature importance data indicates a ranking of the corresponding features of the output data.
  20. The computer-readable storage device of claim 17, wherein the feature importance data comprises a feature match score, the feature match score based on a difference between feature importance determined based on the output data and expected feature importance associated with the operation of the monitored asset.
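The residual-masking step recited in claims 1 and 5–8 — suppressing residuals whose underlying input values were produced by a particular pre-processing operation before scoring — can be illustrated as follows. All names and the choice of imputation as the masked-out operation, as well as the RMS risk-score formula, are assumptions for the sketch, not details taken from the claims.

```python
import math

def masked_risk_score(predicted, observed, imputed_flags):
    """Hypothetical sketch of residual masking: residuals are dropped
    where the input value came from a particular pre-processing
    operation (here, imputation of missing samples), then a risk
    score is computed from the surviving residuals."""
    residuals = [p - o for p, o in zip(predicted, observed)]
    masked = [r for r, imputed in zip(residuals, imputed_flags) if not imputed]
    # Risk score: root-mean-square of the unmasked residuals.
    return math.sqrt(sum(r * r for r in masked) / len(masked)) if masked else 0.0
```

The intuition matches claim 7's tie to false-positive tolerance: an imputed sample is a guess, so a large residual against it says more about the imputation than about the asset, and scoring it would inflate the alert rate.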

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

The present application claims priority from U.S. Provisional Patent Application Ser. No. 63/255,155, entitled “ANOMALY DETECTION BASED ON NORMAL BEHAVIOR MODELING,” filed Oct. 13, 2021, the contents of which are incorporated herein by reference in their entirety.

FIELD

The present disclosure is generally related to using trained models to detect anomalous behavior based on normal behavior modeling.

BACKGROUND

Abnormal behavior can be detected using rules established by a subject matter expert or derived from physics-based models. However, it can be expensive and time consuming to properly establish and confirm such rules. The time and expense involved are compounded if the equipment or process being monitored has several normal operational states or if what behavior is considered normal changes from time to time. To illustrate, as equipment operates, the normal behavior of the equipment may change due to wear. It can be challenging to establish rules to monitor this type of gradual change in normal behavior. Further, in such situations, the equipment may occasionally undergo maintenance to offset the effects of the wear. Such maintenance can result in a sudden change in normal behavior, which is also challenging to monitor using established rules.

SUMMARY

The present disclosure describes systems and methods that enable use of trained machine learning models to detect anomalous behavior of monitored devices, systems, or processes. Such monitored devices, systems, or processes are collectively referred to herein as “assets” for ease of reference. In some implementations, the models are automatically generated and trained based on historic data.

In some aspects, a method of behavior monitoring includes receiving sensor data from one or more sensors associated with a monitored asset and providing input data to one or more behavior models to generate an anomaly score. The one or more behavior models include at least one trained model. The method also includes determining whether to generate an alert based on the anomaly score.

In some aspects, a system for behavior monitoring includes one or more processors configured to receive sensor data from one or more sensors associated with a monitored asset and to provide input data to one or more behavior models to generate an anomaly score. The one or more behavior models include at least one trained model. The one or more processors are further configured to determine whether to generate an alert based on the anomaly score.

In some aspects, a computer-readable storage device stores instructions. The instructions, when executed by one or more processors, cause the one or more processors to receive sensor data from one or more sensors associated with a monitored asset and provide input data to one or more behavior models to generate an anomaly score. The one or more behavior models include at least one trained model. The instructions further cause the one or more processors to determine whether to generate an alert based on the anomaly score.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a diagram illustrating particular aspects of operations to detect anomalous behavior of a monitored asset in accordance with some examples of the present disclosure. FIG. 2 is a block diagram illustrating a particular implementation of a system that may perform the operations of FIG. 1. FIG. 3 is a block diagram of components that may be included in the system of FIG. 2 in accordance with some examples of the present disclosure. FIG. 4 is a block diagram illustrating particular aspects of operations to generate the anomaly detection model of FIG. 2 in accordance with some examples of the present disclosure. FIG. 5 is another block diagram illustrating particular aspects of operations to generate the anomaly detection model of FIG. 2 in accordance with some examples of the present disclosure. FIG. 6 is a depiction of a graphical user interface that may be generated by the system of FIG. 2 in accordance with some examples of the present disclosure. FIG. 7 is a flow chart of a first example of a method of behavior monitoring that may be implemented by the system of FIG. 2. FIG. 8 is a flow chart of a second example of a method of behavior monitoring that may be implemented by the system of FIG. 2. FIG. 9 is a flow chart of an example of a method of training one or more models of the system of FIG. 2. FIG. 10 illustrates an example of a computer system corresponding to, including, or included within the system of FIG. 2 according to particular implementations.

DETAILED DESCRIPTION

Systems and methods are described that enable automatic generation of anomaly detection models for monitored assets. Additionally, the systems and methods disclosed herein enable monitoring of assets to detect anomalous behavior. For example, the anomalous behavior may be indicative of an impending failure of the asset, and the systems and methods disclosed herein may facilitat
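Claim 10 recites that the alert generation model performs a sequential probability ratio test (SPRT) over anomaly scores. A minimal sketch of a classical SPRT follows; the Gaussian hypotheses, parameter values, and function name are illustrative assumptions and are not drawn from the patent's disclosure.

```python
import math

def sprt_alert(anomaly_scores, mu_normal=0.0, mu_anomalous=1.0, sigma=1.0,
               alpha=0.01, beta=0.01):
    """Hedged sketch of an SPRT-based alert decision: accumulate the
    log-likelihood ratio of 'anomalous' vs. 'normal' Gaussian hypotheses
    over a stream of anomaly scores until a threshold is crossed."""
    upper = math.log((1.0 - beta) / alpha)   # accept 'anomalous' above this
    lower = math.log(beta / (1.0 - alpha))   # accept 'normal' below this
    llr = 0.0
    for x in anomaly_scores:
        # LLR increment for N(mu_anomalous, sigma) vs. N(mu_normal, sigma).
        llr += (mu_anomalous - mu_normal) / sigma ** 2 \
               * (x - 0.5 * (mu_normal + mu_anomalous))
        if llr >= upper:
            return True       # evidence supports anomaly: raise an alert
        if llr <= lower:
            return False      # evidence supports normal behavior
    return False              # undecided: keep monitoring, no alert yet
```

Because the test is sequential, a sustained run of elevated scores triggers quickly while isolated spikes are absorbed, which suits the low-false-positive alerting goal described above.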