
US-12627691-B2 - Automated detection of computing system and network activity anomalies using denoising diffusion probabilistic models

US 12627691 B2

Abstract

Systems and methods for detecting security anomalies in a computing environment. One example system includes a communication interface and an electronic processor configured to receive, via the communication interface, security data for the computing environment and parse the security data to extract a feature set. The electronic processor is configured to apply noise to the feature set to produce a noised feature set and to produce a reduced noise feature set by processing the noised feature set using a neural network trained to remove noise. The electronic processor is configured to compare the reduced noise feature set to the feature set to determine a success score, select a threshold based on the security data, and determine whether the success score exceeds the threshold. The electronic processor is configured to, responsive to determining that the success score does not exceed the threshold, generate a security event based on the security data.
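The detection flow summarized above (parse features, inject noise, denoise with a trained network, score the reconstruction, compare against a threshold) can be sketched as follows. This is a minimal illustration, not the patented implementation: the `denoiser` callable stands in for the trained denoising diffusion model, and the particular success-score formula and Gaussian noise level are assumptions chosen for clarity.

```python
import numpy as np

def success_score(features, denoised):
    """Normalized reconstruction similarity in (0, 1]; 1 means perfect recovery."""
    err = np.linalg.norm(denoised - features)
    scale = np.linalg.norm(features) + 1e-8
    return float(1.0 / (1.0 + err / scale))

def detect_anomaly(features, denoiser, threshold, noise_std=0.1, rng=None):
    """Return True (i.e., generate a security event) when the denoiser
    fails to reconstruct the feature set well enough to exceed the threshold."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Apply noise to the feature set to produce a noised feature set.
    noised = features + rng.normal(0.0, noise_std, size=features.shape)
    # Produce a reduced-noise feature set with the trained denoiser.
    denoised = denoiser(noised)
    # A success score at or below the threshold flags the data as anomalous.
    return success_score(features, denoised) <= threshold
```

A denoiser trained only on nominal activity reconstructs familiar inputs well (high success score) but struggles on inputs unlike its training distribution, which is what turns reconstruction quality into an anomaly signal.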

Inventors

  • Pranshu Bajpai
  • Eamon Bracht
  • Cyril Jazra

Assignees

  • MOTOROLA SOLUTIONS, INC.

Dates

Publication Date
2026-05-12
Application Date
2024-04-26

Claims (20)

  1. A system for detecting security anomalies in a computing environment, the system including: a communication interface; a memory; and an electronic processor communicatively coupled to the communication interface; wherein the electronic processor is configured to retrieve, from the memory, executable instructions that, when executed by the electronic processor, cause the electronic processor to: receive, via the communication interface, security data for the computing environment; parse the security data to extract a feature set representative of the security data; apply noise to the feature set to produce a noised feature set; produce a reduced noise feature set by processing the noised feature set using a neural network trained to remove noise; compare the reduced noise feature set to the feature set to determine a success score; select a threshold based on the security data; determine whether the success score exceeds the threshold; and responsive to determining that the success score does not exceed the threshold, generate a security event based on the security data.
  2. The system of claim 1, wherein the executable instructions further cause the electronic processor to: determine a category for the security data; extract the feature set representative of the security data based on the category; and select the threshold based on the category.
  3. The system of claim 2, wherein the executable instructions further cause the electronic processor to: receive, with a calibration engine, a validation report for the security event; and determine, with the calibration engine, an adjusted threshold for the category based on the validation report.
  4. The system of claim 3, wherein the executable instructions further cause the electronic processor to: determine, with the calibration engine, an adjusted feature set for the category based on the validation report.
  5. The system of claim 2, wherein the category is one selected from a group consisting of network activity, process activity, memory activity, and file activity.
  6. The system of claim 2, wherein the category is network activity and the feature set includes at least one selected from a group consisting of a source address, a destination address, an application type, a payload size, a payload type, a network layer, and a timestamp.
  7. The system of claim 2, wherein the category is process activity and the feature set includes at least one selected from a group consisting of a process name, a CPU usage level, a memory usage level, a network activity level, a process parentage, a file access pattern, and a timestamp.
  8. The system of claim 2, wherein the category is memory activity and the feature set includes at least one selected from a group consisting of an allocation, an access pattern, a process location, injected code, and a timestamp.
  9. The system of claim 2, wherein the category is file activity and the feature set includes at least one selected from a group consisting of a filename extension, a size change, an access pattern, a modification pattern, a location, metadata, an entropy change, and a timestamp.
  10. The system of claim 1, wherein the executable instructions further cause the electronic processor to: compare the reduced noise feature set to the feature set to determine a raw success score; and normalize the raw success score to determine the success score.
  11. The system of claim 1, wherein the executable instructions further cause the electronic processor to: responsive to determining that the success score does not exceed the threshold, analyze the feature set with an outlier detection algorithm to generate a plurality of anomaly scores, each of the anomaly scores associated with an individual feature of the feature set; and label the security data as anomalous based on the plurality of anomaly scores.
  12. The system of claim 1, wherein the neural network includes a denoising diffusion probabilistic model including a multilayer perceptron.
  13. The system of claim 12, wherein the electronic processor is configured to train the neural network to remove noise by, for each of a plurality of expected events representing a nominal operating state for the computing environment: (a) parsing the expected event to extract an expected feature set describing the expected event; (b) applying a noise value to the expected feature set to produce a noised expected feature set; (c) producing a reduced noise expected feature set by processing the noised expected feature set using the multilayer perceptron; and (d) comparing the reduced noise expected feature set to the expected feature set to determine a reconstruction loss.
  14. The system of claim 13, wherein the electronic processor is configured to, for each of the plurality of expected events: repeat steps (a)-(d) for a quantity of passes, wherein for each pass of the quantity of passes, the noise value is higher than the noise value of a previous pass.
  15. A method for detecting and scoring security anomalies in a computing environment, the method including: receiving security data for the computing environment; parsing, with an electronic processor, the security data to extract a feature set representative of the security data; applying, with the electronic processor, noise to the feature set to produce a noised feature set; producing a reduced noise feature set by processing, with the electronic processor, the noised feature set using a neural network trained to remove noise; comparing the reduced noise feature set to the feature set to determine a success score; selecting a threshold based on the security data; determining whether the success score exceeds the threshold; and responsive to determining that the success score does not exceed the threshold, generating a security event based on the security data.
  16. The method of claim 15, further comprising: determining a category for the security data; wherein extracting the feature set representative of the security data includes extracting the feature set representative of the security data based on the category; and selecting the threshold includes selecting the threshold based on the category.
  17. The method of claim 16, further comprising: receiving, with a calibration engine, a validation report for the security event; and determining, with the calibration engine, an adjusted threshold for the category based on the validation report.
  18. The method of claim 17, further comprising: determining, with the calibration engine, an adjusted feature set for the category based on the validation report.
  19. The method of claim 16, wherein the category is one selected from a group consisting of network activity, process activity, memory activity, and file activity.
  20. A method for detecting and scoring security anomalies in a computing environment, the method comprising: applying noise to a feature set for a network traffic event to produce a noised feature set; producing a reduced noise feature set by processing the noised feature set using a neural network trained to remove noise; comparing the reduced noise feature set to the feature set to determine a success score; determining whether the success score exceeds a threshold; and responsive to determining that the success score does not exceed the threshold, labeling the network traffic event as anomalous.
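The training procedure of claims 13 and 14, namely repeating steps (a) through (d) over expected events with the noise level rising on each pass, can be sketched as follows. This is an illustrative simplification, not the patent's implementation: a single linear map `W` stands in for the multilayer perceptron of claim 12, a real denoising diffusion model would also condition on the noise step, and the noise schedule, learning rate, and mean-squared reconstruction loss are assumptions chosen for the sketch.

```python
import numpy as np

def train_denoiser(expected_feature_sets, passes=5, base_std=0.1,
                   lr=0.02, epochs=2000, seed=0):
    """Sketch of claims 13-14: for each expected feature set, (b) apply noise
    that rises with each pass, (c) denoise, and (d) take a gradient step on
    the reconstruction loss. A linear map W stands in for the MLP denoiser."""
    rng = np.random.default_rng(seed)
    dim = expected_feature_sets[0].shape[0]
    W = np.eye(dim)  # start from the identity map (no denoising)
    n_pairs = len(expected_feature_sets) * passes
    for _ in range(epochs):
        grad = np.zeros((dim, dim))
        for x in expected_feature_sets:                    # (a) expected feature set
            for p in range(1, passes + 1):                 # noise rises each pass
                noised = x + rng.normal(0.0, base_std * p, size=dim)   # (b)
                recon = W @ noised                         # (c) denoise
                grad += 2.0 * np.outer(recon - x, noised)  # (d) MSE loss gradient
        W -= lr * grad / (dim * n_pairs)  # averaged gradient-descent step
    return W
```

Trained only on feature sets drawn from a nominal operating state, the denoiser learns to pull noised inputs back toward that nominal pattern. At detection time, inputs that the model cannot reconstruct (a low success score) are, by construction, unlike anything seen during training.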

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

The present application is a continuation-in-part of co-pending U.S. patent application Ser. No. 17/530,255, filed Nov. 18, 2021, titled "Automated Detection of Computing System and Network Activity Anomalies Using Denoising Diffusion Probabilistic Models."

BACKGROUND OF THE INVENTION

Public safety agencies and corporate enterprises increasingly rely on network and software systems infrastructure. Officers and other employees use, among other things, mobile electronic devices to remotely access mission-critical software applications and other services. Remote access capability is necessary for the agencies and enterprises to operate effectively. However, opening a network to remote access can expose the network to potential misuse by malicious parties. Such misuse can result in data breaches, communications breakdowns, reduced system performance, and other problems. Consequently, network and software systems infrastructure are secured using, for example, access control policies, encryption, firewalls, network segmentation, anti-virus software, and the like. In addition, intrusion detection systems (IDS) are vital for protecting public safety and corporate infrastructure.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

In the accompanying figures, similar or the same reference numerals may be repeated to indicate corresponding or analogous elements. These figures, together with the detailed description below, are incorporated in and form part of the specification and serve to further illustrate various embodiments of concepts that include the claimed invention, and to explain various principles and advantages of those embodiments.

FIG. 1 is a block diagram of a network security anomaly detection system, according to some examples.
FIG. 2 schematically illustrates an anomaly detector included in the system of FIG. 1, according to some examples.
FIG. 3 is a flowchart of a method for detecting security anomalies in a computing environment, according to some examples.
FIG. 4 illustrates aspects of the operation of a neural network executed by the system of FIG. 1, according to some examples.
FIG. 5 is a block diagram illustrating aspects of the operation of the system of FIG. 1, according to some examples.
FIG. 6 is a flowchart of a method for training a neural network executed by the system of FIG. 1, according to some examples.
FIG. 7 is a block diagram of an anomaly detection system for a computing environment, according to some examples.
FIG. 8 schematically illustrates an anomaly detector included in the system of FIG. 7, according to some examples.
FIG. 9 is a flowchart of a method for detecting anomalies in a computing environment, according to some examples.
FIG. 10 is a block diagram illustrating aspects of the operation of the system of FIG. 7, according to some examples.
FIG. 11 is a flowchart of a method for training a neural network executed by the system of FIG. 7, according to some examples.

Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve understanding of embodiments of the present disclosure. The system, apparatus, and method components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.

DETAILED DESCRIPTION OF THE INVENTION

Complex computing environments are vital tools used by public safety agencies and other public and private sector entities. For example, a police department may use software and other computing technology to, among other things, plan its patrol and other operations, provide communications to first responders and other personnel, receive incident reports from the public, dispatch and coordinate incident response, perform incident investigation, catalog evidence and other records, provide video security and access control systems, and evaluate agency effectiveness. Corporate entities similarly rely on network and computing infrastructure to perform or aid in the performance of many or all aspects of their business operations. Such computing environments, including cloud-based computing environments, interconnect a myriad of stationary and portable computing devices to provide software applications and other services to personnel operating remotely. For example, in a public safety agency, each employee may carry and operate at least one portable computing device, such as a smart phone, which authenticates to the computing environment to provide its user with access, applications, and services.