US-20260129050-A1 - HUMAN AND NON-HUMAN IDENTITY RISK EXPOSURE ANALYSIS

US 20260129050 A1

Abstract

Risk identification and management approaches are described. Real time risk identification and mitigation capabilities enable the provision and deprovision of access to various resources. This capability incorporates Just-In-Time (JIT) and Just-Enough-Access (JEA) elevations to adjust access permissions in real-time based on the risk landscape. In some embodiments, the risk landscape may include human and non-human users.

Inventors

  • Pushkar Saraf

Assignees

  • MICROSOFT TECHNOLOGY LICENSING, LLC

Dates

Publication Date
20260507
Application Date
20241107

Claims (20)

  1. A method, comprising: receiving, from a device of a first user, a request for elevated access to a resource; detecting an instance of an anomalous behavior caused by the first user; based at least in part on the anomalous behavior, assigning, by a processor, a risk score to the first user; providing, to a device of a second user, the request for elevated access to the resource, and a notification indicating the instance of the anomalous behavior; receiving an indication granting the request for elevated access to the resource; and adjusting an access level of the first user to the resource in accordance with the request.
  2. The method of claim 1, further comprising: determining the risk score of the first user satisfies a threshold; and providing, by the processor, to the device of the second user, the request for elevated access and an alert indicating at least the instance of anomalous behavior.
  3. The method of claim 1, wherein assigning the risk score to the first user further comprises: generating a risk likelihood score based on one or more historical records associated with the first user; generating a risk confidence score based at least in part on the risk likelihood score; and associating the risk likelihood score and the risk confidence score to the risk score.
  4. The method of claim 1, further comprising: after adjusting the access level of the first user to the resource: monitoring the behavior of the first user; detecting a second instance of an anomalous behavior caused by the first user; and based on at least the second instance, updating the risk score of the first user.
  5. The method of claim 4, further comprising: after updating the risk score of the first user, adjusting the access level of the first user.
  6. The method of claim 5, further comprising revoking access to the resource.
  7. The method of claim 5, further comprising: adjusting, by the processor, the access level of the first user to a basic level of access.
  8. The method of claim 1, further comprising: receiving an indication that the first user is accessing a second resource distinct from the first resource; detecting a second instance of anomalous behavior caused by the first user while accessing the second resource; updating the risk score of the first user based on the detected second instance; and providing, to the device of the second user, an indication that the first user's risk score has changed.
  9. The method of claim 8, further comprising: receiving, by the processor, an indication to revoke the first user's access to the first resource; and revoking the first user's access to the first resource.
  10. The method of claim 9, further comprising revoking the first user's access to the second resource.
  11. The method of claim 1, wherein the notification indicating the instance of the anomalous behavior includes a report indicating a context of at least the instance of the anomalous behavior caused by the first user, wherein the context includes a location of the device of the first user or an access log of the device of the first user.
  12. The method of claim 1, further comprising: after adjusting the access level of the first user: providing, to the device of the first user, an indication that the request for elevated access has been approved.
  13. A system, comprising: one or more processors; and memory storing one or more programs that are configured to be executed by the one or more processors, the one or more programs including instructions for: receiving, by a processor, from a first user, a request for elevated access to a resource; detecting, by the processor, an instance of an anomalous behavior caused by the first user; based at least in part on the anomalous behavior, assigning, by the processor, a risk score to the first user; in accordance with a determination that the risk score of the first user satisfies a threshold: providing, by the processor, to a second user, the request for elevated access to the resource, and a notification indicating the instance of the anomalous behavior; receiving, by the processor, from the second user, an indication granting the request for elevated access to the resource; adjusting, by the processor, the access level of the first user in accordance with the request; and providing, to the device of the first user, an indication that the request for elevated access to the resource has been granted.
  14. The system of claim 13, the one or more programs further including instructions for: determining the risk score of the first user satisfies a threshold; and providing, by the processor, to the device of the second user, the request for elevated access and an alert indicating at least the instance of anomalous behavior.
  15. The system of claim 14, the one or more programs further including instructions for: after adjusting the access level of the first user to the resource: monitoring the behavior of the first user; detecting a second instance of an anomalous behavior caused by the first user; and based on at least the second instance, updating the risk score of the first user.
  16. The system of claim 14, the one or more programs further including instructions for: receiving an indication that the first user is accessing a second resource distinct from the first resource; detecting a second instance of anomalous behavior caused by the first user while accessing the second resource; updating the risk score of the first user based on the detected second instance; and providing, to the device of the second user, an indication that the first user's risk score has changed.
  17. The system of claim 16, the one or more programs further including instructions for revoking the first user's access to the second resource.
  18. A non-transitory computer-readable storage medium storing executable instructions that, when executed by an electronic device, cause the electronic device to: receive, from a first user, a request for elevated access to a resource; detect an instance of an anomalous behavior caused by the first user; based at least in part on the anomalous behavior, assign a risk score to the first user; in accordance with a determination that the risk score of the first user satisfies a threshold: provide, to a second user, the request for elevated access to the resource, and a notification indicating the instance of the anomalous behavior; receive, from the second user, an indication granting the request for elevated access to the resource; adjust the access level of the first user in accordance with the request; and monitor the access of the first user for additional instances of anomalous behavior.
  19. The computer-readable storage medium of claim 18, wherein the instructions to assign the risk score to the first user further comprise: generating a risk likelihood score based on one or more historical records associated with the first user; generating a risk confidence score based at least in part on the risk likelihood score; and associating the risk likelihood score and the risk confidence score to the risk score.
  20. The computer-readable storage medium of claim 18, the instructions further comprising generating a notification indicating the instance of the anomalous behavior, the notification including a report indicating a context of at least the instance of the anomalous behavior caused by the first user, wherein the context includes a location of the device of the first user or an access log of the device of the first user.

Description

BACKGROUND

1. Field

This disclosure relates generally to distributed computing systems and identity risk management across organizations.

2. Background

Distributed computing systems employ a zero-trust model, presuming that threats materialize from both internal and external sources. To combat such threats, the zero-trust model employs strict access controls and continuous verification. Multi-factor authentication and identity-based authentication policies are used to manage access across distributed components to ensure that only verified users and systems can interact with sensitive resources. AI and machine learning are used to identify patterns and preempt attacks in complex networks.

SUMMARY

A risk-scoring model that evaluates both human and non-human (e.g., computing devices, servers) identities is proposed herein. The disclosure provides a proactive framework specifically designed to preemptively identify and mitigate threats, working in conjunction with authentication technologies such as secure token issuance (STI) and enterprise security token service (ESTS) to implement a risk-based access control system. This control system can be easily deployed throughout an entire enterprise, across multiple organizations and various teams. Additionally, by embedding artificial intelligence and machine learning models directly into the process, the solution follows zero-trust principles with the goal of achieving least-privileged access (LPA).

Distributed computing systems include various discrete hardware and software components that operate in conjunction to provide a variety of functionalities to clients of the distributed computing systems. Users may access various aspects within a distributed computing system according to the user's access levels and/or credentials. Patterns and behaviors of users and entities are analyzed to identify abnormal and potentially dangerous behavior.
Such pattern profiles are learned over time and across various peer groupings, such as similarly situated teams within an organization. By analyzing behavior patterns and identifying risks, the identity risk exposure analysis system described herein can provide legitimate and authenticated identities with granted or elevated access when necessary. Any unusual activity can be quickly detected and addressed appropriately.

Anomalous behavior refers to unusual or unexpected activities undertaken by a user (either human or non-human) that deviate from established norms or baseline behavior within a network or system. Identification of these types of unusual activities leads to detection of potential security threats because such anomalous behaviors may signal suspicious or malicious activity. The methods and systems described herein monitor network activity to identify suspicious anomalous behavior that diverges from expected patterns of usage and may potentially signal unauthorized access, malicious activity, or security vulnerabilities. Constant, real-time monitoring may be achieved by implementing machine learning and behavioral analytics that establish a baseline from historical and real-time data and flag deviations (anomalous behaviors or activities) for investigation.

Examples of anomalous behaviors may include, but are not limited to: multiple concurrent elevation requests (e.g., sending several requests for elevated access within the same minute), submission of elevation requests outside of a user's typical working schedule (e.g., at 4 am), submission of elevation requests to access resources that are outside of the scope of the responsibilities associated with the user (e.g., an engineer requesting access to HR payroll tools), installation or execution of a new or unusual process that is not part of typical operations, large data transfers, and high network traffic from a single IP address.
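Two of the example anomalies above (off-hours elevation requests and bursts of requests within the same minute) lend themselves to simple baseline checks. The sketch below is illustrative only; the working hours, burst window, and thresholds are assumptions, not values from the patent.

```python
from datetime import datetime

# Assumed baseline parameters for illustration.
WORK_START, WORK_END = 8, 18      # assumed working hours (local time)
BURST_WINDOW_SECONDS = 60         # "same minute" window
BURST_THRESHOLD = 3               # requests within the window that count as a burst

def is_off_hours(ts: datetime) -> bool:
    """Flag a request submitted outside the assumed working schedule."""
    return not (WORK_START <= ts.hour < WORK_END)

def find_bursts(timestamps: list[datetime]) -> bool:
    """Return True if BURST_THRESHOLD requests fall within one window."""
    times = sorted(timestamps)
    for i in range(len(times) - BURST_THRESHOLD + 1):
        span = (times[i + BURST_THRESHOLD - 1] - times[i]).total_seconds()
        if span <= BURST_WINDOW_SECONDS:
            return True
    return False
```

A production system would derive the baseline per user or per peer group from historical records rather than from fixed constants, as the description suggests.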
Other anomalous behaviors are expected to occur yet are too numerous to list. The scope of this disclosure is understood to encompass all types of unwanted, unexpected, and undesirable behaviors that may impact the security of the computing system.

The following is a non-exhaustive listing of some aspects of the present techniques. These and other aspects are described in the following disclosure.

Some aspects include application of a human and non-human identity risk exposure analysis. The analysis may provide recommendations for adjusting the various access levels enjoyed by humans and non-humans to resources within an organization. The analysis may also be programmed to take autonomous and/or preventative measures without human intervention.

A method for providing human and non-human identity risk exposure analysis may include receiving, by a processor, from a device of a first user, a request for elevated access to a resource; detecting, by the processor, an instance of an anomalous behavior caused by the first user; based at least in part on the anomalous behavior, assigning, by the processor, a risk score to the first user; providing, by the pr
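The two-part risk score recited in claims 3 and 19 (a likelihood score derived from historical records, plus a confidence score derived at least in part from the likelihood) might be sketched as follows. The specific formulas and weights here are assumptions for illustration; the patent does not prescribe them.

```python
def risk_likelihood(historical_anomalies: int, historical_events: int) -> float:
    """Fraction of a user's historical events that were anomalous."""
    if historical_events == 0:
        return 0.0
    return historical_anomalies / historical_events

def risk_confidence(likelihood: float, historical_events: int) -> float:
    """Assumed heuristic: more history means more confidence, scaled
    toward 1.0, and weighted partly by the likelihood itself."""
    return min(1.0, historical_events / 100) * (0.5 + 0.5 * likelihood)

def risk_score(historical_anomalies: int, historical_events: int) -> dict:
    """Associate both components with the user's risk score, per claim 3."""
    lik = risk_likelihood(historical_anomalies, historical_events)
    conf = risk_confidence(lik, historical_events)
    return {"likelihood": lik, "confidence": conf}
```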