
EP-4738161-A1 - COMBATING SOCIAL ENGINEERING AI SURVEILLANCE MEANS TRAINED ON THREAT KNOWLEDGE AND ATTACK PATTERNS

EP 4738161 A1

Abstract

A method and system for ensuring trusted communication and protection from social engineering attacks on a user equipment (UE) in a communication system, the method comprising:

  • implementing an AI-powered monitoring module on the user equipment or in a cloud environment, the AI-powered monitoring module being trained on social engineering patterns and/or fraud patterns;
  • providing communication content related to the user's interaction with the UE to the AI-powered monitoring module, wherein the communication content is content from the user and/or content of a third party, wherein the UE is configured to capture the content of the third party;
  • analyzing, by the AI-powered monitoring module, the communication content and evaluating a threat-score of the communication content;
  • warning the user, in particular in real time, if the threat-score is higher than a security-threshold.
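The threshold check at the heart of the abstract (analyze content, evaluate a threat-score, warn when the score exceeds the security-threshold) can be sketched as follows. This is an illustrative toy sketch only: the identifiers (`score_content`, `monitor`, `SECURITY_THRESHOLD`) and the keyword-weight scoring are hypothetical stand-ins for the trained AI model and are not taken from the patent.

```python
# Toy stand-in for the AI-powered monitoring module described in the abstract.
# All names and weights are illustrative assumptions, not from the patent.

SECURITY_THRESHOLD = 0.7  # the "security-threshold" of the abstract

# Hypothetical pattern weights standing in for a trained model's output.
SUSPICIOUS_PATTERNS = {
    "verify your account": 0.4,
    "urgent payment": 0.5,
    "one-time password": 0.6,
}

def score_content(communication_content: str) -> float:
    """Evaluate a threat-score in [0, 1] for the given communication content."""
    text = communication_content.lower()
    score = sum(w for pat, w in SUSPICIOUS_PATTERNS.items() if pat in text)
    return min(score, 1.0)

def monitor(communication_content: str) -> bool:
    """Return True (i.e. warn the user) if the threat-score exceeds the threshold."""
    return score_content(communication_content) > SECURITY_THRESHOLD

# monitor("Urgent payment required - verify your account now")  -> True
# monitor("see you at lunch tomorrow")                           -> False
```

In a real deployment the keyword table would be replaced by the trained model of claim 1, and the warning path would drive the notification system of claim 14 rather than a boolean return value.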

Inventors

  • Jepsen, Kathrin
  • Haberkorn, Günter

Assignees

  • Deutsche Telekom AG

Dates

Publication Date
2026-05-06
Application Date
2024-10-30

Claims (15)

  1. A method for ensuring trusted communication and protection from social engineering attacks on a user equipment (UE) in a communication system, the method comprising:
     • Step (305): implementing an AI-powered monitoring module (130) on the user equipment (110) or in a cloud environment (125), the AI-powered monitoring module being trained on social engineering patterns and/or fraud patterns;
     • Step (310): providing communication content related to the user's interaction with the UE to the AI-powered monitoring module, wherein the communication content is content from the user and/or content of a third party, wherein the UE is configured to capture the content of the third party and/or the user;
     • Step (315): analyzing, by the AI-powered monitoring module, the communication content and evaluating a threat-score of the communication content;
     • Step (320): warning the user, in particular in real time, if the threat-score is higher than a security-threshold.
  2. The method of claim 1, wherein the AI-powered monitoring module is trained on threat intelligence datasets that include:
     • social engineering attack patterns,
     • phishing techniques,
     • fraud detection patterns,
     • deepfake audio, video, and chat detection, and/or
     • indicators of suspicious behavior in communication channels such as email, messaging, and voice calls.
  3. The method of any of the claims 1 to 2, wherein the communication content includes content from:
     • emails,
     • SMS,
     • iMessages,
     • phone calls,
     • video conferences,
     • web browsing sessions, and/or
     • interactions with chat bots.
  4. The method of any of the claims 1 to 3, wherein the AI-powered monitoring module is further configured to interact with security tools on the UE, such as antivirus software, firewalls, or anti-phishing systems, to gather additional indicators for evaluating the threat-score of the communication content.
  5. The method of any of the claims 1 to 4, further comprising adjusting the security-threshold dynamically based on the user's behavior patterns, context of communication, or the evolving threat landscape, thereby refining the accuracy of the warnings provided to the user.
  6. The method of any of the claims 1 to 5, wherein the warning provided to the user includes recommended actions such as:
     • blocking a suspicious interaction,
     • verifying the identity of the third party, and/or
     • updating security settings to mitigate future threats.
  7. The method of any of the claims 1 to 6, further comprising implementing an encrypted communication channel between the AI-powered monitoring module and the cloud environment, ensuring that sensitive user data remains protected during the analysis and threat evaluation process.
  8. The method of any of the claims 1 to 7, wherein the AI-powered monitoring module continuously refines its threat detection algorithms by incorporating feedback from user interactions, such as accepting or rejecting warnings, into its machine learning model.
  9. The method of any of the claims 1 to 8, wherein the AI-powered monitoring module is configured to provide alerts via different communication channels based on the severity of the threat detected and the user's preferences.
  10. The method of any of the claims 1 to 9, further comprising:
     • implementing an AI-powered filtering module on the user equipment, the AI-powered filtering module being trained to detect user interactions related to high-risk activities, wherein in particular the AI-powered filtering module can be part of an AI Assistant (150) of the UE (110);
     • providing only communication content related to critical interactions to the AI-powered monitoring module, in particular to the AI-powered monitoring module implemented in the cloud environment, when such high-risk interactions are detected by the AI-powered filtering module.
  11. The method of claim 10, wherein the AI-powered filtering module provides the communication content to the AI-powered monitoring module implemented in the cloud environment, only when the AI-powered filtering module detects user interactions related to high-risk activities, thereby minimizing network and device resource usage during low-risk interactions.
  12. The method of claim 11, wherein the AI-powered filtering module is trained on user interaction patterns that comprise, but are not limited to:
     • payment transactions,
     • multi-factor authentication processes, and/or
     • entry of sensitive personal data.
  13. The method of claim 11, wherein the AI-powered filtering module optimizes resource usage by maintaining the AI-powered monitoring module in an inactive state during non-critical interactions, and only activating it when high-risk user interactions are detected, with the monitoring module executing in a cloud environment to reduce on-device processing.
  14. A communication system comprising:
     • a communication network;
     • a user equipment (UE) configured to interact with the communication network and to communicate with a third party, wherein the UE and/or the communication network are configured to capture communication content originating from the user and/or the third party, wherein the captured communication content is provided to an AI-powered monitoring module for threat evaluation;
     • a communication interface between the UE and a cloud environment for transmitting communication content related to user interactions to the AI-powered monitoring module;
     • an AI-powered monitoring module implemented on the UE or in the cloud environment on a server of the network provider, the AI-powered monitoring module being configured to analyze the communication content and evaluate a threat-score based on patterns associated with social engineering attacks and/or fraud;
     • a notification system configured to provide real-time warnings to the user through the UE and/or another device associated with the user if the threat-score of the analyzed communication content exceeds a predefined security threshold.
  15. The communication system of claim 14, further comprising an AI-powered filtering module implemented on the UE, wherein in particular the AI-powered filtering module can be part of an AI Assistant (150) of the UE (110), the AI-powered filtering module being configured to detect high-risk user interactions related to activities including, but not limited to, financial transactions and sensitive data exchanges, and to selectively trigger the AI-powered monitoring module based on the detection of such high-risk interactions.
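The filtering arrangement of claims 10 to 13 and 15, where an on-device filter flags high-risk interactions and only then activates the cloud-side monitoring module, can be sketched as follows. This is a hypothetical sketch under stated assumptions: the class names, the activity labels, and the keyword-based scoring are all illustrative and do not appear in the patent.

```python
# Illustrative sketch of claims 10-13: an on-device AI-powered filtering
# module gates access to the cloud-side AI-powered monitoring module, so
# low-risk interactions never leave the device. All identifiers are
# hypothetical assumptions, not taken from the patent.

# High-risk activity labels standing in for the trained filtering module
# of claim 12 (payment transactions, MFA, sensitive personal data).
HIGH_RISK_ACTIVITIES = {"payment_transaction", "mfa_process", "sensitive_data_entry"}

class CloudMonitor:
    """Stand-in for the AI-powered monitoring module in the cloud environment."""

    def __init__(self) -> None:
        self.calls = 0  # counts how often content actually reached the cloud

    def threat_score(self, content: str) -> float:
        """Toy scoring in place of the trained model of claim 1."""
        self.calls += 1
        return 0.9 if "wire the money" in content.lower() else 0.1

def filtered_monitor(activity: str, content: str, monitor: CloudMonitor,
                     threshold: float = 0.7) -> bool:
    """Forward content for threat evaluation only for high-risk interactions."""
    if activity not in HIGH_RISK_ACTIVITIES:
        # Monitoring module stays inactive for non-critical interactions
        # (claim 13); no content is transmitted to the cloud.
        return False
    return monitor.threat_score(content) > threshold
```

The design choice sketched here mirrors claim 11: because `threat_score` is only invoked for activities in the high-risk set, network and device resource usage is minimized during low-risk interactions, which can be observed via the `calls` counter.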

Description

The invention relates to the field of cybersecurity, specifically to methods and systems for protecting a user communicating with a user equipment (UE) from social engineering attacks using AI-powered monitoring. The system is designed to identify and mitigate threats by analyzing communication patterns across various devices and platforms, such as mobile devices, smart home systems, and IoT networks. The invention addresses the challenges posed by evolving social engineering techniques, including phishing, deepfakes, and other fraudulent activities.

In today's world of internet-based communication and mobile connectivity, the threat landscape has become increasingly complex. Advanced security measures are implemented across IT, networking infrastructure, and endpoint devices to address cyber threats, but attackers continue to exploit the human element as a weak link, particularly through social engineering tactics.

Social engineering attacks, such as phishing and caller ID spoofing, manipulate users into divulging sensitive information or performing actions that compromise system security. Attackers often employ techniques such as fake corporate logos, spoofed email signatures, and phishing websites to build trust and increase the success rate of their attacks. Recently, the use of deepfake technologies to imitate legitimate communications has further complicated the situation, leaving users more vulnerable.

Although legislative efforts, such as the German Telecommunications Act (TKG) and the EU's PSD3 regulation, have introduced minimum standards for combating these "trust elements" in fraudulent attacks, they are not sufficient to address the growing sophistication of cybercrime. Moreover, efforts to raise cybersecurity awareness among users have largely failed due to the diverse range of customer mindsets and their differing levels of technical literacy. A significant challenge remains in the enablement and empowerment of end-users.
To truly reduce the effectiveness of social engineering, users need tools that not only inform them about threats but also provide real-time guidance on how to react. Current systems are inadequate in delivering personalized support and enabling users to act quickly and effectively when under attack.

Enablement and Empowerment of Users in the Context of Cybercrime

In the effort to enable and empower users to defend against cybercrime, several key challenges arise:

  1. Which communication channel is most effective for reaching customers? In today's digital world, which channel is closest to a particular customer's daily interactions?
  2. How can attention be drawn to cybersecurity risks when customers perceive no immediate threat and see no need for action? How can support be provided in the moment of a potential threat?
  3. How can information be tailored to specific target groups? Given the complexity of cybersecurity, how can support be delivered in a way that is understandable to users? How can communication be made specific and targeted, considering the various entry points (devices, configurations, accounts)?
  4. How can the transfer to actionable steps be achieved? For instance, how can users be guided to configure security settings or verify digital content in a way that is comprehensible to them and enables sustainable learning and knowledge transfer?
  5. How can enablement and empowerment be made accessible to all customers, considering the need for digital inclusion? This includes ensuring accessibility and the preparation of information in simple, user-selectable language.
  6. How can users be supported contextually, given the constantly changing tactics of attackers, ensuring that assistance is provided precisely when needed?

International observations and expert opinions suggest that the threat landscape and the use of such methods will continue to expand.
Current industry reports indicate that, for the time being, companies and their customers benefit from the fact that large-scale attacks have not yet occurred. In other words, businesses and their customers learn from each attack, whether successful or not, and use that knowledge to improve defenses. However, the future risk lies not only in the volume of attacks, but also in the simultaneity of multiple attacks, a scenario enabled by the increasing use of artificial intelligence (AI).

Another challenge is the lack of contextual support. When users encounter potential threats, existing solutions do not provide timely, context-sensitive assistance to help users make informed decisions, particularly when multiple entry points, such as email, messaging apps, and web browsers, are involved. Moreover, users often struggle with complex security configurations, making it difficult to understand or act on warnings from security systems.

It is the object of the invention to at least partially overcome the drawbacks of the current state of the art, particularly the limitations of existing cybersecurity measures in addressing social