US-12626261-B2 - System and method for automated scam detection

US12626261B2

Abstract

A system for verifying an identity of a counterparty includes one or more processors and a memory storing instructions that, when executed, cause the system to receive identifying information from a user, transmit a consent prompt to the counterparty via an associated communication channel, receive consent and additional data including a full name and approximate location, determine whether the name is historically associated with the identifying information, analyze whether the identifying information corresponds to a temporary account based on service provider characteristics, correlate a device fingerprint of the counterparty's device with historical records linked to the identifying information, generate a scam risk score based on results of the determining and analyzing operations, and present the scam risk score and one or more risk-based recommendations to the user through a privacy-preserving interface.

Inventors

  • Alphonse Pascual
  • John G. Evans, Jr.

Assignees

  • Scamnetic Inc.

Dates

Publication Date
2026-05-12
Application Date
2025-05-07

Claims (18)

  1. A system for verifying an identity of a counterparty, the system comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, cause the system to perform operations comprising: receiving identifying information for the counterparty provided by a user, the identifying information comprising at least one of an email address or a phone number; transmitting a consent prompt to the counterparty via a communication channel associated with the identifying information, wherein the consent prompt comprises a prompt for the counterparty to consent to an identity validation process initiated by the user; receiving, from the counterparty, consent to participate in an identity verification process and additional identity data comprising a full name and an approximate geographic location; determining whether the full name is historically associated with the identifying information; determining whether the identifying information is associated with a temporary account by identifying a service provider associated with the identifying information and querying a status of the account; analyzing a device fingerprint of a device used by the counterparty and correlating the fingerprint with historical records linked to the identifying information; generating a scam risk score based on results of the determining and analyzing operations; and presenting the scam risk score and one or more recommendations to the user based on the scam risk score; wherein the scam risk score is calculated by aggregating weighted values assigned to results of individual verification operations.
  2. The system of claim 1, wherein the memory further stores instructions that, when executed, cause the system to perform operations comprising: in response to an inconclusive result from the determining or analyzing operations, requesting a secondary communication identifier from the counterparty; and repeating the determining and analyzing operations with respect to the secondary communication identifier.
  3. The system of claim 2, wherein the secondary communication identifier comprises a phone number and the initially submitted identifying information comprises an email address.
  4. The system of claim 2, wherein the secondary communication identifier comprises an email address and the initially submitted identifying information comprises a phone number.
  5. The system of claim 2, wherein the scam risk score is updated based on results of the determining and analyzing operations performed on the secondary communication identifier.
  6. The system of claim 2, wherein the memory further stores instructions that, when executed, cause the system to perform operations comprising: in response to an inconclusive result after analysis of both the primary and secondary communication identifiers, requesting a government-issued identity document and a live facial image of the counterparty.
  7. The system of claim 6, wherein the memory further stores instructions for: comparing the live facial image to an image extracted from the government-issued identity document; detecting anomalies in the live facial image indicative of manipulation; and evaluating formatting characteristics of the government-issued identity document for conformity with document standards of an issuing authority.
  8. The system of claim 7, wherein the scam risk score is further based on results of the comparing, detecting, and evaluating operations performed on the live facial image and the identity document.
  9. The system of claim 1, wherein the device fingerprint comprises one or more of: a hardware identifier, an operating system attribute, and a geolocation indicator.
  10. A method for verifying an identity of a counterparty, the method comprising: receiving, by a system comprising one or more processors and a memory, identifying information for the counterparty provided by a user, the identifying information comprising at least one of an email address or a phone number; transmitting a consent prompt to the counterparty via a communication channel associated with the identifying information, wherein the consent prompt comprises a prompt for the counterparty to consent to an identity validation process initiated by the user; receiving, from the counterparty, consent to participate in an identity verification process and additional identity data comprising a full name and an approximate geographic location; determining whether the full name is historically associated with the identifying information; determining whether the identifying information is associated with a temporary account by identifying a service provider associated with the identifying information and querying a status of the account; analyzing a device fingerprint of a device used by the counterparty and correlating the fingerprint with historical records linked to the identifying information; generating a scam risk score based on results of the determining and analyzing operations; and presenting the scam risk score and one or more recommendations to the user based on the scam risk score; wherein the scam risk score is calculated by aggregating weighted values assigned to results of individual verification operations.
  11. The method of claim 10, further comprising: in response to an inconclusive result from the determining or analyzing operations, requesting a secondary communication identifier from the counterparty; and repeating the determining and analyzing operations with respect to the secondary communication identifier.
  12. The method of claim 11, wherein the secondary communication identifier comprises a phone number and the initially submitted identifying information comprises an email address.
  13. The method of claim 11, wherein the secondary communication identifier comprises an email address and the initially submitted identifying information comprises a phone number.
  14. The method of claim 11, further comprising updating the scam risk score based on results of the determining and analyzing operations performed on the secondary communication identifier.
  15. The method of claim 11, further comprising: in response to an inconclusive result after analysis of both the primary and secondary communication identifiers, requesting a government-issued identity document and a live facial image of the counterparty.
  16. The method of claim 15, further comprising: comparing the live facial image to an image extracted from the government-issued identity document; detecting anomalies in the live facial image indicative of manipulation; and evaluating formatting characteristics of the government-issued identity document for conformity with document standards of an issuing authority.
  17. The method of claim 16, further comprising updating the scam risk score based on results of the comparing, detecting, and evaluating operations performed on the live facial image and the identity document.
  18. The method of claim 10, wherein the device fingerprint comprises one or more of: a hardware identifier, an operating system attribute, and a geolocation indicator.
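
Claims 1 and 10 describe calculating the scam risk score by aggregating weighted values assigned to the results of individual verification operations, then presenting risk-based recommendations. A minimal sketch of that aggregation step follows; the check names, weights, thresholds, and 0-100 scale are illustrative assumptions, not values specified in the patent.

```python
# Hypothetical sketch of the weighted-aggregation step in claims 1 and 10.
# Each verification operation is assumed to yield a result in [0.0, 1.0],
# where higher values indicate greater risk.
CHECK_WEIGHTS = {
    "name_history_match": 0.30,   # full name not historically tied to the identifier
    "temporary_account": 0.40,    # identifier belongs to a disposable account
    "device_fingerprint": 0.30,   # fingerprint inconsistent with historical records
}

def scam_risk_score(results: dict[str, float]) -> float:
    """Aggregate weighted per-check results into a single 0-100 risk score."""
    total = sum(
        CHECK_WEIGHTS[name] * risk
        for name, risk in results.items()
        if name in CHECK_WEIGHTS
    )
    return round(100 * total, 1)

def recommendations(score: float) -> list[str]:
    """Map the score to risk-based recommendations presented to the user."""
    if score >= 70:
        return ["Do not share personal information", "Cease contact"]
    if score >= 40:
        return ["Request a secondary identifier and re-verify"]
    return ["No elevated risk detected"]

score = scam_risk_score({
    "name_history_match": 0.2,   # name weakly associated with the email
    "temporary_account": 1.0,    # disposable-email provider detected
    "device_fingerprint": 0.5,   # partial mismatch with historical records
})
print(score, recommendations(score))  # → 61.0 ['Request a secondary identifier and re-verify']
```

Because inconclusive checks trigger escalation in the dependent claims (secondary identifier, then document and facial checks), an implementation would typically recompute this score after each additional verification operation.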

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation-in-part of U.S. patent application Ser. No. 18/903,138, filed Oct. 1, 2024, the entire contents of which are hereby incorporated by reference herein.

TECHNICAL FIELD

Aspects of the present disclosure generally relate to scam detection, and more particularly, to automated processes for determining the legitimacy of communications and identities.

BACKGROUND

The growing sophistication of scam tactics has rendered existing anti-fraud tools inadequate for effective scam detection and prevention. Traditional anti-fraud solutions are designed primarily to prevent fraudulent activities in which a criminal impersonates a consumer in order to open a new account or gain unauthorized access to an existing account. In these scenarios, conventional fraud detection tools look for indicators such as mismatched credentials, failed authentication, unfamiliar devices, and other anomalies. Scams, however, have evolved to operate differently. In many situations, a legitimate customer is the one initiating the transaction, rendering the typical red flags of fraud detection obsolete. In a scam scenario, for example, the customer uses the correct username and password, successfully passes two-factor authentication, and initiates the transaction from their own known device. These factors make it nearly impossible for traditional anti-fraud tools to detect that a scam is taking place, because the underlying assumption of such tools is that a bad actor is impersonating the customer. While consumers may in theory rely on training, experience, and intuition to recognize scams, the rapidly advancing technology available to scammers puts even the most diligent consumers at a significant disadvantage.
Scammers now utilize tools such as artificial intelligence to craft highly convincing messages, as well as deepfake technology that allows them to impersonate trusted individuals in real time. This creates a scenario in which even the most vigilant consumer may be unable to distinguish between a legitimate communication and a scam, increasing their vulnerability.

Additionally, conventional scam detection technologies typically focus on assessing whether attachments are potentially malicious rather than examining their actual content. These systems commonly measure entropy, look for known malware signatures, or detect other indicators of harmful behavior within a file. However, they do not open or read the attachments to analyze the content embedded within them, leaving a significant gap in detecting scams that may be concealed in the text or images of documents such as PDFs, Word files, or image files.

This situation has led to an urgent need for technological solutions capable of addressing the evolving threat of scams. Current approaches are ill-equipped to combat the level of sophistication employed by modern scammers. Consumers need a solution that aids in discerning whether an inbound communication is fraudulent or legitimate and that assists them in identifying when the person they are interacting with is not who they claim to be. Such a solution would empower consumers to avoid inadvertently sharing sensitive information or transferring money to a scammer, addressing a gap in existing fraud prevention tools.

SUMMARY

Techniques described herein are directed to automated systems and processes for determining the legitimacy of communications and identities.
In one embodiment, a system for scam detection and prevention is disclosed, the system including: one or more processors; and a memory storing instructions that, when executed by the one or more processors, cause the system to perform operations comprising: receiving a communication comprising communication information; parsing the received communication information to extract attributes of the communication information; performing a series of deterministic checks on the attributes; performing a series of probabilistic analyses on the attributes, wherein the probabilistic analyses comprise using machine learning models trained on known legitimate communications and known fraudulent communications; aggregating the results of the deterministic checks and probabilistic analyses to generate a scam risk score; generating recommendations based on the generated scam risk score, deterministic checks, and probabilistic analyses; and presenting the scam risk score and the recommendations to a user.

In one embodiment, the deterministic checks comprise comparing links in the communication against a database of known phishing and known malware sites. In one embodiment, the deterministic checks comprise verifying whether the communication conforms to known communication policies published by a purported sender. In one embodiment, the probabilistic analyses comprise utilizing Natural Language Processing (NLP) to categorize the communication by type. In one emb
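
The first deterministic check recited above, comparing links in a communication against a database of known phishing and malware sites, can be sketched as follows. The blocklist contents, URL-matching pattern, and helper names are hypothetical placeholders, not part of the disclosed system.

```python
# Illustrative sketch of a deterministic link check: extract URLs from the
# communication body and compare their domains against a blocklist.
import re
from urllib.parse import urlparse

# Placeholder blocklist; a real system would query a maintained database of
# known phishing and malware sites.
KNOWN_BAD_DOMAINS = {"phish-example.test", "malware-example.test"}

URL_PATTERN = re.compile(r"https?://\S+")

def extract_links(body: str) -> list[str]:
    """Pull candidate URLs out of the communication body."""
    return URL_PATTERN.findall(body)

def deterministic_link_check(body: str) -> bool:
    """Return True if any link resolves to a known phishing or malware domain."""
    for link in extract_links(body):
        domain = urlparse(link).hostname or ""
        if domain.lower() in KNOWN_BAD_DOMAINS:
            return True
    return False

msg = "Your account is locked. Verify at https://phish-example.test/login now."
print(deterministic_link_check(msg))  # → True (blocklisted domain detected)
```

In the disclosed architecture, a boolean result like this would feed into the aggregation step alongside the probabilistic (machine-learning) analyses to produce the scam risk score.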