US-20260127614-A1 - SYSTEM AND METHOD FOR AUTOMATED SCAM DETECTION

Abstract

A system for third party initiated identity verification including: a communication interface configured to receive a transaction request associated with a sender and a counterparty; one or more processors; and a memory storing instructions that when executed by the one or more processors cause the system to: identify counterparty information associated with the transaction request; initiate a verification process with a scam monitoring system using the counterparty information; transmit a verification request to the counterparty; receive counterparty payment purpose information; perform identity verification checks on the counterparty; generate a verification result based on the identity verification checks and the counterparty payment purpose information; and provide the verification result to a third party for processing of the transaction.

Inventors

  • Alphonse Pascual
  • John G. Evans

Assignees

  • Scamnetic Inc.

Dates

Publication Date
2026-05-07
Application Date
2025-12-19

Claims (20)

  1. A system for third party initiated identity verification comprising: a communication interface configured to receive a transaction request associated with a sender and a counterparty; one or more processors; and a memory storing instructions that when executed by the one or more processors cause the system to: identify counterparty information associated with the transaction request; initiate a verification process with a scam monitoring system using the counterparty information; transmit a verification request to the counterparty; receive counterparty payment purpose information; perform identity verification checks on the counterparty; generate a verification result based on the identity verification checks and the counterparty payment purpose information; and provide the verification result to a third party for processing of the transaction.
  2. The system of claim 1, wherein the instructions further cause the system to obtain approval from the sender prior to transmitting the verification request to the counterparty.
  3. The system of claim 1, wherein the identity verification checks comprise evaluating one or more attributes comprising device signals, email age, phone number type, location consistency, velocity of prior verification attempts, association with malicious activity, and metadata consistency.
  4. The system of claim 1, wherein the verification result comprises one or more classifications comprising incomplete challenge, defect condition, hard fail condition, pass condition, and error condition.
  5. The system of claim 1, wherein the transaction request comprises an inbound check deposit and the counterparty comprises a purported maker of the check.
  6. The system of claim 1, wherein the scam monitoring system is configured to evaluate whether the counterparty is associated with mule activity based on one or more indicators comprising forwarding behavior, inconsistent responses, unverified identifiers, disposable email domains, VoIP phone lines, SIM swapped devices, and prior mule history.
  7. The system of claim 1, further comprising instructions that when executed cause the system to: compare sender supplied transaction information with counterparty supplied transaction information; and determine a narrative alignment score.
  8. The system of claim 7, wherein the narrative alignment score is based on normalization of sender responses and counterparty responses into predefined transaction categories.
  9. The system of claim 1, wherein the system is further configured to: evaluate the sender for potential account compromise based on one or more of device behavior, communication patterns, and identity attributes.
  10. The system of claim 1, wherein the system is configured to: generate a mule risk assessment based on one or more of the verification result, the narrative alignment score, the counterparty evaluation, and the sender evaluation.
  11. A method for third party initiated identity verification comprising: receiving a transaction request associated with a sender and a counterparty; identifying counterparty information associated with the transaction request; initiating a verification process with a scam monitoring system using the counterparty information; transmitting a verification request to the counterparty; receiving counterparty payment purpose information in response to the verification request; performing identity verification checks on the counterparty; generating a verification result based on the identity verification checks and the counterparty payment purpose information; and providing the verification result to a third party.
  12. The method of claim 11, further comprising: obtaining approval from the sender prior to transmitting the verification request to the counterparty.
  13. The method of claim 11, wherein performing identity verification checks comprises: evaluating one or more attributes comprising device signals, email age, phone number type, location consistency, velocity of prior verification attempts, association with malicious activity, and metadata consistency.
  14. The method of claim 11, wherein the verification result comprises: one or more classifications comprising incomplete challenge, defect condition, hard fail condition, pass condition, and error condition.
  15. The method of claim 11, wherein the transaction request comprises an inbound check deposit and the counterparty comprises a purported maker of the check.
  16. The method of claim 11, further comprising: evaluating whether the counterparty is associated with mule activity based on one or more indicators comprising forwarding behavior, inconsistent responses, unverified identifiers, disposable email domains, VoIP numbers, SIM swapped devices, or prior mule history.
  17. The method of claim 11, further comprising: comparing sender supplied transaction information with counterparty supplied transaction information; and determining a narrative alignment score.
  18. The method of claim 17, wherein determining the narrative alignment score comprises: normalizing sender responses and counterparty responses into predefined transaction categories.
  19. The method of claim 11, further comprising: evaluating the sender for potential account compromise based on one or more of device behavior, communication patterns, and identity attributes.
  20. The method of claim 11, further comprising: generating a mule risk assessment based on one or more of the verification result, the narrative alignment score, the counterparty evaluation, and the sender evaluation.
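The narrative-alignment determination recited in claims 7-8 and 17-18 can be illustrated with a short sketch. The patent specifies only that sender and counterparty responses are normalized into predefined transaction categories and compared; the category names, keyword matching, and binary scoring below are hypothetical assumptions, not the claimed implementation.

```python
# Hypothetical sketch of the narrative-alignment check: normalize each
# party's free-text payment-purpose response into a predefined
# transaction category, then score agreement between the two.
# Category names and keywords are illustrative assumptions only.

CATEGORIES = {
    "goods_purchase": {"purchase", "buy", "item", "marketplace"},
    "services": {"invoice", "contractor", "repair", "service"},
    "personal_transfer": {"gift", "family", "friend", "loan"},
    "investment": {"invest", "crypto", "returns", "trading"},
}

def normalize(response: str):
    """Map a free-text payment-purpose response to a category name."""
    words = set(response.lower().split())
    best, best_hits = None, 0
    for category, keywords in CATEGORIES.items():
        hits = len(words & keywords)
        if hits > best_hits:
            best, best_hits = category, hits
    return best  # None when no category keyword matches

def narrative_alignment_score(sender_resp: str, counterparty_resp: str) -> float:
    """1.0 when both narratives map to the same category, else 0.0."""
    s, c = normalize(sender_resp), normalize(counterparty_resp)
    if s is None or c is None:
        return 0.0  # unclassifiable narrative: treat as misaligned
    return 1.0 if s == c else 0.0
```

A misaligned pair (e.g., the sender describing a gift while the counterparty describes an investment payout) is a classic scam indicator, which is presumably why the claims feed this score into the mule risk assessment of claims 10 and 20.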

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation-in-part of U.S. patent application Ser. No. 18/903,138, filed Oct. 1, 2024, the entire contents of which are hereby incorporated by reference herein.

TECHNICAL FIELD

Aspects of the present disclosure generally relate to scam detection, and more particularly, to automated processes for determining the legitimacy of communications and identities.

BACKGROUND

The growing sophistication of scam tactics has rendered existing anti-fraud tools inadequate for effective scam detection and prevention. Traditional anti-fraud solutions are designed primarily to prevent fraudulent activities that involve a criminal impersonating a consumer in order to open a new account or gain unauthorized access to an existing account. In these scenarios, conventional fraud detection tools look for indicators such as mismatched credentials, failed authentication, unfamiliar devices, and other such anomalies. However, scams have evolved to operate differently. In many situations, a legitimate customer may be the one initiating the transaction, rendering the typical red flags associated with fraud detection obsolete. For example, in a scam scenario, a customer may be using the correct username and password, successfully passing two-factor authentication, and initiating the transaction from their own known device. These factors make it nearly impossible for traditional anti-fraud tools to detect that a scam is taking place, because the underlying assumption of such tools is that a bad actor is impersonating the customer. While consumers may theoretically rely on training, experience, and intuition to recognize scams, the rapidly advancing technology available to scammers puts even the most diligent consumers at a significant disadvantage.

Scammers now utilize tools such as artificial intelligence to craft highly convincing messages, as well as deepfake technology that allows them to impersonate trusted individuals in real time. This creates a scenario where even the most vigilant consumer may be unable to distinguish between a legitimate communication and a scam, thereby increasing their vulnerability. Additionally, conventional technologies for scam detection typically focus on analyzing attachments by assessing their potential to be malicious rather than examining their actual content. These systems commonly measure entropy, look for known malware signatures, or detect other indicators of harmful behavior within the file. However, such systems do not open or read the attachments to analyze the content embedded within, which leaves a significant gap in detecting scams that may be concealed in the text or images of documents like PDFs, Word files, or images. This situation has led to an urgent need for technological solutions capable of addressing the evolving threat of scams. Current approaches are ill-equipped to combat the level of sophistication employed by modern scammers. Consumers need a solution that aids discernment of whether an inbound communication is fraudulent or legitimate and assists them in identifying when the person the consumer is interacting with is not who they claim to be. Such a solution would empower consumers to avoid inadvertently sharing sensitive information or transferring money to a scammer, addressing a gap in existing fraud prevention tools.

SUMMARY

Techniques described herein are directed to automated systems and processes for determining the legitimacy of communications and identities.

In one embodiment, a system for scam detection and prevention is disclosed, the system including: one or more processors; and a memory storing instructions that, when executed by the one or more processors, cause the system to perform operations comprising: receiving a communication comprising communication information; parsing the received communication information to extract attributes of the communication information; performing a series of deterministic checks on the attributes; performing a series of probabilistic analyses on the attributes, wherein the probabilistic analyses comprise using machine learning models trained on known legitimate communications and known fraudulent communications; aggregating the results of the deterministic checks and probabilistic analyses to generate a scam risk score; generating recommendations based on the generated scam risk score, deterministic checks, and probabilistic analyses; and presenting the scam risk score and the recommendations to a user. In one embodiment, the deterministic checks comprise comparing links in the communication against a database of known phishing and known malware sites. In one embodiment, the deterministic checks comprise verifying whether the communication conforms to known communication policies published by a purported sender. In one embodiment, the probabilistic analyses comprise utilizing Natural Language Processing (NLP) to categorize the communication by type. In one emb
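The aggregation step in the Summary — combining deterministic checks with probabilistic model outputs into a single scam risk score — can be sketched as follows. The patent does not disclose weights, thresholds, or score ranges, so the hard-fail rule, the simple averaging, and the 0-100 scale below are all illustrative assumptions.

```python
# Hypothetical sketch of the score-aggregation operation described in
# the Summary: deterministic checks yield hard pass/fail signals, while
# probabilistic analyses yield model scores in [0, 1]. The combination
# policy here (any failed check forces a maximal score; otherwise model
# scores are averaged) is an assumption, not the disclosed algorithm.

def aggregate_scam_risk(deterministic: dict, probabilistic: dict) -> float:
    """Return a scam risk score in the range [0, 100]."""
    # A failed deterministic check (e.g. a link found in a known
    # phishing database, per the Summary) is treated as conclusive.
    if any(deterministic.values()):
        return 100.0
    if not probabilistic:
        return 0.0
    # Otherwise average the probabilistic model scores and rescale.
    mean = sum(probabilistic.values()) / len(probabilistic)
    return round(mean * 100, 1)

# Example inputs: check names and model names are hypothetical.
checks = {"link_on_phishing_blocklist": False,
          "violates_sender_policy": False}
models = {"nlp_scam_language": 0.82, "sender_anomaly": 0.40}
score = aggregate_scam_risk(checks, models)  # (0.82 + 0.40) / 2 * 100 = 61.0
```

Making deterministic failures override the probabilistic average reflects the Summary's framing of the two analysis tiers: blocklist hits are categorical evidence, whereas model outputs are graded signals that feed the recommendations presented to the user.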