US-20260128168-A1 - RELIABILITY ANALYSIS FOR TIME-BASED INFORMATION STREAMS
Abstract
A Predictive Diagnostic Information Capability-Technology (PreDICT™) system (100) enables users including expert and nonexpert users to provide information regarding a condition of a subject and receive timely and accurate information regarding risk stratification, treatment options and other medical evaluation information. The illustrated system (100) generally includes a user device (102) for use by a user assisting a subject (104), a processing platform (108), and a network (106) for connecting the user device (102) to the processing platform (108). The system (100) may also involve an emergency response network (130) that includes public-safety answering points (PSAPs) (132). The processing platform (108) processes the sensor information and other information from the user device (102), determines risk stratification information as well as medical diagnosis and treatment option information based on machine learning technology, and provides output information to the user device to assist the user in treating the subject (104).
Inventors
- Nicholas D.P. Drakos
Assignees
- HUNAMIS, LLC
Dates
- Publication Date
- 2026-05-07
- Application Date
- 2024-11-12
Claims (20)
- 1. A method for verifying the reliability of a time-based information stream, comprising: obtaining, at a computer-based processing system, a time-based information stream; first processing, at said computer-based processing system, said time-based information stream using a motion amplification process to obtain first signature information concerning at least a portion of a spatial domain of said time-based information stream for at least a portion of a time domain of said time-based information stream; second processing, at said computer-based processing system, said first signature information to make an identification of said time-based information stream as being one of potentially reliable and potentially unreliable; and providing, to a user, an output including said identification.
- 2. The method of claim 1, wherein said spatial domain includes a first portion corresponding to a subject of interest and a second portion separate from said first portion.
- 3. The method of claim 2, wherein said signature information concerns said first portion corresponding to said subject of interest.
- 4. The method of claim 3, wherein said subject of interest is a human.
- 5. The method of claim 4, wherein said signature information concerns one of physiology information and kinesis information for said human.
- 6. The method of claim 4, wherein said signature information concerns a physiological parameter of said human.
- 7. The method of claim 6, wherein said physiological parameter relates to a vital sign or oxygen saturation of said human.
- 8. The method of claim 2, wherein said signature information concerns said second portion of said spatial domain.
- 9. The method of claim 8, wherein said signature information concerns an ambient environment of said time-based information stream.
- 10. The method of claim 8, wherein said signature information concerns an ambient lighting of said time-based information stream.
- 11. The method of claim 9, wherein said signature information concerns one of an intensity, a frequency, and a color of said ambient lighting.
- 12. The method of claim 1, wherein said motion amplification process comprises a moving average differencing process.
- 13. The method of claim 1, wherein said second processing comprises a consistency analysis applied across two or more samples distributed over said spatial domain.
- 14. The method of claim 2, wherein said second processing comprises a consistency analysis applied across two or more samples distributed over said spatial domain.
- 15. The method of claim 14, wherein said consistency analysis is applied with respect to said first portion of said spatial domain.
- 16. The method of claim 15, wherein said subject is a human and said consistency analysis concerns one of physiology information and kinesis information for said human.
- 17. The method of claim 15, wherein said subject is a human and said consistency analysis concerns a physiological parameter of said human.
- 18. The method of claim 14, wherein said consistency analysis is applied with respect to said second portion of said spatial domain.
- 19. The method of claim 14, wherein said consistency analysis comprises one of determining whether a calculated value for said samples is substantially equal and determining whether a calculated value for said samples varies in an expected way.
- 20. The method of claim 2, wherein said second processing comprises a human indicator analysis applied with respect to said first portion.
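The moving average differencing process of claim 12 and the consistency analysis of claims 13 and 19 can be illustrated with a minimal sketch. This is not the patented implementation: the function names, the `gain` and `rel_tol` parameters, and the assumption of grayscale frames held as NumPy arrays are all hypothetical choices made for illustration only.

```python
import numpy as np

def amplify_motion(frames, window=5, gain=10.0):
    """Motion amplification by moving-average differencing (cf. claim 12):
    subtract a running mean from each frame and magnify the residual,
    exaggerating subtle motion that would otherwise be imperceptible."""
    frames = np.asarray(frames, dtype=np.float64)
    out = []
    for i in range(len(frames)):
        baseline = frames[max(0, i - window):i + 1].mean(axis=0)  # moving average
        out.append(baseline + gain * (frames[i] - baseline))      # amplified residual
    return np.stack(out)

def consistent(sample_values, rel_tol=0.10):
    """Consistency analysis over spatial samples (cf. claims 13 and 19):
    are the values calculated for the samples substantially equal?"""
    values = np.asarray(sample_values, dtype=np.float64)
    spread = values.max() - values.min()
    return bool(spread <= rel_tol * max(abs(values.mean()), 1e-12))
```

As a usage illustration, pulse-rate estimates extracted from several skin regions of the amplified stream should substantially agree for a genuine subject; a large spread across the samples would support identifying the stream as potentially unreliable.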
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation-in-part of U.S. Non-provisional patent application Ser. No. 18/672,952, entitled “PREDICTIVE DIAGNOSTIC INFORMATION SYSTEM,” filed May 23, 2023, which claims the benefit of U.S. Provisional Patent Application No. 63/468,326, filed May 23, 2023. This application is also a continuation-in-part application of U.S. Non-provisional patent application Ser. No. 18/656,663, entitled “PREDICTIVE DIAGNOSTIC INFORMATION SYSTEM,” filed May 7, 2024, which is a continuation application of U.S. Non-provisional patent application Ser. No. 17/125,720, entitled “PREDICTIVE DIAGNOSTIC INFORMATION SYSTEM,” filed Dec. 17, 2020, now U.S. Pat. No. 11,978,558, issued May 7, 2024. The contents of all of the above-referenced applications (the “Parent Applications”) are incorporated herein by reference as if set forth in full and priority is claimed to the full extent allowable under U.S. law and regulations.

FIELD OF THE INVENTION

The present invention relates to intelligent processing of time-based information streams such as video information and, in particular, to identifying whether such streams have been altered, are fraudulent, or are otherwise unreliable.

BACKGROUND OF THE INVENTION

Time-based information streams such as video are analyzed in a variety of contexts. In the Parent Applications, Hunamis, LLC, disclosed a Predictive Diagnostic Information Capability-Technology (PreDICT) system for using sensors for medical evaluation in time constrained critical illness or injury (TCCI) contexts. The sensors employed included cameras for capturing stillframe or video images as well as microphones for audio inputs and other sensor inputs that can conveniently be acquired using a phone or other available equipment, e.g., stationary cameras, drones, or the like.
That sensor data could then be analyzed using certain processing techniques and machine learning or artificial intelligence (AI) to yield timely and accurate diagnostic and treatment information in the TCCI context.

SUMMARY OF THE INVENTION

It has been recognized that certain structure and functionality of the PreDICT system can be used to identify time-based information streams that have been altered, are fraudulent, or are otherwise unreliable. An important class of cases in this regard relates to information streams purporting or appearing to represent a person. While various types of sensor information purporting or appearing to represent a person can be unreliable, video information, including images and sound, has emerged as a particular concern in recent years. So-called deepfake videos have challenged the abilities of observers to distinguish fake content from reality. These deepfake videos are often generated using AI and appear, often to the limits of unaided human perception, to represent videos of actual people, known or unknown. The potential for misuse is clear. Deepfake videos can potentially be used to attribute false content to a person, to generate false information to manipulate opinions for political or business purposes, and to fraudulently undermine human autonomy, trust in common experience, and progress based on an understanding of objective reality.

The present invention is directed to a system and associated functionality for identifying and otherwise processing sensor information that has been altered, falsified, or is otherwise unreliable. This is applicable in a variety of contexts including both detection and authentication. Authentication refers to comparison of sensor information under analysis to a benchmark, e.g., a trusted instance of that sensor information. For example, in the case of video authentication, an original video may be obtained from a trusted source.
That video may then be analyzed using the PreDICT system to obtain a signature or fiducial information for that video. Any subsequent copies of that video can then be analyzed using the PreDICT system to ensure that the signature is consistent. If it is not, the video under analysis may be identified as potentially unreliable.

The goal of detection is to determine, with some level of confidence, whether sensor information under analysis is reliable, regardless of whether sensor information from a reliable source is available for comparison. For example, in the video context, a video for analysis is obtained. The PreDICT system can then be employed to perform a variety of analyses as described below to identify the video under analysis as being potentially reliable or potentially unreliable. This analysis can be implemented without requiring comparison to an existing instance of the video information. For convenience, these applications, which are explicitly not limited to video information, are referred to herein as reliability analysis applications. The PreDICT system provides certain infrastructure and processing that supports such reliability analysis including the deepfake detection capabilities of the p
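The authentication flow in the summary (obtain a signature from a trusted instance of a video, then compare later copies against that benchmark) might be sketched as follows. The grid-of-variances signature, the `tol` threshold, and both function names are illustrative assumptions for this sketch, not the actual signature or fiducial information used by the PreDICT system.

```python
import numpy as np

def extract_signature(frames, grid=4):
    """Hypothetical signature: the mean temporal variance of each cell in a
    grid partition of the spatial domain, computed over the time domain."""
    frames = np.asarray(frames, dtype=np.float64)
    h, w = frames.shape[1], frames.shape[2]
    sig = []
    for r in range(grid):
        for c in range(grid):
            cell = frames[:, r * h // grid:(r + 1) * h // grid,
                          c * w // grid:(c + 1) * w // grid]
            sig.append(cell.var(axis=0).mean())  # per-cell temporal variance
    return np.array(sig)

def authenticate(candidate_sig, trusted_sig, tol=0.05):
    """Compare a candidate's signature to the trusted benchmark and flag
    the candidate when the deviation exceeds a tolerance."""
    deviation = np.abs(candidate_sig - trusted_sig).max()
    return "potentially reliable" if deviation <= tol else "potentially unreliable"
```

In use, the trusted signature would be computed once from the original video and stored; any later copy is reduced to the same signature and compared, so no frame-by-frame comparison against the original is needed.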