
CN-121845513-B - Self-adaptive visual stimulus presentation and eye movement signal quantitative analysis method and system based on mobile terminal

CN 121845513 B

Abstract

The invention relates to the technical field of human-computer interaction and computer vision, and provides a method and system for adaptive visual stimulus presentation and quantitative eye movement signal analysis based on a mobile terminal. The method calculates the physical distance between the user's face and the screen in real time and dynamically adjusts the rendering parameters of the visual target; monitors the vertical synchronization signal of the display subsystem to determine the target presentation timestamp; fuses inertial measurement unit data for motion compensation; and applies Kalman filtering to reconstruct the eye movement trajectory. This addresses the technical problems of a non-constant visual angle of the stimulus, insufficient time synchronization precision, and a low signal-to-noise ratio of the eye movement signal in the uncontrolled environment of a mobile terminal. High-precision visual stimulus presentation and quantitative extraction of eye movement characteristic parameters can thus be achieved outside the laboratory, improving the accuracy, reliability and scenario generality of eye movement assessment.

Inventors

  • ZHANG QIHAN
  • BAI XUEJUN
  • YANG SHAOFENG

Assignees

  • 天津师范大学 (Tianjin Normal University)

Dates

Publication Date
2026-05-12
Application Date
2026-03-19

Claims (10)

  1. An adaptive visual stimulus presentation and eye movement signal quantitative analysis method based on a mobile terminal, wherein the mobile terminal comprises a camera, a display subsystem and an inertial measurement unit, characterized by comprising the following steps: extracting a user iris region from the video stream acquired by the camera, and calculating the physical distance between the user's face and the screen from the iris pixel diameter; determining, according to the physical distance and a preset visual angle parameter, rendering parameters of a visual target on the screen so that the angle subtended by the visual target at the human eye is kept constant; monitoring a vertical synchronization signal of the display subsystem, and determining a target presentation timestamp from the vertical synchronization signal; extracting eyeball position coordinates from the video stream, performing motion compensation on the eyeball position coordinates according to data acquired by the inertial measurement unit, and performing Kalman filtering on the compensated data to obtain an eye movement trajectory; and identifying an eye jump event from the eye movement trajectory, calculating the latency of the eye jump event relative to the target presentation timestamp and the direction deviation angle relative to the visual target, and outputting the latency and the direction deviation angle as a quantitative analysis result.
  2. The mobile terminal-based adaptive visual stimulus presentation and eye movement signal quantitative analysis method of claim 1, wherein calculating the physical distance between the user's face and the screen from the iris pixel diameter comprises: extracting iris boundary key points from the video stream by using a face detection model, and calculating the pixel diameter of the iris on the image sensor; and calculating the physical distance from the iris pixel diameter, a preset physical iris diameter and the equivalent focal length of the camera.
  3. The mobile terminal-based adaptive visual stimulus presentation and eye movement signal quantitative analysis method of claim 2, wherein determining the rendering parameters of the visual target on the screen according to the physical distance and the preset visual angle parameter comprises: calculating the physical size of the visual target from the visual angle parameter and the physical distance, and converting the physical size into a pixel size according to the pixel density of the screen; calculating the pixel distance of the visual target from the center of the screen according to the visual angle parameter, the physical distance and the screen pixel density; and determining the rendering coordinates and rendering size of the visual target on the screen from the pixel size and the pixel distance.
  4. The mobile terminal-based adaptive visual stimulus presentation and eye movement signal quantitative analysis method of claim 3, wherein monitoring the vertical synchronization signal of the display subsystem and determining the target presentation timestamp therefrom comprises: registering a vertical synchronization callback through a frame synchronization interface provided by the operating system of the mobile terminal, wherein the frame synchronization interface comprises Android Choreographer or iOS CADisplayLink; writing rendering data of the visual target into a frame buffer when the vertical synchronization signal is received; and recording the system timestamp at the moment the vertical synchronization signal is triggered, and correcting it to obtain the target presentation timestamp according to the number of frames pending in the frame buffer, the screen refresh period and the physical response delay of the screen panel.
  5. The method of claim 4, wherein performing motion compensation on the eyeball position coordinates according to the data collected by the inertial measurement unit comprises: acquiring angular velocity data collected by the inertial measurement unit, and integrating the angular velocity data over a time window to obtain a head rotation increment matrix; fusing the head rotation increment matrix with a head pose matrix obtained by visual detection to obtain a compensated head pose matrix; and performing an inverse rotation transformation on the eyeball position coordinates according to the compensated head pose matrix, eliminating the displacement component caused by head rotation, to obtain the relative position coordinates of the eyeball in the head coordinate system.
  6. The mobile terminal-based adaptive visual stimulus presentation and eye movement signal quantitative analysis method of claim 5, wherein performing Kalman filtering on the compensated data to obtain the eye movement trajectory comprises: establishing a state vector containing the eyeball position coordinates and the eyeball velocity, and predicting the state vector at the current moment according to a constant-velocity model; taking the compensated data as an observation, calculating a Kalman gain, and fusing the predicted state vector with the observation according to the Kalman gain to obtain a filtered state vector; and extracting the eyeball position coordinates from the filtered state vector to form the eye movement trajectory.
  7. The mobile terminal-based adaptive visual stimulus presentation and eye movement signal quantitative analysis method of claim 6, wherein identifying an eye jump event from the eye movement trajectory comprises: setting a time window and a dispersion threshold, sliding the time window over the eye movement trajectory, and calculating the sum of the maximum spans of the gaze point coordinates within the window; when the sum of the maximum spans exceeds the dispersion threshold, marking the time period corresponding to the window as an eye jump state and recording the eye jump onset time; and when the sum of the maximum spans does not exceed the dispersion threshold, marking the time period corresponding to the window as a fixation state.
  8. The method of claim 7, wherein calculating the latency of the eye jump event relative to the target presentation timestamp and the direction deviation angle relative to the visual target comprises: calculating the difference between the eye jump onset time and the target presentation timestamp as the latency; and constructing an eye jump vector and a target vector, calculating the angle between the two vectors as the direction deviation angle, and determining whether the eye jump direction is correct according to the relation between the angle and a preset direction judgment threshold.
  9. The mobile terminal-based adaptive visual stimulus presentation and eye movement signal quantitative analysis method of claim 1, further comprising, before extracting the user iris region from the video stream acquired by the camera: when the physical distance is not within the effective distance range, generating prompt information to guide the user to adjust the holding position of the mobile terminal.
  10. An adaptive visual stimulus presentation and eye movement signal quantitative analysis system based on a mobile terminal, the mobile terminal comprising a camera, a display subsystem and an inertial measurement unit, the system comprising: a distance calculation module, configured to extract a user iris region from the video stream acquired by the camera and to calculate the physical distance between the user's face and the screen from the iris pixel diameter; an adaptive rendering module, configured to determine rendering parameters of a visual target on the screen according to the physical distance and a preset visual angle parameter, so that the angle subtended by the visual target at the human eye is kept constant; a time synchronization module, configured to monitor the vertical synchronization signal of the display subsystem and to determine a target presentation timestamp from the vertical synchronization signal; a signal processing module, configured to extract eyeball position coordinates from the video stream, perform motion compensation on the eyeball position coordinates according to data acquired by the inertial measurement unit, and perform Kalman filtering on the compensated data to obtain an eye movement trajectory; and a quantitative analysis module, configured to identify an eye jump event from the eye movement trajectory, calculate the latency of the eye jump event relative to the target presentation timestamp and the direction deviation angle relative to the visual target, and output them as a quantitative analysis result.
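
The following sketches illustrate, step by step, the techniques recited in claims 2 through 8. First, the distance calculation of claim 2 follows a standard pinhole-camera relation; this minimal sketch assumes an average physical iris diameter of 11.7 mm and illustrative camera parameters, neither of which is specified in the patent.

```python
# Assumed average human iris diameter; the patent only refers to a
# "preset physical iris diameter" without giving a value.
IRIS_PHYSICAL_DIAMETER_MM = 11.7


def focal_length_px(focal_length_mm: float, sensor_width_mm: float,
                    image_width_px: int) -> float:
    """Convert the camera's equivalent focal length to pixel units."""
    return focal_length_mm * image_width_px / sensor_width_mm


def face_to_screen_distance_mm(iris_diameter_px: float, f_px: float,
                               iris_diameter_mm: float = IRIS_PHYSICAL_DIAMETER_MM) -> float:
    """Pinhole model: distance = focal length (px) * real diameter / pixel diameter."""
    return f_px * iris_diameter_mm / iris_diameter_px


# Example (illustrative values): a 4.2 mm lens on a 4.8 mm-wide sensor producing
# 1280 px-wide frames, with the iris spanning 38 px in the current frame.
f_px = focal_length_px(4.2, 4.8, 1280)
print(round(face_to_screen_distance_mm(38.0, f_px)), "mm")   # ~345 mm
```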
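For the adaptive rendering of claim 3, the target's pixel size and its pixel offset from the screen centre follow from the desired visual angle, the measured distance and the screen's pixel density. A minimal sketch, with all example values assumed rather than taken from the patent:

```python
import math

MM_PER_INCH = 25.4


def target_size_px(visual_angle_deg: float, distance_mm: float, ppi: float) -> float:
    """Target diameter in pixels that subtends `visual_angle_deg` at the eye."""
    size_mm = 2.0 * distance_mm * math.tan(math.radians(visual_angle_deg) / 2.0)
    return size_mm * ppi / MM_PER_INCH


def target_offset_px(eccentricity_deg: float, distance_mm: float, ppi: float) -> float:
    """Offset from the screen centre, in pixels, for a given eccentricity angle."""
    offset_mm = distance_mm * math.tan(math.radians(eccentricity_deg))
    return offset_mm * ppi / MM_PER_INCH


# Example: a 0.5-degree target placed 8 degrees from centre, viewed from 345 mm
# on a 460 ppi display.
print(round(target_size_px(0.5, 345.0, 460)), "px")     # ~55 px
print(round(target_offset_px(8.0, 345.0, 460)), "px")   # ~878 px
```

Because the distance is re-estimated from each video frame, these two quantities can be recomputed continuously so the target keeps a constant visual angle as the user moves the phone.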
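The timestamp correction of claim 4 shifts the recorded vsync time by the frames still queued in the frame buffer, the refresh period and the panel response delay. The platform callback registration (Android Choreographer / iOS CADisplayLink) is not shown; the delay figures below are illustrative assumptions.

```python
def target_presentation_timestamp_ms(vsync_timestamp_ms: float,
                                     pending_frames: int,
                                     refresh_period_ms: float = 1000.0 / 60.0,
                                     panel_response_delay_ms: float = 8.0) -> float:
    """Estimate when the rendered target actually becomes visible on the panel."""
    return (vsync_timestamp_ms
            + pending_frames * refresh_period_ms
            + panel_response_delay_ms)


# Example: the vsync callback fired at t = 120000 ms with two frames still queued.
print(target_presentation_timestamp_ms(120000.0, pending_frames=2))  # ~120041 ms
```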
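For the motion compensation of claim 5, a minimal sketch: gyroscope angular velocity is integrated into a head-rotation increment matrix, composed with the vision-derived head pose, and the resulting pose is inverted to express the eyeball coordinate in the head frame. Sample rates and noise levels in the example are assumptions.

```python
import numpy as np


def rotation_increment(gyro_samples, dt):
    """Integrate angular velocity samples (rad/s, shape Nx3) into one rotation matrix."""
    R = np.eye(3)
    for omega in gyro_samples:
        angle = np.linalg.norm(omega) * dt
        if angle < 1e-9:
            continue
        axis = omega / np.linalg.norm(omega)
        K = np.array([[0.0, -axis[2], axis[1]],
                      [axis[2], 0.0, -axis[0]],
                      [-axis[1], axis[0], 0.0]])
        # Rodrigues' formula for the per-sample rotation, accumulated into R.
        R = R @ (np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K))
    return R


def compensate_eye_position(eye_xyz, R_vision_prev, R_imu_increment):
    """Compose the visual head pose with the IMU increment, then undo head
    rotation so the eye coordinate is expressed in the head coordinate system."""
    R_head = R_vision_prev @ R_imu_increment     # compensated head pose matrix
    return R_head.T @ np.asarray(eye_xyz)        # inverse rotation of the eye point


# Example: five gyro samples at 200 Hz, then compensate one eye coordinate (metres).
gyro = np.random.default_rng(0).normal(0.0, 0.05, size=(5, 3))
R_inc = rotation_increment(gyro, dt=1.0 / 200.0)
print(compensate_eye_position([0.01, -0.02, 0.35], np.eye(3), R_inc))
```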
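The Kalman filtering of claim 6 uses a constant-velocity state model with the compensated eye position as the observation. A minimal sketch; the noise covariances and frame rate are illustrative assumptions.

```python
import numpy as np


def kalman_smooth(observations, dt=1.0 / 30.0, q=1e-3, r=1e-2):
    """Filter an (N, 2) sequence of eye positions; return the smoothed positions."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)      # constant-velocity transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)      # only position is observed
    Q = q * np.eye(4)                               # process noise (assumed)
    R = r * np.eye(2)                               # measurement noise (assumed)

    x = np.array([observations[0][0], observations[0][1], 0.0, 0.0])
    P = np.eye(4)
    track = []
    for z in observations:
        # Predict with the constant-velocity model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update: compute the Kalman gain and fuse prediction with observation.
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.asarray(z, dtype=float) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        track.append(x[:2].copy())
    return np.array(track)
```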
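The event identification of claim 7 is a dispersion-based classification: a fixed window slides over the trajectory, and the sum of the maximum x and y spans decides between eye jump and fixation. A minimal sketch; window length, threshold and coordinate units are assumptions.

```python
import numpy as np


def detect_eye_jumps(track, timestamps, window=5, dispersion_threshold=0.05):
    """Return (start_time, end_time) spans of the trajectory labelled as eye jumps."""
    track = np.asarray(track, dtype=float)
    events, start = [], None
    for i in range(len(track) - window + 1):
        seg = track[i:i + window]
        span = np.ptp(seg[:, 0]) + np.ptp(seg[:, 1])   # sum of maximum spans
        if span > dispersion_threshold:
            if start is None:
                start = timestamps[i]                   # record eye-jump onset
        elif start is not None:
            events.append((start, timestamps[i + window - 1]))
            start = None
    if start is not None:                               # trajectory ends mid-jump
        events.append((start, timestamps[-1]))
    return events
```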
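Finally, the quantification of claim 8: latency is the gap between the eye-jump onset and the corrected presentation timestamp, and the direction deviation is the angle between the eye-jump vector and the centre-to-target vector. The 30-degree correctness threshold below is an illustrative assumption.

```python
import numpy as np


def latency_ms(jump_onset_ms: float, presentation_ms: float) -> float:
    """Latency of the eye jump relative to the target presentation timestamp."""
    return jump_onset_ms - presentation_ms


def direction_deviation_deg(jump_start, jump_end, target_pos, origin=(0.0, 0.0)):
    """Angle between the eye-jump vector and the origin-to-target vector."""
    v_jump = np.asarray(jump_end, float) - np.asarray(jump_start, float)
    v_target = np.asarray(target_pos, float) - np.asarray(origin, float)
    cos_angle = np.dot(v_jump, v_target) / (np.linalg.norm(v_jump) * np.linalg.norm(v_target))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))


def direction_correct(deviation_deg: float, threshold_deg: float = 30.0) -> bool:
    """Judge direction correctness against an assumed threshold."""
    return deviation_deg <= threshold_deg


# Example: target shown at t = 120041 ms, eye jump began at t = 120236 ms.
print(latency_ms(120236.0, 120041.0), "ms")                               # 195 ms
print(round(direction_deviation_deg((0, 0), (0.9, 0.1), (1.0, 0.0)), 1))  # ~6.3 deg
```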

Description

Self-adaptive visual stimulus presentation and eye movement signal quantitative analysis method and system based on mobile terminal

Technical Field

The application relates to the technical field of human-computer interaction and computer vision, and in particular to a method and system for adaptive visual stimulus presentation and quantitative eye movement signal analysis based on a mobile terminal.

Background

Eye movement parameters (such as saccade latency, fixation stability and error rate) are important indicators for assessing human visual attention and executive function. Traditional eye movement measurement relies on infrared eye trackers in laboratory environments and requires the subject to use a chin rest to hold the head in place, so that the distance between the eyes and the screen remains constant and the visual angle of the visual stimulus can be controlled precisely. With the development of mobile healthcare, however, large-scale and remote eye movement assessment on portable devices such as smartphones has become a trend. The prior art faces three core challenges on the mobile terminal: (1) The physical environment is uncontrollable: the distance and angle at which the user holds the device change in real time, so the angle projected on the retina by a stimulus of fixed pixel size is not constant, which breaks the standardization of psychophysical experiments. (2) Time synchronization precision is low: there is a delay of tens of milliseconds between the issuing of a display instruction and the actual lighting of the screen, and the camera's acquisition frames are not synchronized with the screen's refresh frames, which seriously affects the accuracy of latency measurement. (3) The data signal-to-noise ratio is low: the sampling rate of the front camera of a mobile terminal is usually only 30 Hz-60 Hz, and rapid eye jump events are difficult to capture in the presence of strong hand tremor and illumination noise. In view of the above, improvements are needed in the art.

Disclosure of Invention

The application aims to provide a mobile terminal-based adaptive visual stimulus presentation and eye movement signal quantitative analysis method and system, which can realize high-precision visual stimulus presentation and quantitative extraction of eye movement characteristic parameters in a non-laboratory environment and improve the accuracy, reliability and scenario generality of eye movement assessment.
In a first aspect, the present application provides an adaptive visual stimulus presentation and eye movement signal quantitative analysis method based on a mobile terminal, wherein the mobile terminal comprises a camera, a display subsystem and an inertial measurement unit, and the method comprises the following steps: extracting a user iris region from the video stream acquired by the camera, and calculating the physical distance between the user's face and the screen from the iris pixel diameter; determining, according to the physical distance and a preset visual angle parameter, rendering parameters of a visual target on the screen so that the angle subtended by the visual target at the human eye is kept constant; monitoring a vertical synchronization signal of the display subsystem, and determining a target presentation timestamp from the vertical synchronization signal; extracting eyeball position coordinates from the video stream, performing motion compensation on the eyeball position coordinates according to data acquired by the inertial measurement unit, and performing Kalman filtering on the compensated data to obtain an eye movement trajectory; and identifying an eye jump event from the eye movement trajectory, calculating the latency of the eye jump event relative to the target presentation timestamp and the direction deviation angle relative to the visual target, and outputting the latency and the direction deviation angle as a quantitative analysis result.

Further, calculating the physical distance between the user's face and the screen from the iris pixel diameter includes: extracting iris boundary key points from the video stream by using a face detection model, and calculating the pixel diameter of the iris on the image sensor; and calculating the physical distance from the iris pixel diameter, a preset physical iris diameter and the equivalent focal length of the camera.

Further, determining the rendering parameters of the visual target on the screen according to the physical distance and the preset visual angle parameter includes: calculating the physical size of the visual target from the visual angle parameter and the physical distance, and converting the physical size into a pixel size according to the pixel density of the screen; calculating the pixel distance of the visual target from the center of the screen according to the visual angle parameter, the physical distance and the screen pixel density; and determining the rendering coordinates and rendering size of the visual target on the screen from the pixel size and the pixel distance.