
CN-122023454-A - Eye movement tracking image processing method, system and application in multi-spot scene

CN 122023454 A

Abstract

The invention relates to a method, a system, and an application for processing eye-movement-tracking images in a multi-spot scene. The method comprises: obtaining an eye video stream to be processed that contains pupils; for any image frame, determining a region of interest (ROI) associated with the pupil; identifying light spots within the ROI; classifying the spots based on their positions relative to the pupil; performing the corresponding image-restoration processing on each class of spot; and obtaining the accurate pupil position and morphological parameters from the restored image and updating them into subsequent image frames. The system comprises a preprocessing module, a spot-classification module, a spot-restoration module, and a pupil accurate-positioning module. According to the invention, high-precision pupil-center positioning can be achieved even under the interference of multiple light spots: the fitted circular-arc boundary closely follows the real pupil contour, the gray-level distribution of the original image is preserved to the greatest extent, and clear, complete edge information is provided for subsequent binarization and high-precision fitting.

Inventors

  • ZHENG YAYU
  • LI JING
  • LIN JUNHONG

Assignees

  • Zhejiang University of Technology (浙江工业大学)

Dates

Publication Date
2026-05-12
Application Date
2025-12-23

Claims (10)

  1. A method for processing eye-movement-tracking images in a multi-spot scene, characterized by: obtaining an eye video stream to be processed that contains pupils; for any image frame, determining a ROI associated with the pupil; identifying light spots within the ROI; classifying the spots based on their positions relative to the pupil; performing the corresponding image-restoration processing on each class of spot; and obtaining the accurate pupil position and morphological parameters from the restored image and updating them into subsequent image frames.
  2. The method for processing eye-tracking images in a multi-spot scene as recited in claim 1, wherein: if the current image frame is the first frame, the center point of the darkest region in the image is used as a rough estimated pupil center, and the subsequent steps are performed based on this rough center to obtain the accurate pupil position of the first frame; if the current image frame is the second frame, the accurate pupil position obtained for the first frame is taken as the rough estimated pupil center of the current frame; if the current image frame is the third frame or any later frame, the rough estimated pupil center of the current frame is predicted from the accurate pupil positions of the two preceding frames using a motion-prediction model with a neighborhood constraint; and the ROI image is extracted based on the rough estimated pupil center and a preset rough estimated radius.
  3. The method for processing eye-tracking images in a multi-spot scene as set forth in claim 2, wherein the ROI image is processed, the spots in the ROI image are obtained by connected-component analysis, the position parameters of each spot are obtained, and the spots are classified into intra-pupil spots, iris-area spots, and pupil-boundary spots according to the relationship between the spot position parameters and the rough estimated center and rough estimated radius of the pupil.
  4. The method for processing eye-tracking images in a multi-spot scene as recited in claim 3, wherein the maximum of all spot radii is used as a uniform spot radius, and the position parameters of a spot are its center and the uniform spot radius.
  5. The method for processing eye-tracking images in a multi-spot scene as recited in claim 3, wherein gray-scale filling is performed on the intra-pupil spots and the iris-area spots, and the radius of the filled area is greater than or equal to the radius of the spot being filled.
  6. The method for processing eye-tracking images in a multi-spot scene as recited in claim 3, wherein the pupil-boundary spots classified from the ROI image are subjected to bidirectional gray-scale verification: a pseudo-ray is cast from the rough estimated pupil center toward the center of the pupil-boundary spot, intersecting the edge of the spot first at a near endpoint and then at a far endpoint; if the gray levels of the near-side sampling sequence decrease and lie in a low-gray interval while the far-side gray levels lie in a high-gray interval, the spot is confirmed as a pupil-boundary spot; otherwise the spot is reclassified according to the bidirectional gray-scale verification result and processed accordingly.
  7. The method for processing eye-tracking images in a multi-spot scene according to claim 6, wherein the pupil-boundary spots that pass bidirectional gray-scale verification are scanned along the spot edges; two boundary points where the spot edge meets the pupil edge are located using gray-value differences; an arc boundary connecting the two boundary points is obtained by fitting the two boundary points with the rough estimated pupil radius; and, based on the arc boundary, the overlap region between the pupil-boundary spot and the pupil is gray-filled in the same manner as the intra-pupil spots, while the remaining part is gray-filled in the same manner as the iris-area spots.
  8. The method for processing eye-tracking images in a multi-spot scene according to claim 1, wherein the restored image is binarized, an edge contour point set is extracted, and ellipse fitting is performed on the edge contour point set by the least-squares method to obtain the accurate pupil-center coordinates of the current frame together with the major-axis and minor-axis radii of the ellipse; the rough pupil-estimation parameters are updated based on the fitting result of the current frame, the pupil center detected in the current frame is recorded, and the minor-axis radius of the ellipse fitted in the current frame is used as the rough estimated pupil radius for the next frame.
  9. An eye-tracking image-processing system for a multi-spot scene, characterized in that the system comprises: a preprocessing module for acquiring the eye video stream to be processed and outputting a ROI containing the pupil and an initial estimate of the pupil state; a spot-classification module for identifying light spots in the ROI and classifying them according to their positions relative to the pupil; a spot-restoration module for executing different image-restoration strategies for the different spot classes and reconstructing the occluded pupil edge; and a pupil accurate-positioning module for performing contour extraction and fitting on the image processed by the spot-restoration module and outputting accurate pupil-center coordinates and morphological parameters.
  10. An application of the eye-tracking image-processing method in a multi-spot scene according to any one of claims 1 to 9, characterized in that the method is applied to an eye-tracking system.
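As a rough geometric reading of claims 3 and 4, the three spot classes can be separated by comparing each spot's distance from the rough estimated pupil center with the rough estimated pupil radius: a spot lying wholly inside the rough pupil circle is intra-pupil, one lying wholly outside is an iris-area spot, and one straddling the circle is a pupil-boundary spot. A minimal sketch (the function name, threshold rule, and coordinate convention are our own illustrative assumptions, not taken from the patent):

```python
import math

def classify_spot(spot_center, spot_radius, pupil_center, pupil_radius):
    """Classify a light spot by its position relative to the rough pupil circle.

    Returns 'intra-pupil', 'iris', or 'boundary'. spot_radius would be the
    uniform spot radius of claim 4 (the maximum over all detected spots).
    """
    d = math.dist(spot_center, pupil_center)
    if d + spot_radius < pupil_radius:   # spot entirely inside the pupil circle
        return "intra-pupil"
    if d - spot_radius > pupil_radius:   # spot entirely outside the pupil circle
        return "iris"
    return "boundary"                    # spot straddles the pupil edge
```

For example, with a rough pupil radius of 20 centered at the origin, a small spot at distance 10 is intra-pupil, one at distance 19 with radius 3 straddles the edge, and one at distance 30 lies in the iris area.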

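Claim 2's prediction of the rough pupil center from the two preceding frames is consistent with a simple constant-velocity model. A minimal sketch of that idea; the optional `max_step` clamp stands in for the claim's "neighborhood" constraint, whose exact definition the claims do not spell out, so this parameter is our own assumption:

```python
def predict_center(prev, prev2, max_step=None):
    """Constant-velocity prediction of the rough pupil center.

    prev, prev2: (x, y) accurate centers of frames t-1 and t-2.
    max_step: optional bound on the predicted displacement, standing in
    for the patent's neighborhood constraint (illustrative only).
    """
    vx, vy = prev[0] - prev2[0], prev[1] - prev2[1]
    if max_step is not None:
        # clamp the velocity so the prediction stays in a bounded neighborhood
        norm = max(abs(vx), abs(vy))
        if norm > max_step:
            scale = max_step / norm
            vx, vy = vx * scale, vy * scale
    return (prev[0] + vx, prev[1] + vy)
```

With centers (10, 10) then (12, 10) in the two preceding frames, the predicted rough center of the current frame is (14, 10); with `max_step=1` the step is clamped to (13.0, 10.0).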
Description

Eye movement tracking image processing method, system and application in multi-spot scene

Technical Field

The invention relates to the technical field of electric digital data processing, and in particular to a method, a system, and an application for processing eye-movement-tracking images in a multi-spot scene in the field of computer vision.

Background

Eye-tracking technology is one of the core technologies of human-computer interaction and virtual/augmented reality. In gaze-tracking systems based on an eye video stream, accurate extraction of the pupil center and the corneal-reflection spots (Purkinje spots) is a precondition for gaze-estimation accuracy. In practice, however, Purkinje spots generated by infrared light sources and spots caused by ambient light sources degrade the integrity of the image and may even occlude part of the pupil edge, leaving the pupil contour incomplete, which strongly affects high-precision pupil fitting. In the prior art, pupil-edge loss caused by light spots is mostly handled by deletion and fitting: edge points affected by the spots are filtered out directly, and the remaining edge points are used to continue pupil-contour fitting. For example, Chinese patent publication No. CN101788848A discloses an eye-feature-parameter detection method for a gaze-tracking system and proposes a pupil-edge filtering algorithm based on radial distance: with the Purkinje spot as the circle center, rays are emitted in multiple directions, each edge point is judged usable or not according to the distance at which a ray crosses it, the edge points affected by the Purkinje spot are screened out, and the pupil edge is fitted from the remaining edge points.
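The deletion-based filtering described above can be illustrated by a toy version that drops edge points whose radial distance deviates strongly from the rest. This is only a sketch in the spirit of that prior art, not the algorithm of CN101788848A itself; the tolerance value and the use of the median are our own illustrative choices:

```python
import math
import statistics

def filter_edge_points(edge_points, center, tol=0.15):
    """Keep edge points whose distance from `center` is close to the
    median radius; points deviating by more than `tol` (relative) are
    treated as spot-affected and dropped."""
    radii = [math.dist(p, center) for p in edge_points]
    med = statistics.median(radii)
    return [p for p, r in zip(edge_points, radii)
            if abs(r - med) <= tol * med]
```

The limitation discussed in the text is visible even here: the filter can only discard points, so the occluded stretch of the contour contributes nothing to the subsequent fit.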
However, that method is better suited to a single-Purkinje-spot scene; in complex multi-spot scenes where many spots concentrate at the pupil edge, the remaining usable boundary points may be insufficient to fit the pupil contour accurately. Directly removing pupil-edge points in the spot-affected area cannot recover the occluded pupil-edge information. Chinese patent publication No. CN108280403A discloses a method and a device for determining the pupil position, handling spot interference in pupil extraction with a multi-layer edge-filtering technique: parameters of the pupil and the spots are obtained through an image-segmentation algorithm to yield an initial edge area; a ring-area constraint and spot-position filtering retain the effective edges near the pupil, i.e. fragments belonging to the same circle are screened out according to circular geometry; and finally convex-hull and morphological operations remove internal noise and smooth the outer contour to obtain the final pupil-edge area. However, because that method also deletes problem edges, the extracted pupil-edge information remains incomplete, the pupil contour occluded by spots is not completed, and the ability to reconstruct the true contour is limited when the pupil edge is severely missing, which introduces error into the final ellipse fitting.
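In contrast to the deletion-based prior art, the invention restores spot regions with plausible gray values (claims 5 and 7). A stdlib-only toy sketch of such gray filling on a small image grid; the ring width, padding, and the use of the median of the surrounding ring are our own illustrative choices, not specified by the patent:

```python
import statistics

def fill_spot(img, center, radius, pad=2):
    """Fill a circular spot region with the median gray value of a thin
    ring just outside it, so local gray-level statistics are preserved.

    img: list of lists of gray values (a toy stand-in for an image array).
    """
    cx, cy = center
    h, w = len(img), len(img[0])
    # sample a ring just outside the (padded) spot circle
    ring = [img[y][x] for y in range(h) for x in range(w)
            if (radius + pad) ** 2 < (x - cx) ** 2 + (y - cy) ** 2 <= (radius + pad + 3) ** 2]
    fill = statistics.median(ring)
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if (x - cx) ** 2 + (y - cy) ** 2 <= (radius + pad) ** 2:
                out[y][x] = fill
    return out
```

On a uniform gray image with a bright square "spot" in the middle, the filled region takes on the surrounding gray value, so later binarization sees no spurious bright blob.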
Chinese patent application No. CN115423870A discloses a pupil-center positioning method and device that address pupil positioning by combining pupil-edge detection with binarization: the pupil region is first located in the target image; edge detection and binarization are then performed separately; the edge-detection result is verified and filtered against the binarized image, deleting edge points with inconsistent gray values; finally, pupil line-segment fitting, straight-line filtering, and ellipse fitting determine the final pupil-center point. However, this method mainly reduces noise interference by deleting nonconforming edge points, so when multiple spots occlude the pupil the number of usable edge points may drop and an accurate pupil center cannot be extracted; under multiple light sources the large loss of information readily degrades fitting accuracy. A method is therefore needed that, without increasing hardware complexity, effectively eliminates the influence of light spots on the pupil edge while preserving the integrity of the pupil-edge information, so as to improve the accuracy of pupil ellipse fitting.

Disclosure of Invention

The invention addresses the problems in the prior art and provides a method, a system, and an application for processing eye-movement-tracking images in a multi-spot scene. The technical scheme includ