CN-121987139-A - Progressive lens channel measurement method and system based on multi-mode pupil tracking

CN 121987139 A

Abstract

The application provides a progressive lens channel measurement method and system based on multi-modal pupil tracking. The method synchronously acquires an original depth image and an original visible-light image of the user's face; extracts an eye region of interest from the depth image using a preset depth threshold; determines pupil coordinates and motion-trajectory data through edge extraction and depth-consistency verification of the region of interest; collects head posture angles in real time and performs a coordinate-system transformation to obtain compensated spatial pupil coordinates; computes the optimal progressive channel parameters from the spatial coordinates together with the user's age and eye-habit parameters; and finally maps the parameters into a three-dimensional coordinate system with the center of the nose bridge as the origin, cancelling the imaging deviation caused by lens curvature, to generate three-dimensional channel marking data and machining instructions. The method improves pupil-positioning accuracy and the degree of personalization of the fitting.

Inventors

  • PI LIXIN
  • LUO YIMING
  • YANG XIAOLIN
  • YU ZHIHONG

Assignees

  • 深圳市慧明眼镜有限公司

Dates

Publication Date
2026-05-08
Application Date
2026-02-11

Claims (10)

  1. A progressive addition lens channel measurement method based on multi-modal pupil tracking, comprising the steps of: synchronously acquiring an original depth image and an original visible-light image of a user's face; screening the original depth image based on a preset depth threshold and extracting an eye region of interest from it; determining pupil coordinates through edge extraction and depth-consistency verification based on the eye region of interest, the original depth image and the original visible-light image; acquiring the user's head posture angle in real time, transforming the pupil coordinates according to the posture angle, and determining compensated spatial pupil coordinates; determining the length, position and inclination-angle parameters of the optimal progressive channel through a channel length calculation model based on the spatial pupil coordinates, the user's age parameter and eye-habit parameters; establishing a three-dimensional coordinate system with the center of the user's nose bridge as the origin, mapping the optimal progressive channel parameters into the coordinate system, calculating and cancelling the imaging deviation according to input lens curvature parameters, and determining final three-dimensional channel marking data; and generating and displaying channel marks and machining instructions on a digital model or a spectacle frame according to the final three-dimensional channel marking data.
  2. The method of claim 1, wherein the depth threshold screening range is set to 300 mm to 900 mm when extracting the eye region of interest.
  3. The method of claim 1, wherein determining pupil coordinates through edge extraction and depth-consistency verification comprises: performing Gaussian filtering on the original visible-light image to denoise it and extracting eye edge information with an edge detection algorithm; identifying pupil candidate regions in the eye edge information through a Hough circle transform; obtaining the depth distribution within each pupil candidate region from the original depth image; and calculating a depth-consistency score within the pupil candidate region from the depth distribution, performing a comprehensive verification combining it with the circularity of the region, determining the final pupil target region and outputting the corresponding pupil coordinates.
  4. The method of claim 3, wherein the depth-consistency score S within the pupil candidate region is calculated from the standard deviation of depth within the region and a reference constant; the circularity C of the pupil candidate region is calculated from its area A and perimeter P as C = 4πA/P²; and the comprehensive verification result is characterized by a total verification score computed as a weighted combination of the depth-consistency score and the circularity.
  5. The method of claim 3, wherein the state vector of the Kalman filter is defined to comprise the pupil coordinates and their rates of change; the Kalman state-transition matrix incorporates the time step, and the gain is updated through the observation matrix and the real-time detection value so as to continuously track the pupil center position.
  6. The method of claim 1, wherein the pupil coordinates are transformed in the coordinate system to determine the compensated spatial pupil coordinates, the posture angle being characterized by a yaw angle and a pitch angle; a horizontal compensation amount is computed from the yaw angle and a vertical compensation amount from the pitch angle, both scaled by the eye distance.
  7. The method of claim 1, wherein the channel length calculation model is expressed as L = L₀ + ΔL_age + ΔL_habit + ΔL_user, where L is the optimal progressive channel length output by the model, L₀ is the base channel length calculated from the pupil trajectories, ΔL_age is the age compensation, ΔL_habit is the eye-habit compensation and ΔL_user is a personalized fine-tuning amount for the user; the age compensation is set according to the user's age bracket: +0.5 mm for users aged 51 to 60, +1 mm for users aged 60 to 70 and +1.5 mm for users above 70; the eye-habit compensation is assessed from a questionnaire score: −1 mm for users with predominantly near-vision habits and +1 mm for users with predominantly distance-vision habits.
  8. The method of claim 1, wherein calculating and cancelling the imaging deviation according to the input lens curvature parameters to determine the final three-dimensional channel marking data comprises: calculating the imaging deviation of a spherical or aspherical lens from the off-axis distance, the lens radius of curvature and the lens thickness, and determining the finally adjusted channel position from the imaging deviation as the final three-dimensional channel marking data; the finally adjusted channel position is calculated as P_final = P₀ + Δ_total, where P₀ is the original channel position, P_final is the finally adjusted channel position, and Δ_total is the overall imaging deviation comprising the spherical deviation of the lens surface, the thickness deviation and the prism offset.
  9. The method of claim 8, further comprising: analyzing historical fitting data and user feedback with a machine learning algorithm, continuously optimizing the compensation models in the parameter calculation steps, and building a personalized user profile.
  10. A progressive addition lens channel measurement system based on multi-modal pupil tracking, comprising: a three-dimensional depth camera module for synchronously acquiring an original depth image and an original visible-light image of the user's face; an eye region positioning module for screening the original depth image based on a preset depth threshold and extracting an eye region of interest from it; a pupil tracking module which takes the eye region of interest, the original depth image and the original visible-light image as input and outputs pupil coordinates and their motion-trajectory data through edge extraction and depth-consistency verification; a head posture compensation module for acquiring the posture angle of the user's head in real time, transforming the pupil coordinates according to the posture angle and outputting compensated spatial pupil coordinates; a progressive channel calculation module which takes the spatial pupil coordinates, the user's age parameter and eye-habit parameters as input and calculates and outputs the length, position and inclination-angle parameters of the optimal progressive channel through the channel length calculation model; a three-dimensional space mapping module for establishing a three-dimensional coordinate system with the center of the user's nose bridge as the origin, mapping the optimal progressive channel parameters into the coordinate system, calculating and cancelling the imaging deviation according to input lens curvature parameters, and outputting final three-dimensional channel marking data; and a visual marking module for receiving the final three-dimensional channel marking data and generating and displaying the channel marks and machining instructions on a digital model or spectacle frame.
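The additive parameter calculations of claims 7 and 8 can be sketched in Python. The additive structure follows the claims; the 0 mm compensation below age 51, the default values, and all function names are illustrative assumptions, not the patented implementation:

```python
def age_compensation_mm(age: int) -> float:
    """Age compensation per claim 7; returning 0 mm below age 51 is an assumption."""
    if age > 70:
        return 1.5
    if age > 60:
        return 1.0
    if age >= 51:
        return 0.5
    return 0.0


def channel_length_mm(base_mm: float, age: int,
                      habit_mm: float = 0.0, fine_tune_mm: float = 0.0) -> float:
    """Claim 7: channel length = base length + age + habit + personal fine-tuning."""
    return base_mm + age_compensation_mm(age) + habit_mm + fine_tune_mm


def adjusted_channel_position_mm(original_mm: float, sphere_dev_mm: float,
                                 thickness_dev_mm: float, prism_dev_mm: float) -> float:
    """Claim 8: final position = original position + overall imaging deviation
    (summing the three deviation components is an assumption)."""
    return original_mm + sphere_dev_mm + thickness_dev_mm + prism_dev_mm
```

For example, a 65-year-old near-work user with a 14 mm base channel would get 14 mm + 1 mm − 1 mm = 14 mm.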
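The constant-velocity Kalman tracker of claim 5 might be sketched with NumPy as follows. The state layout [u, v, u̇, v̇] follows the claim; the noise covariances Q and R and the update form are standard Kalman-filter assumptions, not values from the patent:

```python
import numpy as np


def make_cv_kalman(dt: float = 1.0):
    """State-transition and observation matrices for state [u, v, du, dv]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    return F, H


def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle for a measured pupil center z = [u, v]."""
    # Predict with the constant-velocity motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Gain update from the observation matrix and the real-time detection.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

With a small measurement noise R, the filtered position locks onto the detected pupil center while the velocity components smooth the trajectory between detections.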

Description

Progressive lens channel measurement method and system based on multi-mode pupil tracking

Technical Field

The application relates to the technical field of optometric fitting, and in particular to a progressive lens channel measurement method and system based on multi-modal pupil tracking.

Background

With the popularization of multifocal optical technology, progressive lenses can meet the visual demands at far, intermediate and near distances simultaneously and are widely used in vision correction. During the fitting of progressive lenses, precise identification of the pupil position and scientific setting of the channel parameters are key factors ensuring wearing comfort and visual quality. Currently, existing progressive lens fitting relies primarily on the subjective experience of optometrists for pupil height measurement and channel selection. In practice, common measurement means include the BOX marking method, the pupillary-distance ruler and other traditional physical measurement methods. As for automatic recognition technology, the prior art mostly adopts pupil detection methods based on a single visual image to acquire eye data. The prior schemes have a number of technical defects that are difficult to overcome in practical application and cannot meet practical requirements. Therefore, the present application provides a progressive lens channel measurement method and system based on multi-modal pupil tracking, so as to solve one of the above technical problems.

Disclosure of Invention

The present application is directed to a progressive lens channel measurement method and system based on multi-modal pupil tracking, which can solve at least one of the above-mentioned technical problems.
The specific scheme is as follows. According to a first aspect of the present application, there is provided a progressive addition lens channel measurement method based on multi-modal pupil tracking, comprising: synchronously acquiring an original depth image and an original visible-light image of the user's face; screening the original depth image based on a preset depth threshold and extracting an eye region of interest from it; determining pupil coordinates based on the eye region of interest, the original depth image and the original visible-light image through edge extraction and depth-consistency verification; acquiring the user's head posture angle in real time and transforming the pupil coordinates according to the posture angle to determine compensated spatial pupil coordinates; determining the length, position and inclination-angle parameters of the optimal progressive channel through a channel length calculation model based on the spatial pupil coordinates, the user's age parameter and eye-habit parameters; establishing a three-dimensional coordinate system with the center of the user's nose bridge as the origin, mapping the optimal progressive channel parameters into the coordinate system, calculating and cancelling the imaging deviation according to input lens curvature parameters, and determining final three-dimensional channel marking data; and generating and displaying channel marks and machining instructions on a digital model or a frame according to the final three-dimensional channel marking data. In one embodiment, the depth threshold screening range is set to 300 mm to 900 mm when extracting the eye region of interest.
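The head-posture compensation step described above (a horizontal offset driven by yaw, a vertical offset driven by pitch, both scaled by the eye distance) might be sketched as follows. The tangent projection model, the sign convention and the function name are assumptions for illustration, since the published text does not give the exact formula:

```python
import math


def compensate_pupil_mm(x_mm: float, y_mm: float,
                        yaw_deg: float, pitch_deg: float,
                        eye_distance_mm: float) -> tuple:
    """Remove the apparent pupil shift caused by head rotation.

    Assumed projection model: horizontal shift ~ d * tan(yaw),
    vertical shift ~ d * tan(pitch), with d the eye distance.
    """
    dx = eye_distance_mm * math.tan(math.radians(yaw_deg))
    dy = eye_distance_mm * math.tan(math.radians(pitch_deg))
    return (x_mm - dx, y_mm - dy)
```

With zero posture angles the coordinates pass through unchanged, so the compensation only activates when the head is rotated relative to the camera.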
In one embodiment, determining the pupil coordinates through edge extraction and depth-consistency verification comprises: performing Gaussian filtering on the original visible-light image to denoise it; extracting eye edge information with an edge detection algorithm; identifying pupil candidate regions in the eye edge information through a Hough circle transform; obtaining the depth distribution within each pupil candidate region from the original depth image; calculating a depth-consistency score within the pupil candidate region from the depth distribution; comprehensively verifying it together with the circularity of the region; and determining the final pupil target region and outputting the corresponding pupil coordinates. In one embodiment, the depth-consistency score S within the pupil candidate region is calculated from the standard deviation of depth within the region and a reference constant, and the circularity C of the pupil candidate region is calculated from its area A and perimeter P.
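The depth-consistency and circularity checks above can be sketched with NumPy. The circularity formula 4πA/P² is the standard isoperimetric measure; the reciprocal form of the depth score, the reference constant k and the equal weights are assumptions, since the published text omits the exact formulas:

```python
import numpy as np


def depth_consistency_score(depths_mm: np.ndarray, k: float = 5.0) -> float:
    """Approaches 1.0 when depth inside the candidate region is uniform.

    Assumed form: k / (k + sigma), with sigma the depth standard deviation
    and k a reference constant in mm.
    """
    return float(k / (k + np.std(depths_mm)))


def circularity(area: float, perimeter: float) -> float:
    """Isoperimetric circularity 4*pi*A/P^2: exactly 1.0 for a perfect circle."""
    return 4.0 * np.pi * area / perimeter ** 2


def total_verification_score(depths_mm: np.ndarray, area: float, perimeter: float,
                             w_depth: float = 0.5, w_circ: float = 0.5) -> float:
    """Weighted combination of both checks (weights are assumed, not from the patent)."""
    return (w_depth * depth_consistency_score(depths_mm)
            + w_circ * circularity(area, perimeter))
```

A candidate region that is both flat in depth (a pupil lies at a single depth, unlike a specular reflection) and nearly circular scores close to 1.0, and the highest-scoring candidate would be taken as the pupil target region.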