CN-122021316-A - AI digital lens generation method and system based on 3D eyeball digital model
Abstract
The application provides an AI digital lens generation method and system based on a 3D eyeball digital model. The method comprises: obtaining original parameter data of an eyeball to be detected by multi-modal scanning; constructing a 3D eyeball digital model from the original parameter data; inputting the 3D eyeball digital model into a pre-trained joint network and extracting eyeball refractive parameters and 8th-order Zernike high-order aberration coefficients; inputting the eyeball refractive parameters and the high-order aberration coefficients as generating conditions into a collaborative design model combining a generative adversarial network (GAN) and the multi-objective non-dominated sorting genetic algorithm NSGA-II to determine an optimal lens free-form surface height map; controlling digital lens production by computer-aided manufacturing (CAM) based on the optimal lens free-form surface height map; and performing virtual wearing verification in combination with augmented reality (AR) technology. The application fundamentally addresses the technical problems that traditional refraction correction lacks precision and cannot effectively compensate high-order aberrations.
Inventors
- PI LIXIN
- LUO YIMING
- YANG XIAOLIN
- YU ZHIHONG
Assignees
- 深圳市慧明眼镜有限公司
Dates
- Publication Date
- 20260512
- Application Date
- 20260130
Claims (10)
- 1. An AI digital lens generation method based on a 3D eyeball digital model, characterized by comprising the following steps: acquiring original parameter data of an eyeball to be detected by multi-modal scanning, wherein the original parameter data comprises SS-OCT volume data, ultra-high-speed adaptive optics (AO) dynamic aberration data, and Placido-Scheimpflug fusion data; constructing a 3D eyeball digital model from the original parameter data, inputting the 3D eyeball digital model into a pre-trained joint network, and extracting eyeball refractive parameters and 8th-order Zernike high-order aberration coefficients, wherein the joint network comprises a Diffusion branch for processing anisotropic characteristics in the SS-OCT volume data and a Transformer branch for capturing global context information of the Placido-Scheimpflug fusion data; inputting the eyeball refractive parameters and the high-order aberration coefficients as generating conditions into a collaborative design model combining a generative adversarial network (GAN) and the multi-objective non-dominated sorting genetic algorithm NSGA-II, generating a diversified set of initial lens curved-surface schemes with the GAN as the initial population, and performing a Pareto front search with NSGA-II under optical, mechanical, and aesthetic multi-constraint conditions to determine an optimal lens free-form surface height map; and controlling digital lens production by computer-aided manufacturing (CAM) based on the optimal lens free-form surface height map, and performing virtual wearing verification in combination with augmented reality (AR) technology.
- 2. The method of claim 1, wherein the ultra-high-speed adaptive optics AO dynamic aberration data is acquired by an AO system with a 38 Hz bandwidth; the AO system adopts a discontinuous exposure scheme and generates a synchronous trigger signal through an FPGA main control unit, so that the closed-loop delay of wavefront-sensor measurement, FPGA processing, and deformable-mirror correction is less than 5 ms.
- 3. The method of claim 1, wherein the Diffusion branch employs a 3D U-Net architecture that uses asymmetric convolution kernels for the anisotropic features of the SS-OCT volume data, comprising: a 3×3 standard convolution kernel for capturing transverse spatial features, corresponding to the X-Y plane; and a 5×1 convolution kernel for capturing axial depth features, corresponding to the Z axis. In the 3D Diffusion process, the Diffusion branch adopts a depth-aware diffusion mechanism whose characteristic evolution formula relates the feature of each element at a given layer to a depth-diffusion coefficient, the voxel axial depth normalized to [0, 1], and a characteristic energy gradient.
- 4. The method of claim 3, wherein the Transformer branch employs a Vision Transformer architecture that uses a radial attention mechanism to enhance feature capture of the Placido ring data in the Placido-Scheimpflug fusion data, the radial attention weight being computed from the query vector and the key vector, scaled by the dimension of the key vector and modulated by a radial distance function.
- 5. The method of claim 4, wherein the joint network further comprises a cross-modal attention fusion layer for aligning the 3D feature map output by the Diffusion branch with the 2D feature sequence output by the Transformer branch, the fusion formula combining the tokens of the 2D feature sequence and the tokens of the 3D feature map through cross-modal attention weights and an OCT feature preservation weight to produce the fused multi-modal feature vector.
- 6. The method of claim 1, wherein the GAN is based on the StyleGAN2 architecture, and the GAN generator injects a style vector containing the refractive and aberration parameters into the synthesis network through an adaptive instance normalization (AdaIN) layer, formulated as AdaIN(x, w) = y_s · (x − μ(x)) / σ(x) + y_b, where x is the input feature map in the synthesis network, w is the style vector, μ(x) and σ(x) are respectively the mean and standard deviation of the feature map, and y_s and y_b are the scaling and shifting parameters learned from the style vector.
- 7. The method of claim 6, wherein the generator is further embedded with a hard-constraint processing mechanism that penalizes parameters outside a preset range through a constraint loss function of the discriminator, expressed as L_total = λ_t · L_thickness + λ_c · L_curvature, where L_total is the total constraint loss, L_thickness is the thickness-constraint loss with weight coefficient λ_t, and L_curvature is the curvature-constraint loss with weight coefficient λ_c; the thickness constraint and the curvature constraint are each defined over a preset range.
- 8. The method of claim 1, wherein the NSGA-II optimizes three conflicting objective functions during evolution: an optical performance objective, a mechanical performance objective, and an aesthetic performance objective; the optical performance objective is constructed to minimize the total root mean square of the high-order aberrations, the complement of the mean modulation transfer function, and the energy distribution width of the point spread function; the mechanical performance objective is constructed to minimize the standard deviation of the lens thickness distribution, the degree of stress concentration, and the normalized weight; the aesthetic performance objective is constructed to minimize the negative appearance evaluation index, the negative ergonomic evaluation index, and the negative wearing comfort evaluation index.
- 9. The method of claim 8, wherein the NSGA-II employs a bi-directional feedback coordination mechanism with the GAN: feasible solutions generated by the GAN that satisfy the hard constraints serve as the initial population of the NSGA-II, and the Pareto optimal solutions produced by the NSGA-II are fed back into the GAN's training set for incremental learning.
- 10. An AI digital lens generation system based on a 3D eyeball digital model, the system comprising: an original data acquisition module for acquiring original parameter data of an eyeball to be detected by multi-modal scanning, wherein the original parameter data comprises SS-OCT volume data, ultra-high-speed adaptive optics AO dynamic aberration data, and Placido-Scheimpflug fusion data; a joint network support module for constructing a 3D eyeball digital model from the original parameter data, inputting the 3D eyeball digital model into a pre-trained joint network, and extracting eyeball refractive parameters and 8th-order Zernike high-order aberration coefficients, wherein the joint network comprises a Diffusion branch for processing anisotropic characteristics in the SS-OCT volume data and a Transformer branch for capturing global context information of the Placido-Scheimpflug fusion data; a collaborative design model support module for inputting the eyeball refractive parameters and the high-order aberration coefficients as generating conditions into a collaborative design model combining a generative adversarial network (GAN) and the multi-objective non-dominated sorting genetic algorithm NSGA-II, generating a diversified set of initial lens curved-surface schemes with the GAN as the initial population, and performing a Pareto front search with NSGA-II under optical, mechanical, and aesthetic multi-constraint conditions to determine an optimal lens free-form surface height map; and a product output and verification module for controlling digital lens production by computer-aided manufacturing CAM based on the optimal lens free-form surface height map and performing virtual wearing verification in combination with augmented reality AR technology.
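Claim 1 extracts 8th-order Zernike high-order aberration coefficients as one of the generating conditions. As a hedged illustration (not the patent's implementation), for an orthonormal (ANSI-normalized) Zernike basis the total higher-order wavefront RMS is simply the root sum of squares of the coefficients of radial order 3 and above; the grouping-by-order dictionary below is a hypothetical data layout:

```python
import numpy as np

def higher_order_rms(coeffs_by_order):
    """Total higher-order wavefront RMS (same units as the coefficients,
    e.g. micrometres) from ANSI-normalised Zernike coefficients grouped by
    radial order. Orders 0-2 (piston, tilt, defocus/astigmatism) are the
    low-order terms covered by the refractive parameters; orders 3-8 form
    the high-order residual the lens free-form surface must compensate."""
    ho = [c for order, cs in coeffs_by_order.items() if order >= 3 for c in cs]
    # For an orthonormal basis the RMS is the root sum of squares.
    return float(np.sqrt(np.sum(np.square(ho))))

# Hypothetical coefficient set: radial order -> coefficients of that order.
example = {2: [0.10, -1.25, 0.08],
           3: [0.03, -0.02, 0.01, 0.04],
           4: [0.02, 0.01, -0.015, 0.0, 0.005]}
rms = higher_order_rms(example)
```

Only the order ≥ 3 terms enter the sum, mirroring the claim's separation of refractive parameters from high-order aberration coefficients.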
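Claim 2 imposes two timing requirements on the AO loop: the measure-process-correct closed-loop delay must stay under 5 ms, and the loop runs at a 38 Hz bandwidth. A minimal sketch of that budget check (the function and its example latencies are illustrative, not from the patent):

```python
def closed_loop_ok(sensor_ms, fpga_ms, mirror_ms, budget_ms=5.0, bandwidth_hz=38.0):
    """Check the claim-2 timing: the sum of wavefront-sensor measurement,
    FPGA processing and deformable-mirror correction must stay under the
    5 ms closed-loop budget, which must itself fit inside one trigger
    period of the 38 Hz AO loop (about 26.3 ms)."""
    period_ms = 1000.0 / bandwidth_hz
    total_ms = sensor_ms + fpga_ms + mirror_ms
    return total_ms < budget_ms and budget_ms < period_ms

# Hypothetical latency split: 1.8 ms sensing + 1.2 ms FPGA + 1.5 ms mirror.
ok = closed_loop_ok(1.8, 1.2, 1.5)
```

The 5 ms budget being far inside the ~26.3 ms trigger period is what leaves room for the discontinuous exposure scheme between corrections.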
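Claim 3's asymmetric kernels address the anisotropy of SS-OCT volumes: fine lateral sampling in X-Y, different resolution along Z. The sketch below only mimics that idea with separable numpy filtering (a short 3-tap kernel in-plane, a longer 5-tap kernel axially); the kernel values and the `anisotropic_filter` helper are assumptions, not the patent's U-Net:

```python
import numpy as np

def conv1d_along(vol, kernel, axis):
    """'same'-padded 1-D convolution along one axis of a 3-D volume."""
    return np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), axis, vol)

def anisotropic_filter(vol,
                       lateral=np.array([0.25, 0.5, 0.25]),
                       axial=np.array([0.1, 0.2, 0.4, 0.2, 0.1])):
    """Apply a short 3-tap kernel in the X-Y plane (transverse spatial
    features) and a longer 5-tap kernel along Z (axial depth features),
    matching SS-OCT's anisotropic sampling. vol has shape (Z, Y, X)."""
    out = conv1d_along(vol, lateral, axis=2)  # X direction
    out = conv1d_along(out, lateral, axis=1)  # Y direction
    out = conv1d_along(out, axial, axis=0)    # Z direction
    return out

vol = np.random.default_rng(0).standard_normal((8, 16, 16))
filtered = anisotropic_filter(vol)
```

Both kernels sum to 1, so interior voxels of a constant volume are left unchanged; only the kernel shapes differ per axis, which is the point of the asymmetric design.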
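Claim 4's radial attention weight is described only through its ingredients (query, key, key dimension, radial distance function); the patent's exact combination is not reproduced in this translation. One plausible reading, sketched here as an assumption, multiplies the pre-softmax scaled dot-product scores by a decaying function of each key token's ring radius:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def radial_attention(Q, K, r, f=lambda r: np.exp(-r)):
    """Scaled dot-product attention with an assumed radial modulation:
    scores toward each key token are weighted by f(r_j), a decaying
    function of that token's radial distance from the corneal apex, so
    inner Placido rings receive relatively more attention."""
    d_k = Q.shape[-1]
    scores = (Q @ K.T) / np.sqrt(d_k)   # shape (n_queries, n_keys)
    scores = scores * f(r)[None, :]     # radial modulation per key token
    return softmax(scores, axis=-1)

rng = np.random.default_rng(1)
Q, K = rng.standard_normal((4, 8)), rng.standard_normal((6, 8))
r = np.linspace(0.0, 1.0, 6)           # normalised ring radii per key token
A = radial_attention(Q, K, r)
```

Whatever the true combination, the softmax keeps each query's attention row a probability distribution over the ring tokens.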
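Claim 5's cross-modal fusion layer is likewise described only through its terms (fused vector, cross-modal attention weights, OCT preservation weight, 2D and 3D tokens). The following is a speculative sketch of one such fusion: a pooled query from the 3D (OCT) tokens attends over the 2D token sequence, and a preservation weight `lam` keeps part of the OCT features in the fused vector. All names here are illustrative:

```python
import numpy as np

def fuse_modalities(tokens_3d, tokens_2d, lam=0.6):
    """Assumed cross-modal fusion: a mean-pooled query from the Diffusion
    branch's 3-D tokens attends over the Transformer branch's 2-D token
    sequence; the fused vector mixes the attended 2-D context with the
    preserved OCT features via preservation weight lam."""
    q = tokens_3d.mean(axis=0)                      # pooled 3-D query
    scores = tokens_2d @ q / np.sqrt(q.size)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                            # cross-modal attention weights
    context_2d = alpha @ tokens_2d                  # attended 2-D context
    return lam * q + (1.0 - lam) * context_2d

rng = np.random.default_rng(2)
fused = fuse_modalities(rng.standard_normal((10, 16)),  # 3-D feature tokens
                        rng.standard_normal((6, 16)))   # 2-D feature sequence
```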
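The AdaIN operation in claim 6 is fully determined by its variable list: the feature map's own per-channel statistics are replaced by a scale and shift derived from the style vector. A minimal numpy sketch (here the scale/shift are passed directly rather than produced by a learned mapping from the style vector, which is an assumption for brevity):

```python
import numpy as np

def adain(x, y_s, y_b, eps=1e-5):
    """Adaptive instance normalisation: per-channel mean and standard
    deviation of the feature map x (shape C, H, W) are normalised away and
    replaced by a scale y_s and shift y_b, which in the claim are learned
    from the style vector carrying refractive and aberration parameters."""
    mu = x.mean(axis=(1, 2), keepdims=True)
    sigma = x.std(axis=(1, 2), keepdims=True)
    return y_s[:, None, None] * (x - mu) / (sigma + eps) + y_b[:, None, None]

rng = np.random.default_rng(3)
x = rng.standard_normal((4, 8, 8))
out = adain(x, y_s=np.full(4, 2.0), y_b=np.full(4, 0.5))
```

After the operation each channel's statistics match the injected style (mean 0.5, standard deviation 2.0 here), which is how the refraction/aberration conditions steer the synthesis network.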
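Claim 7's hard-constraint mechanism penalizes thickness and curvature values outside preset ranges via a weighted sum of constraint losses. The patent does not reproduce the numeric bounds or the per-term loss form; a hinge-squared penalty, sketched below with hypothetical ranges, is one standard choice:

```python
def constraint_loss(thickness, curvature, t_range, c_range, w_t=1.0, w_c=1.0):
    """Hinge-style hard-constraint penalty: values inside the preset
    ranges incur zero loss; values outside are penalised quadratically by
    their distance to the nearest bound. Total loss is the weighted sum of
    the thickness-constraint and curvature-constraint losses."""
    def hinge(v, lo, hi):
        return max(lo - v, 0.0) ** 2 + max(v - hi, 0.0) ** 2
    l_thickness = hinge(thickness, *t_range)
    l_curvature = hinge(curvature, *c_range)
    return w_t * l_thickness + w_c * l_curvature

# Hypothetical bounds: thickness in [0.8, 2.0] mm, curvature in [0.0, 0.1].
loss = constraint_loss(1.2, 0.05, t_range=(0.8, 2.0), c_range=(0.0, 0.1))
```

An in-range design contributes nothing, so the discriminator's constraint term only activates on infeasible generator outputs.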
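The trade-off among claim 8's three conflicting objectives rests on Pareto dominance, the relation NSGA-II's non-dominated sorting is built on. A minimal sketch with all objectives minimized (the three-component tuples stand in for the optical, mechanical, and aesthetic objective values of candidate lens designs):

```python
def dominates(a, b):
    """a Pareto-dominates b (all objectives minimised): a is no worse in
    every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """First non-dominated front: the designs no other design dominates.
    NSGA-II ranks the population by peeling off such fronts in turn."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical (optical, mechanical, aesthetic) objective values.
designs = [(0.12, 0.30, 0.5), (0.10, 0.35, 0.4),
           (0.15, 0.32, 0.6), (0.09, 0.40, 0.7)]
front = pareto_front(designs)
```

Here the third design is dominated by the first (worse on every objective), so the search keeps only the other three as Pareto-optimal trade-offs.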
Description
AI digital lens generation method and system based on 3D eyeball digital model

Technical Field

The application relates to the field of optical lens design, and in particular to an AI digital lens generation method and system based on a 3D eyeball digital model.

Background

Ametropia (e.g., myopia, hyperopia, astigmatism) has become the most common vision health problem worldwide. By current estimates, about 2.8 billion people worldwide are affected by ametropia, and this number is expected to climb to 5 billion by 2050. With the rapid evolution of ophthalmic imaging technologies such as optical coherence tomography (OCT) and adaptive optics (AO), together with deep learning algorithms, human vision correction is transitioning from "generalized standard design" to "fine personalized customization", aimed at providing higher-dimensional optical compensation solutions for specific cases such as high myopia and complex astigmatism. Currently, the primary refractive correction means in the clinic include wearing conventional frame lenses or contact lenses, or undergoing refractive surgery. In terms of digital design, the prior art has begun to introduce artificial-intelligence-aided optimization. For example, there are solutions that smooth the curvature of the transition region between the optic zone and the edge zone of a lens through a fully connected neural network, or that construct a machine learning model based on historical corneal data to "match" and "select" the closest existing product for the patient from an existing library of lens brands and models. However, when coping with complex vision requirements, the conventional refraction correction scheme exhibits obvious deficiencies and cannot meet actual needs. Therefore, the application provides an AI digital lens generation method and system based on a 3D eyeball digital model so as to solve at least one of these technical problems.
Disclosure of Invention

The application aims to provide an AI digital lens generation method and system based on a 3D eyeball digital model, which can solve at least one of the above technical problems. The specific scheme is as follows. According to a first aspect of the present application, there is provided a method for generating an AI digital lens based on a 3D eyeball digital model, comprising: obtaining original parameter data of an eyeball to be detected by multi-modal scanning, the original parameter data comprising SS-OCT volume data, ultra-high-speed adaptive optics (AO) dynamic aberration data, and Placido-Scheimpflug fusion data; constructing a 3D eyeball digital model from the original parameter data; inputting the 3D eyeball digital model into a pre-trained joint network and extracting eyeball refractive parameters and 8th-order Zernike high-order aberration coefficients, the joint network comprising a Diffusion branch for processing anisotropic characteristics in the SS-OCT volume data and a Transformer branch for capturing global context information of the Placido-Scheimpflug fusion data; inputting the eyeball refractive parameters and the high-order aberration coefficients as generating conditions into a collaborative design model combining a generative adversarial network (GAN) and the multi-objective non-dominated sorting genetic algorithm NSGA-II, generating a diversified set of initial lens curved-surface schemes with the GAN as the initial population, and performing a Pareto front search with NSGA-II under optical, mechanical, and aesthetic multi-constraint conditions to determine an optimal lens free-form surface height map; and controlling digital lens production by computer-aided manufacturing (CAM) based on the optimal lens free-form surface height map, and performing virtual wearing verification in combination with augmented reality (AR) technology.

In one embodiment, the ultra-high-speed adaptive optics AO dynamic aberration data is acquired through an AO system with a 38 Hz bandwidth; the AO system adopts a discontinuous exposure scheme and generates a synchronous trigger signal through an FPGA main control unit, so that the closed-loop delay of wavefront-sensor measurement, FPGA processing, and deformable-mirror correction is less than 5 ms. In one embodiment, the Diffusion branch adopts a 3D U-Net architecture that uses asymmetric convolution kernels for the anisotropic features of the SS-OCT volume data, the asymmetric convolution kernels comprising a 3×3 standard convolution kernel for capturing transverse spatial features, corresponding to the X-Y plane, and a 5×1 convolution kernel for capturing axial depth features, corresponding to the Z axis.