CN-121982477-A - AI-based 3D eyeball digital model analysis and optical compensation method and system
Abstract
The application provides an AI-based 3D eyeball digital model analysis and optical compensation method and system. The method comprises: acquiring multi-modal data of an eyeball to be examined; unifying the multi-modal data into a common spatial coordinate system with a multi-device coordinate-system registration algorithm to obtain registered data; inputting the registered data into a pre-built multi-modal feature extraction network, where a ResNet-50 branch extracts corneal morphological features, a U-Net branch extracts corneal interlayer structural features, and a cross-modal attention module performs feature enhancement and fusion to generate fused features; generating a 3D eyeball digital model that conforms to real anatomy from the fused features using a generative adversarial network (GAN) and voxel rendering; and computing an optimal optical compensation scheme from the 3D eyeball digital model. The application thereby provides a high-precision optical compensation scheme for the human eye.
Inventors
- PI LIXIN
- LUO YIMING
- OU JINLIN
- MAO JIAWEI
Assignees
- 深圳市慧明眼镜有限公司
Dates
- Publication Date
- 20260505
- Application Date
- 20260123
Claims (10)
- 1. An AI-based 3D eyeball digital model analysis and optical compensation method, characterized by comprising the following steps: acquiring multi-modal data of an eyeball to be examined, wherein the multi-modal data comprise corneal topography data acquired by a Scheimpflug camera, corneal tomography data acquired by SD-OCT, and wavefront aberration data acquired by a wavefront aberrometer; unifying the multi-modal data into a common spatial coordinate system with a multi-device coordinate-system registration algorithm to obtain registered data; inputting the registered data into a pre-constructed multi-modal feature extraction network, extracting corneal morphological features with a ResNet-50 branch, extracting corneal interlayer structural features with a U-Net branch, and performing feature enhancement and fusion with a cross-modal attention module to generate fused features; generating a 3D eyeball digital model conforming to real anatomy from the fused features using a generative adversarial network (GAN) and voxel rendering; and computing an optimal optical compensation scheme based on the 3D eyeball digital model.
- 2. The method of claim 1, wherein generating the 3D eyeball digital model conforming to real anatomy from the fused features using the generative adversarial network and voxel rendering comprises: performing feature-space conversion on the fused features with the generator's mapping network to obtain a 3D spatial latent-variable distribution; taking the 3D spatial latent-variable distribution as reference data and generating a volumetric entity by voxel rendering to obtain a 3D eyeball volume model; performing structural analysis of the 3D eyeball volume model with a discriminator against a built-in database of real eyeball anatomy, and outputting an anatomical-consistency score for the model; converting the anatomical-consistency score into the geometric reconstruction loss of a weighted loss function, correcting the generator's internal parameters by backpropagation, and regenerating the 3D eyeball volume model; and, when the preset training period of the generative adversarial network is reached, outputting the most recently generated 3D eyeball volume model as the 3D eyeball digital model conforming to real anatomy.
- 3. The method of claim 2, wherein the weighted loss function is designed on a multi-task learning framework and consists of the geometric reconstruction loss weighted together with a refractive-parameter prediction loss; the geometric reconstruction loss and the refractive-parameter prediction loss are assigned weights through a balance factor, so as to balance how well the 3D eyeball volume model matches both geometric realism and optical refractive characteristics; the balance factor is initialized statically from empirical values, a dynamic weighting strategy is then adopted while the generative adversarial network runs, and the weight ratio is adjusted in real time according to the descent gradient of each loss.
- 4. The method of claim 1, wherein computing the optimal optical compensation scheme based on the 3D eyeball digital model comprises: extracting key refractive parameters from the 3D eyeball digital model as initial input data; constructing a lens parameter vector to be optimized from the key refractive parameters and defining the optimization range of each variable in the lens parameter vector; predicting the optical-index vector corresponding to the lens parameter vector with a deep-learning surrogate model; and iteratively adjusting the lens parameter vector with a multi-objective optimization algorithm according to the optical-index vector until a preset termination condition is met, then outputting the current result as the optimal optical compensation scheme.
- 5. The method of claim 4, wherein extracting key refractive parameters from the 3D eyeball digital model comprises: automatically identifying and extracting low-order and high-order aberration parameters from the 3D eyeball digital model with an attention-based regression network; wherein the low-order aberration parameters include sphere power, cylinder power, and astigmatism axis, and the high-order aberration parameters are characterized by Zernike coefficients, including spherical aberration, coma, and trefoil.
- 6. The method of claim 4, wherein the deep-learning surrogate model employs a neural network comprising 5 fully connected layers of 256 neurons each with ReLU activation; the input layer of the surrogate model is the lens parameter vector, comprising radius of curvature, thickness, refractive index, and aspheric coefficient; the output layer is the optical-index vector, comprising the modulation transfer function (MTF), the point spread function (PSF), and the wavefront aberration (WFE); the surrogate model is pre-trained on multiple groups of simulation data, each group comprising a set of lens parameters and the corresponding measured optical indices; during training, the surrogate model uses mean squared error with L2 regularization as its loss function.
- 7. The method of claim 6, wherein the lens parameter vector comprises the lens's radius of curvature, center thickness, material refractive index, and free-form aspheric coefficients; the optical-index vector comprises the modulation transfer function (MTF), the full width at half maximum (FWHM) of the point spread function (PSF), the wavefront aberration (WFE), and a chromatic aberration index.
- 8. The AI-based 3D eyeball digital model analysis and optical compensation method of claim 7, wherein the multi-objective optimization algorithm is based on NSGA-II and iteratively generates a Pareto front comprising a plurality of trade-off solutions; wherein the multi-objective optimization algorithm comprises at least the following objectives: maximizing the MTF value at different spatial frequencies; minimizing the FWHM and scattered energy of the PSF; minimizing the WFE; and constraining lens thickness and weight to a preset safety range.
- 9. The method of claim 1, wherein the optical compensation scheme is described by a free-form surface model for correcting both low-order and high-order aberrations; the surface equation of the free-form surface model is expressed as z(x, y) = Σ_{i,j} C_{ij} x^i y^j, wherein z(x, y) is the height value at coordinates (x, y) on the lens surface and C_{ij} are the free-form surface coefficients.
- 10. An AI-based 3D eyeball digital model analysis and optical compensation system, comprising: a multi-modal data acquisition module for acquiring multi-modal data of the eyeball to be examined, wherein the multi-modal data comprise corneal topography data acquired by a Scheimpflug camera, corneal tomography data acquired by SD-OCT, and wavefront aberration data acquired by a wavefront aberrometer; a coordinate registration module for unifying the multi-modal data into a common spatial coordinate system with a multi-device coordinate-system registration algorithm to obtain registered data; a feature fusion module for inputting the registered data into a pre-constructed multi-modal feature extraction network, extracting corneal morphological features with a ResNet-50 branch, extracting corneal interlayer structural features with a U-Net branch, and performing feature enhancement and fusion with a cross-modal attention module to generate fused features; a 3D model reconstruction module for generating a 3D eyeball digital model conforming to real anatomy from the fused features using a generative adversarial network and voxel rendering; and a parameter optimization module for computing an optimal optical compensation scheme based on the 3D eyeball digital model.
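Claim 3 states only that the loss weights are adjusted in real time from the descent gradient of each loss. The inverse-descent-rate rule below is one plausible instantiation of such a dynamic weighting strategy, not the claimed formula: a task whose loss is falling slowly receives a larger weight so the optimizer attends to it.

```python
def update_weights(prev_losses, curr_losses, base_weights):
    """Re-balance per-task loss weights from each task's descent rate.

    Hypothetical rule (not from the patent): weight each task inversely
    to its relative loss decrease over the last step, then normalise.
    """
    rates = [max(prev - curr, 1e-8) / max(prev, 1e-8)
             for prev, curr in zip(prev_losses, curr_losses)]
    raw = [w / r for w, r in zip(base_weights, rates)]
    total = sum(raw)
    return [v / total for v in raw]  # weights sum to 1

# Geometric-reconstruction loss fell 10%, refractive-prediction loss fell 50%:
w_geo, w_refr = update_weights([1.0, 1.0], [0.9, 0.5], [0.5, 0.5])
```

With these made-up numbers the slowly improving geometric loss ends up dominating the weighted sum, which is the balancing behaviour the claim describes.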
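The aberrations named in claim 5 have standard closed-form Zernike polynomials. The sketch below evaluates a few of them in the ANSI-normalised convention (the claim does not specify a normalisation, so that choice is an assumption here):

```python
import math

def zernike_terms(rho, theta):
    """Evaluate ANSI-normalised Zernike polynomials at pupil
    coordinates (rho, theta), rho in [0, 1]."""
    return {
        "defocus":       math.sqrt(3) * (2 * rho**2 - 1),
        "astig_0deg":    math.sqrt(6) * rho**2 * math.cos(2 * theta),
        "coma_vertical": math.sqrt(8) * (3 * rho**3 - 2 * rho) * math.sin(theta),
        "trefoil":       math.sqrt(8) * rho**3 * math.sin(3 * theta),
        "spherical":     math.sqrt(5) * (6 * rho**4 - 6 * rho**2 + 1),
    }

# At the pupil edge (rho = 1, theta = 0) each radial polynomial
# reaches a known value, e.g. defocus = sqrt(3), spherical = sqrt(5).
edge = zernike_terms(1.0, 0.0)
```

A wavefront is then reconstructed as the coefficient-weighted sum of such terms, which is how the claimed high-order parameters "characterized by Zernike coefficients" enter the model.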
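Claim 6 fixes the surrogate's hidden architecture (5 fully connected layers of 256 ReLU neurons); claims 6 and 7 name four input lens parameters and three core output indices, so a 4-in/3-out shape is assumed below. A quick sanity check on such a network is its trainable-parameter count:

```python
def mlp_param_count(layer_sizes):
    """Trainable parameters (weights + biases) of a dense MLP whose
    consecutive layer widths are given in layer_sizes."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Assumed shape for claim 6's surrogate: 4 lens parameters in (radius of
# curvature, thickness, refractive index, aspheric coefficient), five
# 256-neuron hidden layers, 3 optical indices out (MTF, PSF, WFE).
surrogate_shape = [4] + [256] * 5 + [3]
n_params = mlp_param_count(surrogate_shape)  # 265,219 parameters
```

A network this small is cheap to evaluate, which is the point of a surrogate: the multi-objective optimizer of claim 8 can query it thousands of times instead of running full ray-tracing simulations.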
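The Pareto front of claim 8 is the set of non-dominated solutions under the stated objectives. A minimal sketch of the dominance test and front extraction NSGA-II relies on (all objectives expressed as minimisation, so MTF is negated; the objective vectors are toy values):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all minimised):
    a is no worse everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset - the first rank NSGA-II would keep."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Toy objective vectors (negated MTF, PSF FWHM, WFE) for four candidate lenses:
candidates = [(-0.8, 2.0, 0.30), (-0.6, 1.5, 0.25),
              (-0.8, 2.1, 0.30), (-0.5, 3.0, 0.40)]
front = pareto_front(candidates)
```

Here the third candidate is dominated by the first and the fourth by the second, leaving a two-solution front of genuine trade-offs between contrast (MTF) and spot quality, mirroring the "plurality of trade-off solutions" the claim describes.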
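Claim 9's surface equation is not reproduced in this text; the generic polynomial form z(x, y) = Σ C_ij x^i y^j is assumed below as a stand-in. Evaluating it is straightforward:

```python
def freeform_height(x, y, coeffs):
    """Sag height z(x, y) of a polynomial free-form surface.

    Computes z = sum over (i, j) of C_ij * x**i * y**j, where coeffs maps
    exponent pairs (i, j) to coefficients C_ij. The polynomial form is an
    assumption standing in for claim 9's unreproduced equation.
    """
    return sum(c * x**i * y**j for (i, j), c in coeffs.items())

# Illustrative coefficients: symmetric curvature plus a small cross term.
coeffs = {(2, 0): 0.01, (0, 2): 0.01, (2, 2): -0.001}
z = freeform_height(2.0, 1.0, coeffs)  # 0.04 + 0.01 - 0.004 = 0.046
```

Because the x and y exponents are independent, such a surface is not rotationally symmetric, which is what lets a free-form lens correct asymmetric high-order aberrations such as coma and trefoil alongside the low-order sphere and cylinder.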
Description
AI-based 3D eyeball digital model analysis and optical compensation method and system
Technical Field
The application relates to the field of vision correction, and in particular to an AI-based 3D eyeball digital model analysis and optical compensation method and system.
Background
Ametropia (such as myopia, hyperopia, and astigmatism) is one of the most common vision-health problems worldwide, affecting the quality of life of roughly 2.8 billion people. With changing lifestyles, society's demands on vision correction are no longer limited to basic acuity improvement but are moving toward clearer and more comfortable "supernormal vision". A variety of ophthalmic scanning devices are currently used clinically for pre-operative examination. For example, corneal topography is acquired using Placido rings or a Scheimpflug camera to assess corneal surface curvature, full-thickness tomographic images of the cornea and the anterior chamber depth are acquired using optical coherence tomography (OCT), and the eye's aberration distribution is measured with a wavefront aberrometer. In correction-scheme design, traditional methods generally compute lens parameters or surgical ablation amounts from empirical formulas such as SRK/T or Holladay. These approaches, however, still have significant limitations in practice and cannot meet real-world requirements. The present application therefore provides an AI-based 3D eyeball digital model analysis and optical compensation method and system to address at least one of the above problems.
Disclosure of Invention
The application aims to provide an AI-based 3D eyeball digital model analysis and optical compensation method and system capable of solving at least one of the above technical problems.
The specific scheme is as follows. According to a first aspect of the present application, there is provided an AI-based 3D eyeball digital model analysis and optical compensation method, comprising: acquiring multi-modal data of an eyeball to be examined, wherein the multi-modal data comprise corneal topography data collected by a Scheimpflug camera, corneal tomography data collected by SD-OCT, and wavefront aberration data collected by a wavefront aberrometer; unifying the multi-modal data into a common spatial coordinate system with a multi-device coordinate-system registration algorithm to obtain registered data; inputting the registered data into a pre-built multi-modal feature extraction network, extracting corneal morphological features with a ResNet-50 branch, extracting corneal interlayer structural features with a U-Net branch, and performing feature enhancement and fusion with a cross-modal attention module to generate fused features; generating a 3D eyeball digital model conforming to real anatomy from the fused features using a generative adversarial network and voxel rendering; and computing an optimal optical compensation scheme based on the 3D eyeball digital model.
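The coordinate-unification step maps each device's measurements into a shared eyeball-centred frame. As a minimal sketch, assuming the inter-device transform is already known (real registration would estimate it, e.g. by iterative closest point), a rigid transform of one device's points might look like:

```python
import math

def rigid_transform(points, theta_z, translation):
    """Rotate 3D points about the z-axis by theta_z (radians), then
    translate by (tx, ty, tz) - a stand-in for mapping one device's
    coordinate system into the shared registration frame."""
    c, s = math.cos(theta_z), math.sin(theta_z)
    tx, ty, tz = translation
    return [(c * x - s * y + tx, s * x + c * y + ty, z + tz)
            for x, y, z in points]

# Hypothetical OCT sample points, re-expressed in the shared frame after a
# 90-degree in-plane rotation and a 1 mm axial offset (made-up values):
oct_points = [(1.0, 0.0, 0.0), (0.0, 0.0, 3.2)]
registered = rigid_transform(oct_points, math.pi / 2, (0.0, 0.0, 1.0))
```

Once topography, tomography, and wavefront data share one frame, corresponding anatomical locations line up voxel-for-voxel, which is what lets the downstream feature branches be fused meaningfully.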
In one embodiment, generating the 3D eyeball digital model conforming to real anatomy from the fused features using a generative adversarial network and voxel rendering comprises: performing feature-space conversion on the fused features with the generator's mapping network to obtain a 3D spatial latent-variable distribution; generating a volumetric entity by voxel rendering with the 3D spatial latent-variable distribution as reference data to obtain a 3D eyeball volume model; performing structural analysis of the 3D eyeball volume model with a discriminator against a built-in database of real eyeball anatomy and outputting an anatomical-consistency score; converting the anatomical-consistency score into the geometric reconstruction loss of a weighted loss function, correcting the generator's internal parameters by backpropagation, and regenerating the 3D eyeball volume model; and, when the preset training period of the generative adversarial network is reached, outputting the most recently generated 3D eyeball volume model as the 3D eyeball digital model conforming to real anatomy. In one embodiment, the weighted loss function is designed on a multi-task learning framework and consists of the geometric reconstruction loss and a refractive-parameter prediction loss combined by weighting, wherein the two losses are assigned weights through a balance factor to balance how well the 3D eyeball volume model matches both geometric realism and optical refractive characteristics; the balance factor is