CN-116612203-B - Cell imaging method and device based on deep learning
Abstract
The invention relates to the field of deep learning and discloses a cell imaging method and device based on deep learning. The method comprises: performing image reconstruction on a fundus image by using a trained pre-neural network model to obtain a reconstructed image of the fundus image; performing image registration on the reconstructed image and a cell imaging image to obtain a registered reconstructed image and a registered cell imaging image, and fusing the two registered images to obtain a fused image; taking the reconstructed image as input data of a pre-trained post-neural network model and taking the cell imaging image and the fused image as its target data, and training the post-neural network model on this input and target data; and performing cell imaging on a fundus image of a current patient by using the trained pre-neural network model and the trained post-neural network model to obtain a cell imaging result. The invention can meet clinical demands through cell imaging.
Inventors
- Qu Chubin
- An Lin
- Qin Jia
Assignees
- 广东唯仁医疗科技有限公司
- 唯仁医疗(佛山)有限公司
- 唯智医疗科技(佛山)有限公司
Dates
- Publication Date: 2026-05-05
- Application Date: 2023-05-23
Claims (8)
- 1. A method of cell imaging based on deep learning, the method comprising: collecting fundus images of a historical patient, and collecting a cell imaging image of the eye corresponding to the fundus images; performing image reconstruction on the fundus image by using a trained pre-neural network model to obtain a reconstructed image of the fundus image; performing image registration on the reconstructed image and the cell imaging image to obtain a registered reconstructed image and a registered cell imaging image, and fusing the registered reconstructed image and the registered cell imaging image to obtain a fused image; taking the reconstructed image as input data of a pre-trained post-neural network model, and taking the cell imaging image and the fused image as target data of the pre-trained post-neural network model, so as to perform model training on the pre-trained post-neural network model through the input data and the target data and obtain a trained post-neural network model; and performing cell imaging on the fundus image of a current patient by using the trained pre-neural network model and the trained post-neural network model to obtain a cell imaging result. Before the image reconstruction is performed on the fundus image by using the trained pre-neural network model to obtain the reconstructed image, the method further comprises: acquiring a training fundus image, and performing resolution reconstruction on the training fundus image by using a generation network in the pre-trained pre-neural network model to obtain a pseudo high-resolution image; constructing a feature map of the pseudo high-resolution image in a discrimination network in the pre-trained pre-neural network model to obtain a pseudo feature map, and constructing a feature map of a true high-resolution image corresponding to the pseudo high-resolution image to obtain a true feature map; calculating a loss value of the generation network based on the pseudo feature map and the true feature map by using the formula H(p,q) = -Σ_i p(x_i) log q(x_i) to obtain the generation network loss, wherein H(p,q) represents the generation network loss, p(x_i) represents the true feature map, q(x_i) represents the pseudo feature map, i represents the sequence number of a feature in the feature map, and x_i represents a feature in the feature map; calculating a loss value of the discrimination network based on the pseudo feature map and the true feature map by using the formula loss = |y - f| + H(p,q) to obtain the discrimination network loss, wherein loss represents the discrimination network loss, H(p,q) represents the generation network loss, y represents the true feature map, and f represents the pseudo feature map; when the generation network loss and the discrimination network loss are both larger than a preset loss, updating the network weights of the pre-trained pre-neural network model to obtain a pre-neural network model with updated network weights; and obtaining the trained pre-neural network model when the generation network loss and the discrimination network loss of the weight-updated pre-neural network model are not greater than the preset loss. The model training of the pre-trained post-neural network model through the input data and the target data to obtain the trained post-neural network model comprises: constructing the diffusion step number and a Gaussian noise image for the target data; carrying out forward diffusion on the target data based on the diffusion step number to obtain a forward diffusion map; superposing the input data and the forward diffusion map to obtain an input-diffusion superposition map; performing image noise reduction on the input-diffusion superposition map based on the diffusion step number to obtain a noise-reduced image; calculating the mean square loss between the noise-reduced image and the Gaussian noise image using the formula MSE = (1/N) Σ_{i=1}^{N} (g_i - d_i)², wherein MSE represents the mean square loss, g_i represents pixel i of the Gaussian noise image, d_i represents pixel i of the noise-reduced image, i represents the serial number of a pixel in the image, and N represents the total number of pixels in the image; and carrying out model training on the pre-trained post-neural network model using the mean square loss to obtain the trained post-neural network model.
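For illustration, the three losses named in claim 1 can be sketched in NumPy. This is a minimal reading of the formulas only; the array names, the treatment of |y - f| as a summed L1 distance, and the epsilon guard are assumptions, not part of the patent:

```python
import numpy as np

def generation_loss(p, q, eps=1e-12):
    """Cross-entropy H(p, q) = -sum_i p(x_i) * log q(x_i) between the
    true feature map p and the pseudo feature map q."""
    p = np.asarray(p, dtype=float).ravel()
    q = np.asarray(q, dtype=float).ravel()
    return float(-np.sum(p * np.log(q + eps)))  # eps guards log(0)

def discrimination_loss(y, f):
    """loss = |y - f| + H(p, q): distance between the true feature map y
    and the pseudo feature map f, plus the generation loss (here |.| is
    read as a summed L1 distance, an assumption)."""
    y = np.asarray(y, dtype=float)
    f = np.asarray(f, dtype=float)
    return float(np.sum(np.abs(y - f))) + generation_loss(y, f)

def mean_square_loss(gauss_noise, denoised):
    """MSE = (1/N) * sum_i (g_i - d_i)^2 over all N pixels."""
    g = np.asarray(gauss_noise, dtype=float).ravel()
    d = np.asarray(denoised, dtype=float).ravel()
    return float(np.mean((g - d) ** 2))
```

Training would compare both adversarial losses against the preset loss threshold and stop updating weights once neither exceeds it, as the claim describes.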
- 2. The method of claim 1, wherein performing resolution reconstruction on the training fundus image using a generation network in the pre-trained pre-neural network model to obtain a pseudo high-resolution image comprises: extracting shallow features of the training fundus image by using a first convolution layer in the generation network to obtain a shallow feature map; carrying out parameter correction on the shallow feature map by using a residual layer in the generation network to obtain a corrected feature map; extracting deep features of the corrected feature map by using residual blocks in the generation network to obtain a deep feature map; performing feature convolution on the deep feature map by using a second convolution layer in the generation network to obtain a second convolution map; superposing the corrected feature map and the second convolution map to obtain a superposed feature map; and carrying out resolution reconstruction on the superposed feature map to obtain the pseudo high-resolution image.
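The tail of the generator in claim 2 (skip-add of the corrected feature map and the second convolution map, then resolution reconstruction) can be sketched as follows. Sub-pixel (pixel-shuffle) upsampling is assumed as the resolution-reconstruction step, since super-resolution generators commonly use it; the patent does not name the mechanism:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Sub-pixel upsampling: rearrange a (C*r*r, H, W) feature map into a
    (C, H*r, W*r) map, one common way to realize the final
    'resolution reconstruction' of a super-resolution generator."""
    c_in, h, w = x.shape
    assert c_in % (r * r) == 0
    c_out = c_in // (r * r)
    x = x.reshape(c_out, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # (c_out, h, r, w, r)
    return x.reshape(c_out, h * r, w * r)

def generator_tail(corrected, second_conv, r=2):
    """Skip connection from claim 2: superpose the corrected feature map
    and the second convolution map, then upsample the superposed map."""
    superposed = corrected + second_conv
    return pixel_shuffle(superposed, r)
```

The earlier stages (first convolution layer, residual layer, residual blocks, second convolution layer) are standard learned layers and are left abstract here.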
- 3. The method of claim 1, wherein constructing the feature map of the pseudo high-resolution image in the discrimination network in the pre-trained pre-neural network model to obtain a pseudo feature map comprises: extracting features of the pseudo high-resolution image to obtain a high-resolution feature map; performing dimension adjustment on the high-resolution feature map to obtain a dimension-adjusted feature map; performing linear mapping on the dimension-adjusted feature map to obtain a linearly mapped feature map; and calculating the classification probability of the linearly mapped feature map, and taking the image formed by the classification probability as the pseudo feature map.
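The discrimination-network head of claim 3 can be sketched as flatten, linear map, probability. The `weight` and `bias` parameters and the sigmoid activation are assumptions standing in for learned discriminator parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pseudo_feature_map(features, weight, bias):
    """Claim 3 head: dimension adjustment (flatten), linear mapping,
    then classification probabilities forming the pseudo feature map."""
    flat = np.asarray(features, dtype=float).ravel()  # dimension adjustment
    logits = weight @ flat + bias                     # linear mapping
    return sigmoid(logits)                            # classification probability
```

The same head applied to features of the true high-resolution image would yield the true feature map used in the losses of claim 1.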
- 4. The method of claim 1, wherein fusing the registered reconstructed image with the registered cell imaging image to obtain a fused image comprises: performing image registration among the plurality of registered cell imaging images to obtain secondary-registered cell imaging images; performing image fusion on the plurality of secondary-registered cell imaging images to obtain a fused cell imaging image; and fusing the fused cell imaging image with the registered reconstructed image to obtain the fused image.
- 5. The method of claim 4, wherein fusing the fused cell imaging image with the registered reconstructed image to obtain the fused image comprises: performing multi-scale wavelet decomposition on the fused cell imaging image and the registered reconstructed image respectively, to obtain the low-frequency and high-frequency components of the fused cell imaging image and the low-frequency and high-frequency components of the registered reconstructed image; fusing the low-frequency component of the fused cell imaging image with the low-frequency component of the registered reconstructed image to obtain a low-frequency fused image; fusing the high-frequency component of the fused cell imaging image with the high-frequency component of the registered reconstructed image to obtain a high-frequency fused image; and fusing the low-frequency fused image and the high-frequency fused image to obtain the fused image.
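The wavelet fusion of claim 5 can be sketched with a single-level 2-D Haar transform (a stand-in for the unspecified multi-scale wavelet; the fusion rules, averaging low-frequency bands and keeping the larger-magnitude high-frequency coefficient, are common choices that the patent does not fix):

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar decomposition: returns the low-frequency
    band LL and the high-frequency bands (LH, HL, HH)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0
    lh = (a - b + c - d) / 2.0
    hl = (a + b - c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return ll, (lh, hl, hh)

def ihaar2d(ll, bands):
    """Exact inverse of haar2d."""
    lh, hl, hh = bands
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w), dtype=float)
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return out

def wavelet_fuse(img_a, img_b):
    """Fuse two registered images per claim 5: fuse low-frequency with
    low-frequency, high-frequency with high-frequency, then reconstruct."""
    ll_a, hi_a = haar2d(img_a)
    ll_b, hi_b = haar2d(img_b)
    ll = (ll_a + ll_b) / 2.0                       # low-frequency fusion
    hi = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in zip(hi_a, hi_b))        # high-frequency fusion
    return ihaar2d(ll, hi)
```

Averaging the low-frequency bands preserves shared structure, while the max-magnitude rule on high-frequency bands keeps the sharper detail from either source image.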
- 6. The method of claim 1, wherein constructing the diffusion step number of the target data and its Gaussian noise image comprises: selecting the diffusion step number from a preset number of training steps; and constructing a Gaussian noise image of the corresponding feature vector.
- 7. The method of claim 1, wherein performing image noise reduction on the input-diffusion superposition map based on the diffusion step number to obtain a noise-reduced image comprises: acquiring the feature vector corresponding to the diffusion step number and the pre-trained post-neural network model; taking the feature vector and the input data in the input-diffusion superposition map as supervision vectors of the pre-trained post-neural network model; and performing image noise reduction on the input-diffusion superposition map with the pre-trained post-neural network model based on the supervision vectors, to obtain the noise-reduced image.
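The forward-diffusion and superposition steps that claims 1, 6, and 7 revolve around can be sketched with a DDPM-style closed-form noising step. The beta schedule, variable names, and channel-stacking form of the superposition are assumptions, since the patent does not fix them:

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng=None):
    """Closed-form forward diffusion to step t:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps, eps ~ N(0, I).
    Returns the forward diffusion map and the Gaussian noise image."""
    rng = np.random.default_rng(0) if rng is None else rng
    alphas = 1.0 - np.asarray(betas, dtype=float)
    abar_t = np.prod(alphas[: t + 1])
    eps = rng.standard_normal(x0.shape)          # the Gaussian noise image
    xt = np.sqrt(abar_t) * x0 + np.sqrt(1.0 - abar_t) * eps
    return xt, eps

def superpose(input_img, diffusion_map):
    """Input-diffusion superposition map: stack the conditioning input
    (the reconstructed image) with the noised target along a channel axis."""
    return np.stack([input_img, diffusion_map], axis=0)
```

Training would then pass the superposition map and the step-t feature vector through the post-neural network, and apply the mean square loss between its noise-reduced output and `eps`, as claim 1 describes.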
- 8. A deep-learning-based cell imaging apparatus, the apparatus comprising: an image acquisition module for collecting fundus images of historical patients, and for collecting cell imaging images of the eyes corresponding to the fundus images; an image reconstruction module for performing image reconstruction on the fundus image by using a trained pre-neural network model to obtain a reconstructed image of the fundus image; an image fusion module for performing image registration on the reconstructed image and the cell imaging image to obtain a registered reconstructed image and a registered cell imaging image, and for fusing the registered reconstructed image and the registered cell imaging image to obtain a fused image; a model training module for taking the reconstructed image as input data of a pre-trained post-neural network model, taking the cell imaging image and the fused image as target data of the pre-trained post-neural network model, and performing model training on the pre-trained post-neural network model through the input data and the target data to obtain a trained post-neural network model; and a cell imaging module for performing cell imaging on the fundus image of a current patient by using the trained pre-neural network model and the trained post-neural network model to obtain a cell imaging result. Before the image reconstruction is performed on the fundus image by using the trained pre-neural network model to obtain the reconstructed image, the following steps are further carried out: acquiring a training fundus image, and performing resolution reconstruction on the training fundus image by using a generation network in the pre-trained pre-neural network model to obtain a pseudo high-resolution image; constructing a feature map of the pseudo high-resolution image in a discrimination network in the pre-trained pre-neural network model to obtain a pseudo feature map, and constructing a feature map of a true high-resolution image corresponding to the pseudo high-resolution image to obtain a true feature map; calculating a loss value of the generation network based on the pseudo feature map and the true feature map by using the formula H(p,q) = -Σ_i p(x_i) log q(x_i) to obtain the generation network loss, wherein H(p,q) represents the generation network loss, p(x_i) represents the true feature map, q(x_i) represents the pseudo feature map, i represents the sequence number of a feature in the feature map, and x_i represents a feature in the feature map; calculating a loss value of the discrimination network based on the pseudo feature map and the true feature map by using the formula loss = |y - f| + H(p,q) to obtain the discrimination network loss, wherein loss represents the discrimination network loss, H(p,q) represents the generation network loss, y represents the true feature map, and f represents the pseudo feature map; when the generation network loss and the discrimination network loss are both larger than a preset loss, updating the network weights of the pre-trained pre-neural network model to obtain a pre-neural network model with updated network weights; and obtaining the trained pre-neural network model when the generation network loss and the discrimination network loss of the weight-updated pre-neural network model are not greater than the preset loss. The model training of the pre-trained post-neural network model through the input data and the target data to obtain the trained post-neural network model comprises: constructing the diffusion step number and a Gaussian noise image for the target data; carrying out forward diffusion on the target data based on the diffusion step number to obtain a forward diffusion map; superposing the input data and the forward diffusion map to obtain an input-diffusion superposition map; performing image noise reduction on the input-diffusion superposition map based on the diffusion step number to obtain a noise-reduced image; calculating the mean square loss between the noise-reduced image and the Gaussian noise image using the formula MSE = (1/N) Σ_{i=1}^{N} (g_i - d_i)², wherein MSE represents the mean square loss, g_i represents pixel i of the Gaussian noise image, d_i represents pixel i of the noise-reduced image, and N represents the total number of pixels in the image; and carrying out model training on the pre-trained post-neural network model using the mean square loss to obtain the trained post-neural network model.
Description
Cell imaging method and device based on deep learning

Technical Field

The invention relates to the field of deep learning, in particular to a cell imaging method and device based on deep learning.

Background

During the progression of glaucoma, ganglion cells are lost each year, and even in subjects without ocular disease a small proportion of these cells die (0.19-0.72% per year) as part of normal aging. It is therefore clinically necessary to distinguish disease-induced ganglion cell loss from aging-induced ganglion cell loss as early as possible and to intervene with appropriate therapy to minimize cell death. Quantitative cell-level characteristics of retinal ganglion cells (GCs) are an important biomarker that can improve the diagnosis and therapeutic monitoring of neurodegenerative diseases such as glaucoma, Parkinson's disease, and Alzheimer's disease. However, due to limited resolution, individual GCs cannot be visualized by commonly used ophthalmic imaging systems; even with optical coherence tomography (OCT), the evaluation of GCs is limited to total layer-thickness analysis. Adaptive-optics OCT (AO-OCT) can image individual retinal GCs in vivo, but its acquisition area is small and its acquisition is time-consuming, making it difficult to apply clinically. Thus, there is a need for a cell imaging protocol that can meet clinical needs.

Disclosure of Invention

In order to solve the above problems, the invention provides a cell imaging method and device based on deep learning, which can meet clinical requirements through cell imaging.
In a first aspect, the present invention provides a cell imaging method based on deep learning, comprising: collecting fundus images of a historical patient, and collecting a cell imaging image of the eye corresponding to the fundus images; performing image reconstruction on the fundus image by using a trained pre-neural network model to obtain a reconstructed image of the fundus image; performing image registration on the reconstructed image and the cell imaging image to obtain a registered reconstructed image and a registered cell imaging image, and fusing the registered reconstructed image and the registered cell imaging image to obtain a fused image; taking the reconstructed image as input data of a pre-trained post-neural network model, and taking the cell imaging image and the fused image as target data of the pre-trained post-neural network model, so as to perform model training on the pre-trained post-neural network model through the input data and the target data and obtain a trained post-neural network model; and performing cell imaging on the fundus image of a current patient by using the trained pre-neural network model and the trained post-neural network model to obtain a cell imaging result.
In a possible implementation manner of the first aspect, before the image reconstruction is performed on the fundus image by using the trained pre-neural network model to obtain a reconstructed image of the fundus image, the method further includes: acquiring a training fundus image, and performing resolution reconstruction on the training fundus image by using a generation network in a pre-trained pre-neural network model to obtain a pseudo high-resolution image; constructing a feature map of the pseudo high-resolution image in a discrimination network in the pre-trained pre-neural network model to obtain a pseudo feature map, and constructing a feature map of a true high-resolution image corresponding to the pseudo high-resolution image to obtain a true feature map; and calculating a loss value of the generation network based on the pseudo feature map and the true feature map by using the following formula to obtain the generation network loss: H(p,q) = -Σ_i p(x_i) log q(x_i), wherein H(p,q) represents the generation network loss, p(x_i) represents the true feature map, q(x_i) represents the pseudo feature map, i represents the sequence number of a feature in the feature map, and x_i represents a feature in the feature map; based on the pseudo feature map and the true feature map, calculating a loss value of the discrimination network by using the following formula to obtain the discrimination network loss: loss = |y - f| + H(p,q), wherein loss represents the discrimination network loss, H(p,q) represents the generation network loss, y represents the true feature map, and f represents the pseudo feature map; when the generation network loss and the discrimination network loss are both larger than a preset loss, updating the network weights of the pre-trained pre-neural network model to obtain a pre-neural network model with updated network weights; and obtaining the trained pre-neural network model when the generation network loss and the discrimination network loss of the pre-neural network model after the network weights are updated are not greater than the preset loss. In a possible implementation manner of the first aspect, the performing resolution reconstruction on the training fundus ima