US-12626331-B2 - Translational rapid ultraviolet-excited sectioning tomography assisted with deep learning
Abstract
Translational rapid ultraviolet-excited sectioning tomography (TRUST) applies ultraviolet (UV) excitation to a sample and images fluorescence and autofluorescence emission to tomographically image the sample. Deep-learning neural networks are used to achieve higher imaging speed and higher imaging resolution. In one use, fluorescence images acquired at relatively low resolution are transformed into high-resolution images by a conditional generative adversarial network (cGAN) acting as a super-resolution network (e.g., ESRGAN), which also reduces the image scanning time. In another use, a second cGAN, such as Pix2Pix, realizes virtual optical sectioning to enhance the axial resolution of the imaging system. Compared to conventional patterned-illumination methods (e.g., HiLo microscopy), which need at least two shots per field of view, the imaging speed is at least doubled because only one shot under uniform UV illumination is required.
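As a sketch of the two-stage pipeline the abstract describes (one cGAN for super-resolution, another for virtual optical sectioning), the following minimal Python example chains two placeholder functions in place of the trained generators; the function names, the 4x upscaling factor, and the background-subtraction stand-in are illustrative assumptions, not the patented implementation:

```python
import numpy as np

SR_FACTOR = 4  # assumed super-resolution upscaling factor


def sr_generator(lr_image: np.ndarray) -> np.ndarray:
    """Placeholder for the super-resolution cGAN (e.g., ESRGAN).
    Nearest-neighbour upsampling stands in for the learned mapping."""
    return np.kron(lr_image, np.ones((SR_FACTOR, SR_FACTOR)))


def sectioning_generator(hr_image: np.ndarray) -> np.ndarray:
    """Placeholder for the virtual-optical-sectioning cGAN (e.g., Pix2Pix).
    Subtracting a crude background estimate mimics out-of-focus rejection
    from a single uniform-illumination shot."""
    return np.clip(hr_image - hr_image.mean(), 0.0, None)


def deep_trust_single_shot(lr_image: np.ndarray) -> np.ndarray:
    """One uniform-illumination LR shot -> HR, virtually sectioned image."""
    return sectioning_generator(sr_generator(lr_image))


lr = np.random.rand(64, 64)   # one low-resolution TRUST shot
out = deep_trust_single_shot(lr)
print(out.shape)  # (256, 256)
```

In a real Deep-TRUST system the two placeholders would be trained ESRGAN- and Pix2Pix-style generators; the point illustrated is only the data flow, in which a single uniform-illumination LR shot passes through both networks to yield an HR, optically sectioned image.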
Inventors
- Tsz Wai Wong
- Wentao YU
- Yan Zhang
- Lei Kang
Assignees
- THE HONG KONG UNIVERSITY OF SCIENCE AND TECHNOLOGY
Dates
- Publication Date: 2026-05-12
- Application Date: 2022-08-29
Claims (8)
- 1. A method for tomographically imaging a sample with ultraviolet (UV) excitation to yield a three-dimensional (3D) fluorescence/autofluorescence image volume, the method comprising the steps of: (a) focal scanning of an exposed surface layer of the sample, which is immersed under staining solutions for labelling and illuminated by UV light, to yield a sequence of low-resolution (LR) translational rapid ultraviolet-excited sectioning tomography (TRUST) images; (b) transforming the sequence of LR TRUST images into a sequence of high-resolution (HR) TRUST images assisted with a second conditional generative adversarial network (cGAN), which improves a resolution of an imaging system or reduces an imaging time compared to directly obtaining the sequence of HR TRUST images, wherein the second cGAN is a super-resolution (SR) neural network; (c) transforming the sequence of HR TRUST images into a sequence of virtual optically-sectioned Patterned-TRUST images with a first cGAN to enhance an axial resolution while saving time compared with patterned-illumination microscopy; (d) removing the imaged surface layer of the sample with mechanical sectioning and exposing a following/adjacent layer; and (e) repeating the steps (a)-(d) until the 3D fluorescence/autofluorescence image volume of the whole sample is obtained.
- 2. The method of claim 1, wherein the focal scanning of the exposed surface layer of a tissue block comprises the steps of: (f) obtaining a LR TRUST image that records fluorescence and autofluorescence emission of the individual section irradiated with UV light under a uniform-illumination condition; (g) moving the imaging device and/or the tissue sample axially for a distance; and (h) repeating the steps (f) and (g) to yield the sequence of LR TRUST images of the sample surface layer.
- 3. The method of claim 1, further comprising: training the first cGAN with a first training dataset which comprises a plurality of first training samples, wherein each training sample of the plurality of first training samples comprises a paired example of an ordinary TRUST image acquired under uniform illumination and a corresponding optically-sectioned image.
- 4. The method of claim 1, wherein the second cGAN is selected to be a super-resolution generative adversarial network (SRGAN), an enhanced super-resolution generative adversarial network (ESRGAN), a content adaptive resampler (CAR), or another kind of SR deep learning network.
- 5. A system for tomographically imaging a sample with ultraviolet (UV) excitation to yield a three-dimensional (3D) fluorescence image volume, the system comprising: an imaging subsystem realized as a translational rapid ultraviolet-excited sectioning tomography (TRUST) system or a Patterned-TRUST system for imaging the sample with UV excitation; and one or more computers for controlling the imaging subsystem and determining the 3D fluorescence image volume, the one or more computers being configured to: control the imaging subsystem to prepare a plurality of sections of the sample for imaging; perform focal scanning of each exposed surface layer once it has been stained, to yield a sequence of TRUST images of the individual section; transform the sequence of TRUST images into a sequence of high-resolution (HR) TRUST images with a second conditional generative adversarial network (cGAN), which is applied to enhance an image resolution or reduce an image scanning time; transform the sequence of HR TRUST images into a sequence of virtual optically-sectioned TRUST images with a first cGAN to enhance an axial resolution, or to save time compared with patterned-illumination microscopy; and collect respective sequences of virtual optically-sectioned TRUST images to form the 3D fluorescence image volume; wherein in the focal scanning of each exposed surface layer of the sample, the one or more computers are further configured to: (a) obtain a TRUST image that records fluorescence and autofluorescence emission of the individual section irradiated with UV light under a uniform-illumination condition; (b) move the imaging device and/or the tissue sample axially for a distance; and (c) repeat the steps (a) and (b) to yield a sequence of TRUST images of the individual section.
- 6. The system of claim 5, wherein the one or more computers are further configured to: train the first cGAN with a first training dataset, which comprises a plurality of first training samples, wherein each first training sample comprises a paired example of an ordinary TRUST image and its corresponding optically-sectioned TRUST image.
- 7. The system of claim 5, wherein in transforming the sequence of TRUST images into the sequence of HR TRUST images, the one or more computers are further configured to: use the second cGAN to transform the sequence of TRUST images into the sequence of HR TRUST images, wherein the second cGAN is a super-resolution (SR) neural network configured and trained to enhance input images in resolution, thereby reducing the imaging time compared to directly obtaining HR TRUST images.
- 8. The system of claim 7, wherein the second cGAN is selected to be a super-resolution generative adversarial network (SRGAN), an enhanced super-resolution generative adversarial network (ESRGAN), a content adaptive resampler (CAR), or another kind of super-resolution deep learning network.
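The acquisition loop recited in claims 1 and 5 (focal scanning of the exposed surface, super-resolution by the second cGAN, virtual sectioning by the first cGAN, then mechanical sectioning and repetition) can be sketched as follows; every function and constant here is a hypothetical stand-in for the imaging hardware and the trained networks, chosen only to show the control flow:

```python
import numpy as np

N_LAYERS = 5        # number of mechanical sections (assumed)
N_FOCAL = 3         # focal positions per exposed surface, claim 2 loop (assumed)
LR_SIZE, SR = 32, 2 # LR image size and SR upscaling factor (assumed)


def acquire_focal_stack(layer_idx: int) -> np.ndarray:
    """Stand-in for steps (f)-(h): one LR shot per axial focal position."""
    rng = np.random.default_rng(layer_idx)
    return rng.random((N_FOCAL, LR_SIZE, LR_SIZE))


def super_resolve(img: np.ndarray) -> np.ndarray:
    """Stand-in for the second cGAN (SR network), step (b)."""
    return np.kron(img, np.ones((SR, SR)))


def virtual_section(img: np.ndarray) -> np.ndarray:
    """Stand-in for the first cGAN (virtual optical sectioning), step (c)."""
    return np.clip(img - img.mean(), 0.0, None)


slices = []
for layer in range(N_LAYERS):            # (d)/(e): section the block and repeat
    stack = acquire_focal_stack(layer)   # (a): focal scanning of exposed layer
    for lr in stack:
        hr = super_resolve(lr)           # (b): LR -> HR
        slices.append(virtual_section(hr))  # (c): single-shot sectioning
volume = np.stack(slices)                # collected 3D image volume
print(volume.shape)  # (15, 64, 64)
```

The nesting mirrors the claims: the inner loop is the per-layer focal scan of claim 2, and the outer loop is the section-and-repeat cycle that builds the 3D volume.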
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to, and the benefit of, U.S. Provisional Patent Application No. 63/254,546 filed on Oct. 12, 2021, the disclosure of which is hereby incorporated by reference in its entirety.

LIST OF ABBREVIATIONS
- 3D: three-dimensional
- AO: acridine orange
- CAR: content adaptive resampler
- cGAN: conditional generative adversarial network
- CPU: central processing unit
- CNN: convolutional neural network
- CT: computed tomography
- DAPI: 4′,6-diamidino-2-phenylindole
- Deep-TRUST: TRUST assisted with deep learning
- DMD: digital micromirror device
- ESRGAN: enhanced super-resolution generative adversarial network
- FFPE: formalin-fixed, paraffin-embedded
- FOV: field of view
- FWHM: full width at half maximum
- GPU: graphics processing unit
- HR: high-resolution
- LD: laser diode
- LED: light emitting diode
- LR: low-resolution
- MCU: microcontroller unit
- NA: numerical aperture
- NBF: neutral-buffered formalin
- Patterned-TRUST: TRUST with patterned illumination
- PBS: phosphate-buffered saline
- PI: propidium iodide
- RAM: random access memory
- SR: super-resolution
- SRGAN: super-resolution generative adversarial network
- TRUST: translational rapid ultraviolet-excited sectioning tomography
- UV: ultraviolet

TECHNICAL FIELD
The present invention relates generally to tomographic imaging of fluorescence and autofluorescence emission of a sample irradiated with UV light. In particular, the present invention relates to Deep-TRUST, which uses neural networks to process TRUST images to improve imaging resolution and reduce imaging time.

BACKGROUND
It is still laborious and time-consuming to acquire 3D information of large biological samples with high resolution. For most 3D fluorescence microscopes, the time cost of tissue preparation for large samples can be extremely high (e.g., ~2 weeks for whole mouse brain clearing or staining) [5]-[8]. Moreover, some tissue processing protocols can induce side effects and degrade imaging quality.
For whole organ staining, it is difficult to optimize all involved chemical or physical parameters to realize a consistent staining appearance in both the central and peripheral areas for samples with different tissue types or sizes. As for optical clearing, there are still several challenges, such as morphological distortion of the sample [9] and toxicity of reagents [10]. Finally, some imaging systems require the scanned sample to be embedded in a resin [11], [12] or paraffin [13] block, resulting in additional time cost and uneven shrinkage of the sample due to dehydration. As for label-free imaging systems, tissue staining is unnecessary, but several other issues must be addressed. To begin with, the imaging specificity may be lower; for example, the imaging contrast of soft tissue (e.g., muscle) can be problematic for micro-CT [14], [15]. Also, the entire experimental time cost of a label-free imaging system is not necessarily lower than that of a fluorescence imaging system, even with staining time counted. For example, light-sheet microscopy takes roughly two weeks (including clearing, staining, and optical scanning) for whole mouse brain imaging, while label-free photoacoustic microscopy [17] needs ~2 months. There is a need in the art for an imaging technique that reduces image acquisition time while maintaining high imaging resolution and high imaging content at a low cost.

SUMMARY
The present invention is concerned with Deep-TRUST, which implements neural networks on the original TRUST image to enhance its resolution or to realize virtual optical sectioning with a single shot. As a result, the image scanning time is advantageously reduced. The first aspect of the present invention is to provide the first method for tomographically imaging a sample with UV excitation to yield a fluorescence 3D volume. The first method is used for a Deep-TRUST system.
It is related to imaging the sample at a relatively low resolution and then transforming LR TRUST images into HR TRUST images by utilizing an SR neural network, which can enhance the resolution of the input image and thus reduce the image scanning time. The first method comprises: (a) block-face imaging of the exposed surface layer of a tissue block, which is immersed in staining solutions and irradiated with UV light, to yield LR fluorescence and autofluorescence images; (b) using a cGAN to transform LR TRUST images into HR TRUST images, wherein the cGAN can also be replaced with other SR neural networks configured and trained to enhance the resolution of the input image, thereby reducing the time required for image scanning in comparison to directly obtaining HR TRUST images; (c) removing the imaged surface layer of the tissue block with mechanical sectioning and exposing the following/adjacent layer; and (d) repeating the steps (a)-(c) to acquire the whole 3D volume of the imaged sample. The SR neural network can be SRGAN, ESRGAN, CAR, or another kind of SR deep learning network. Preferably, the ESRGAN
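The scanning-time saving of the image-at-LR-then-super-resolve strategy in step (b) can be estimated by counting camera tiles. The numbers below (sample extent, high-resolution field of view, and upscaling factor) are illustrative assumptions, not values from the specification:

```python
# Back-of-the-envelope tile count: scanning at a coarser resolution enlarges
# the effective field of view per shot by the SR factor in each lateral
# dimension, so the number of tiles (and shots) drops by SR_FACTOR**2.
SAMPLE_MM = 10.0   # lateral sample extent (assumed)
HR_FOV_MM = 0.5    # field of view per shot at full resolution (assumed)
SR_FACTOR = 4      # assumed network upscaling factor

hr_tiles = (SAMPLE_MM / HR_FOV_MM) ** 2               # direct HR scanning
lr_tiles = (SAMPLE_MM / (HR_FOV_MM * SR_FACTOR)) ** 2  # LR scanning + SR network
print(hr_tiles, lr_tiles, hr_tiles / lr_tiles)  # 400.0 25.0 16.0
```

Under these assumed numbers, a 4x SR network cuts the per-layer shot count sixteenfold, which is the sense in which the SR transform "reduces the image scanning time" relative to directly acquiring HR TRUST images.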