
CN-122018128-A - Wide-field multi-photon microscopic imaging method, device and equipment

CN 122018128 A

Abstract

The application relates to the technical field of optical microscopic imaging, and in particular to a wide-field multi-photon microscopic imaging method, device, and equipment. The method comprises: acquiring training data captured by a multi-photon microscope under a rolling sub-sampling scanning strategy, in which a periodic scanning path with a high frame rate and a low spatial sampling rate is shifted stepwise along one direction so that successive frames together cover the whole field of view; preprocessing the data by normalization, temporal registration, or spatial partitioning, generating a mask matrix, and constructing self-supervised training samples and pseudo-labels by cross-supervising the complementary regions of the mask; training a three-dimensional convolutional neural network on these samples and pseudo-labels; and using the trained network to reconstruct, by inference, rolling sub-sampled data into the target image sequence. This resolves the trade-off in the related art whereby a high spatial sampling density forces a low frame rate, while a high frame rate requires either a small imaging area or a low spatial sampling rate, i.e., a wide imaging field of view and high spatio-temporal resolution are difficult to achieve simultaneously.

Inventors

  • KONG LINGJIE
  • LIANG JUNHAO
  • YAN HONGWEI
  • ZHONG YI
  • SHI SONGHAI

Assignees

  • Tsinghua University (清华大学)

Dates

Publication Date
2026-05-12
Application Date
2025-12-04

Claims (10)

  1. A wide-field multi-photon microscopic imaging method, comprising the steps of: acquiring training data captured by a multi-photon microscope according to a rolling sub-sampling scanning strategy, wherein under the rolling sub-sampling scanning strategy, over a plurality of periods of rolling sub-sampling scanning along a target direction, the scanning paths of image frames of adjacent periods are staggered in sequence in the target direction, and the scanning paths of the image frames of all periods together cover every point of the whole field of view; performing at least one preprocessing step among normalization, temporal registration, and spatial partitioning on the training data, generating a mask matrix from the preprocessed data, forming self-supervised training samples by cross-supervising the complementary regions of the mask matrix, and constructing pseudo-labels based on the spatial complementarity of the image frames; and training a pre-constructed three-dimensional convolutional neural network on the self-supervised training samples and pseudo-labels, and reconstructing, by inference with the trained network, rolling sub-sampled data into an image sequence, wherein the multi-photon microscope is controlled to acquire the rolling sub-sampled data according to the rolling sub-sampling scanning strategy.
  2. The wide-field multi-photon microscopic imaging method of claim 1, wherein the training data and the sub-sampled data each comprise scan-path prior information, neuron spatial-location morphology information, and neural-activity firing-time information from the multi-photon microscope scan.
  3. The wide-field multi-photon microscopic imaging method of claim 1, wherein the mask matrix satisfies X = Y ⊙ M, where X represents the sub-sampled observed data, Y represents the complete neural-activity imaging volume data, ⊙ denotes element-wise multiplication, and M represents a binary sampling mask; and wherein the complementary regions of the mask matrix are given by an even-column mask M_even and an odd-column mask M_odd derived from a base sampling pattern, with M(x, y, t) indicating whether pixel (x, y) of frame t is sampled and M_odd being the complement of M_even.
  4. The wide-field multi-photon microscopic imaging method of claim 3, wherein the complementary regions of the mask matrix satisfy a spatial complementarity condition: M_even + M_odd = 1, the indicator function of the complete sample, and supp(M_even) ∩ supp(M_odd) = ∅, where ∅ denotes the empty set.
  5. The wide-field multi-photon microscopic imaging method of claim 1, wherein the reconstruction target of the three-dimensional convolutional neural network is a parameterized reconstruction function f_θ such that Ŷ = f_θ(X, Ȳ), where Ȳ is a time-averaged image providing global spatial context, Ŷ is the reconstructed complete volume data, and f_θ is a mapping function controlled by the parameters θ.
  6. The wide-field multi-photon microscopic imaging method of claim 1, wherein the three-dimensional convolutional neural network comprises an encoder, a decoder, and an output layer; the encoder downsamples the input data and extracts features level by level, the decoder upsamples the extracted features level by level, the output layer outputs the image sequence, and the encoder and decoder are connected by skip connections; the l-th encoder layer is expressed as E_l = Encoder_l(E_{l-1}), where E_l is the output of the l-th encoder layer and Encoder_l is the l-th encoder stage; the l-th decoder layer is expressed as D_l = Decoder_l([D_{l+1}, E_l]), where D_l is the output of the l-th decoder layer and [·, ·] denotes concatenation; and the output layer is expressed as Ŷ = Conv_{1×1×1}(D_1), where Ŷ is the final reconstruction result and Conv_{1×1×1} is a single-pixel convolution kernel.
  7. The wide-field multi-photon microscopic imaging method of claim 6, wherein the first layer of the three-dimensional convolutional neural network extracts temporal features with a convolution kernel of size k_t × k_s × k_s, wherein the temporal receptive field of the kernel is determined by the temporal kernel scale k_t and the temporal pooling factor of each layer, and the spatial receptive field is determined by the spatial kernel size k_s and the spatial pooling factor of each layer.
  8. The wide-field multi-photon microscopic imaging method of claim 6, wherein during training the three-dimensional convolutional neural network computes a training loss with a total reconstruction loss function and updates its network parameters based on the training loss, the total reconstruction loss being a combination of Manhattan (L1) and Euclidean (L2) losses evaluated against the left-domain target and the right-domain target.
  9. A wide-field multi-photon microscopic imaging apparatus, comprising: an acquisition module for acquiring training data captured by a multi-photon microscope according to a rolling sub-sampling scanning strategy, wherein under the rolling sub-sampling scanning strategy, over a plurality of periods of rolling sub-sampling scanning along a target direction, the scanning paths of image frames of adjacent periods are staggered in sequence in the target direction, and the scanning paths of the image frames of all periods together cover every point of the whole field of view; a processing module for performing at least one preprocessing step among normalization, temporal registration, and spatial partitioning on the training data, generating a mask matrix from the preprocessed data, forming self-supervised training samples by cross-supervising the complementary regions of the mask matrix, and constructing pseudo-labels based on the spatial complementarity of the image frames; and an output module for training a pre-constructed three-dimensional convolutional neural network on the self-supervised training samples and pseudo-labels and reconstructing, by inference with the trained network, rolling sub-sampled data into an image sequence, wherein the multi-photon microscope is controlled to acquire the rolling sub-sampled data according to the rolling sub-sampling scanning strategy.
  10. An electronic device comprising storage hardware, a processor, and a computer program stored on the storage hardware and executable on the processor, the processor executing the program to implement the wide-field multi-photon microscopic imaging method according to any one of claims 1-8.
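As an illustrative sketch (not part of the patent text), the rolling sub-sampling schedule of claim 1 can be modeled in a few lines: each frame scans every `step`-th row, the starting offset rolls forward by one row each period, and `step` consecutive frames together cover every row of the field of view. The function names and the row-wise scan geometry are assumptions for illustration.

```python
def rolling_rows(n_rows: int, step: int, frame: int) -> list[int]:
    """Rows scanned in the given frame: every `step`-th row, with the
    starting offset rolling forward by one row each period."""
    offset = frame % step
    return list(range(offset, n_rows, step))


def covered_rows(n_rows: int, step: int, n_frames: int) -> set[int]:
    """Union of rows scanned over `n_frames` consecutive frames."""
    covered: set[int] = set()
    for f in range(n_frames):
        covered.update(rolling_rows(n_rows, step, f))
    return covered
```

With step = 4, each frame samples only a quarter of the rows (a fourfold frame-rate gain at a fixed line rate), yet any 4 consecutive frames cover the whole field of view — the coverage property claim 1 requires of the staggered scan paths.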

Description

Wide-field multi-photon microscopic imaging method, device and equipment

Technical Field

The application relates to the technical field of optical microscopic imaging, in particular to a wide-field multi-photon microscopic imaging method, device, and equipment.

Background

Multi-photon microscopic imaging is an important means of in vivo biodynamic observation, but owing to physical limitations such as the mechanical inertia of galvanometer scanners, tissue scattering, and noise, increasing the sampling density in the related art significantly reduces the frame rate, while increasing the frame rate typically requires shrinking the imaging area or sacrificing spatial resolution, so imaging speed, imaging field of view, and imaging quality are difficult to achieve simultaneously.

Disclosure of Invention

The application provides a wide-field multi-photon microscopic imaging method, device, and equipment to address the technical difficulty of achieving wide-field multi-photon microscopic imaging with both a wide field of view and high spatio-temporal resolution.
The embodiment of the first aspect of the application provides a wide-field multi-photon microscopic imaging method, comprising: acquiring training data captured by a multi-photon microscope according to a rolling sub-sampling scanning strategy, wherein under the rolling sub-sampling scanning strategy, over a plurality of periods of rolling sub-sampling scanning along a target direction, the scanning paths of image frames of adjacent periods are staggered in sequence in the target direction, and the scanning paths of the image frames of all periods together cover every point of the whole field of view; performing at least one preprocessing step among normalization, temporal registration, and spatial partitioning on the training data, generating a mask matrix from the preprocessed data, forming self-supervised training samples by cross-supervising the complementary regions of the mask matrix, and constructing pseudo-labels based on the spatial complementarity of the image frames; and training a pre-constructed three-dimensional convolutional neural network on the self-supervised training samples and pseudo-labels, and reconstructing, by inference with the trained network, rolling sub-sampled data into an image sequence, wherein the multi-photon microscope is controlled to acquire the rolling sub-sampled data according to the rolling sub-sampling scanning strategy. Optionally, in one embodiment of the present application, the training data and the sub-sampled data each include scan-path prior information, neuron spatial-location morphology information, and neural-activity firing-time information from the multi-photon microscope scan.
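A minimal sketch of the cross-supervision step described above, under the assumption that the complementary regions are even and odd columns: pixels in one half of a sub-sampled frame form the network input while pixels in the complementary half serve as the pseudo-label, and the roles are swapped for cross supervision. The function name and the column-wise split are illustrative, not taken from the patent.

```python
def split_cross_supervised(frame):
    """Split a frame (a list of rows) into an input/pseudo-label pair
    using complementary even/odd column masks; unsampled pixels are zeroed."""
    height, width = len(frame), len(frame[0])
    even = [[frame[y][x] if x % 2 == 0 else 0.0 for x in range(width)]
            for y in range(height)]
    odd = [[frame[y][x] if x % 2 == 1 else 0.0 for x in range(width)]
           for y in range(height)]
    # Cross supervision: train on (even -> odd) and also on (odd -> even),
    # so each half of the sampled data supervises the reconstruction of the other.
    return even, odd
```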
Optionally, in one embodiment of the present application, the mask matrix is calculated according to the formula X = Y ⊙ M, where X represents the sub-sampled observation, Y represents the complete neural-activity imaging volume data (including calcium-signal imaging, neural-voltage-signal imaging, neurochemical-signal imaging, and the like), ⊙ denotes element-wise multiplication, and M represents a binary sampling mask. The complementary regions of the mask matrix are given by an even-column mask M_even and an odd-column mask M_odd derived from a base sampling pattern (for example, a triangular or rectangular pattern with a given period and sub-sampling step), where M(x, y, t) indicates whether pixel (x, y) of frame t is sampled. Optionally, in one embodiment of the application, the complementary regions of the mask matrix satisfy a spatial complementarity condition: M_even + M_odd = 1, the indicator function of the complete sample, and supp(M_even) ∩ supp(M_odd) = ∅, where ∅ denotes the empty set. Optionally, in one embodiment of the present application, the reconstruction target of the three-dimensional convolutional neural network is a parameterized reconstruction function f_θ such that Ŷ = f_θ(X, Ȳ), where Ȳ is a time-averaged image providing global spatial context, Ŷ is the reconstructed complete volume data, and f_θ is a mapping function controlled by the parameters θ.
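The relation X = Y ⊙ M and the spatial complementarity condition can be checked directly on small arrays; this sketch uses plain 2-D lists, and the names `apply_mask` and `is_complementary` are illustrative assumptions.

```python
def apply_mask(Y, M):
    """X = Y ⊙ M: element-wise product of volume data Y and binary mask M."""
    return [[y * m for y, m in zip(row_y, row_m)]
            for row_y, row_m in zip(Y, M)]


def is_complementary(M_even, M_odd):
    """True when the two masks have disjoint supports (product 0 everywhere)
    and their sum is the all-ones indicator of the complete sample."""
    return all(a * b == 0 and a + b == 1
               for row_a, row_b in zip(M_even, M_odd)
               for a, b in zip(row_a, row_b))
```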
Optionally, in one embodiment of the present application, the three-dimensional convolutional neural network includes an encoder, a decoder, and an output layer, wherein the encoder performs progressive downsampling and feature extraction on the input data, the decoder performs progressive upsampling on the extracted features, the output layer outputs the image sequence, and the encoder and decoder are connected by skip connections; the l-th layer of the encoder is expressed as E_l = Encoder_l(E_{l-1}), where E_l represents the output of the l-th encoder layer
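The skip-connected encoder-decoder described above can be sketched as a shape calculation; the pooling factors used here (temporal factor 1, spatial factor 2 per level) are assumptions for illustration, not values stated in the patent.

```python
def encoder_shapes(t, h, w, levels, t_pool=1, s_pool=2):
    """Per-level (time, height, width) shapes of a 3-D encoder:
    each level divides the spatial dims by s_pool and time by t_pool."""
    shapes = [(t, h, w)]
    for _ in range(levels):
        t, h, w = t // t_pool, h // s_pool, w // s_pool
        shapes.append((t, h, w))
    return shapes
```

The decoder mirrors this list in reverse, concatenating each upsampled feature map with the encoder output of the same level (the skip connection) before a final 1×1×1 convolution maps the features back to the reconstructed image sequence.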