
CN-116310032-B - Three-dimensional reconstruction method of two-dimensional ultrasonic image based on deep learning

CN 116310032 B

Abstract

The invention belongs to the technical field of medical image processing, and specifically relates to a deep-learning-based method for three-dimensional reconstruction from two-dimensional ultrasound images. The invention provides a novel deep-learning-based three-dimensional reconstruction method for two-dimensional ultrasound images, which takes a constructed high-frame-rate two-dimensional ultrasound image sequence as input to a 3DCNN-LSTM network, uses the 3DCNN to extract B-mode ultrasound inter-frame features to obtain spatial pose information, and uses the LSTM to perform statistics and prediction on the spatial pose sequence. The method effectively exploits the spatio-temporal correlation between the inter-frame features and the pose sequence of the ultrasound images, eliminates accumulated errors, and improves the accuracy of the three-dimensional reconstruction.

Inventors

  • CHEN XIN
  • LI YANFENG
  • CHEN HOUJIN
  • PENG YAHUI
  • LI JUPENG

Assignees

  • Beijing Jiaotong University (北京交通大学)

Dates

Publication Date
2026-05-08
Application Date
2023-03-20

Claims (8)

  1. A deep-learning-based three-dimensional reconstruction method for two-dimensional ultrasound images, characterized by comprising the following steps: S1, extracting a region of interest from a two-dimensional ultrasound image sequence; S2, stacking a plurality of consecutive frames of the region of interest into three-dimensional data to serve as input to a trained three-dimensional convolutional neural network; S3, extracting inter-frame features from the two-dimensional image sequence with different three-dimensional convolution kernels of the three-dimensional convolutional neural network and computing the spatial pose information; and S4, extracting the dependency relationships of the spatial pose sequence with a trained long short-term memory model, performing statistics and prediction on the output of the three-dimensional convolutional neural network, outputting the spatial pose information of the two-dimensional ultrasound image sequence, and performing three-dimensional reconstruction to generate a three-dimensional reconstructed image; wherein step S3 comprises: S31, the three-dimensional convolutional neural network replicates the input cube into 6 copies, with every 3 copies forming 1 feature group; S32, applying three-dimensional convolution to the input two-dimensional image sequences of the two feature groups to extract deep inter-frame features; S33, applying max pooling after each convolutional layer to reduce the feature-map size; and S34, computing the spatial pose information through fully connected layers; wherein the spatial pose information in step S34 is computed as follows: S341, computing the speckle decorrelation of three groups of regions of interest in adjacent frames at different distances, measuring several times, and plotting a distance-decorrelation calibration curve from the averages, the speckle correlation being ρ(X, Y) = cov(X, Y) / (σ_X · σ_Y), where cov(X, Y) is the covariance of the ROI regions X and Y of the adjacent frames and σ_X, σ_Y are the standard deviations of X and Y; S342, moving the frame images along the X, Y and Z directions with a fixed step to obtain distance-decorrelation calibration curves in the three directions; S343, computing the speckle decorrelation of three groups of regions of interest in the current two adjacent frame images; and S344, obtaining the distance between the current adjacent images from the calibration curves by table lookup, and, using the principle that three non-collinear points determine a unique plane in space, computing the spatial position and angular relation of the current image relative to the previous frame image, that is, computing the pose information of the next frame image from the three distances and the pose information of the previous frame image.
  2. The deep-learning-based three-dimensional reconstruction method for two-dimensional ultrasound images according to claim 1, wherein step S1 comprises: S11, acquiring a low-frame-rate two-dimensional ultrasound image sequence with a one-dimensional array ultrasound probe, and acquiring the pose information corresponding to the two-dimensional ultrasound images with an acousto-optic positioning system; S12, interpolating and fitting the image sequence with a Bezier curve function, inserting two-dimensional ultrasound images at the interpolation points to construct a high-frame-rate two-dimensional ultrasound image sequence, the Bezier curve function being B(t) = Σ_{i=0}^{n} C(n, i) (1 − t)^{n−i} t^i P_i, where the Bezier curve B(t) has n + 1 control points P_0, P_1, …, P_n in total and t ∈ [0, 1] is the curve parameter; and S13, extracting a region of interest (ROI) from each frame of the high-frame-rate two-dimensional ultrasound image sequence.
  3. The deep-learning-based three-dimensional reconstruction method for two-dimensional ultrasound images according to claim 2, wherein in step S12, 4 frames are inserted between every two frames of the low-frame-rate two-dimensional ultrasound image sequence to obtain the high-frame-rate two-dimensional ultrasound image sequence.
  4. The deep-learning-based three-dimensional reconstruction method for two-dimensional ultrasound images according to claim 1, wherein in step S2, 4 interpolated frames are inserted between every two consecutive frames, three-dimensional data is obtained after ROI extraction is performed on the consecutive frames and the interpolated frames, and a cube composed of the ROIs of the plurality of ultrasound image frames is used as the input to the three-dimensional convolutional neural network.
  5. The deep-learning-based three-dimensional reconstruction method for two-dimensional ultrasound images according to claim 1, wherein training the three-dimensional convolutional neural network of step S2 and the long short-term memory model of step S4 comprises: initializing a learning rate, and comparing the spatial pose information extracted by the three-dimensional convolutional neural network with the pose information labels to obtain the pose-information mean squared error MSE = (1/n) Σ_{i=1}^{n} (R_i − E_i)², where n is the number of samples, R_i is the true value and E_i is the predicted value; and feeding the pose-information loss MSE and the feature sequence output by the three-dimensional convolutional neural network to the long short-term memory model, which continuously updates its parameters to reduce the MSE loss of the output pose information.
  6. The deep-learning-based three-dimensional reconstruction method for two-dimensional ultrasound images according to claim 1, wherein the long short-term memory model of step S4 comprises a forget gate, an input gate and an output gate, the forget gate determining which information of the model state at the previous time step to discard and updating the model state, and the model state at the previous time step being used as a parameter for updating the current state.
  7. The deep-learning-based three-dimensional reconstruction method for two-dimensional ultrasound images according to claim 6, wherein step S4 uses a multi-layer long short-term memory model combined with a predetermined temporal feature to form new time-series data.
  8. The deep-learning-based three-dimensional reconstruction method for two-dimensional ultrasound images according to claim 7, wherein the three-dimensional reconstruction of step S4 is divided into pixel mapping and gap filling, and during pixel mapping each pixel of each frame of the two-dimensional ultrasound image is transformed into the three-dimensional reconstruction coordinate system by a coordinate transformation according to the spatial pose information of the current frame.
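Claim 1 (steps S341 and S344) rests on two computations: the correlation coefficient ρ(X, Y) = cov(X, Y)/(σ_X σ_Y) between adjacent-frame ROIs, and a table lookup that inverts the distance-decorrelation calibration curve. A minimal sketch of both, assuming ROIs are flattened into equal-length pixel lists and that the calibration curve is monotonic (function names are illustrative, not from the patent):

```python
import math

def correlation(x, y):
    """Pearson correlation of two flattened ROI patches (claim 1, S341).
    Speckle decorrelation grows as the inter-frame distance grows."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

def distance_from_calibration(decorr, curve):
    """Invert a monotonic distance-decorrelation calibration curve by
    table lookup with linear interpolation (claim 1, S344).
    `curve` is a list of (distance, decorrelation) pairs, both increasing."""
    for (d0, c0), (d1, c1) in zip(curve, curve[1:]):
        if c0 <= decorr <= c1:
            t = (decorr - c0) / (c1 - c0)
            return d0 + t * (d1 - d0)
    raise ValueError("decorrelation outside calibration range")
```

Identical patches give ρ = 1 (no decorrelation); with three such distances in X, Y and Z, S344's three non-collinear points fix the plane of the next frame.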
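Claim 2's interpolation uses the Bernstein form of the Bezier curve, B(t) = Σ_{i=0}^{n} C(n, i) (1 − t)^{n−i} t^i P_i. A small evaluator as a sketch; the sampling of t at 1/5, 2/5, 3/5, 4/5 mentioned in the comment is only one plausible reading of claim 3's four inserted frames:

```python
from math import comb

def bezier(control_points, t):
    """Evaluate an n-degree Bezier curve at parameter t in [0, 1]
    (claim 2, S12).  Each control point is a tuple of coordinates;
    inserting 4 frames between key frames (claim 3) would correspond
    to sampling t at 1/5, 2/5, 3/5 and 4/5."""
    n = len(control_points) - 1
    dim = len(control_points[0])
    point = [0.0] * dim
    for i, p in enumerate(control_points):
        w = comb(n, i) * (1 - t) ** (n - i) * t ** i  # Bernstein weight
        for d in range(dim):
            point[d] += w * p[d]
    return tuple(point)
```

For two control points the curve degenerates to linear interpolation, e.g. `bezier([(0.0, 0.0), (1.0, 1.0)], 0.5)` is the midpoint.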
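Claim 5's training loss is the plain mean squared error between true and predicted poses, MSE = (1/n) Σ_i (R_i − E_i)². A one-function sketch over scalar pose components:

```python
def mse(truth, pred):
    """Mean squared error between true poses R_i and predicted poses E_i
    (claim 5): the loss fed back to the LSTM during training."""
    n = len(truth)
    return sum((r - e) ** 2 for r, e in zip(truth, pred)) / n
```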
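Claim 6 names the three LSTM gates and the reuse of the previous state as a parameter of the current update. A scalar single-step sketch of the standard LSTM cell (the patent does not disclose its weights; all values here are illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, w):
    """One scalar LSTM step showing the gates of claim 6.
    `w` maps gate name -> (w_x, w_h, bias); all quantities are scalars
    for clarity, not the patent's trained multi-layer model."""
    gate = lambda name, act: act(w[name][0] * x + w[name][1] * h_prev + w[name][2])
    f = gate("forget", sigmoid)   # decide what to discard from c_prev
    i = gate("input", sigmoid)    # decide what new information to admit
    g = gate("cand", math.tanh)   # candidate state update
    o = gate("output", sigmoid)   # decide what to expose
    c = f * c_prev + i * g        # previous state reused as a parameter
    h = o * math.tanh(c)
    return h, c
```

With a saturated forget gate and a closed input gate the cell state passes through unchanged, which is how long-range dependencies in the pose sequence survive.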
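Claim 8's pixel mapping transforms each pixel of a frame into the reconstruction coordinate system using the frame's spatial pose. The patent does not specify the pose representation; the sketch below assumes a 4×4 homogeneous pose matrix and hypothetical pixel-to-millimetre scale factors `sx`, `sy`:

```python
def pixel_to_world(u, v, pose, sx=1.0, sy=1.0):
    """Map pixel (u, v) into the 3-D reconstruction frame via the frame's
    4x4 homogeneous pose matrix (claim 8's pixel-mapping step, under the
    stated assumptions).  The image plane sits at z = 0 in probe coords."""
    p = (u * sx, v * sy, 0.0, 1.0)
    return tuple(sum(pose[r][c] * p[c] for c in range(4)) for r in range(3))
```

With a pure translation pose the pixel is simply shifted into place; gap filling (the second half of claim 8) would then interpolate voxels no pixel landed on.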

Description

Three-dimensional reconstruction method of two-dimensional ultrasonic image based on deep learning

Technical Field

The invention belongs to the technical field of medical image processing, and specifically relates to a deep-learning-based method for three-dimensional reconstruction from two-dimensional ultrasound images.

Background

Compared with other modalities, medical ultrasound has clear advantages: being non-invasive, it is harmless, painless and intuitive to use, and it is one of the indispensable imaging methods in clinical medicine, where it has been widely applied. Conventional B-mode ultrasound imaging systems can only provide two-dimensional sequential images of the scanned object, and doctors must mentally reconstruct these tomographic images into a three-dimensional object, which demands considerable experience and spatial imagination. Three-dimensional ultrasound imaging offers intuitive image display, enables accurate measurement of diagnostic parameters, and is widely used in medical teaching and surgical planning. At present there are four main methods for acquiring three-dimensional medical ultrasound images:

1) a dedicated two-dimensional array ultrasound probe, which is expensive and has a narrow imaging field of view;
2) a mechanically driven one-dimensional array ultrasound probe that scans along a preset trajectory and acquires a fixed number of two-dimensional ultrasound images for three-dimensional reconstruction, but whose mechanical device is cumbersome;
3) a one-dimensional array ultrasound probe whose two-dimensional ultrasound images are given pose information by additional spatial and angular sensors before three-dimensional reconstruction, which is prone to accumulated errors;
4) a freehand one-dimensional array ultrasound probe without position sensors, where the relative spatial position between adjacent images is estimated solely from the information in the ultrasound images themselves, which however requires a high frame rate of the two-dimensional image sequence and extensive training and calibration.

Disclosure of Invention

Aiming at these problems, the invention provides a novel deep-learning-based three-dimensional reconstruction method for two-dimensional ultrasound images. The specific technical scheme of the invention is as follows. The invention provides a deep-learning-based three-dimensional reconstruction method for two-dimensional ultrasound images, comprising the following steps: S1, extracting a region of interest from a two-dimensional ultrasound image sequence; S2, stacking a plurality of consecutive frames of the region of interest into three-dimensional data to serve as input to a trained three-dimensional convolutional neural network; S3, extracting inter-frame features from the two-dimensional image sequence with different three-dimensional convolution kernels of the three-dimensional convolutional neural network and computing the spatial pose information; and S4, extracting the dependency relationships of the spatial pose sequence with a trained long short-term memory model, performing statistics and prediction on the output of the three-dimensional convolutional neural network, outputting the spatial pose information of the two-dimensional ultrasound image sequence, and performing three-dimensional reconstruction to generate a three-dimensional reconstructed image.
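The steps S2 to S4 above can be sketched as a small pipeline. `cnn3d` and `lstm` are stand-in callables rather than the patent's trained networks, and `depth=6` is only a plausible reading of claim 4 (two original frames plus four interpolated ones):

```python
def reconstruct(frames, cnn3d, lstm, depth=6):
    """Sketch of S2-S4: stack `depth` consecutive ROI frames into cubes
    (S2), map each cube to raw pose information with a 3DCNN (S3), and
    refine the whole pose sequence with an LSTM (S4)."""
    cubes = [frames[i:i + depth] for i in range(len(frames) - depth + 1)]
    raw_poses = [cnn3d(cube) for cube in cubes]
    return lstm(raw_poses)
```

With dummy callables, ten frames and a depth of six yield five overlapping cubes, one raw pose per cube, before the LSTM's refinement.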
The beneficial effects obtained by the invention are as follows: the invention provides a novel deep-learning-based three-dimensional reconstruction method for two-dimensional ultrasound images, which takes a constructed high-frame-rate two-dimensional ultrasound image sequence as input to a 3DCNN-LSTM network, uses the 3DCNN to extract B-mode ultrasound inter-frame features to obtain spatial pose information, uses the LSTM to perform statistics and prediction on the spatial pose sequence, effectively exploits the spatio-temporal correlation between the inter-frame features and the pose sequence of the ultrasound images, eliminates accumulated errors, and improves the accuracy of the three-dimensional reconstruction.

Drawings

FIG. 1 is a flow chart of the deep-learning-based three-dimensional reconstruction method of a two-dimensional ultrasound image in the present invention; FIG. 2 is a flow chart of step S1 in the present invention; FIG. 3 is a schematic illustration of interpolation and fitting by the Bezier curve function within the same control window in the present invention; FIG. 4 is a schematic diagram of the 3DCNN network in accordance with the present invention; FIG. 5 is a schematic diagram of the 3DCNN-LSTM network in accordance with the present invention; FIG. 6 is a flow