CN-115345933-B - Method for predicting relative position between images based on inertial measurement data

CN115345933B

Abstract

The invention discloses a method for predicting the relative position between images based on inertial measurement data. The method comprises: acquiring a plurality of ultrasound images located in the same image sequence, together with the inertial measurement unit (IMU) data corresponding to each ultrasound image; determining the direction information and acceleration information corresponding to each ultrasound image from its IMU data; and determining, from the ultrasound images and their direction and acceleration information, the relative position information corresponding to each ultrasound image, wherein the relative position information reflects the relative position transformation between two adjacent ultrasound images. By combining IMU data with the image sequence to predict inter-image relative position, the invention addresses the problem that existing deep learning techniques estimate relative position from ultrasound images alone and are therefore easily affected by inter-image displacement and accumulated drift errors.

Inventors

  • NI DONG
  • LUO MINGYUAN
  • YANG XIN
  • WANG HONGZHANG
  • DU LIWEI

Assignees

  • Shenzhen University (深圳大学)

Dates

Publication Date
2026-05-08
Application Date
2022-06-30

Claims (8)

  1. A method for predicting the relative position between images based on inertial measurement data, the method comprising:
     acquiring a plurality of ultrasound images and the inertial measurement unit data corresponding to each ultrasound image, wherein the ultrasound images are located in the same image sequence;
     determining the direction information and the acceleration information corresponding to each ultrasound image according to its inertial measurement unit data, wherein the direction information is Euler angle information, and each item of inertial measurement unit data comprises measured Euler angle information and measured acceleration information; for each ultrasound image, calculating a rotation matrix from the measured Euler angle information corresponding to that ultrasound image and thereby determining the Euler angle information corresponding to that ultrasound image, wherein the measured Euler angle information is expressed in a north-east (navigation) coordinate system and the Euler angle information is expressed in the coordinate system of the ultrasound image; and determining the gravity direction from the measured Euler angle information corresponding to that ultrasound image, and determining the acceleration information corresponding to that ultrasound image from the measured acceleration information and the gravity direction, wherein the acceleration information is the difference between the measured acceleration information and the component of the measured acceleration information along the gravity direction; and
     determining the relative position information corresponding to each ultrasound image according to the ultrasound images and the direction information and acceleration information corresponding to each ultrasound image, wherein the relative position information reflects the relative position transformation between two adjacent ultrasound images, the determining comprising: inputting each ultrasound image and its corresponding Euler angle information and acceleration information into a pre-trained target network and obtaining the relative position information through the target network, the target network comprising a feature extraction module, a fusion module and a prediction network; inputting each ultrasound image and its corresponding acceleration information and direction information into the feature extraction module to obtain the image feature, the acceleration feature and the Euler angle feature corresponding to each ultrasound image, wherein the image feature corresponding to each ultrasound image reflects relative distance information between two adjacent ultrasound images; inputting the image features and the acceleration features corresponding to the ultrasound images into the fusion module to obtain the speed feature corresponding to each ultrasound image; and inputting the speed features and the Euler angle features corresponding to the ultrasound images into the prediction network to obtain the relative position information, wherein the relative position information corresponding to each ultrasound image comprises a plurality of relative translation transformation parameters and a plurality of relative rotation transformation parameters;
     wherein the feature extraction module comprises a residual network, a first fully connected layer and a second fully connected layer; the input of the residual network is an ultrasound image and its output is the image feature corresponding to that ultrasound image; the input of the first fully connected layer is the acceleration information corresponding to the ultrasound image and its output is the corresponding acceleration feature; the input of the second fully connected layer is the Euler angle information corresponding to the ultrasound image and its output is the corresponding Euler angle feature; the fusion module comprises a fusion unit and a feature enhancement network; and the feature enhancement network and the prediction network are built from long short-term memory (LSTM) networks.
  2. The method of claim 1, wherein each ultrasound image is acquired by a predetermined ultrasound imaging device, and the inertial measurement unit data corresponding to each ultrasound image is acquired by a predetermined inertial measurement unit mounted on the ultrasound imaging device.
  3. The method for predicting the relative position between images based on inertial measurement data according to claim 1, wherein the fusion module comprises a fusion unit and a feature enhancement network, and inputting the image features and the acceleration features corresponding to the ultrasound images into the fusion module to obtain the speed features corresponding to the ultrasound images comprises: inputting the image features and the acceleration features corresponding to the ultrasound images into the fusion unit to obtain fused speed features corresponding to the ultrasound images; and inputting the fused speed features corresponding to the ultrasound images into the feature enhancement network to obtain the speed features corresponding to the ultrasound images.
  4. The method of claim 1, wherein the training process of the target network comprises: acquiring a training image sequence and the direction information and acceleration information corresponding to each training image frame in the training image sequence, and inputting the training image sequence and the direction information and acceleration information corresponding to each training image frame into an initial network to obtain predicted relative position information corresponding to the training image sequence, wherein the initial network is the untrained target network; acquiring standard relative position information corresponding to the training image sequence, and determining a first loss value for the initial network from the predicted relative position information and the standard relative position information; and updating the network parameters of the initial network according to the first loss value to obtain an updated network, judging whether the updated network has converged to a target value, and if not, repeating the above steps from the acquisition of the training image sequence until the updated network converges to the target value, thereby obtaining the trained target network.
  5. The method for predicting the relative position between images based on inertial measurement data of claim 1, further comprising: determining, from the relative position information, the predicted acceleration information and the predicted Euler angle information corresponding to each ultrasound image; determining an acceleration loss value for the target network from the acceleration information and the predicted acceleration information corresponding to each ultrasound image; determining an Euler angle loss value for the target network from the Euler angle information and the predicted Euler angle information corresponding to each ultrasound image; determining a second loss value for the target network from the acceleration loss value and the Euler angle loss value; and updating the network parameters of the target network according to the second loss value.
  6. An apparatus for predicting the relative position between images based on inertial measurement data, the apparatus comprising:
     a data acquisition module, used for acquiring a plurality of ultrasound images and the inertial measurement unit data corresponding to each ultrasound image, wherein the ultrasound images are located in the same image sequence;
     a data conversion module, used for determining the direction information and the acceleration information corresponding to each ultrasound image according to its inertial measurement unit data, wherein the direction information is Euler angle information, and each item of inertial measurement unit data comprises measured Euler angle information and measured acceleration information; for each ultrasound image, a rotation matrix is calculated from the measured Euler angle information corresponding to that ultrasound image to determine the Euler angle information corresponding to that ultrasound image, the measured Euler angle information being expressed in a north-east (navigation) coordinate system and the Euler angle information being expressed in the coordinate system of the ultrasound image; the gravity direction is determined from the measured Euler angle information corresponding to that ultrasound image, and the acceleration information corresponding to that ultrasound image is determined from the measured acceleration information and the gravity direction, the acceleration information being the difference between the measured acceleration information and the component of the measured acceleration information along the gravity direction; and
     an information prediction module, used for determining the relative position information corresponding to each ultrasound image according to the ultrasound images and the direction information and acceleration information corresponding to each ultrasound image, wherein the relative position information reflects the relative position transformation between two adjacent ultrasound images; the information prediction module inputs each ultrasound image and its corresponding Euler angle information and acceleration information into a pre-trained target network and obtains the relative position information through the target network, the target network comprising a feature extraction module, a fusion module and a prediction network; each ultrasound image and its corresponding acceleration information and direction information are input into the feature extraction module to obtain the image feature, the acceleration feature and the Euler angle feature corresponding to each ultrasound image, the image feature corresponding to each ultrasound image reflecting relative distance information between two adjacent ultrasound images; the image features and the acceleration features corresponding to the ultrasound images are input into the fusion module to obtain the speed feature corresponding to each ultrasound image; and the speed features and the Euler angle features corresponding to the ultrasound images are input into the prediction network to obtain the relative position information, the relative position information corresponding to each ultrasound image comprising a plurality of relative translation transformation parameters and a plurality of relative rotation transformation parameters;
     wherein the feature extraction module comprises a residual network, a first fully connected layer and a second fully connected layer; the input of the residual network is an ultrasound image and its output is the image feature corresponding to that ultrasound image; the input of the first fully connected layer is the acceleration information corresponding to the ultrasound image and its output is the corresponding acceleration feature; the input of the second fully connected layer is the Euler angle information corresponding to the ultrasound image and its output is the corresponding Euler angle feature; the fusion module comprises a fusion unit and a feature enhancement network; and the feature enhancement network and the prediction network are built from long short-term memory (LSTM) networks.
  7. A terminal comprising a memory and one or more processors, the memory storing one or more programs, the programs comprising instructions for performing the method for predicting the relative position between images based on inertial measurement data according to any one of claims 1 to 5, and the processors being configured to execute the programs.
  8. A computer-readable storage medium having stored thereon a plurality of instructions adapted to be loaded and executed by a processor to implement the steps of the method for predicting the relative position between images based on inertial measurement data according to any one of claims 1 to 5.
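The Euler-angle and gravity-compensation steps in claims 1 and 6 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the Z-Y-X rotation order and the east-north-up world frame are assumptions, since the claims do not specify either convention.

```python
import numpy as np

def euler_to_rotation(roll, pitch, yaw):
    """Rotation matrix from Z-Y-X (yaw-pitch-roll) Euler angles in radians.

    The rotation order is an assumption; the claims only state that a
    rotation matrix is computed from the measured Euler angles.
    """
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def remove_gravity(acc_measured, roll, pitch, yaw):
    """Gravity-compensated acceleration per claims 1 and 6.

    The gravity direction in the sensor frame is the world vertical axis
    rotated into the sensor frame; the acceleration information is the
    measured acceleration minus its component along that direction.
    """
    R = euler_to_rotation(roll, pitch, yaw)
    # World vertical (ENU "up") expressed in the sensor frame.
    g_dir = R.T @ np.array([0.0, 0.0, 1.0])
    # Component of the measurement along the gravity direction.
    component = (acc_measured @ g_dir) * g_dir
    return acc_measured - component
```

For a stationary probe the measured acceleration is pure gravity, so the compensated value is near zero regardless of orientation, while lateral probe motion is preserved.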

Description

Method for predicting relative position between images based on inertial measurement data

Technical Field

The invention relates to the field of ultrasound imaging, and in particular to a method for predicting the relative position between images based on inertial measurement data.

Background

Ultrasound imaging has become one of the main clinical diagnostic tools owing to its safety, portability, low cost, and other advantages. Three-dimensional ultrasound is widely used because of its intuitive display, easy interaction, and rich clinical information. Three-dimensional ultrasound has three acquisition modes: mechanical probes, electronic phased arrays, and freehand acquisition. Compared with mechanical probes and electronic phased arrays, which are limited in field of view and operability, freehand acquisition is flexible and convenient: it reconstructs the ultrasound volume by computing the relative positions of a series of ultrasound images. For freehand three-dimensional ultrasound reconstruction, early schemes relied primarily on external positioning systems, such as electromagnetic or optical positioning, which provide accurate estimates of ultrasound image position through complex, expensive, and interference-prone external sensors. The main scheme that does not depend on external positioning is speckle decorrelation, which uses the correlation of speckle between adjacent ultrasound images to estimate relative motion and decomposes the relative motion into an in-plane part and an out-of-plane part; however, its reconstruction quality is sensitive to scanning speed and angle. Current approaches mainly rely on deep learning: ultrasound images are input into a deep learning model to estimate their relative positions, and three-dimensional reconstruction is then performed.
For example, some methods exploit the analogy between deep learning models and the traditional speckle decorrelation method to estimate the relative motion of ultrasound images; others use an attention mechanism to mine correlation information across multiple frames of ultrasound images for three-dimensional reconstruction, or use consistency constraints and shape priors to mine cues inherent in an ultrasound image sequence and thereby improve reconstruction performance. However, current deep learning techniques rely solely on ultrasound images to estimate relative position and are therefore susceptible to inter-image displacement and accumulated drift errors. Accordingly, there is a need for improvement and development in the art.

Disclosure of Invention

The invention aims to address the above defects in the prior art by providing a method for predicting the relative position between images based on inertial measurement data, so as to solve the problem that existing deep learning techniques estimate relative position from ultrasound images alone and are easily affected by inter-image displacement and accumulated drift errors.
The technical scheme adopted by the invention to solve the above problem is as follows.

In a first aspect, an embodiment of the invention provides a method for predicting the relative position between images based on inertial measurement data, the method comprising: acquiring a plurality of ultrasound images and the inertial measurement unit data corresponding to each ultrasound image, wherein the ultrasound images are located in the same image sequence; determining the direction information and the acceleration information corresponding to each ultrasound image according to its inertial measurement unit data; and determining the relative position information corresponding to each ultrasound image according to the ultrasound images and the direction information and acceleration information corresponding to each ultrasound image, wherein the relative position information reflects the relative position transformation between two adjacent ultrasound images.

In a second aspect, an embodiment of the invention provides an apparatus for predicting the relative position between images based on inertial measurement data, the apparatus comprising: a data acquisition module for acquiring a plurality of ultrasound images and the inertial measurement unit data corresponding to each ultrasound image, wherein the ultrasound images are located in the same image sequence; a data conversion module for determining the direction information and the acceleration information corresponding to each ultrasound image according to its inertial measurement unit data; and an information prediction module for determining the relative position information corresponding to each ultrasound image according to the ultrasound images and the direction information and acceleration information corresponding to each ultrasound image.
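The target network described in the claims (residual network plus two fully connected layers for feature extraction, a fusion module, and LSTM-based enhancement and prediction networks) can be sketched in PyTorch roughly as below. All layer sizes are assumptions, the residual encoder is a deliberately tiny stand-in for the unspecified residual network, and the fusion unit is modeled as elementwise addition, which the patent does not specify; this is an architectural sketch, not the patented implementation.

```python
import torch
import torch.nn as nn

class SmallResNet(nn.Module):
    """Tiny residual image encoder (stand-in for the claims' residual network)."""
    def __init__(self, feat_dim):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, 3, stride=2, padding=1)
        self.conv2 = nn.Conv2d(16, 16, 3, padding=1)
        self.conv3 = nn.Conv2d(16, 16, 3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(16, feat_dim)

    def forward(self, x):
        x = torch.relu(self.conv1(x))
        # Residual connection, the defining feature of a residual network.
        x = x + torch.relu(self.conv3(torch.relu(self.conv2(x))))
        return self.fc(self.pool(x).flatten(1))

class RelativePoseNet(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.image_net = SmallResNet(feat_dim)     # residual network
        self.acc_fc = nn.Linear(3, feat_dim)       # first fully connected layer
        self.euler_fc = nn.Linear(3, feat_dim)     # second fully connected layer
        # Feature enhancement network and prediction network, both LSTMs.
        self.enhance = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.predict = nn.LSTM(2 * feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, 6)         # 3 translation + 3 rotation params

    def forward(self, images, acc, euler):
        # images: (B, T, 1, H, W); acc, euler: (B, T, 3)
        B, T = images.shape[:2]
        img_feat = self.image_net(images.flatten(0, 1)).view(B, T, -1)
        acc_feat = self.acc_fc(acc)
        # Fusion unit (here: addition) merges image and acceleration
        # features; the enhancement LSTM produces the speed features.
        speed_feat, _ = self.enhance(img_feat + acc_feat)
        euler_feat = self.euler_fc(euler)
        # Prediction network maps speed + Euler-angle features to the
        # per-frame relative position parameters.
        out, _ = self.predict(torch.cat([speed_feat, euler_feat], dim=-1))
        return self.head(out)
```

Given a sequence of T frames, the network emits one 6-parameter relative transformation per frame, matching the claim that relative position information comprises relative translation and rotation parameters between adjacent images.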