EP-4735911-A1 - SPATIAL COMPOUNDING IN ULTRASOUND IMAGING

EP 4735911 A1

Abstract

A method for medical image compounding in which a compounding method is adaptively varied at different regions of the image frame, as a function of pixel values at or around each pixel location across a received set of image planes to be compounded. For each of at least a subset of pixels in the compounded image, one of a set of different possible compounding methods is selected, these including maximum intensity pixel selection for regions of highest average pixel value and minimum intensity pixel selection for regions of lowest average pixel value.

Inventors

  • ALVA, Ashley
  • TIERNEY, Jaime
  • LONG, Willie Jie
  • PRATER, David
  • RAFTER, Patrick Gabriels

Assignees

  • Koninklijke Philips N.V.

Dates

Publication Date
2026-05-06
Application Date
2024-06-24

Claims (15)

  1. A method for spatial compounding in ultrasound imaging, comprising: receiving (12) a set of two or more ultrasound image planes (62, 64) representing planes through an imaged object extending in different respective look directions; generating (14) an average image (66) from the received set of two or more image planes; generating (16) a compounded image (74) from the set of two or more ultrasound image planes, wherein each pixel of the compounded image is generated by compounding corresponding pixels in each of the set of two or more received ultrasound image planes; wherein the generating the compounded image comprises, for each individual pixel of the compounded image, selecting (22) a compounding method from a pre-defined set of compounding methods, wherein the set includes at least: selecting a maximum of the pixel values for the pixel location from among the set of received image planes; and selecting a minimum of the pixel values for the pixel location from among the set of received image planes; wherein, for each of at least a subset of the pixels of the compounded image, the selecting of the compounding method for each pixel includes a step of comparing an average pixel value at the corresponding pixel location, or average pixel values in a defined neighborhood thereof, in the average image against at least one threshold (68), and wherein, for the at least subset of the pixels, the selection is such that at least pixels of the at least subset falling within lowest intensity pixel regions in the average image are compounded using the minimum pixel value compounding method, and at least pixels of the at least subset in highest intensity pixel regions of the average image are compounded using the maximum pixel value selection method.
  2. The method of claim 1, wherein the at least one threshold is an adaptive threshold function (68) representing threshold values which vary as a function of depth in the average image and/or as a function of lateral/azimuthal steering angle at which a pixel in the average image was acquired.
  3. The method of claim 1 or 2, wherein the at least one threshold is computed based on pixel values in the average image.
  4. The method of claim 2 and claim 3, wherein the computing the threshold function comprises: deriving a probability distribution of pixel intensities in the average image; defining the thresholding function as a function which decays from a starting maximum value, wherein the starting maximum value is based on a pixel value at a pre-defined percentile of the probability distribution, e.g. the 90th percentile.
  5. The method of any preceding claim, wherein the selecting of the compounding method for each of the at least subset of pixels comprises: forming a binary mask (70) by applying the at least one threshold to pixels of the average image, wherein if the average pixel intensity is above the at least one threshold, the mask pixel is assigned a value of 1 in the binary mask and if the intensity is below the threshold, the mask pixel is assigned a value of 0, or vice versa; and wherein the selecting of the compounding method for each of the at least subset of pixels is performed based on values of the mask.
  6. The method of claim 5, wherein if the mask has a value of 1, the maximum intensity compounding method is used, and if the value of the mask is 0, the minimum intensity compounding method is used.
  7. The method of claim 5, wherein the method further comprises applying an edge-preserving smoothing filter to the binary mask to convert the binary mask to a non-binary mask (72) before selecting the compounding method for each pixel; wherein the defined set of compounding methods further includes: using a mean of the pixel values of the set of received image planes at the pixel location, and wherein, if the value of the non-binary mask at a pixel location is 1, the maximum intensity compounding method is used, if the value of the non-binary mask at a pixel location is 0, the minimum intensity compounding method is used, and if the value of the mask at a pixel location is between 0 and 1, a weighted combination of the set of compounding methods is used.
  8. The method of any preceding claim, wherein the at least one threshold is determined based on application of a trained machine learning algorithm to the received set of two or more image planes.
  9. The method of any of claims 5-7, wherein the computing of the binary and/or non-binary mask comprises application of a trained machine learning algorithm to the received set of two or more image planes.
  10. The method of any preceding claim, wherein the method is a method for elevational spatial compounding in ultrasound imaging, and wherein the set of two or more ultrasound image planes represent planes through an imaged object extending in different respective look directions in the elevation direction, and preferably wherein the received set of two or more image planes includes one image plane extending at a look angle of 0° in the elevation direction.
  11. The method of any preceding claim, wherein said at least subset of pixels is determined based at least in part on a variance or spread of pixel values across the image planes at each particular pixel location or a defined neighborhood thereof, and preferably wherein the at least subset of pixels excludes pixels for which a variance or spread of pixel values across the image planes at the pixel location or a defined neighborhood thereof is above a threshold.
  12. The method of any preceding claim, wherein the method comprises: receiving ultrasound data of the imaged object generated by an array transducer, the data spanning a volumetric region having an elevation direction, an axial direction and a lateral/azimuthal direction; performing elevational parallel receive beamforming to generate data corresponding to a set of two or more planes extending at different respective look directions in the elevation direction; and processing the generated data for the two or more planes to derive respective B-mode images corresponding to image planes extending in each of the two or more look directions.
  13. A computer program product comprising computer program code configured, when executed by a processor, to cause the processor to perform a method in accordance with any of claims 1-12.
  14. A processing device (32), comprising one or more processors (36), configured to: receive a set of two or more ultrasound image planes representing planes through an imaged object extending in different respective look directions in the elevation direction; generate an average image from the received set of two or more image planes; and generate a compounded image from the set of two or more ultrasound image planes, wherein each pixel of the compounded image is generated by compounding corresponding pixels in each of the set of two or more received ultrasound image planes; wherein the generating the compounded image comprises, for each individual pixel of the compounded image, selecting a compounding method from a pre-defined set of compounding methods, wherein the set includes at least: selecting a maximum of the pixel values for the pixel location from among the set of received image planes; and selecting a minimum of the pixel values for the pixel location from among the set of received image planes; wherein, for each of at least a subset of the pixels of the compounded image, the selecting of the compounding method for each pixel includes a step of comparing an average pixel value at the corresponding pixel location, or average pixel values in a defined neighborhood thereof, in the average image against at least one threshold, and wherein, for the at least subset of pixels, the selection is such that at least pixels of the at least subset falling within lowest intensity pixel regions in the average image are compounded using the minimum pixel value compounding method, and at least pixels of the at least subset in the highest intensity regions of the average image are compounded using the maximum pixel value selection method.
  15. A system (30) comprising: an ultrasound imaging apparatus (42); and the processing device (32) of claim 14, operatively coupled with the ultrasound imaging apparatus, and adapted for receiving ultrasound data (44) generated by the ultrasound imaging apparatus.
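
For illustration only (not part of the patent text), the per-pixel selection logic described in claims 1-6, together with a depth-decaying threshold in the spirit of claims 2-4, can be sketched in NumPy. All function names, the percentile value, and the decay constant below are hypothetical assumptions, not values taken from the application:

```python
import numpy as np

def depth_decay_threshold(avg, percentile=90.0, decay=0.995):
    """Sketch of claims 2-4: a threshold that starts at a chosen
    percentile of the average-image intensity distribution and decays
    with depth (image row). Returns one threshold value per row,
    broadcastable against the average image."""
    t0 = np.percentile(avg, percentile)       # starting maximum value
    depths = np.arange(avg.shape[0])
    return t0 * decay ** depths[:, None]      # shape (H, 1)

def adaptive_compound(planes, threshold):
    """Sketch of claims 1, 5 and 6: compound a stack of co-registered
    image planes (shape N x H x W). Pixels whose average intensity is
    at or above the threshold take the maximum across planes; pixels
    below it take the minimum."""
    planes = np.asarray(planes, dtype=float)
    avg = planes.mean(axis=0)                 # average image
    mask = (avg >= threshold).astype(float)   # binary mask
    return mask * planes.max(axis=0) + (1.0 - mask) * planes.min(axis=0)
```

Claim 7's variant would additionally smooth the binary mask with an edge-preserving filter and, where the smoothed mask is fractional, use a weighted combination of the maximum, minimum and mean compounding methods.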

Description

SPATIAL COMPOUNDING IN ULTRASOUND IMAGING

FIELD OF THE INVENTION

The invention relates to a method of spatial compounding in ultrasound imaging.

BACKGROUND OF THE INVENTION

Spatial compounding is an established method in ultrasound imaging. This technique aims to reduce speckle noise and improve image contrast. Spatial compounding operates through the incoherent combination of two or more transmit or receive events at differing look directions for the same object. One type of spatial compounding is elevational spatial compounding. Here, multiple image planes extending at different respective angles in the elevation direction are compounded to form a compound image. In particular, in ultrasound imaging, the elevation direction refers to the "out of plane" dimension of the ultrasound beam. The dimensions of an ultrasound beam can be described in terms of three directions:

  • Azimuthal (lateral) direction: the side-to-side width of the beam, also known as the transverse or x-axis direction.
  • Axial (range or depth) direction: the direction of propagation of the beam into the tissue, from the transducer surface downward, also known as the y-axis direction.
  • Elevation (slice thickness or out-of-plane) direction: the front-to-back thickness of the beam, perpendicular to the azimuthal and axial directions. In 2D imaging, it is the direction that is typically not displayed (i.e. in a B-mode image, it is the direction into or out of the image plane). It may also be referred to as the z-axis direction.

Fig. 1 (left) shows a schematic drawing of a front view of an ultrasound transducer 4, with an ultrasound beam 6 propagating from a transducer array at its lower face. The axial (y) and azimuthal (x) directions are visible in this view. Fig. 1 (right) shows a schematic drawing of a side view of the beam 6. This shows the axial (y) and elevation (z) dimensions of the beam.
In 2D ultrasound, images are often represented in the azimuthal and axial directions, with the elevation direction contributing to slice thickness but not displayed as part of the image. In 3D ultrasound, volumetric images are acquired spanning all three dimensions.

For elevational spatial compounding, multiple imaging planes are captured, each extending in a differing look direction in the elevation direction; that is, the planes are angled at different orientations in the elevation direction. This is schematically illustrated in Fig. 1 (right), which shows two planes 8a, 8b, the first plane 8a extending at -1° in the elevation direction, and the second plane 8b extending at +1° in the elevation direction. For imaging employing a 2D array, variable look directions can be achieved by executing elevational parallel receive beamforming at different angles. The resulting B-mode images are then compounded post-log compression. This approach is beneficial because it requires only a single transmit event in elevation, thereby maintaining frame rates and preventing potential decorrelation of structures in the case of movement of an imaged object (e.g. heart motion).

One example elevational spatial compounding technique employs elevation compounding across two elevation planes at -1° and +1°. Compounding is achieved by taking the maximum value across planes at each pixel. Fig. 2 (right) shows an example compounded image formed by compounding two elevation planes at -1° and +1°. For comparison, Fig. 2 (left) shows an image of the same structure formed without compounding, acquired at an on-axis 0° elevation plane. The conventional elevation compounding approach has proven to be an effective solution in improving speckle appearance and, in cardiac imaging, reducing "drop-out".
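
As an illustrative aside (not part of the patent text), the conventional scheme described above reduces to a per-pixel maximum across the steered planes. A minimal sketch, assuming two co-registered, log-compressed B-mode planes stored as equal-shape arrays (the parameter names are hypothetical):

```python
import numpy as np

def max_compound(plane_minus_1deg, plane_plus_1deg):
    """Conventional elevational compounding: per-pixel maximum across
    B-mode planes steered at -1 and +1 degrees in elevation."""
    return np.maximum(plane_minus_1deg, plane_plus_1deg)
```

As the following paragraphs explain, this uniform use of the maximum is exactly what the claimed adaptive method departs from in low-intensity regions.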
However, while utilizing the maximum value at all pixels aids in smoothing speckle appearance, it also amplifies undesired signal, noise, and clutter within the image. For instance, the contrast of off-axis anechoic structures (such as blood vessels) diminishes when taking the maximum value. In particular, such structures should display as dark (low intensity reflection) within an image. However, the use of maximum intensity compounding results in these structures appearing washed out and reduced in contrast. This is illustrated in Fig. 3. Fig. 3 (top row) shows single image planes of an anechoic vessel in a phantom, each acquired at a different directional angle in the elevation direction. Fig. 3 (top left) shows an image plane oriented at -1° in the elevation direction. Fig. 3 (top middle) shows an image plane oriented at 0° in the elevation direction (i.e. the axial plane). Fig. 3 (top right) shows an image plane oriented at +1° in the elevation direction. For these image planes, no compounding is performed. Fig. 3 (bottom) shows compounded images, each formed by compounding of the three image planes on the top row of Fig. 3 by a different respective compounding approach. Fig. 3 (bottom left) shows a compo