
CN-121987343-A - Ultrasonic ablation planning method and ultrasonic ablation system

CN121987343A

Abstract

The application discloses an ultrasound ablation planning method, an ultrasound ablation system, an electronic device, and a program product. The method comprises: acquiring ultrasound image data; performing image recognition processing on the ultrasound image data to identify a lesion region; determining a region to be ablated and a protection region according to the lesion region, wherein the region to be ablated covers the lesion region and the protection region does not overlap the region to be ablated; and generating ultrasound ablation planning data according to the region to be ablated and the protection region, wherein the ultrasound ablation planning data comprises an ablation-element on configuration corresponding to the region to be ablated and an ablation-element off configuration corresponding to the protection region. Through automatic image recognition and region division, the scheme completely covers and ablates the lesion region while effectively protecting the surrounding normal tissue, improving the accuracy and safety of ultrasound ablation treatment.
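As a concrete illustration of the region division summarized above, the sketch below (not part of the patent; a hypothetical NumPy rendering that assumes binary-mask inputs and introduces `margin` as an assumed safety-margin parameter) derives a region to be ablated that covers the lesion plus a margin, and enforces that it never overlaps the protection region:

```python
import numpy as np

def plan_regions(lesion_mask: np.ndarray, protected_mask: np.ndarray, margin: int = 2):
    """Derive (region_to_ablate, protection_region) from binary masks.

    The ablation region is the lesion dilated by `margin` pixels; any overlap
    with the protection region is then removed, so the two regions never
    overlap. In this sketch the protection region takes priority, which is an
    assumption, not a rule stated in the patent.
    """
    ablate = lesion_mask.astype(bool).copy()
    # crude morphological dilation by `margin` pixels (4-neighbourhood)
    for _ in range(margin):
        grown = ablate.copy()
        grown[1:, :] |= ablate[:-1, :]
        grown[:-1, :] |= ablate[1:, :]
        grown[:, 1:] |= ablate[:, :-1]
        grown[:, :-1] |= ablate[:, 1:]
        ablate = grown
    protect = protected_mask.astype(bool)
    ablate &= ~protect  # enforce the claimed non-overlap constraint
    return ablate, protect
```

In a real system the lesion mask would come from the segmentation model and the protected mask from anatomical-structure recognition; here both are simply supplied by the caller.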

Inventors

  • SHEN PING
  • WANG ZHENGBIN
  • SUN XIAOLU

Assignees

  • 苏州谱洛医疗科技有限公司

Dates

Publication Date
2026-05-08
Application Date
2026-03-27

Claims (17)

  1. An ultrasound ablation planning method, comprising: acquiring ultrasound image data; performing image recognition processing on the ultrasound image data to identify a lesion region; determining a region to be ablated and a protection region according to the lesion region, wherein the region to be ablated covers the lesion region and the protection region does not overlap the region to be ablated; and generating ultrasound ablation planning data according to the region to be ablated and the protection region, wherein the ultrasound ablation planning data comprises an ablation-element on configuration corresponding to the region to be ablated and an ablation-element off configuration corresponding to the protection region.
  2. The method of claim 1, wherein performing image recognition processing on the ultrasound image data to identify a lesion region comprises: inputting the ultrasound image data into a U-Net neural network model, wherein the U-Net neural network model comprises an encoder, a bottleneck layer, a decoder, and an output layer connected in sequence, the encoder comprises a plurality of encoding modules, the decoder comprises a plurality of decoding modules, and each encoding module of the encoder is connected to a corresponding decoding module of the decoder through a skip connection; performing layer-by-layer downsampling and feature extraction on the ultrasound image data through the encoder to obtain feature maps at multiple levels; performing feature transformation on the feature map at the deepest level through the bottleneck layer; upsampling the feature maps layer by layer through the decoder, and fusing the feature maps of the corresponding levels in the encoder through the skip connections; and outputting, through the output layer, the lesion region obtained by image segmentation.
  3. The method of claim 2, wherein: the encoding module comprises a first convolution module and a downsampling module, the first convolution module comprising a first convolution layer, a first batch normalization layer, a first activation layer, a second convolution layer, a second batch normalization layer, and a second activation layer connected in sequence, wherein the first convolution module further comprises a first residual connection that adds the input of the first convolution module to its output; and/or the decoding module comprises an upsampling module and a second convolution module, the second convolution module comprising a third convolution layer, a third batch normalization layer, a third activation layer, a fourth convolution layer, a fourth batch normalization layer, and a fourth activation layer connected in sequence, wherein the second convolution module further comprises a second residual connection that adds the input of the second convolution module to its output.
  4. The method of claim 2, wherein the U-Net neural network model further comprises an attention module configured to perform attention weighting on the feature maps in the encoder and/or the decoder.
  5. The method according to any one of claims 1 to 4, further comprising, before the image recognition processing of the ultrasound image data, preprocessing the ultrasound image data, the preprocessing comprising: representing the ultrasound image data as a nonlinear product representation of a first component and a second component in the spatial domain; performing a first transformation on the nonlinear product representation to obtain first transformed data comprising a linear superposition representation of the first component and the second component; performing frequency-domain conversion on the first transformed data to obtain frequency-domain data, wherein the frequency-domain data comprises first frequency-band data associated with the first component and second frequency-band data associated with the second component; applying a filter to the frequency-domain data to obtain modulated frequency-domain data, wherein the filter has a first modulation factor for the first frequency-band data and a second modulation factor for the second frequency-band data, the first modulation factor being different from the second modulation factor; performing spatial-domain conversion on the modulated frequency-domain data to obtain spatial-domain data; and performing a second transformation on the spatial-domain data to obtain preprocessed ultrasound image data.
  6. The method of any one of claims 1 to 4, wherein determining a region to be ablated and a protection region according to the lesion region comprises: performing anatomical structure recognition on the ultrasound image data to identify anatomical structures surrounding the lesion region; and determining the protection region according to the anatomical structures.
  7. The method according to any one of claims 1 to 4, wherein generating ultrasound ablation planning data according to the region to be ablated and the protection region comprises: determining an ablation angle range and a non-ablation angle range according to the spatial positional relationship between the region to be ablated and the protection region, wherein the ablation angle range covers the region to be ablated and the non-ablation angle range covers the protection region; determining the ablation-element on configuration according to the ablation angle range; and determining the ablation-element off configuration according to the non-ablation angle range.
  8. The method of claim 7, wherein the ultrasound image data comprises ultrasound image data of a lumen cross-section; performing image recognition processing on the ultrasound image data to identify a lesion region comprises identifying the lesion region on the lumen wall according to the ultrasound image data of the lumen cross-section; and wherein the ablation elements are in the form of an annular array arranged coaxially with the lumen, the ablation angle range and the non-ablation angle range corresponding to different sectors of the annular array, respectively.
  9. The method of any one of claims 1 to 4, wherein performing image recognition processing on the ultrasound image data to identify a lesion region comprises identifying the lesion region in tissue in vivo from the ultrasound image data; and wherein the ablation elements are in the form of an extracorporeal focused array, the ablation-element on configuration comprises activating the ablation elements of the extracorporeal focused array directed toward the region to be ablated, and the ablation-element off configuration comprises deactivating the ablation elements of the extracorporeal focused array directed toward the protection region.
  10. An ultrasound ablation system, comprising: an ultrasound imaging unit configured to acquire ultrasound image data; an image recognition module configured to perform image recognition processing on the ultrasound image data to identify a lesion region; a region determination module configured to determine a region to be ablated and a protection region according to the lesion region, wherein the region to be ablated covers the lesion region and the protection region does not overlap the region to be ablated; an ultrasound ablation module comprising a plurality of ablation elements; and a control module configured to generate control signals according to the region to be ablated and the protection region, wherein the control signals comprise an on signal that turns on the ablation elements corresponding to the region to be ablated and an off signal that turns off the ablation elements corresponding to the protection region.
  11. The ultrasound ablation system of claim 10, wherein the image recognition module integrates a U-Net neural network model comprising an encoder, a bottleneck layer, a decoder, and an output layer connected in sequence, the encoder comprising a plurality of encoding modules, the decoder comprising a plurality of decoding modules, each encoding module of the encoder being connected to a corresponding decoding module of the decoder through a skip connection; the encoder is configured to perform layer-by-layer downsampling and feature extraction on the ultrasound image data to obtain feature maps at multiple levels; the bottleneck layer is configured to perform feature transformation on the feature map at the deepest level; the decoder is configured to upsample the feature maps layer by layer and fuse the feature maps of the corresponding levels in the encoder through the skip connections; and the output layer is configured to output the lesion region obtained by image segmentation.
  12. The ultrasound ablation system of claim 11, wherein: the encoding module comprises a first convolution module and a downsampling module, the first convolution module comprising a first convolution layer, a first batch normalization layer, a first activation layer, a second convolution layer, a second batch normalization layer, and a second activation layer connected in sequence, wherein the first convolution module further comprises a first residual connection that adds the input of the first convolution module to its output; and/or the decoding module comprises an upsampling module and a second convolution module, the second convolution module comprising a third convolution layer, a third batch normalization layer, a third activation layer, a fourth convolution layer, a fourth batch normalization layer, and a fourth activation layer connected in sequence, wherein the second convolution module further comprises a second residual connection that adds the input of the second convolution module to its output.
  13. The ultrasound ablation system of claim 11, wherein the U-Net neural network model further comprises an attention module configured to perform attention weighting on the feature maps in the encoder and/or the decoder.
  14. The ultrasound ablation system of any one of claims 10 to 13, further comprising a preprocessing module configured to preprocess the ultrasound image data before the image recognition module performs image recognition processing on the ultrasound image data, wherein the preprocessing module comprises: a representation sub-module configured to represent the ultrasound image data as a nonlinear product representation of a first component and a second component in the spatial domain; a first transformation sub-module configured to perform a first transformation on the nonlinear product representation to obtain first transformed data, the first transformed data comprising a linear superposition representation of the first component and the second component; a frequency-domain conversion sub-module configured to perform frequency-domain conversion on the first transformed data to obtain frequency-domain data, wherein the frequency-domain data comprises first frequency-band data associated with the first component and second frequency-band data associated with the second component; a filter configured to apply filtering to the frequency-domain data to obtain modulated frequency-domain data, wherein the filter has a first modulation factor for the first frequency-band data and a second modulation factor for the second frequency-band data, the first modulation factor being different from the second modulation factor; a spatial-domain conversion sub-module configured to perform spatial-domain conversion on the modulated frequency-domain data to obtain spatial-domain data; and a second transformation sub-module configured to perform a second transformation on the spatial-domain data to obtain preprocessed ultrasound image data.
  15. The ultrasound ablation system of any one of claims 10 to 13, wherein the region determination module is configured to: perform anatomical structure recognition on the ultrasound image data to identify anatomical structures surrounding the lesion region; and determine the protection region according to the anatomical structures.
  16. The ultrasound ablation system of any one of claims 10 to 13, wherein the control module is configured to: determine an ablation angle range and a non-ablation angle range according to the spatial positional relationship between the region to be ablated and the protection region, wherein the ablation angle range covers the region to be ablated and the non-ablation angle range covers the protection region; generate the on signal according to the ablation angle range; and generate the off signal according to the non-ablation angle range.
  17. The ultrasound ablation system of claim 16, wherein the ultrasound imaging unit is configured to acquire ultrasound image data of a lumen cross-section; the image recognition module is configured to identify the lesion region on the lumen wall according to the ultrasound image data of the lumen cross-section; and wherein the plurality of ablation elements of the ultrasound ablation module are in the form of an annular array arranged coaxially with the lumen, the ablation angle range and the non-ablation angle range corresponding to different sectors of the annular array, respectively.
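Claims 7, 8, 16, and 17 describe mapping an ablation angle range and a non-ablation angle range onto sectors of an annular element array. A minimal sketch of that mapping (not taken from the patent; placing each element at its sector's centre angle and defaulting undecided elements to off are assumptions of this illustration) might look like:

```python
def element_configuration(n_elements, ablation_range, non_ablation_range):
    """Map angle ranges (in degrees around the array axis) onto on/off states
    of an annular array of n_elements transducer elements.

    Elements whose sector centre falls in the ablation range are turned on,
    elements in the non-ablation range are forced off, and all remaining
    elements default to off for safety (an assumption of this sketch).
    """
    def in_range(angle, rng):
        lo, hi = (r % 360 for r in rng)
        if lo <= hi:
            return lo <= angle <= hi
        return angle >= lo or angle <= hi  # range wraps through 0 degrees

    states = []
    for i in range(n_elements):
        centre = (i + 0.5) * 360.0 / n_elements  # centre angle of sector i
        on = in_range(centre, ablation_range) and not in_range(centre, non_ablation_range)
        states.append(on)
    return states
```

For an 8-element ring with an ablation range of (0, 90) and a non-ablation range of (180, 270), only the two elements whose sectors lie inside the first quadrant are enabled; the corresponding on/off list is what claim 10's control module would translate into on and off signals.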

Description

Ultrasonic ablation planning method and ultrasonic ablation system

Technical Field

The present application relates to the field of medical devices, and in particular to an ultrasound ablation planning method, an ultrasound ablation system, an electronic device, and a program product.

Background

Ultrasound ablation is a medical technique that uses ultrasound energy to perform thermal ablation of target tissue. Because it is non-invasive or minimally invasive, ultrasound ablation is widely applied in fields such as tumor treatment and cardiovascular disease treatment. In clinical applications of ultrasound ablation, it is desirable to further improve the efficacy and safety of treatment. The description of the background art is provided only to facilitate understanding of the relevant art and is not to be taken as an admission of prior art.

Disclosure of the Invention

Embodiments of the present application aim to provide an ultrasound ablation planning method, an ultrasound ablation system, an electronic device, and a program product that can automatically identify a lesion region based on ultrasound image data and generate ablation planning data comprising a region to be ablated and a protection region, thereby improving the accuracy and safety of ultrasound ablation treatment.
In a first aspect, an embodiment of the present application provides an ultrasound ablation planning method, including: acquiring ultrasound image data; performing image recognition processing on the ultrasound image data to identify a lesion region; determining a region to be ablated and a protection region according to the lesion region, wherein the region to be ablated covers the lesion region and the protection region does not overlap the region to be ablated; and generating ultrasound ablation planning data according to the region to be ablated and the protection region, wherein the ultrasound ablation planning data comprises an ablation-element on configuration corresponding to the region to be ablated and an ablation-element off configuration corresponding to the protection region. In some embodiments, performing image recognition processing on the ultrasound image data to identify a lesion region includes: inputting the ultrasound image data into a U-Net neural network model, wherein the U-Net neural network model comprises an encoder, a bottleneck layer, a decoder, and an output layer connected in sequence, the encoder comprises a plurality of encoding modules, the decoder comprises a plurality of decoding modules, and each encoding module of the encoder is connected to a corresponding decoding module of the decoder through a skip connection; performing layer-by-layer downsampling and feature extraction on the ultrasound image data through the encoder to obtain feature maps at multiple levels; performing feature transformation on the feature map at the deepest level through the bottleneck layer; upsampling the feature maps layer by layer through the decoder, and fusing the feature maps of the corresponding levels in the encoder through the skip connections; and outputting, through the output layer, the lesion region obtained by image segmentation.
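The encoder/bottleneck/decoder flow with skip connections described above can be sketched at the shape level as follows. This is a hypothetical NumPy illustration, not the patent's implementation: average pooling and nearest-neighbour upsampling stand in for the convolution and sampling modules, and the bottleneck is an identity placeholder.

```python
import numpy as np

def down(x):
    """Encoder step: 2x2 average pooling halves the spatial resolution
    (standing in for an encoding module's convolution + downsampling)."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def up(x):
    """Decoder step: nearest-neighbour upsampling doubles the resolution
    (standing in for a decoding module's upsampling module)."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def tiny_unet_forward(image):
    """Shape-level walk through a two-level U-Net: downsample while storing
    each level's feature map, transform at the bottleneck, then upsample and
    fuse the stored encoder features through skip connections."""
    skips = []
    x = image
    for _ in range(2):                    # encoder: two encoding modules
        skips.append(x)                   # feature map saved for the skip
        x = down(x)
    x = x * 1.0                           # bottleneck (identity placeholder)
    for skip in reversed(skips):          # decoder: two decoding modules
        x = up(x)
        x = np.concatenate([x, skip], 0)  # skip-connection fusion
    return x
```

With a 1-channel 8x8 input, each decoder level concatenates the matching encoder feature map onto the upsampled channels, so the output recovers the full 8x8 resolution with the fused channels from which the output layer would segment the lesion region.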
In some embodiments, the encoding module comprises a first convolution module and a downsampling module, the first convolution module comprising a first convolution layer, a first batch normalization layer, a first activation layer, a second convolution layer, a second batch normalization layer, and a second activation layer connected in sequence, wherein the first convolution module further comprises a first residual connection that adds the input of the first convolution module to its output; and/or the decoding module comprises an upsampling module and a second convolution module, the second convolution module comprising a third convolution layer, a third batch normalization layer, a third activation layer, a fourth convolution layer, a fourth batch normalization layer, and a fourth activation layer connected in sequence, wherein the second convolution module further comprises a second residual connection that adds the input of the second convolution module to its output. In some embodiments, the U-Net neural network model further comprises an attention module configured to perform attention weighting on the feature maps in the encoder and/or the decoder. In some embodiments, before the image recognition processing is performed on the ultrasound image data, the method further comprises preprocessing the ultrasound image data, specifically comprising: representing the ultrasound image data as a nonlinear product representation of a first component and a second component in the spatial domain; performing a first transformation on the nonlinear product representation to obtain first transformed data comprising a linear superposition representation of the first component