Search

CN-122023334-A - Intelligent focus identification and quantitative analysis method for ultrasonic image

CN122023334A

Abstract

The invention discloses an intelligent lesion recognition and quantitative analysis method for ultrasound images, in the technical field of medical imaging, comprising the following steps: S1, acquiring a B-mode grayscale image, an elastography strain image, and a color Doppler blood-flow image of a target region, and, after adaptive noise suppression, extracting complementary features through a feature-level fusion strategy to obtain a multi-modal fused feature map; S2, inputting the multi-modal fused feature map into a dual-branch recognition model, assigning weights via a dynamic attention mechanism, and performing cross-device adaptive calibration. By integrating multi-modal ultrasound information with an intelligent processing pipeline, the method markedly improves the accuracy and clinical applicability of lesion detection, and effectively addresses the inaccurate recognition caused by noise interference, the difficulty of detecting small lesions, and the large imaging differences between devices in traditional ultrasound image analysis.

Inventors

  • WANG DI
  • XU JING
  • ZHENG LING

Assignees

  • Zhoushan Hospital (舟山医院)

Dates

Publication Date
2026-05-12
Application Date
2026-01-30

Claims (10)

  1. An intelligent lesion recognition and quantitative analysis method for ultrasound images, characterized by comprising the following steps: S1, acquiring a B-mode grayscale image, an elastography strain image, and a color Doppler blood-flow image of a target region, and, after adaptive noise suppression, extracting complementary features through a feature-level fusion strategy to obtain a multi-modal fused feature map; S2, inputting the multi-modal fused feature map into a dual-branch recognition model, assigning weights via a dynamic attention mechanism, and outputting a lesion segmentation result and preliminary localization information after cross-device adaptive calibration; S3, extracting four-dimensional quantitative indices of morphology, texture, blood flow, and stiffness based on the segmentation result, assigning dynamic weights through a pathology-associated weight allocation strategy, and outputting a fused quantitative score and per-dimension parameters; and S4, receiving physician fine-tuning instructions through human-machine interaction, feeding the fine-tuning data back for model optimization, and finally outputting a lesion recognition report and quantitative analysis results consistent with clinical guidelines.
  2. The intelligent lesion recognition and quantitative analysis method for ultrasound images according to claim 1, wherein S1 comprises: performing multi-modal synchronous scanning of the target lesion region using ultrasound diagnostic equipment with a multi-modal imaging probe, obtaining a B-mode grayscale image, an elastography strain map, and a color Doppler blood-flow image, and preprocessing each image; setting the target lesion region as the scan center and acquiring an initial B-mode grayscale image as a localization reference; applying internal tissue vibration in elastography mode, collecting tissue strain distribution data, and generating the elastography strain map; enabling the color Doppler blood-flow imaging function, adjusting the blood-flow imaging parameters, collecting blood-flow velocity and direction information, and generating the color Doppler blood-flow image; and synchronously registering the spatial positions and scan timings of the three image types to complete the acquisition and preliminary integration of the multi-modal ultrasound images.
  3. The intelligent lesion recognition and quantitative analysis method for ultrasound images according to claim 2, wherein S1 further comprises: performing noise analysis separately on each registered modality image, and dynamically determining noise-suppression parameters from the gray-level gradient distribution and blood-flow signal intensity variation of the lesion region and surrounding normal tissue; according to the determined parameters, applying adaptive median filtering to the B-mode grayscale image, wavelet-threshold filtering to the elastography strain map, and total-variation denoising to the color Doppler blood-flow image, preserving the key characteristics of each modality while suppressing noise; extracting features from each denoised modality, with morphology and texture features from the B-mode grayscale image, stiffness distribution features from the elastography strain map, and blood-flow perfusion and vessel distribution features from the color Doppler blood-flow image; and computing the relevance weight of each modal feature for lesion recognition through an attention-weighted fusion strategy, and weighting and fusing the features accordingly to generate a multi-modal fused feature map combining morphology, texture, blood-flow, and stiffness information.
  4. The intelligent lesion recognition and quantitative analysis method for ultrasound images according to claim 3, wherein the dual-branch recognition model is constructed as follows: defining the overall dual-branch network architecture, comprising a global feature-extraction branch and a local edge-enhancement branch, and setting the parallel processing relationship and feature-interaction interface between the two branches; constructing the global feature-extraction branch from a modified ResNet-50 backbone, removing the original classification head and retaining and adjusting the convolution layers and residual blocks to concentrate on extracting the overall morphological features and contextual position information of the lesion region; constructing the local edge-enhancement branch as a sequence of dilated (atrous) convolution layers that enlarge the receptive field, followed by a Canny edge-detection operator applied to the feature map for edge enhancement, capturing lesion edges and local detail features; and designing a dual-branch feature-fusion strategy that channel-concatenates the high-level semantic feature map from the global branch with the detail-enhanced feature map from the local branch, then performs feature integration and dimensionality reduction through a 1×1 convolution layer to form a unified dual-branch fused feature representation.
  5. The intelligent lesion recognition and quantitative analysis method for ultrasound images according to claim 4, wherein the assigning of weights via the dynamic attention mechanism is as follows: taking the dual-branch fused feature map produced by the global feature-extraction and local edge-enhancement branches as the input to the dynamic attention mechanism; constructing an attention-map generation sub-network that takes the fused feature map as input and, through a sequence of convolution and activation operations, outputs an initial attention heat map of the same spatial size as the input feature map; establishing a clinical prior knowledge base that integrates medical prior rules, including the typical location distribution, morphological characteristics, and echo patterns of lesions in the target organ; encoding the clinical prior knowledge into a spatial weight template and adding it, with weighting, to the initial attention heat map to generate a dynamic attention map fused with clinical-guideline knowledge; and multiplying the dynamic attention map element-wise with the original dual-branch fused feature map to output the attention-modulated feature map.
  6. The intelligent lesion recognition and quantitative analysis method for ultrasound images according to claim 5, wherein the adaptive calibration process specifically comprises: acquiring ultrasound training datasets with corresponding lesion annotations from multiple ultrasound devices of different models as source-domain data, and taking the device data corresponding to the current attention-modulated feature map as target-domain data; constructing a domain-adaptive neural network comprising a shared feature extractor, a domain classifier, and a lesion segmentation predictor, where the shared feature extractor extracts high-level features from source- and target-domain data, the domain classifier distinguishes which device domain the features come from, and the lesion segmentation predictor performs lesion segmentation on the extracted features; optimizing the network with an adversarial training strategy, so that updates to the shared feature extractor maximize the domain classifier's classification error and extract device-independent general features; and, within the adversarial training framework, synchronously optimizing the lesion segmentation predictor, applying the trained shared feature extractor to the attention-modulated feature map, and outputting calibrated, device-independent features.
  7. The intelligent lesion recognition and quantitative analysis method for ultrasound images according to claim 6, wherein the output of the lesion segmentation result and preliminary localization information is specifically: inputting the calibrated feature map into the lesion segmentation prediction network, thresholding the lesion probability map at a predefined probability threshold, and binarizing it into an initial binary lesion segmentation mask; applying morphological closing followed by opening to the initial mask to obtain the final lesion segmentation mask; identifying pixel connected regions in the final mask and treating each connected region as an independent lesion target; for each lesion target, extracting the minimum bounding rectangle, taking its geometric center coordinates as the lesion's preliminary location in the image, and computing the lesion's area, major-axis length, and minor-axis length; and aggregating the final segmentation masks, localization information, and morphological parameters of all lesion targets as the output segmentation result and preliminary localization information.
  8. The intelligent lesion recognition and quantitative analysis method for ultrasound images according to claim 7, wherein S3 comprises: according to the lesion segmentation result output by S2, extracting four classes of quantitative indices from the corresponding regions of the original B-mode grayscale image, elastography strain map, and color Doppler blood-flow image; extracting morphological and texture indices from the lesion region of the B-mode grayscale image, where the morphological indices comprise area, perimeter, major-axis length, minor-axis length, and circularity, and the texture indices, computed from a gray-level co-occurrence matrix, comprise entropy, contrast, and correlation; extracting stiffness indices from the corresponding lesion region of the elastography strain map, comprising mean strain rate, strain-rate ratio, and elasticity score; and extracting blood-flow indices from the corresponding lesion region of the color Doppler blood-flow image, comprising mean blood-flow signal intensity, vessel density, and consistency of the blood-flow direction distribution.
  9. The intelligent lesion recognition and quantitative analysis method for ultrasound images according to claim 8, wherein S3 further comprises: constructing a pathology-associated weight model whose training set contains historical lesion ultrasound images, their corresponding four-dimensional quantitative indices, and benign/malignant diagnoses confirmed by pathology biopsy; training the model with a logistic regression algorithm, taking the quantitative indices as input features and the pathology diagnoses as labels, to learn initial weight coefficients for each index in discriminating benign from malignant lesions; optimizing the initial weight coefficients by gradient descent to obtain the dynamic weight set used for computing the fused quantitative score; for a lesion under analysis, inputting its extracted four-dimensional indices into the trained model and computing its fused quantitative score as the weighted sum of the indices under the dynamic weight set; and outputting each dimension's quantitative parameters together with their dynamic weights as the quantitative analysis result.
  10. The intelligent lesion recognition and quantitative analysis method for ultrasound images according to claim 9, wherein S4 comprises the steps of: providing a human-machine interaction interface that overlays the lesion segmentation contour output by S2 on the original ultrasound image and simultaneously displays the per-dimension quantitative parameters and fused quantitative score output by S3; receiving, through the interface, the physician's manual adjustments to the segmentation contour boundary and modifications to the quantitative-parameter decision thresholds, and recording the adjusted contour data and modified thresholds as fine-tuning data; adding the fine-tuning data to the model training dataset and updating the parameters of the dual-branch recognition model and the pathology-associated weight model via transfer learning, completing a model optimization iteration; automatically rechecking and filtering the quantitative parameters and fused scores against a clinical diagnosis guideline rule base, removing feature items that violate guideline specifications or clinical contraindications; and integrating the rechecked segmentation result, quantitative analysis result, physician fine-tuning annotations, and model confidence into a structured lesion recognition report and quantitative analysis document for output.
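As an illustrative sketch of the per-modality denoising in claim 3 (not part of the patent text): a fixed 3×3 median filter stands in for the adaptive median filter used on the B-mode image, and soft-thresholding is the core operation of wavelet-threshold denoising applied to strain-map coefficients. The function names and the toy image are hypothetical.

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter, a simple stand-in for adaptive median filtering."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    windows = np.stack([padded[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    return np.median(windows, axis=0)

def soft_threshold(coeffs, t):
    """Soft-thresholding of (wavelet) coefficients: shrink toward zero by t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

# An isolated impulse ("speckle-like" outlier) on a flat region is
# removed entirely by the median filter.
img = np.zeros((5, 5))
img[2, 2] = 1.0
print(median_filter3(img)[2, 2])   # 0.0
```

In a real pipeline the window size and threshold `t` would be set from the noise analysis described in the claim, and total-variation denoising would handle the Doppler image.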
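The dual-branch fusion step of claim 4 (channel concatenation followed by a 1×1 convolution for dimensionality reduction) can be sketched in plain numpy, since a 1×1 convolution is just a per-pixel linear map over channels. All shapes and channel counts below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 8, 8
global_feat = rng.standard_normal((256, H, W))   # global-branch semantic features
edge_feat = rng.standard_normal((64, H, W))      # local edge-enhancement features

# Channel splicing: stack the two branches along the channel axis.
fused = np.concatenate([global_feat, edge_feat], axis=0)   # 320 channels

# A 1x1 convolution reduces to a (out_ch, in_ch) matrix applied at every pixel.
weights = rng.standard_normal((128, 320)) * 0.05
reduced = np.einsum("oc,chw->ohw", weights, fused)         # down to 128 channels

print(reduced.shape)   # (128, 8, 8)
```

In a trained network `weights` would be learned; here it only demonstrates the shape arithmetic of the fusion interface.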
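Claim 5's attention modulation (initial attention heat map, weighted addition of a clinical-prior spatial template, then element-wise multiplication with the features) can be sketched as follows. The channel-mean projection, Gaussian prior, and blend factor `alpha` are hypothetical simplifications of the sub-network and knowledge base the claim describes.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
feat = rng.standard_normal((32, 16, 16))     # dual-branch fused feature map

# Initial attention heat map (a channel mean stands in for the conv sub-network).
att = sigmoid(feat.mean(axis=0))

# Spatial weight template encoding a prior, e.g. lesions expected near the centre.
yy, xx = np.mgrid[0:16, 0:16]
prior = np.exp(-((yy - 7.5) ** 2 + (xx - 7.5) ** 2) / (2 * 4.0 ** 2))

alpha = 0.3                                  # prior blend weight (assumed)
dyn_att = (1 - alpha) * att + alpha * prior  # weighted addition of template
modulated = feat * dyn_att[None, :, :]       # element-wise modulation
```

The blend keeps the map in (0, 1), so modulation rescales features without changing their sign.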
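The adversarial objective of claim 6 (updating the shared feature extractor to maximize the domain classifier's error) is commonly implemented with a gradient-reversal layer; a minimal sketch of that layer's two passes, with illustrative names, is:

```python
import numpy as np

def grl_forward(x):
    """Gradient-reversal layer: identity in the forward pass."""
    return x

def grl_backward(upstream_grad, lam=1.0):
    """Backward pass multiplies the gradient by -lam, so minimizing the
    domain classifier's loss through this layer *maximizes* it with
    respect to the shared feature extractor (the adversarial objective)."""
    return -lam * upstream_grad

x = np.array([1.0, -2.0])
print(grl_forward(x), grl_backward(x, lam=0.5))
```

With this layer between the shared extractor and the domain classifier, ordinary joint training realizes the min-max scheme in the claim.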
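Claim 7's post-processing chain (probability thresholding, connected-region identification, then a minimum bounding rectangle, geometric center, and area per lesion target) can be sketched with a small BFS connected-component labeller; the synthetic probability map below is purely illustrative, and the morphological closing/opening step is omitted for brevity.

```python
import numpy as np
from collections import deque

def connected_components(mask):
    """4-connected component labelling of a binary mask via BFS."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        current += 1
        labels[sy, sx] = current
        q = deque([(sy, sx)])
        while q:
            y, x = q.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    q.append((ny, nx))
    return labels, current

prob = np.zeros((6, 8))
prob[1:3, 1:4] = 0.9          # one synthetic lesion
prob[4:6, 5:8] = 0.8          # another
mask = prob > 0.5             # predefined probability threshold

labels, n = connected_components(mask)
for k in range(1, n + 1):
    ys, xs = np.nonzero(labels == k)
    area = ys.size
    bbox = (ys.min(), xs.min(), ys.max(), xs.max())        # min bounding rect
    centre = ((ys.min() + ys.max()) / 2, (xs.min() + xs.max()) / 2)
    print(k, area, bbox, centre)
```

Each connected region becomes an independent lesion target with its location and morphological parameters.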
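The gray-level co-occurrence matrix (GLCM) texture indices named in claim 8 (entropy, contrast, correlation) can be computed as below; this is a minimal single-offset GLCM, with the quantization level count and the checkerboard test patch chosen for illustration.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset, normalised."""
    p = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            p[img[y, x], img[y + dy, x + dx]] += 1
    return p / p.sum()

def glcm_features(p):
    """Entropy, contrast, and correlation of a normalised GLCM."""
    i, j = np.mgrid[0:p.shape[0], 0:p.shape[1]]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    contrast = np.sum(p * (i - j) ** 2)
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    sd_i = np.sqrt(np.sum(p * (i - mu_i) ** 2))
    sd_j = np.sqrt(np.sum(p * (j - mu_j) ** 2))
    corr = np.sum(p * (i - mu_i) * (j - mu_j)) / (sd_i * sd_j)
    return entropy, contrast, corr

# A checkerboard patch: adjacent pixels always differ, so the horizontal
# GLCM has maximal contrast and perfect anti-correlation.
patch = np.indices((4, 4)).sum(axis=0) % 2
e, c, r = glcm_features(glcm(patch, levels=2))
```

In practice the B-mode lesion region would be quantized to more gray levels and several offsets would be averaged.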
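Claim 9's weight model (logistic regression on the four-dimensional indices with pathology labels, optimized by gradient descent, then a weighted-sum fusion score) can be sketched end to end. All data values, iteration counts, and the weight-normalization step are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy training set: rows = lesions, columns = aggregated scores for the
# morphology, texture, blood-flow, and stiffness dimensions; labels
# 1 = malignant (pathology biopsy), 0 = benign. Entirely synthetic.
X = np.array([[0.2, 0.3, 0.1, 0.2],
              [0.8, 0.7, 0.9, 0.8],
              [0.3, 0.2, 0.2, 0.3],
              [0.9, 0.8, 0.7, 0.9]])
y = np.array([0, 1, 0, 1])

w, b, lr = np.zeros(4), 0.0, 0.5
for _ in range(2000):                       # gradient descent
    p = sigmoid(X @ w + b)
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * (p - y).mean()

# Normalised magnitudes serve as the dynamic weight set.
weights = np.abs(w) / np.abs(w).sum()

lesion = np.array([0.7, 0.6, 0.8, 0.7])     # indices of a lesion under analysis
score = float(weights @ lesion)             # fused quantitative score
```

The learned coefficients give each dimension a data-driven weight, and the fused score is their weighted sum for the new lesion.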

Description

Intelligent focus identification and quantitative analysis method for ultrasonic image

Technical Field

The invention relates to the technical field of medical imaging, in particular to an intelligent lesion recognition and quantitative analysis method for ultrasound images.

Background

Ultrasound has become a first-line means of clinical diagnosis for multiple organs, such as the breast, thyroid, liver, gallbladder, and pancreas, owing to its noninvasiveness, real-time imaging, and freedom from ionizing radiation, and plays a key role in early lesion screening, qualitative assessment, and treatment follow-up. With the development of computer-aided diagnosis, artificial intelligence algorithms such as deep learning have gradually been integrated into ultrasound image analysis, enabling automatic localization, feature extraction, and preliminary quantification of lesion regions; this effectively reduces physician workload, lowers diagnostic subjectivity, and provides important support for accurate clinical diagnosis and treatment. The field has formed a technical system spanning image preprocessing, lesion recognition, and multi-parameter quantification, covering applications from superficial to deep organs, and has become an important direction in intelligent medical image diagnosis. However, ultrasound images have inherent characteristics such as depth attenuation, speckle noise, and artifacts that challenge lesion feature extraction, and the boundary delineation of morphologically complex lesions and the detection of small lesions remain difficult.
Existing intelligent analysis methods have room for improvement in adapting to the imaging differences between ultrasound devices and in balancing algorithmic accuracy against real-time performance; some models depend heavily on high-quality annotated data and still need improvement in the integrated optimization of multidimensional quantitative indices and in generalization to clinical scenarios. Meanwhile, personalized quantitative analysis schemes for different lesion types are not yet mature, making it difficult to fully meet the requirements of accurate clinical diagnosis and personalized treatment. In this regard, we propose an intelligent lesion recognition and quantitative analysis method for ultrasound images.

Disclosure of Invention

To solve these technical problems, an intelligent lesion recognition and quantitative analysis method for ultrasound images is provided. The technical scheme addresses the challenges that the inherent characteristics of ultrasound images pose to lesion feature extraction, the difficulty of recognizing morphologically complex and small lesions, insufficient cross-device adaptability and clinical generalization, heavy dependence on high-quality annotated data, inadequate integration and optimization of multidimensional quantitative indices, and immature personalized analysis schemes, which together make it difficult to fully meet the requirements of accurate clinical diagnosis and treatment.
To achieve the above purpose, the invention adopts the following technical scheme. An intelligent lesion recognition and quantitative analysis method for ultrasound images comprises the following steps: S1, acquiring a B-mode grayscale image, an elastography strain image, and a color Doppler blood-flow image of a target region, and, after adaptive noise suppression, extracting complementary features through a feature-level fusion strategy to obtain a multi-modal fused feature map; S2, inputting the multi-modal fused feature map into a dual-branch recognition model, assigning weights via a dynamic attention mechanism, and outputting a lesion segmentation result and preliminary localization information after cross-device adaptive calibration; S3, extracting four-dimensional quantitative indices of morphology, texture, blood flow, and stiffness based on the segmentation result, assigning dynamic weights through a pathology-associated weight allocation strategy, and outputting a fused quantitative score and per-dimension parameters; and S4, receiving physician fine-tuning instructions through human-machine interaction, feeding the fine-tuning data back for model optimization, and finally outputting a lesion recognition report and quantitative analysis results consistent with clinical guidelines. Preferably, S1 includes: performing multi-modal synchronous scanning of the target lesion region using ultrasound diagnostic equipment with a multi-modal imaging probe, obtaining a B-mode grayscale image, an elastography strain map, and a color Doppler blood-flow image, and preprocessing each; setting the target lesion region as the scan center and acquiring an initial B-mode grayscale image as a localization reference
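The S1–S4 pipeline described above can be outlined as a minimal driver skeleton (not part of the patent; every function name, return value, and number below is a hypothetical placeholder standing in for the real components):

```python
def s1_fuse_modalities(b_mode, strain, doppler):
    """S1: denoise each modality and produce a fused feature map (stub)."""
    return {"fused_features": (b_mode, strain, doppler)}

def s2_segment(fused):
    """S2: dual-branch model + attention + calibration -> mask, location (stub)."""
    return {"mask": "lesion-mask", "location": (40, 52)}

def s3_quantify(seg):
    """S3: four-dimensional indices weighted into a fused score (stub values)."""
    weights = {"morphology": 0.3, "texture": 0.2, "blood_flow": 0.25, "stiffness": 0.25}
    indices = {"morphology": 0.7, "texture": 0.6, "blood_flow": 0.8, "stiffness": 0.7}
    score = sum(weights[k] * indices[k] for k in weights)
    return {"indices": indices, "weights": weights, "score": score}

def s4_report(seg, quant, physician_edits=None):
    """S4: fold physician fine-tuning into a structured report (stub)."""
    return {"segmentation": seg, "quantification": quant,
            "edits": physician_edits or []}

fused = s1_fuse_modalities("B", "strain", "doppler")
seg = s2_segment(fused)
quant = s3_quantify(seg)
report = s4_report(seg, quant)
```

The skeleton only fixes the data flow between the four steps; each stub would be replaced by the components the claims describe.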