CN-121616836-B - Wound type detection method based on image segmentation and storage medium
Abstract
The invention relates to the technical field of intelligent wound image identification and discloses a wound type detection method based on image segmentation, together with a storage medium. The method synchronously acquires a time-series image sequence of a wound and multi-element physiological data and generates a fused feature map, from which a dynamic path selection network adaptively produces a preliminary wound area mask. The mask is used to crop a high-interest region image, which is enhanced and fed into a cascade segmentation network. The first stage of that network outputs a refined wound contour, and the second stage performs tissue type classification segmentation of the interior under the contour constraint. Tissue composition and spatial distribution features are then extracted from the segmentation result and matched against a predefined knowledge base to output the wound type. By fusing multi-modal data with a cascaded segmentation architecture, the method improves the accuracy and robustness of wound segmentation under complex conditions and enables more accurate automatic determination of wound type.
Inventors
- JIANG QINGLING
- SONG HAINING
- Lv Suwei
- LIU YAPING
Assignees
- First Medical Center of Chinese PLA General Hospital (中国人民解放军总医院第一医学中心)
Dates
- Publication Date: 2026-05-08
- Application Date: 2026-02-03
Claims (8)
- 1. A wound type detection method based on image segmentation, the method comprising: acquiring a time-series image sequence of a target wound and synchronously recorded multi-element physiological monitoring data, and generating a fused feature map; generating a preliminary wound area mask through a dynamic path selection network based on the fused feature map, comprising: the dynamic path selection network comprises a plurality of parallel image segmentation sub-paths with heterogeneous structures; inputting the fused feature map into a routing decision layer, the routing decision layer outputting a routing probability distribution vector whose entries correspond to the image segmentation sub-paths of the dynamic path selection network; selecting the image segmentation sub-path with the highest probability as the activation path according to the routing probability distribution vector, while retaining the sub-path with the next highest probability as an auxiliary path; inputting the fused feature map into the activation path and the auxiliary path simultaneously; the activation path outputs a main segmentation activation map, each pixel value of which represents the confidence that the location belongs to the wound area, and the auxiliary path outputs an auxiliary segmentation activation map; the main segmentation activation map and the auxiliary segmentation activation map are combined by weighted summation, and a binarized preliminary wound area mask is generated through a thresholding operation; using the preliminary wound area mask to crop the images in the time-series image sequence to obtain a high-resolution wound region-of-interest image; performing texture enhancement and boundary sharpening on the high-resolution wound region-of-interest image, and inputting the processed image into a cascade segmentation network; the first-stage network of the cascade segmentation network outputs a refined wound contour segmentation result, and the second-stage network of the cascade segmentation network receives the refined wound contour segmentation result and performs tissue type classification segmentation of the internal region of the wound under the constraint of that result; extracting tissue composition proportion features and spatial distribution features of the wound area from the tissue type classification segmentation result; combining the tissue composition proportion features and the spatial distribution features, matching a corresponding wound type label from a predefined wound type knowledge base, and outputting a final wound type detection result; wherein the second-stage network of the cascade segmentation network receiving the refined wound contour segmentation result and performing tissue type classification segmentation of the internal region of the wound under the constraint of that result comprises: cropping an accurate wound-interior region image from the texture-enhanced and boundary-sharpened image using the refined wound contour segmentation result as a spatial mask; inputting the wound-interior region image into the second-stage network, the second-stage network being composed of a plurality of parallel lightweight convolution branches, each branch dedicated to extracting features of a preset tissue type; each lightweight convolution branch outputs a tissue type confidence map in which each pixel value represents the probability of belonging to the tissue type corresponding to that branch; and assigning the tissue type label with the highest confidence to each pixel through a cross-channel competition mechanism, thereby generating a tissue type classification segmentation map.
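A minimal PyTorch sketch of the routing and mask-generation steps described in claim 1 follows. The heterogeneous sub-path designs, channel widths, auxiliary weight, and binarization threshold are illustrative assumptions rather than values fixed by the claim.

```python
# Illustrative sketch of claim 1's dynamic path selection (assumed architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicPathSelection(nn.Module):
    def __init__(self, in_ch=64, num_paths=3):
        super().__init__()
        # Heterogeneous parallel segmentation sub-paths (here: different kernel sizes).
        self.paths = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, 32, k, padding=k // 2), nn.ReLU(),
                          nn.Conv2d(32, 1, 1))
            for k in (3, 5, 7)[:num_paths]
        ])
        # Routing decision layer: global pooling followed by a linear head.
        self.router = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                    nn.Linear(in_ch, num_paths))

    def forward(self, fused_feat, aux_weight=0.3, threshold=0.5):
        probs = F.softmax(self.router(fused_feat), dim=1)   # routing probability vector
        i_main, i_aux = probs.mean(dim=0).topk(2).indices.tolist()
        main_map = torch.sigmoid(self.paths[i_main](fused_feat))   # activation path output
        aux_map = torch.sigmoid(self.paths[i_aux](fused_feat))     # auxiliary path output
        combined = (1 - aux_weight) * main_map + aux_weight * aux_map  # weighted summation
        return (combined > threshold).float()               # binarized preliminary mask

mask = DynamicPathSelection()(torch.randn(1, 64, 128, 128))
```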
- 2. The wound type detection method based on image segmentation according to claim 1, wherein acquiring the time-series image sequence of the target wound and the synchronously recorded multi-element physiological monitoring data and generating the fused feature map comprises: performing multi-scale feature extraction on each frame of the time-series image sequence to generate a pixel-level multi-scale feature map, and encoding the multi-element physiological monitoring data into a numerical feature vector; performing cross-modal fusion of the pixel-level multi-scale feature map and the numerical feature vector in a feature space to generate the fused feature map; wherein performing multi-scale feature extraction on each frame of the time-series image sequence to generate the pixel-level multi-scale feature map and encoding the multi-element physiological monitoring data into the numerical feature vector comprises: processing single frames of the time-series image sequence in parallel with convolution layer groups of different kernel sizes to extract image features under different receptive fields; concatenating the image features from the different receptive fields along the channel dimension, assigning weights to the features of different channels through an attention weight generation layer, and generating a weighted pixel-level multi-scale feature map; synchronously, inputting the multi-element physiological monitoring data, comprising temperature, humidity and pH value, into a fully-connected encoder; and the fully-connected encoder mapping the multi-element physiological monitoring data to a high-dimensional space through multi-layer nonlinear transformation and outputting a numerical feature vector of fixed dimension.
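The multi-scale image branch and the physiological-data encoder of claim 2 could be sketched as below; the kernel sizes, channel widths, and encoder dimensions are assumptions for illustration only.

```python
# Illustrative sketch of claim 2's parallel multi-scale extraction and physiological encoding.
import torch
import torch.nn as nn

class MultiScaleExtractor(nn.Module):
    def __init__(self, out_ch=64):
        super().__init__()
        # Parallel convolution groups with different kernel sizes / receptive fields.
        self.branches = nn.ModuleList([
            nn.Conv2d(3, out_ch // 4, k, padding=k // 2) for k in (1, 3, 5, 7)
        ])
        # Attention weight generation layer: one weight per concatenated channel.
        self.attn = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(out_ch, out_ch, 1), nn.Sigmoid())

    def forward(self, frame):
        feats = torch.cat([b(frame) for b in self.branches], dim=1)  # channel concat
        return feats * self.attn(feats)           # weighted pixel-level multi-scale map

class PhysioEncoder(nn.Module):
    """Fully-connected encoder for temperature, humidity and pH readings."""
    def __init__(self, in_dim=3, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                 nn.Linear(32, out_dim), nn.ReLU())

    def forward(self, physio):
        return self.net(physio)                   # fixed-dimension numerical feature vector

feat_map = MultiScaleExtractor()(torch.randn(1, 3, 128, 128))
physio_vec = PhysioEncoder()(torch.tensor([[36.8, 0.55, 7.4]]))
```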
- 3. The wound type detection method based on image segmentation according to claim 2, wherein the cross-modal fusion of the pixel-level multi-scale feature map and the numerical feature vector in the feature space to generate the fused feature map comprises: expanding the numerical feature vector, through a spatial broadcasting operation, into a feature map with the same spatial dimensions as the pixel-level multi-scale feature map; calculating a channel cross-correlation matrix between the pixel-level multi-scale feature map and the expanded numerical feature map; generating, from the channel cross-correlation matrix, an affine transformation parameter matrix for modulating the image features; and performing channel-by-channel feature transformation and fusion of the pixel-level multi-scale feature map using the affine transformation parameter matrix to generate the fused feature map.
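One way to realize the cross-modal fusion of claim 3 is sketched below: the vector is spatially broadcast, a channel cross-correlation matrix is formed, and per-channel affine (scale and shift) parameters modulate the image features. The linear parameter heads and the per-channel summary of the correlation matrix are assumptions not dictated by the claim.

```python
# Illustrative sketch of claim 3's broadcast + cross-correlation + affine modulation.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.to_scale = nn.Linear(ch, ch)   # produces gamma (channel-wise scale)
        self.to_shift = nn.Linear(ch, ch)   # produces beta  (channel-wise shift)

    def forward(self, img_feat, physio_vec):
        b, c, h, w = img_feat.shape
        # Spatial broadcasting: expand the vector to the feature map's spatial size.
        physio_map = physio_vec.view(b, c, 1, 1).expand(b, c, h, w)
        # Channel cross-correlation matrix between the two modalities.
        img_flat = img_feat.flatten(2)                                   # (b, c, h*w)
        phy_flat = physio_map.flatten(2)
        corr = torch.bmm(img_flat, phy_flat.transpose(1, 2)) / (h * w)   # (b, c, c)
        summary = corr.mean(dim=2)                                       # per-channel statistic
        gamma = self.to_scale(summary).view(b, c, 1, 1)
        beta = self.to_shift(summary).view(b, c, 1, 1)
        return gamma * img_feat + beta            # channel-by-channel affine fusion

fused = CrossModalFusion()(torch.randn(1, 64, 128, 128), torch.randn(1, 64))
```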
- 4. The method of claim 1, wherein performing texture enhancement and boundary sharpening on the high-resolution wound region-of-interest image and inputting the processed image into the cascade segmentation network comprises: performing multidirectional texture feature calculation on the high-resolution wound region-of-interest image with a family of local binary pattern operators to generate a texture intensity map; superimposing the texture intensity map on the original image to enhance the texture contrast of the wound tissue; processing the texture-enhanced image with a gradient-based anisotropic diffusion filter, sharpening the boundary between the wound and healthy skin while smoothing image noise; and inputting the texture-enhanced and boundary-sharpened image into the cascade segmentation network.
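An illustrative preprocessing routine in the spirit of claim 4 is given below, using scikit-image's uniform local binary pattern as the texture operator and a simple Perona-Malik (gradient-based anisotropic) diffusion loop. The blend weight, iteration count, and diffusion constants are assumed values.

```python
# Illustrative sketch of claim 4's texture enhancement and edge-preserving diffusion.
import numpy as np
from skimage.feature import local_binary_pattern

def enhance_roi(gray, lbp_radius=2, blend=0.3, iters=10, kappa=0.1, step=0.15):
    # Multidirectional texture intensity map from a uniform LBP code map.
    lbp = local_binary_pattern(gray, P=8 * lbp_radius, R=lbp_radius, method="uniform")
    texture = lbp / lbp.max()
    enhanced = np.clip(gray + blend * texture, 0.0, 1.0)   # superpose texture on image

    # Perona-Malik diffusion: smooths noise while preserving the wound/skin boundary.
    img = enhanced.astype(np.float64)
    for _ in range(iters):
        dn = np.roll(img, -1, axis=0) - img
        ds = np.roll(img, 1, axis=0) - img
        de = np.roll(img, -1, axis=1) - img
        dw = np.roll(img, 1, axis=1) - img
        img += step * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
    return img

sharpened = enhance_roi(np.random.rand(128, 128))   # expects a [0, 1] grayscale ROI
```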
- 5. The wound type detection method based on image segmentation according to claim 1, wherein the first-stage network of the cascade segmentation network outputting the refined wound contour segmentation result comprises: the first-stage network adopts an encoder-decoder structure, the encoder part progressively downsampling to extract deep semantic features and the decoder part progressively upsampling while merging skip-connection features from the encoding process; densely predicting, with a contour point sequence prediction head at the decoder end of the first-stage network, the position offset of each point on the wound contour; and applying the predicted position offsets to an initial contour grid and generating a closed, continuous refined wound contour segmentation result through iterative deformation.
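The contour refinement idea of claim 5 can be caricatured as a per-vertex offset regressor applied iteratively to an initial contour grid, as in the sketch below. The pooled decoder feature, vertex count, step size, and iteration count are assumptions; the claim does not specify the prediction head's structure.

```python
# Illustrative sketch of claim 5's iterative contour deformation (assumed head design).
import torch
import torch.nn as nn

class ContourHead(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.offset = nn.Linear(feat_dim + 2, 2)   # per-vertex (dx, dy) regression

    def forward(self, decoder_feat, contour, iters=3):
        # decoder_feat: pooled decoder feature (feat_dim,); contour: (n_points, 2) in [0, 1].
        for _ in range(iters):
            ctx = decoder_feat.expand(contour.size(0), -1)
            delta = self.offset(torch.cat([ctx, contour], dim=1))
            contour = (contour + 0.1 * delta).clamp(0.0, 1.0)   # iterative deformation
        return contour                                          # closed refined contour

theta = torch.linspace(0, 2 * torch.pi, 64)
init_contour = 0.5 + 0.25 * torch.stack([theta.cos(), theta.sin()], dim=1)  # initial grid
refined = ContourHead()(torch.randn(64), init_contour)
```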
- 6. The wound type detection method based on image segmentation according to claim 5, wherein extracting the tissue composition proportion features and the spatial distribution features of the wound area from the tissue type classification segmentation result comprises: counting the total number of pixels occupied by each tissue type label in the tissue type classification segmentation map and calculating its ratio to the total number of pixels in the wound area to obtain the tissue composition proportion features; calculating geometric moments of the connected region of each tissue type in the tissue type classification segmentation map, the geometric moments comprising a centroid position, a principal axis direction and a spatial dispersion matrix; and generating, based on the centroid position, the principal axis direction and the spatial dispersion matrix, a spatial distribution feature vector describing the relative position and orientation relationships of the different tissue types within the wound area.
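The features named in claim 6 map directly onto standard label-map statistics; a NumPy sketch with placeholder label ids is shown below.

```python
# Illustrative sketch of claim 6's composition proportions and geometric moments.
import numpy as np

def wound_features(seg):
    """seg: integer label map, 0 = background/healthy skin, >0 = tissue type labels."""
    wound_pixels = (seg > 0).sum()
    features = {}
    for label in np.unique(seg[seg > 0]):
        ys, xs = np.nonzero(seg == label)
        coords = np.stack([ys, xs], axis=1).astype(float)
        centroid = coords.mean(axis=0)
        dispersion = np.cov(coords, rowvar=False)             # spatial dispersion matrix
        eigvals, eigvecs = np.linalg.eigh(dispersion)
        principal_axis = eigvecs[:, np.argmax(eigvals)]       # main axis direction
        features[int(label)] = {
            "proportion": len(coords) / wound_pixels,         # tissue composition ratio
            "centroid": centroid,
            "principal_axis": principal_axis,
            "dispersion": dispersion,
        }
    return features

seg_map = np.random.randint(0, 4, size=(128, 128))            # placeholder segmentation
print(wound_features(seg_map)[1]["proportion"])
```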
- 7. The wound type detection method based on image segmentation according to claim 6, wherein matching the corresponding wound type label from the predefined wound type knowledge base in combination with the tissue composition proportion features and the spatial distribution features comprises: the predefined wound type knowledge base stores feature templates for a plurality of standard wound types, each feature template consisting of a standard tissue composition proportion range and a standard spatial distribution pattern; calculating the degree of coincidence between the tissue composition proportion features and the standard tissue composition proportion range of each standard wound type feature template; calculating the structural similarity between the spatial distribution feature vector and the standard spatial distribution pattern of each standard wound type feature template; fusing the degree of coincidence and the structural similarity by weighting to generate a comprehensive matching score for each standard wound type label; and selecting the standard wound type label with the highest comprehensive matching score as the final wound type detection result.
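The matching step of claim 7 amounts to scoring each template on two axes and fusing the scores; a sketch follows, where the template contents, the cosine similarity choice, and the 0.6/0.4 weights are illustrative assumptions.

```python
# Illustrative sketch of claim 7's knowledge-base matching (assumed templates and weights).
import numpy as np

KNOWLEDGE_BASE = {
    "pressure_ulcer": {"ranges": {1: (0.2, 0.6), 2: (0.1, 0.5)},
                       "spatial": np.array([0.8, 0.1, 0.1])},
    "venous_ulcer":   {"ranges": {1: (0.0, 0.3), 2: (0.4, 0.9)},
                       "spatial": np.array([0.2, 0.6, 0.2])},
}

def match_type(proportions, spatial_vec, w_prop=0.6, w_spatial=0.4):
    scores = {}
    for name, template in KNOWLEDGE_BASE.items():
        # Degree of coincidence: fraction of tissue labels whose proportion is in range.
        hits = [lo <= proportions.get(k, 0.0) <= hi
                for k, (lo, hi) in template["ranges"].items()]
        coincidence = float(np.mean(hits))
        # Structural similarity between spatial distribution vectors (cosine similarity).
        sim = float(np.dot(spatial_vec, template["spatial"]) /
                    (np.linalg.norm(spatial_vec) * np.linalg.norm(template["spatial"]) + 1e-8))
        scores[name] = w_prop * coincidence + w_spatial * sim   # weighted fusion
    return max(scores, key=scores.get), scores

label, all_scores = match_type({1: 0.45, 2: 0.30}, np.array([0.7, 0.2, 0.1]))
```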
- 8. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the image segmentation-based wound type detection method as claimed in any one of claims 1 to 7.
Description
Wound type detection method based on image segmentation and storage medium

Technical Field

The invention relates to the technical field of intelligent wound image identification, and in particular to a wound type detection method based on image segmentation and a storage medium.

Background

In current wound care and diagnostic practice, automated and accurate identification of wound type relies primarily on computer vision analysis of wound appearance images. In the prior art, a single wound image is generally used as input, and semantic segmentation or classification of the wound area is completed directly by a trained deep neural network model. These methods treat localization of the wound contour and discrimination of the internal tissue types as a hybrid task, leaving both to model learning or processing them in parallel through a multi-task learning framework. However, wound healing is a dynamic process whose appearance is affected by the local microenvironment, and a single image can hardly reflect this complex physiological state fully. Unimodal image analysis ignores the physiological background of wound healing, so the resulting models are not robust enough to cope with image quality variations or complex wound presentations. When contour segmentation is coupled with tissue classification in a single step, the model is prone to confusion in regions of boundary ambiguity: misclassification of internal tissue may erode contour accuracy, and deviations in the contour may in turn amplify errors of internal classification. This mutual interference makes it difficult to stably output reliable tissue composition and spatial distribution features for type interpretation, and the fineness of the segmentation results falls short of clinical requirements.

Disclosure of Invention

The present invention is directed to a wound type detection method based on image segmentation and a storage medium, which solve the problems set forth in the background art. To achieve the above object, the present invention provides a wound type detection method based on image segmentation, the method comprising: acquiring a time-series image sequence of a target wound and synchronously recorded multi-element physiological monitoring data, and generating a fused feature map; generating a preliminary wound area mask through a dynamic path selection network based on the fused feature map; using the preliminary wound area mask to crop the images in the time-series image sequence to obtain a high-resolution wound region-of-interest image; performing texture enhancement and boundary sharpening on the high-resolution wound region-of-interest image, and inputting the processed image into a cascade segmentation network; the first-stage network of the cascade segmentation network outputs a refined wound contour segmentation result, and the second-stage network of the cascade segmentation network receives the refined wound contour segmentation result and performs tissue type classification segmentation of the internal region of the wound under the constraint of that result; extracting tissue composition proportion features and spatial distribution features of the wound area from the tissue type classification segmentation result; and combining the tissue composition proportion features and the spatial distribution features, matching a corresponding wound type label from a predefined wound type knowledge base, and outputting a final wound type detection result.
Preferably, acquiring the time-series image sequence of the target wound and the synchronously recorded multi-element physiological monitoring data and generating the fused feature map comprises: performing multi-scale feature extraction on each frame of the time-series image sequence to generate a pixel-level multi-scale feature map, and encoding the multi-element physiological monitoring data into a numerical feature vector; performing cross-modal fusion of the pixel-level multi-scale feature map and the numerical feature vector in a feature space to generate the fused feature map; wherein performing multi-scale feature extraction on each frame of the time-series image sequence to generate the pixel-level multi-scale feature map and encoding the multi-element physiological monitoring data into the numerical feature vector comprises: processing single frames of the time-series image sequence in parallel with convolution layer groups of different kernel sizes to extract image features under different receptive fields; concatenating the image features from the different receptive fields along the channel dimension, assigning weights to the features of different channels through an attention weight generation layer, and generating a weighted pixel-level multi-scale feature map; and, synchronously, inputting the multi-element physiological monitoring data, comprising temperat