CN-117437189-B - Image auxiliary diagnosis method, device and storage medium based on instance segmentation

CN117437189B

Abstract

The application relates to an image-aided diagnosis method, device and storage medium based on instance segmentation. The method comprises: obtaining a three-dimensional scan image corresponding to the chest of a target object and slicing it to obtain a plurality of two-dimensional slice images; segmenting and extracting the target organ region in the two-dimensional slice images with a trained instance segmentation model, Mask R-CNN, to obtain a two-dimensional scan image corresponding to each two-dimensional slice image; detecting target nodules in the target organ region of each two-dimensional scan image with a trained target recognition network to obtain lesion information, wherein the lesion information comprises target information of the target nodules; and merging and classifying the target nodules based on the target information corresponding to each two-dimensional scan image, so as to determine a diagnosis result corresponding to the three-dimensional scan image. The application addresses the low efficiency and low accuracy of target-nodule identification in the image-aided diagnosis methods of the related art.
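The first step of the pipeline, slicing the three-dimensional scan into two-dimensional slice images, can be sketched as follows. This is a minimal illustration, not the patent's implementation; it assumes the scan is held as a NumPy volume of shape (depth, height, width), and all names are illustrative.

```python
import numpy as np

def slice_volume(volume: np.ndarray) -> list:
    """Split a 3-D scan volume into its axial 2-D slices (one per depth index)."""
    return [volume[z] for z in range(volume.shape[0])]

# Placeholder CT volume: 64 axial slices of 512 x 512 pixels.
volume = np.zeros((64, 512, 512), dtype=np.int16)
slices = slice_volume(volume)
```

Each resulting 2-D slice would then be passed through the segmentation model to crop out the target organ region before nodule detection.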

Inventors

  • Li Jiaxin
  • Zhang Mingliang
  • Gong Xueyuan
  • Zhang Xinyuan
  • Zhang Huajian
  • Zhang Zhiyao
  • Jiang Yuanhao
  • Chen Yangkai
  • Lin Zhenguan
  • Wei Wenbin
  • Liu Xiaoxiang
  • Lin Cong

Assignees

  • Jinan University (暨南大学)

Dates

Publication Date
2026-05-12
Application Date
2023-10-17

Claims (8)

  1. An image-aided diagnosis method based on instance segmentation, characterized by comprising the following steps: acquiring a three-dimensional scan image corresponding to the thoracic cavity of a target object, and slicing the three-dimensional scan image to obtain a plurality of two-dimensional slice images; segmenting and extracting a target organ region in the two-dimensional slice images using a trained instance segmentation model, Mask R-CNN, to obtain a two-dimensional scan image corresponding to each two-dimensional slice image, wherein the two-dimensional scan image comprises the target organ region; detecting target nodules in the target organ region of each two-dimensional scan image using a trained target recognition network to obtain lesion information, wherein the lesion information comprises target information of the target nodules, the target recognition network is a neural network based on the YOLOv algorithm trained on a preset two-dimensional sample scan image dataset and the actual lesion information corresponding to each two-dimensional sample scan image in that dataset, each two-dimensional sample scan image is generated by extracting the target organ region from a two-dimensional sample slice image with the Mask R-CNN and applying preset anchor-frame labeling, each two-dimensional sample slice image is generated by slicing a three-dimensional sample scan image, and the anchor frame is used to determine the target nodule; and merging and classifying target nodules based on the target information corresponding to each two-dimensional scan image, so as to determine a diagnosis result corresponding to the three-dimensional scan image, wherein the diagnosis result comprises the number, size and position of the target nodules in the three-dimensional scan image; the method further comprises:
acquiring first anchor-frame information corresponding to the target nodule from the target information corresponding to each two-dimensional scan image, wherein the first anchor-frame information comprises a first nodule bounding box and the centre-point coordinate parameters of the first nodule bounding box; traversing, in sequence, the centre-point coordinate parameters of the first nodule bounding boxes of all the two-dimensional scan images, and calculating in turn the distance between the centre points of each two adjacent first nodule bounding boxes from their centre-point coordinate parameters; judging whether the distance exceeds a preset distance threshold and, when it does not, classifying the two corresponding first nodule bounding boxes into the same first bounding-box group, so as to obtain a plurality of first bounding-box groups; determining the number of target nodules from the number of first bounding-box groups; and determining the size of the target nodule corresponding to a first bounding-box group from a target nodule bounding box selected from all the first nodule bounding boxes of that group, and determining the position of that target nodule from the centre-point coordinate parameters of the target nodule bounding box.
  2. The method of claim 1, wherein, in the event that the distance is determined to be greater than the preset distance threshold, the method further comprises: classifying the first nodule bounding boxes that precede the two first nodule bounding boxes currently under calculation into the corresponding first bounding-box group; and, starting from the later of the two first nodule bounding boxes currently under calculation, sequentially calculating the distance between the centre points of each two adjacent first nodule bounding boxes and collecting those whose distance does not exceed the preset distance threshold, until a distance exceeds the preset distance threshold or all remaining unclassified first nodule bounding boxes have been traversed, so as to obtain another first bounding-box group, wherein the later of the two first nodule bounding boxes currently under calculation becomes the first member of that other first bounding-box group.
  3. The method of claim 1, wherein the first anchor-frame information further includes a first anchor-frame size corresponding to the first nodule bounding box, and wherein selecting a target nodule bounding box from all the first nodule bounding boxes of a first bounding-box group comprises selecting, from all the first nodule bounding boxes of that group, the first nodule bounding box with the largest first anchor-frame size as the target nodule bounding box.
  4. The method of claim 1, wherein detecting target nodules in the target organ region of the two-dimensional scan image using a trained target recognition network to obtain lesion information comprises: performing nodule target detection on the two-dimensional scan image with the target recognition network to obtain label information and second anchor-frame information corresponding to each candidate target, wherein the label information comprises the nodule category of the candidate target and the target confidence of that category, and the second anchor-frame information comprises a second bounding box corresponding to the candidate target, the centre-point coordinate parameters of the second bounding box and a second anchor-frame size; selecting candidate nodules from the plurality of candidate targets according to nodule category, and judging whether the target confidence of each candidate nodule exceeds a preset confidence threshold; and, when the target confidence of a candidate nodule exceeds the preset confidence threshold, determining that candidate nodule to be a target nodule and taking the second anchor-frame information of that candidate nodule as the target information of the target nodule, thereby obtaining the lesion information.
  5. The method of claim 4, further comprising: determining that no target nodule exists in the two-dimensional scan image when the target confidence of the candidate nodule does not exceed the preset confidence threshold; and, when the target confidences of all candidate nodules do not exceed the preset confidence threshold, determining that the diagnosis result indicates that no target nodule exists in the three-dimensional scan image.
  6. The method of claim 1, wherein the training step of the target recognition network comprises: slicing the three-dimensional sample scan image in a preset slicing direction to generate a first sample slice image set of two-dimensional sample slice images, wherein the three-dimensional sample scan image carries the sphere-centre coordinate parameters and the sphere size parameter of a target nodule, and the first sample slice image set comprises a first sphere-centre sample slice image, corresponding to the plane through the sphere centre of the target nodule, and first non-sphere-centre sample slice images; segmenting and extracting the target organ region from the first sphere-centre sample slice image and the first non-sphere-centre sample slice images with the Mask R-CNN, so as to determine the corresponding target organ regions in those slice images; determining the in-slice radius of the target nodule in the first sphere-centre sample slice image and each first non-sphere-centre sample slice image from the height of each first non-sphere-centre sample slice image relative to the first sphere-centre sample slice image, the sphere-centre coordinate parameters and the sphere size parameter of the target nodule, and performing anchor-frame labeling on the slice images in which the target organ region has been determined, according to the determined radii and the sphere-centre coordinate parameters, so as to generate a second sphere-centre sample slice image and second non-sphere-centre sample slice images; and training the YOLOv algorithm with the actual lesion information corresponding to the three-dimensional sample scan image, the second sphere-centre sample slice image and the second non-sphere-centre sample slice images until convergence, so as to obtain the target recognition network.
  7. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the steps of the instance-segmentation-based image-aided diagnosis method of any one of claims 1 to 6.
  8. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the instance-segmentation-based image-aided diagnosis method of any one of claims 1 to 6.
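The cross-slice merging of claims 1 to 3 can be sketched as follows. This is a hedged illustration only: boxes are represented as (cx, cy, size) tuples ordered by slice index, and plain Euclidean distance between consecutive centre points is assumed, since the claims do not fix the exact metric. All names are illustrative.

```python
import math

def group_boxes(boxes, dist_threshold):
    """Claims 1-2: walk the per-slice boxes in order, placing consecutive
    boxes whose centre points lie within dist_threshold in the same group;
    a larger jump closes the current group and starts a new one."""
    groups = []
    current = [boxes[0]]
    for prev, cur in zip(boxes, boxes[1:]):
        if math.hypot(cur[0] - prev[0], cur[1] - prev[1]) <= dist_threshold:
            current.append(cur)
        else:
            groups.append(current)  # distance exceeded: close this group
            current = [cur]         # the later box opens the next group
    groups.append(current)
    return groups

def summarize(groups):
    """Claim 3: each group is one nodule; its representative box is the
    member with the largest anchor-frame size."""
    reps = [max(g, key=lambda b: b[2]) for g in groups]
    return len(groups), reps

# Two nodules: three nearby boxes around (10, 10), two around (80, 80).
boxes = [(10, 10, 4), (11, 10, 6), (12, 11, 5), (80, 80, 3), (81, 79, 7)]
count, reps = summarize(group_boxes(boxes, dist_threshold=5.0))
```

Here the nodule count comes from the number of groups, and each nodule's size and position come from the largest box in its group, mirroring the claim language.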

Description

Image auxiliary diagnosis method, device and storage medium based on instance segmentation

Technical Field

The application relates to the technical field of digital image processing, and in particular to an image-aided diagnosis method, device and storage medium based on instance segmentation.

Background

In the medical field, medical images are usually interpreted by radiologists, but examining three-dimensional organ volumes (e.g. lung, liver) layer by layer from two-dimensional CT images is a difficult task for a doctor. Because a CT scan contains a great deal of information beyond the target organ nodule, a doctor is prone to misjudging or missing a lesion, producing a false negative (FN) result, or to interpreting a non-lesion as a lesion, producing a false positive (FP) result; lesions are therefore difficult to identify from CT images, which greatly restricts the detection of target organ nodules. In the prior art, applying artificial intelligence techniques to image-aided diagnosis provides a new technical means for improving the diagnosis of various diseases. However, when a target organ is scanned in three dimensions and target detection is performed on the resulting three-dimensional CT image, a great deal of computation is wasted on non-lung regions, and the efficiency and accuracy of identifying target nodules are low. At present, no effective solution has been proposed for the low efficiency and low accuracy of target-nodule identification in the image-aided diagnosis methods of the related art.
Disclosure of Invention

The embodiments of the application provide an image-aided diagnosis method, device and storage medium based on instance segmentation, which at least solve the problems of low efficiency and low accuracy in identifying target nodules with the image-aided diagnosis methods of the related art.

In a first aspect, an embodiment of the application provides an image-aided diagnosis method based on instance segmentation, comprising: obtaining a three-dimensional scan image corresponding to the chest of a target object and slicing it to obtain a plurality of two-dimensional slice images; segmenting and extracting the target organ region in the two-dimensional slice images with a trained instance segmentation model, Mask R-CNN, to obtain a two-dimensional scan image corresponding to each two-dimensional slice image, wherein the two-dimensional scan image includes the target organ region; detecting target nodules in the target organ region of each two-dimensional scan image with a trained target recognition network to obtain lesion information including target information of the target nodules, wherein the target recognition network is a neural network based on the YOLOv algorithm trained on a preset two-dimensional sample scan image dataset and the actual lesion information corresponding to each two-dimensional sample scan image in that dataset, each two-dimensional sample scan image being generated by extracting the target organ region from a two-dimensional sample slice image with the Mask R-CNN and applying preset anchor-frame labeling, and each two-dimensional sample slice image being generated by slicing a three-dimensional sample scan image; and merging and classifying the target nodules based on the target information corresponding to each two-dimensional scan image, so as to determine the diagnosis result corresponding to the three-dimensional scan image, including the number, size and position of the target nodules.

In a second aspect, an embodiment of the application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the image-aided diagnosis method based on instance segmentation according to the first aspect when it executes the computer program.

In a third aspect, an embodiment of the application provides a storage medium on which a computer program is stored which, when executed by a processor, implements the steps of the image-aided diagnosis method based on instance segmentation of the first aspect.

Compared with the related art, the image-aided diagnosis method, device and storage medium based on instance segmentation provided by the embodiments of the application obtain a three-dimensional scan image corresponding to the chest of a target object, slice it into a plurality of two-dimensional slice images, segment and extract the target organ region with Mask R-CNN, detect target nodules in the extracted region with the trained target recognition network, and merge and classify the target nodules, thereby at least solving the problems of low efficiency and low accuracy in identifying target nodules.
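The anchor-frame labeling of claim 6 is geometric: a nodule modelled as a sphere of radius R, centred on one slice, appears on a slice at height h above or below the centre as a circle of radius sqrt(R² − h²). A minimal sketch of that computation follows; a slice spacing of one unit is assumed for illustration, and all names are illustrative rather than from the patent.

```python
import math

def slice_radius(sphere_radius: float, height: float) -> float:
    """In-plane radius of a spherical nodule's cross-section on a slice at
    the given height relative to the sphere-centre slice (Pythagoras);
    returns 0 if the slice does not intersect the sphere."""
    if abs(height) >= sphere_radius:
        return 0.0
    return math.sqrt(sphere_radius ** 2 - height ** 2)

r_centre = slice_radius(5.0, 0.0)  # sphere-centre slice: full radius
r_offset = slice_radius(5.0, 3.0)  # 3 units away: sqrt(25 - 9) = 4
```

The square anchor frame for a non-sphere-centre sample slice would then be centred at the nodule's in-plane centre coordinates with side length proportional to this radius.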