EP-4524886-B1 - LESION DETECTION METHOD AND LESION DETECTION PROGRAM

EP 4524886 B1

Inventors

  • BABA, KOZO
  • TAKEBE, HIROAKI
  • MIYAZAKI, NOBUHIRO

Dates

Publication Date
2026-05-13
Application Date
2024-08-09

Claims (4)

  1. A lesion detection apparatus (100) comprising: a memory (102); and a processor (101) coupled to the memory (102) and configured to execute a learning process of classifying a plurality of first tomographic images obtained by imaging an inside of a plurality of first human bodies at predetermined intervals in the height direction of the first human bodies into a plurality of first tomographic image groups on the basis of the degree of accumulation of fat in a specific organ, and generating a plurality of first lesion identification models for identifying whether or not each unit image region included in a tomographic image as an identification target is a specific lesion region by machine learning which uses each of the plurality of first tomographic image groups as learning data, the unit image region being a pixel or a set of a predetermined number of two or more adjacent pixels; and a lesion detection process of calculating, as a first image feature amount, an average luminance value in the region of the specific organ of a plurality of second tomographic images obtained by imaging an inside of a second human body at predetermined intervals in the height direction of the second human body, acquiring a probability that each of the unit image regions included in the plurality of second tomographic images is the specific lesion region from each of the plurality of first lesion identification models, by inputting the plurality of second tomographic images to each of the plurality of first lesion identification models, determining a weight coefficient corresponding to each of the plurality of first lesion identification models based on the first image feature amount, calculating, for each of the unit image regions included in the plurality of second tomographic images, an integration value by integrating the probabilities acquired from each of the plurality of first lesion identification models and multiplied by the weight coefficient, and detecting the specific lesion region from each of the plurality of second tomographic images based on the integration value.
  2. The lesion detection apparatus (100) according to claim 1, wherein in the calculating of the integration value, the lower the average luminance value is, the higher the weight coefficient that is set for the probability from a first lesion identification model generated by using a first tomographic image group which has a higher accumulation degree of fat.
  3. The lesion detection apparatus (100) according to claim 1, wherein the learning process includes a process of calculating, for each second tomographic image group into which the plurality of first tomographic images are classified for each of the first human bodies, a second image feature amount of the same type as the first image feature amount, classifying the plurality of first tomographic images into a plurality of third tomographic image groups according to a range of the second image feature amount, and generating a plurality of second lesion identification models for identifying whether or not each of the unit image regions included in the tomographic image as the identification target is the specific lesion region by machine learning which uses each of the plurality of third tomographic image groups as learning data, the lesion detection process includes a process of acquiring the probability for each of the unit image regions included in the plurality of second tomographic images from each of the plurality of second lesion identification models, by inputting the plurality of second tomographic images to each of the plurality of second lesion identification models, and in the calculating of the integration value, for each of the unit image regions included in the plurality of second tomographic images, the integration value is calculated by integrating the probabilities acquired from each of the plurality of first lesion identification models and each of the plurality of second lesion identification models, based on the first image feature amount.
  4. A lesion detection program for causing a computer to execute: a learning process of classifying a plurality of first tomographic images obtained by imaging an inside of a plurality of first human bodies at predetermined intervals in the height direction of the first human bodies into a plurality of first tomographic image groups on the basis of the degree of accumulation of fat in a specific organ, and generating a plurality of first lesion identification models for identifying whether or not each unit image region included in a tomographic image as an identification target is a specific lesion region by machine learning which uses each of the plurality of first tomographic image groups as learning data, the unit image region being a pixel or a set of a predetermined number of two or more adjacent pixels; and a lesion detection process of calculating, as a first image feature amount, an average luminance value in the region of the specific organ of a plurality of second tomographic images obtained by imaging an inside of a second human body at predetermined intervals in the height direction of the second human body, acquiring a probability that each of the unit image regions included in the plurality of second tomographic images is the specific lesion region from each of the plurality of first lesion identification models, by inputting the plurality of second tomographic images to each of the plurality of first lesion identification models, determining a weight coefficient corresponding to each of the plurality of first lesion identification models based on the first image feature amount, calculating, for each of the unit image regions included in the plurality of second tomographic images, an integration value by integrating the probabilities acquired from each of the plurality of first lesion identification models and multiplied by the weight coefficient, and detecting the specific lesion region from each of the plurality of second tomographic images based on the integration value.
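The weighted-ensemble integration described in claim 1 can be sketched in code. The sketch below is illustrative only: the function and variable names, the shape of the weight table, and the detection threshold are assumptions for the example and are not taken from the patent.

```python
# Hypothetical sketch of the claim-1 lesion detection process: per-model
# probability maps are combined with weight coefficients chosen from the
# average luminance of the specific organ (the first image feature amount).
import numpy as np

def detect_lesions(prob_maps, avg_luminance, weight_table, threshold=0.5):
    """Return a boolean lesion mask from a weighted ensemble.

    prob_maps     : list of 2-D arrays, one per lesion identification model;
                    each entry is the probability that a unit image region
                    is the specific lesion region.
    avg_luminance : average luminance value in the region of the specific
                    organ (the first image feature amount).
    weight_table  : list of (luminance_upper_bound, weights) pairs, sorted by
                    bound; `weights` holds one coefficient per model.
    threshold     : integration value at or above which a region is flagged.
    """
    # Determine the weight coefficients for this luminance range.
    for upper_bound, weights in weight_table:
        if avg_luminance <= upper_bound:
            break
    # Integration value: sum of per-model probabilities, each multiplied
    # by its weight coefficient.
    integration = sum(w * p for w, p in zip(weights, prob_maps))
    return integration >= threshold

# Example with two models: for low luminance (high fat accumulation),
# the table favors the model trained on the high-fat image group.
table = [(40.0, np.array([0.7, 0.3])), (np.inf, np.array([0.3, 0.7]))]
maps = [np.array([[0.9, 0.1]]), np.array([[0.8, 0.2]])]
mask = detect_lesions(maps, avg_luminance=35.0, weight_table=table)
# mask is [[True, False]]: 0.7*0.9 + 0.3*0.8 = 0.87 >= 0.5 for the first
# region, while 0.7*0.1 + 0.3*0.2 = 0.13 falls below the threshold.
```

This mirrors dependent claim 2: as the average luminance decreases, the table assigns a higher weight to the model generated from the tomographic image group with the higher fat accumulation degree.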

Description

FIELD

The embodiments discussed herein are related to a lesion detection method and a non-transitory computer-readable recording medium storing a lesion detection program.

BACKGROUND

Medical images obtained by computed tomography (CT), magnetic resonance imaging (MRI), or the like are widely used for the diagnosis of various diseases. In image diagnosis using such medical images, a doctor has to interpret a large number of images, which imposes a heavy burden on the doctor. For this reason, there is a demand for techniques that support the doctor's diagnosis work by computer in some form. One example of such a technique detects a lesion region from a medical image by using a trained model generated by machine learning. For example, an artificial intelligence pipeline for lesion detection and classification including a plurality of trained machine learning models has been proposed.

Prior art document D1, "Brain Tumor Detection and Segmentation from Magnetic Resonance Image Data Using Ensemble Learning Methods", GYORFI AGNES, XP033667947, describes an evaluation framework designed to test the accuracy and efficiency of ensemble learning algorithms deployed for brain tumor segmentation using the BraTS 2016 training data set. Within this category of machine learning algorithms, random forest was found the most appropriate, both in terms of precision and runtime.

Prior art document D2, CN 109 635 664 A, relates to a fatigue driving detection method based on illumination detection and belongs to the field of computer detection. In the training process, the collected pictures are classified according to illumination intensity, and the picture sets of the different illumination intensities are trained separately to obtain corresponding classification models; in the prediction process, the current illumination intensity determines which classification model is used for prediction, so that the classification is more refined and the detection accuracy is higher.

Citation List

Patent Literature: U.S. Patent Application Publication No. 2022/0270254

TECHNICAL PROBLEM

In a detection process of a lesion region using a trained model, detection accuracy can be increased as the difference in pixel value (for example, luminance value) between the lesion region and a normal region that is not a lesion becomes larger. However, in an actually captured medical image the difference in pixel value between the lesion region and the normal region is often small, which makes it difficult to increase the detection accuracy in some cases. When a medical event that may affect the pixel value occurs, such as fat accumulating in an organ, the difference in pixel value between the lesion region and the normal region may be smaller than usual; in some cases, the magnitude relationship in pixel value between the lesion region and the normal region may even be reversed. According to one aspect, an object of the present disclosure is to provide a lesion detection method and a lesion detection program capable of improving detection accuracy of a lesion region from a medical image.

SOLUTION TO PROBLEM

This object is accomplished by the subject-matter of the independent claims. The dependent claims concern particular embodiments. The object and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the claims. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention.
EFFECTS OF INVENTION

According to one aspect, detection accuracy of a lesion region from a medical image is improved.

BRIEF DESCRIPTION OF DRAWINGS

FIG. 1 is a diagram illustrating a configuration example and a process example of an information processing apparatus according to a first embodiment;
FIG. 2 is a diagram illustrating a configuration example of a diagnosis support system according to a second embodiment;
FIG. 3 is a diagram illustrating an example of a machine learning model for detecting a lesion region;
FIG. 4 is a diagram illustrating a configuration example of a processing function provided by a diagnosis support apparatus;
FIG. 5 is a diagram illustrating a data configuration example of a learning data set;
FIG. 6 is a diagram for describing a generation process of a lesion identification model;
FIG. 7 is a diagram for describing a lesion detection process using the lesion identification model;
FIG. 8 is an example of a weight table indicating a correspondence relationship between an image feature amount and a weight coefficient;
FIG. 9 is an example of a flowchart illustrating a procedure of a model generation process;
FIG. 10 is an example of a flowchart illus