EP-3586306-B1 - METHOD AND APPARATUS FOR PROCESSING HISTOLOGICAL IMAGE CAPTURED BY MEDICAL IMAGING DEVICE

EP 3586306 B1

Inventors

  • SULLIVAN, Kenneth Mark
  • KANG, Jin Man

Dates

Publication Date: 2026-05-06
Application Date: 2018-02-21

Claims (9)

  1. A method, performed by a computer, for processing a histological image captured by a medical imaging device, the method comprising: receiving the histological image including at least one type of tissue; determining, by a first autoencoder, a candidate type of tissue in the histological image; identifying, by the first autoencoder, a target region corresponding to the candidate type of tissue in the histological image; identifying at least one target histological image corresponding to the target region in the histological image based on a predictive model associating one or more sample histological images with one or more sample target histological images; and applying one or more display characteristics associated with the identified at least one target histological image to the histological image, wherein a first unsupervised learning model is generated by training the first autoencoder based on a first set of sample histological images, and a second unsupervised learning model is generated by training a second autoencoder based on a second set of sample target histological images, wherein the predictive model is generated based on the first unsupervised learning model and the second unsupervised learning model, wherein one or more anatomical locations of M sample histological images in the first set of sample histological images are aligned to match one or more anatomical locations of N sample target histological images in the second set of sample target histological images, M and N being positive integers, wherein the predictive model comprises data regarding one or more features indicative of one or more display characteristics and is trained by associating one or more features from the N sample target histological images with one or more features from the M sample histological images, wherein the identified at least one target histological image is at least one image of H&E (hematoxylin and eosin) stain, wherein the histological image is captured by an 
Optical Coherence Tomography (OCT) device, and wherein the histological image is modified to appear in a form of H&E stain.
  2. The method of Claim 1, further comprising generating a modified image of the histological image including the applied display characteristics.
  3. The method of Claim 1, wherein each of the one or more sample histological images comprises a first set of patches, and each of the one or more sample target histological images comprises a second set of patches, wherein the first set of patches is associated with the second set of patches in the predictive model, and wherein applying the one or more display characteristics comprises modifying a plurality of patches in the received histological image based on a set of patches in the identified at least one target histological image.
  4. The method of Claim 1, wherein identifying the target region corresponding to the candidate type of tissue comprises identifying a plurality of regions comprising the target region in the histological image, wherein each of the one or more sample histological images comprises a first set of regions, and each of the one or more sample target histological images comprises a second set of regions, wherein the first set of regions is associated with the second set of regions in the predictive model, and wherein applying one or more display characteristics comprises modifying the plurality of regions in the received histological image based on the second set of regions in the identified at least one target histological image.
  5. The method of Claim 1, wherein the first unsupervised learning model is trained based on one or more features associated with the target region in the histological image.
  6. The method of Claim 1, wherein each of the first unsupervised learning model, the second unsupervised learning model, and the predictive model comprises a multilayer model defined by one or more model hyperparameters and one or more weights of an artificial neural network.
  7. An image processing device for processing a histological image captured by a medical imaging device, the image processing device comprising: a first autoencoder configured to: receive the histological image including at least one type of tissue; determine a candidate type of tissue in the histological image; and identify a target region corresponding to the candidate type of tissue in the histological image; and an image generating unit configured to: identify at least one target histological image corresponding to the target region in the histological image based on a predictive model associating one or more sample histological images with one or more sample target histological images; and apply one or more display characteristics associated with the identified at least one target histological image to the histological image, wherein a first unsupervised learning model is generated by training the first autoencoder based on a first set of sample histological images, and a second unsupervised learning model is generated by training a second autoencoder based on a second set of sample target histological images, wherein the predictive model is generated based on the first unsupervised learning model and the second unsupervised learning model, wherein one or more anatomical locations of M sample histological images in the first set of sample histological images are aligned to match one or more anatomical locations of N sample target histological images in the second set of sample target histological images, M and N being positive integers, and wherein the predictive model comprises data regarding one or more features indicative of one or more display characteristics and is trained by associating one or more features from the N sample target histological images with one or more features from the M sample histological images, wherein the identified at least one target histological image is at least one image of H&E (hematoxylin and eosin) stain, wherein the 
histological image is captured by an Optical Coherence Tomography (OCT) device, and wherein the histological image is modified to appear in a form of H&E stain.
  8. The image processing device of Claim 7, wherein the image generating unit is further configured to generate a modified image of the histological image including the applied display characteristics.
  9. The image processing device of Claim 7, wherein the first autoencoder is further configured to identify a plurality of regions comprising the target region in the histological image, wherein each of the one or more sample histological images comprises a first set of regions, and each of the one or more sample target histological images comprises a second set of regions, wherein the first set of regions is associated with the second set of regions in the predictive model, and wherein the image generating unit is further configured to modify the plurality of regions in the received histological image based on a set of regions in the identified at least one target histological image.
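The pipeline recited in the claims — train a first autoencoder on sample histological images and a second on anatomically aligned sample target (H&E) images, associate features of the two sets to form a predictive model, identify the nearest target image for a new input, and apply its display characteristics — can be sketched as follows. This is purely illustrative: linear autoencoders stand in for the trained neural networks, the least-squares feature map and statistics-matching step are assumptions made for this sketch, and none of the names correspond to the patented implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear_autoencoder(images, n_features):
    """Unsupervised-training stand-in: the optimal linear autoencoder spans
    the top principal directions, recovered here with an SVD."""
    X = images.reshape(len(images), -1).astype(float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_features]          # (mean, encoder weights)

def encode(mean, W, images):
    X = images.reshape(len(images), -1).astype(float)
    return (X - mean) @ W.T               # latent features

# M sample histological (e.g. OCT) images and N anatomically aligned
# sample target (H&E-stained) images; toy random data with M == N.
M = N = 8
oct_samples = rng.random((M, 16, 16))
he_targets = rng.random((N, 16, 16, 3))

# The first and second unsupervised learning models (the two autoencoders).
oct_mean, oct_W = train_linear_autoencoder(oct_samples, 4)
he_mean, he_W = train_linear_autoencoder(he_targets, 4)

# Predictive model: associate features of the M source images with features
# of the N aligned targets -- here a simple least-squares map (an assumption).
src_feat = encode(oct_mean, oct_W, oct_samples)
tgt_feat = encode(he_mean, he_W, he_targets)
A, *_ = np.linalg.lstsq(src_feat, tgt_feat, rcond=None)

def identify_target(new_image):
    """Project a new image's features through the predictive model and return
    the index of the nearest sample target (the 'identified' H&E image)."""
    predicted = encode(oct_mean, oct_W, new_image[None]) @ A
    return int(np.argmin(np.linalg.norm(tgt_feat - predicted, axis=1)))

def apply_display_characteristics(image, target):
    """Crude stand-in for the final claim step: match the input's grey-level
    statistics to the identified target (a real system works patch-wise)."""
    t = target.mean(axis=-1)              # collapse colour channels
    out = (image - image.mean()) / (image.std() + 1e-8)
    return out * t.std() + t.mean()

query = rng.random((16, 16))
idx = identify_target(query)
modified = apply_display_characteristics(query, he_targets[idx])
print("identified target:", idx)
```

In the claimed system the associations are learned per patch or per region (claims 3, 4, and 9); the single global statistics match above merely illustrates where that step sits in the flow.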

Description

TECHNICAL FIELD

This application is based upon and claims the benefit of priority from prior U.S. Provisional Patent Application No. 62/461,490, filed Feb. 21, 2017, and U.S. Provisional Patent Application No. 62/563,751, filed Sep. 27, 2017. The present disclosure relates generally to processing a histological image for display, and more specifically, to generating a modified image of the histological image using a semi-supervised learning model.

BACKGROUND ART

In histology, H&E (hematoxylin and eosin) stain has been widely used in medical diagnosis. For example, to examine a suspected lesion such as a cancer in a patient's body, a doctor may obtain a sample of the suspected lesion and conduct a predetermined procedure for generating a micrograph of H&E stain. The doctor may then view the micrograph under a microscope to diagnose the patient's disease. To obtain a micrograph of H&E stain, a sample of a suspected lesion is typically sent to a histology laboratory, where a series of predetermined procedures is performed. These procedures usually take one or more days. In some cases, a prompt diagnosis may be required during a surgical operation to provide timely treatment; however, under the above procedures, a disease of the suspected lesion cannot be diagnosed instantly during the operation. Meanwhile, images such as CT (Computed Tomography) and MRI (Magnetic Resonance Imaging) images may be captured and used for a definitive diagnosis. However, capturing such images may be relatively expensive for the patient. In addition, CT and MRI devices may be too large to be used for imaging a portion of a human body during an operation; that is, the devices may not be suitable for being located in, or moved to, an operating room during an operation.
For instant and quick diagnosis, images of relatively low quality captured by a medical imaging device such as an OCT (Optical Coherence Tomography) device have been used to locate a suspected lesion in a patient. Such an image can be obtained more cheaply than CT and/or MRI images and can be generated more rapidly than a micrograph of H&E stain. However, such an image may lack the visibility needed to accurately diagnose a disease of one or more types of tissue in the image. US 2013/317369 discloses a method for virtually staining biological tissues according to the prior art.

SUMMARY

A method, performed by a computer, for processing a histological image captured by a medical imaging device according to the present invention is claimed in claim 1. An image processing device for processing a histological image captured by a medical imaging device according to the present invention is claimed in claim 7. Claims 2-6 and 8-9 disclose preferred embodiments of the invention. Embodiments disclosed in the present disclosure relate to generating a modified image of a histological image captured by a medical imaging device using a predictive model that may be a semi-supervised learning model. According to one aspect of the present disclosure, a method, performed by a computer, for processing one or more histological images captured by a medical imaging device is disclosed. In this method, each of the histological images including at least one type of tissue is received, and at least one candidate type of tissue in each of the histological images is determined by a first autoencoder. At least one target region corresponding to the at least one candidate type of tissue in the histological image is identified by the first autoencoder.
At least one target histological image corresponding to the target region in each of the histological images is identified based on a predictive model associating one or more sample histological images with one or more sample target histological images. One or more display characteristics associated with the identified target histological image or images are applied to the histological image. This disclosure also describes a device and a computer-readable medium relating to this method. One aspect of the present disclosure is related to a method, performed by a computer, for processing a histological image captured by a medical imaging device, the method comprising: receiving the histological image including at least one type of tissue; determining, by a first autoencoder, a candidate type of tissue in the histological image; identifying, by the first autoencoder, a target region corresponding to the candidate type of tissue in the histological image; identifying at least one target histological image corresponding to the target region in the histological image based on a predictive model associating one or more sample histological images with one or more sample target histological images; and applying