US-12620065-B2 - Systems and methods for automatic cell identification using images of human epithelial tissue structure

US12620065B2

Abstract

Systems and methods for improving the image quality of images of epithelial tissue structures are disclosed. The systems include training a first cycle-GAN model and a second cycle-GAN model simultaneously, where the first cycle-GAN model is trained to remove noise from an image and the second cycle-GAN model is trained to learn the structure of the image. Additional systems and methods include deploying the trained cycle-GAN model to identify an unknown image segment and/or generate a protocol for following the identified skin care treatment recommendation for an identified image segment.

Inventors

  • Georgios N. Stamatas
  • Imane Lboukili
  • Xavier Descombes

Assignees

  • KENVUE BRANDS LLC
  • Inria (Institut national de recherche en informatique et en automatique)

Dates

Publication Date
2026-05-05
Application Date
2024-01-24

Claims (7)

  1. A computer-implemented method, comprising:
     training a first cycle-GAN model, wherein the first cycle-GAN model comprises a first generator GB2A, a second generator GA2B, a first discriminator DA, and a second discriminator DB1, and wherein training the first cycle-GAN model comprises: receiving, by the first generator GB2A, a real image as a first input, generating, by the first generator GB2A, a first synthetic image as a first output, receiving, by the first discriminator DA, the real image and the first synthetic image from the first generator GB2A, informing, by the first discriminator DA, a likelihood of each of the real image and the first synthetic image as being real or synthetic, receiving, by the second generator GA2B, the real image and the first synthetic image, generating, by the second generator GA2B, a first filtered synthetic image, receiving, by the second discriminator DB1, the real image and the first filtered synthetic image from the second generator GA2B, and informing, by the second discriminator DB1, a likelihood of each of the real image and the first filtered synthetic image as being real or synthetic, wherein the first cycle-GAN model learns noise through learning a translation from the real image towards a first binary segmentation; and
     training a second cycle-GAN model, wherein the second cycle-GAN model comprises a first generator GC2B, a second generator GB2C, a first discriminator DC, and a second discriminator DB2, and wherein training the second cycle-GAN model comprises: receiving, by the first generator GC2B, a Gabor-filtered image as a second input, generating, by the first generator GC2B, a second synthetic image as a second output, wherein the second output depends on the first output, receiving, by the first discriminator DC, the Gabor-filtered image and the second synthetic image from the first generator GC2B, informing, by the first discriminator DC, a likelihood of each of the Gabor-filtered image and the second synthetic image as being Gabor-filtered or synthetic, receiving, by the second generator GB2C, the Gabor-filtered image and the second synthetic image, generating, by the second generator GB2C, a second filtered synthetic image, receiving, by the second discriminator DB2, the Gabor-filtered image and the second filtered synthetic image from the second generator GB2C, and informing, by the second discriminator DB2, a likelihood of each of the Gabor-filtered image and the second filtered synthetic image as being Gabor-filtered or synthetic, wherein the second cycle-GAN model learns a structure of at least one of the real image or the Gabor-filtered image through learning a translation from the Gabor-filtered image towards a second binary segmentation.
  2. The computer-implemented method of claim 1, wherein at least one of the real image, the Gabor-filtered image, the first synthetic image, or the second synthetic image is an epithelial structure image.
  3. The computer-implemented method of claim 2, wherein the structure learned by the second cycle-GAN model includes a position and integrity of membranes of the epithelial structure image.
  4. The computer-implemented method of claim 3, wherein: the position and integrity of membranes of the epithelial structure image include cell coordinates information, and learning the structure of at least one of the real image or the Gabor-filtered image includes extracting values of epithelial tissue structure parameters.
  5. The computer-implemented method of claim 4, wherein the epithelial tissue structure parameters are selected from cell area, perimeter, cell density, distribution of nearest neighbors, and distributions of distances between neighbors.
  6. The computer-implemented method of claim 1, wherein the real image is acquired by non-invasive or minimally invasive imaging with cellular resolution.
  7. The computer-implemented method of claim 6, wherein the non-invasive or minimally invasive imaging is selected from a group consisting of reflectance confocal microscopy, fluorescence confocal microscopy, fluorescence lifetime microscopy, multiphoton fluorescence microscopy, second harmonic generation microscopy, chemiluminescence imaging, photoacoustic microscopy, magnetic resonance imaging, optical coherence tomography, line-field optical coherence tomography, and photo-thermal microscopy.
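Claims 4 and 5 recite extracting epithelial tissue structure parameters (cell area, perimeter, cell density, and neighbor-distance distributions) from the learned cell coordinates. The patent does not disclose a specific implementation; the following is a minimal illustrative sketch, assuming a labeled integer segmentation mask where 0 is background and each positive label is one cell. All function names here are illustrative, not from the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def cell_areas(labels):
    """Pixel area of each labeled cell (label 0 = background)."""
    counts = np.bincount(labels.ravel())
    return {lab: int(counts[lab]) for lab in range(1, counts.size) if counts[lab]}

def cell_centroids(labels):
    """Centroid (row, col) of each labeled cell."""
    cents = {}
    for lab in np.unique(labels):
        if lab == 0:
            continue
        rows, cols = np.nonzero(labels == lab)
        cents[int(lab)] = (rows.mean(), cols.mean())
    return cents

def nearest_neighbor_distances(centroids):
    """Distance from each cell centroid to its nearest neighboring centroid."""
    pts = np.array(list(centroids.values()))
    tree = cKDTree(pts)
    dists, _ = tree.query(pts, k=2)  # k=2: the first hit is the point itself
    return dists[:, 1]

def cell_density(labels):
    """Cells per unit image area (in pixel units)."""
    n_cells = len(np.unique(labels)) - (1 if 0 in labels else 0)
    return n_cells / labels.size

# Toy 6x6 segmentation with two square cells
labels = np.zeros((6, 6), dtype=int)
labels[0:2, 0:2] = 1
labels[4:6, 4:6] = 2
```

Distributions of distances between neighbors (claim 5) would follow the same pattern with `tree.query(pts, k=n)` for a larger `n`.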

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Application No. 63/447,374, filed on Feb. 22, 2023, which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

The present disclosure relates generally to systems and methods for the analysis of images of human epithelial tissue structure. More specifically, the present disclosure relates to automatically identifying cells and features of cells by executing parallel models that remove noise from images while maintaining the position and integrity of cell membranes.

BACKGROUND

Human skin is a complex, multi-layered, and dynamic system that provides a protective covering defining the interactive boundary between an organism and the environment. It is the largest organ of the body and is vitally important to health. Skin comprises three principal layers: the epidermis, the dermis, and a layer of subcutaneous fat also known as the hypodermis. The epidermis is in contact with the external environment; it protects the body from external aggression, whether chemical, mechanical, physical, or infectious, prevents the loss of water, and maintains internal homeostasis. The dermis provides the epidermis with mechanical support and is a nurturing element of the skin.

Accurate segmentation and identification of epidermal cells in reflectance confocal microscopy (RCM) images is important in the study of the epidermal architecture and topology of both healthy and diseased skin. Current methods of analyzing RCM images either perform this process manually, which is time-consuming and subject to human error and inter-expert variability in interpretation, or are hindered by low image quality due to noise and heterogeneity, and so fail to accurately recognize and localize cells and/or the morphological features of cells, such as keratinocytes.
Thus, there is a need for improved systems and methods that identify cells using images of human epithelial tissue structures.

SUMMARY

This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.

Various examples of the present disclosure provide systems and methods as described herein. In one example, a computer-implemented method is provided. The computer-implemented method includes training a first cycle-GAN model, wherein the first cycle-GAN model comprises a first generator GB2A, a second generator GA2B, a first discriminator DA, and a second discriminator DB1. Training the first cycle-GAN model comprises receiving, by the first generator GB2A, a real image as a first input, generating, by the first generator GB2A, a first synthetic image as a first output, receiving, by the first discriminator DA, the real image and the first synthetic image from the first generator GB2A, informing, by the first discriminator DA, a likelihood of each of the real image and the first synthetic image as being real or synthetic, receiving, by the second generator GA2B, the real image and the first synthetic image, generating, by the second generator GA2B, a first filtered synthetic image, receiving, by the second discriminator DB1, the real image and the first filtered synthetic image from the second generator GA2B, and informing, by the second discriminator DB1, a likelihood of each of the real image and the first filtered synthetic image as being real or synthetic, wherein the first cycle-GAN model learns noise through learning a translation from the real image towards binary segmentations.
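The data flow of the first cycle-GAN described above can be sketched as follows. This is not an implementation of the patented method: the generators and discriminators here are placeholder NumPy functions standing in for trained convolutional networks, used only to make the pass concrete, i.e., a real image enters GB2A, the discriminator DA scores the synthetic output, GA2B maps back toward the real-image domain, and a cycle-consistency loss compares the round trip with the input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder "networks": in the disclosure these would be trained CNNs.
def generator_B2A(image):
    """Stand-in for G_B2A: map a (noisy) real image toward a binary segmentation."""
    return (image > image.mean()).astype(float)

def generator_A2B(segmentation):
    """Stand-in for G_A2B: map a segmentation back toward the real-image domain."""
    return segmentation + 0.05 * rng.standard_normal(segmentation.shape)

def discriminator_A(image):
    """Stand-in for D_A: likelihood in [0, 1] that the input belongs to domain A.
    Crude heuristic: binary-looking images score high."""
    return float(np.mean((image < 0.1) | (image > 0.9)))

def cycle_consistency_loss(original, reconstructed):
    """L1 cycle loss: the round trip B -> A -> B should recover the input."""
    return float(np.mean(np.abs(original - reconstructed)))

real_image = rng.random((8, 8))            # domain B: RCM-like real image
synthetic_seg = generator_B2A(real_image)  # first synthetic image (domain A)
score = discriminator_A(synthetic_seg)     # D_A judges real vs. synthetic
reconstructed = generator_A2B(synthetic_seg)          # back toward domain B
loss = cycle_consistency_loss(real_image, reconstructed)
```

In an actual cycle-GAN, `loss` would be combined with the adversarial losses from both discriminators and backpropagated through both generators; the placeholder functions here have no trainable parameters.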
The computer-implemented method further includes training a second cycle-GAN model, wherein the second cycle-GAN model comprises a first generator GC2B, a second generator GB2C, a first discriminator DC, and a second discriminator DB2. Training the second cycle-GAN model comprises receiving, by the first generator GC2B, a Gabor-filtered image as a second input, generating, by the first generator GC2B, a second synthetic image as a second output, receiving, by the first discriminator DC, the Gabor-filtered image and the second synthetic image from the first generator GC2B, informing, by the first discriminator DC, a likelihood of each of the Gabor-filtered image and the second synthetic image as being Gabor-filtered or synthetic, receiving, by the second generator GB2C, the Gabor-filtered image and the second synthetic image, generating, by the second generator GB2C, a second filtered synthetic image, receiving, by the second discriminator DB2, the Gabor-filtered image and the second filtered synthetic image from the second generator GB2C, and informing, by the second discriminator DB2, a likelihood of each of the Gabor-filtered image and the second filtered synthetic image as being Gabor-filtered or synthetic.
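The second cycle-GAN takes a Gabor-filtered image as its input. The patent does not specify the filter parameters, so the following is a generic sketch of Gabor filtering, built directly from NumPy to show what such an input looks like: a bank of Gaussian-windowed cosine gratings at several orientations, whose maximum response emphasizes oriented, ridge-like structures such as cell membranes. All parameter values below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(frequency, theta, sigma, size):
    """Real part of a 2-D Gabor kernel: a Gaussian-windowed cosine grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the grating oscillates along angle theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * frequency * x_t)

def gabor_filter_bank(image, frequencies, n_orientations=4, sigma=3.0, size=15):
    """Maximum response over a small bank of orientations and frequencies."""
    responses = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            kern = gabor_kernel(f, theta, sigma, size)
            responses.append(
                convolve2d(image, kern, mode="same", boundary="symm"))
    return np.max(np.stack(responses), axis=0)

# Toy image: vertical stripes, a crude stand-in for cell-border ridges.
img = np.tile(np.cos(2 * np.pi * 0.2 * np.arange(32)), (32, 1))
filtered = gabor_filter_bank(img, frequencies=[0.2])
```

In practice, libraries such as scikit-image provide ready-made Gabor filters; the hand-rolled kernel above is only meant to make the "Gabor-filtered image" of the claims concrete.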