EP-4740217-A1 - PERSONALIZED IMAGE MODIFICATION FOR CLINICAL SETTINGS

EP 4740217 A1

Abstract

A method comprises receiving a first image comprising first clinical information and first nonclinical information of a first patient and receiving a second image comprising second clinical information and second non-clinical information of the first patient or a second patient. The method further comprises generating a third image of the first patient based on the first image and the second image, wherein one of a) the third image is generated based on the first clinical information and the second non-clinical information such that the third image resembles a combination of the first clinical information and the second non-clinical information or b) the third image is generated based on the second clinical information and the first non-clinical information such that the third image resembles a combination of the second clinical information and the first non-clinical information.

Inventors

  • CETIN, Doruk
  • SOUCHET, Alain
  • MEYER, Eric Paul
  • HUBER, Niko Benjamin
  • STAMENKOVIC, Petar
  • EL HELOU, Majed
  • ZÜND, Fabio

Assignees

  • Align Technology, Inc.

Dates

Publication Date
2026-05-13
Application Date
2024-07-03

Claims (20)

  1. A method comprising: receiving a first image comprising first clinical information and first non-clinical information of a first patient; receiving a second image comprising second clinical information and second non-clinical information of the first patient or a second patient; and generating a third image of the first patient based on the first image and the second image, wherein one of a) the third image is generated based on the first clinical information and the second non-clinical information such that the third image resembles a combination of the first clinical information and the second non-clinical information or b) the third image is generated based on the second clinical information and the first non-clinical information such that the third image resembles a combination of the second clinical information and the first non-clinical information.
  2. The method of claim 1, wherein the first image was generated at a first time, and wherein the second image is of the first patient and was generated at a second time.
  3. The method of claim 2, wherein the first time corresponds to a first patient visit of the first patient and the second time corresponds to a second patient visit of the first patient.
  4. The method of claim 2, wherein the first clinical information comprises a first condition of a dentition of the first patient, and wherein the second clinical information comprises a second condition of the dentition of the first patient.
  5. The method of claim 4, wherein the first condition of the dentition of the first patient corresponds to pre-treatment or a first stage of treatment, and wherein the second condition of the dentition of the first patient corresponds to a second stage of treatment.
  6. The method of claim 1, wherein the first image comprises a post-treatment image of the first patient after orthodontic treatment, wherein the second image comprises a pre-treatment image of the second patient, wherein the first clinical information of the first patient comprises a dentition of the first patient after the orthodontic treatment, and wherein the second non-clinical information comprises an appearance of the second patient other than a dentition of the second patient.
  7. The method of claim 1, wherein the first non-clinical information comprises a first appearance of the first patient, and wherein the second non-clinical information comprises a second appearance of the first patient or the second patient.
  8. The method of claim 7, wherein: the first appearance comprises at least one of a first pose, a first facial angle, a first makeup application, a first gender, a first facial expression, first clothing, first lighting conditions, a first background, a first haircut, a first weight, a first hair color, a first skin tone, a first age, a first facial structure, or first wearable accessories; and the second appearance comprises at least one of a second pose, a second facial angle, a second makeup application, a second gender, a second facial expression, second clothing, second lighting conditions, a second background, a second haircut, a second weight, a second hair color, a second skin tone, a second age, a second facial structure, or second wearable accessories.
  9. The method of claim 7, wherein the first image is a first facial image, wherein the second image is a second facial image, wherein the first appearance comprises a first facial appearance, and wherein the second appearance comprises a second facial appearance.
  10. The method of claim 1, wherein generating the third image comprises: processing the first image and the second image using one or more trained machine learning models that extract at least one of the first clinical information or the first non-clinical information from the first image and at least one of the second clinical information or the second non-clinical information from the second image, and use the extracted information to generate the third image.
  11. The method of claim 10, wherein the one or more trained machine learning models comprise a generative model.
  12. The method of claim 10, wherein the one or more trained machine learning models comprise a plurality of machine learning models each trained to generate a different feature for the third image and an additional machine learning model trained to process outputs of the plurality of machine learning models to output a photorealistic combination of the outputs of the plurality of machine learning models.
  13. The method of claim 1, wherein the third image comprises a photorealistic and clinically relevant synthetic image showing the first patient with specified changes in an appearance of the first patient attributable to the second clinical information of the second patient.
  14. The method of claim 1, wherein the first non-clinical information and the second non-clinical information each comprises a plurality of properties, the method further comprising: receiving selection of at least one of a) one or more of the plurality of properties to use from the first non-clinical information or b) one or more of the plurality of properties to use from the second non-clinical information in generation of the third image, wherein the third image is generated in accordance with the selection.
  15. The method of claim 1, further comprising: segmenting at least one of the first image or the second image into a plurality of features; and using segmentation information determined from the segmenting in the generating of the third image.
  16. The method of claim 15, wherein segmenting at least one of the first image or the second image into the plurality of features comprises processing at least one of the first image or the second image by a trained machine learning model that outputs the segmentation information.
  17. The method of claim 1, further comprising: receiving a fourth image comprising third clinical information and third non-clinical information of the first patient, the second patient, or a third patient; wherein the third image is further generated from the third non-clinical information.
  18. The method of claim 1, wherein the second image is of the first patient, the method further comprising: receiving a temporal series of images of the first patient, each image in the temporal series of images comprising additional clinical information and additional non-clinical information of the first patient; and for each respective image in the temporal series of images, generating a modified version of the image comprising the first non-clinical information from the first image and the additional clinical information from the respective image.
  19. The method of claim 1, further comprising: receiving an input selecting values of one or more properties of non-clinical information to apply for the third image, wherein the values of the one or more properties of the non-clinical information do not correspond to properties of the first non-clinical information or the second non-clinical information; wherein the selected values of the one or more properties of the non-clinical information are reflected in the third image.
  20. The method of claim 19, wherein the selected values of the one or more properties comprise at least one of a selected age, a selected weight, or a selected illumination condition.
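For illustration only, the attribute recombination recited in claim 1 (options a and b) can be sketched as a toy Python model. Everything here is an assumption for exposition: real embodiments operate on pixel data using trained machine learning models (claims 10-12), whereas this sketch represents an "image" as a dictionary of labeled attributes, and the `extract_info` helper and the single clinical key `"dentition"` are hypothetical.

```python
def extract_info(image: dict) -> tuple[dict, dict]:
    """Split an 'image' into clinical and non-clinical attributes.

    Hypothetical rule: only the dentition is clinical; everything
    else (pose, hair color, age, etc.) is non-clinical (claim 8).
    """
    clinical_keys = {"dentition"}
    clinical = {k: v for k, v in image.items() if k in clinical_keys}
    non_clinical = {k: v for k, v in image.items() if k not in clinical_keys}
    return clinical, non_clinical


def generate_third_image(first: dict, second: dict,
                         use_first_clinical: bool) -> dict:
    """Combine clinical info from one image with non-clinical info
    from the other, mirroring options a) and b) of claim 1."""
    c1, n1 = extract_info(first)
    c2, n2 = extract_info(second)
    if use_first_clinical:
        # Option a: first clinical + second non-clinical.
        return {**c1, **n2}
    # Option b: second clinical + first non-clinical.
    return {**c2, **n1}


# Usage in the spirit of claim 6: show a prospective patient their own
# appearance combined with a prior patient's post-treatment dentition.
prior = {"dentition": "post-treatment", "hair_color": "gray", "age": 60}
current = {"dentition": "pre-treatment", "hair_color": "brown", "age": 25}
result = generate_third_image(prior, current, use_first_clinical=True)
# result == {"dentition": "post-treatment", "hair_color": "brown", "age": 25}
```

In an actual system, the two extraction steps and the final combination would each be performed by trained models rather than dictionary merges, and the output would be a photorealistic synthetic image (claim 13) rather than an attribute set.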

Description

PERSONALIZED IMAGE MODIFICATION FOR CLINICAL SETTINGS

TECHNICAL FIELD

[0001] Embodiments of the present invention relate to the field of dentistry and, in particular, to the generation of patient images that are personalized to a dental patient.

BACKGROUND

[0002] When a dentist or orthodontist is engaging with current and/or potential patients, it is often helpful to show those patients before-and-after treatment images of previous patients with similar malocclusions who have undergone successful treatment. However, those previous patients often look very different from the current patient. For example, the current patient may be a young woman and the previous patient may be an old man with a beard. Such differences can make it difficult for the current or potential patient to properly visualize how they might look after successful treatment. The more differences there are between the current patient and the prior patients whose images are shown, the more distracting those differences become, detracting from the current patient's ability to visualize themselves with similarly corrected teeth.

SUMMARY

[0003] Various example implementations are summarized. These example implementations are merely for illustration and should not be construed as limiting.
[0004] In a 1st implementation, a method comprises: receiving a first image comprising first clinical information and first non-clinical information of a first patient; receiving a second image comprising second clinical information and second non-clinical information of the first patient or a second patient; and generating a third image of the first patient based on the first image and the second image, wherein one of a) the third image is generated based on the first clinical information and the second non-clinical information such that the third image resembles a combination of the first clinical information and the second non-clinical information or b) the third image is generated based on the second clinical information and the first non-clinical information such that the third image resembles a combination of the second clinical information and the first non-clinical information.

[0005] A 2nd implementation may further extend the 1st implementation. In the 2nd implementation, the first image was generated at a first time, and the second image is of the first patient and was generated at a second time.

[0006] A 3rd implementation may further extend the 2nd implementation. In the 3rd implementation, the first time corresponds to a first patient visit of the first patient and the second time corresponds to a second patient visit of the first patient.

[0007] A 4th implementation may further extend the 2nd implementation. In the 4th implementation, the first clinical information comprises a first condition of a dentition of the first patient, and the second clinical information comprises a second condition of the dentition of the first patient.

[0008] A 5th implementation may further extend the 4th implementation. In the 5th implementation, the first condition of the dentition of the first patient corresponds to pre-treatment or a first stage of treatment, and the second condition of the dentition of the first patient corresponds to a second stage of treatment.

[0009] A 6th implementation may further extend any of the 1st through 5th implementations. In the 6th implementation, the first image comprises a post-treatment image of the first patient after orthodontic treatment, the second image comprises a pre-treatment image of the second patient, the first clinical information of the first patient comprises a dentition of the first patient after the orthodontic treatment, and the second non-clinical information comprises an appearance of the second patient other than a dentition of the second patient.

[0010] A 7th implementation may further extend any of the 1st through 6th implementations. In the 7th implementation, the first non-clinical information comprises a first appearance of the first patient, and the second non-clinical information comprises a second appearance of the first patient or the second patient.

[0011] An 8th implementation may further extend the 7th implementation. In the 8th implementation, the first appearance comprises at least one of a first pose, a first facial angle, a first makeup application, a first gender, a first facial expression, first clothing, first lighting conditions, a first background, a first haircut, a first weight, a first hair color, a first skin tone, a first age, a first facial structure, or first wearable accessories; and the second appearance comprises at least one of a second pose, a second facial angle, a second makeup application, a second gender, a second facial expression, second clothing, second lighting conditions, a second background, a second haircut, a second weight, a second hair color, a seco