CN-122003719-A - Image-based longitudinal analysis
Abstract
Methods and apparatus (including devices and systems, such as intraoral scanners and software) for analyzing images of a subject's teeth. In some examples, the methods may include examining the same tooth or tooth region across different times and/or different imaging modalities (including, but not limited to, visible light, infrared, fluorescence, etc.). Also described herein are methods and apparatus for automatically or semi-automatically adjusting, using a single master control, multiple imaging parameters simultaneously across multiple images of an area, where the images are captured with different imaging modalities and/or at different times.
Inventors
- Maya Moses
- OVER SAFEL
- Shay Ayar
- M. Lelouch
- R Katz
Assignees
- Align Technology, Inc.
Dates
- Publication Date: 2026-05-08
- Application Date: 2024-08-12
- Priority Date: 2023-08-11
Claims (20)
- 1. A method, the method comprising: receiving or accessing a plurality of images of a first region of an intraoral scan of a subject's teeth, wherein each of the plurality of images is taken using a different imaging modality, and wherein each of the plurality of images has a corresponding uncorrected imaging parameter set; automatically correcting each of the plurality of images independently using one or more automatic correction modules to generate a corresponding corrected imaging parameter set for each of the plurality of images; providing a master input having a first uncorrected position, a second auto-corrected position, and a third overcorrected position, wherein the second auto-corrected position is between the first uncorrected position and the third overcorrected position; receiving a user-selected value from the master input between the first position and the third position; determining an adjusted display imaging parameter set for each image, each of the adjusted display imaging parameters being determined by scaling the uncorrected imaging parameter set for each image relative to the corresponding corrected imaging parameter set based on the user-selected value from the master input; and displaying each image of the plurality of images using the adjusted display imaging parameter set for each image.
- 2. The method of claim 1, wherein scaling the uncorrected imaging parameter set for each image relative to the corresponding corrected imaging parameter set comprises shifting the uncorrected imaging parameters for each image between 0% and 200% of the difference between the uncorrected imaging parameter set and the corresponding corrected imaging parameter set for each image, based on the user-selected value from the master input.
- 3. The method of claim 1, wherein the adjusted display imaging parameter set for each image is set to the uncorrected imaging parameter set when the user-selected value from the master input corresponds to the first uncorrected position.
- 4. The method of claim 1, wherein the adjusted display imaging parameter set for each image is set to the corresponding corrected imaging parameter set when the user-selected value from the master input corresponds to the second auto-corrected position.
- 5. The method of claim 1, wherein the uncorrected imaging parameters include contrast and brightness.
- 6. The method of claim 1, wherein the different imaging modalities include two or more of white light illumination, near infrared illumination, single wavelength illumination, and fluorescent illumination.
- 7. The method of claim 1, further comprising iteratively determining the adjusted display imaging parameter set for each image and displaying each image of the plurality of images in real time using the adjusted display imaging parameter set as the master input is adjusted by a user.
- 8. The method of claim 1, wherein providing the master input comprises providing one or more of a slider, a knob, or a dial.
- 9. The method of claim 1, wherein receiving the user-selected value from the master input comprises receiving a continuous value.
- 10. The method of claim 1, wherein automatically correcting each of the plurality of images using the one or more automatic correction modules comprises one or more modules configured to perform one or more of histogram equalization, gamma correction, color balance correction, white balance correction, sharpening filtering, noise reduction, contrast stretching, saturation adjustment, and blur elimination.
- 11. The method of claim 1, further comprising receiving a user-selected position relative to a three-dimensional (3D) model of the subject's teeth, the user-selected position corresponding to the first region of the intraoral scan, wherein the 3D model of the subject's teeth is derived from the intraoral scan of the subject's teeth.
- 12. The method of claim 1, wherein the plurality of images of the first region of the intraoral scan comprise a first white light image and a second near-infrared (NIR) image.
- 13. The method of claim 1, further comprising allowing the user to switch between using the master input and a plurality of individual control inputs corresponding to each of the imaging parameters of each of the plurality of images.
- 14. The method of claim 1, wherein displaying each image of the plurality of images comprises displaying each image side-by-side.
- 15. The method of claim 1, wherein the method is performed by an intraoral scanner.
- 16. A method, the method comprising: receiving or accessing a first image of a first region of an intraoral scan of a subject's teeth, wherein the first image is taken using a first imaging modality, the first image having a first plurality of uncorrected imaging parameters; receiving or accessing a second image of the first region of the intraoral scan of the subject's teeth, wherein the second image is taken using a second imaging modality, the second image having a second plurality of uncorrected imaging parameters; automatically correcting the first image using one or more automatic correction modules to generate a first corrected imaging parameter set; automatically correcting the second image using the one or more automatic correction modules to generate a second corrected imaging parameter set; providing a master input having a first uncorrected position, a second auto-corrected position, and a third overcorrected position, wherein the second auto-corrected position is between the first uncorrected position and the third overcorrected position; receiving a user-selected value from the master input, the user-selected value being between the first position and the third position; displaying the first image using a first plurality of display imaging parameters determined by scaling the first plurality of uncorrected imaging parameters relative to the first corrected imaging parameter set based on the user-selected value from the master input; and displaying the second image using a second plurality of display imaging parameters determined by scaling the second plurality of uncorrected imaging parameters relative to the second corrected imaging parameter set based on the user-selected value from the master input.
- 17. A system, comprising: one or more processors; and a memory coupled to the one or more processors, the memory storing computer program instructions that, when executed by the one or more processors, perform a computer-implemented method comprising: receiving or accessing a plurality of images of a first region of an intraoral scan of a subject's teeth, wherein each of the plurality of images is taken using a different imaging modality, and wherein each of the plurality of images has a corresponding uncorrected imaging parameter set; automatically correcting each of the plurality of images independently using one or more automatic correction modules to generate a corresponding corrected imaging parameter set for each of the plurality of images; providing a master input having a first uncorrected position, a second auto-corrected position, and a third overcorrected position, wherein the second auto-corrected position is between the first uncorrected position and the third overcorrected position; receiving a user-selected value from the master input between the first position and the third position; determining an adjusted display imaging parameter set for each image, each of the adjusted display imaging parameters being determined by scaling the uncorrected imaging parameter set for each image relative to the corresponding corrected imaging parameter set based on the user-selected value from the master input; and displaying each image of the plurality of images using the adjusted display imaging parameter set for each image.
- 18. The system of claim 17, wherein scaling the uncorrected imaging parameter set for each image relative to the corresponding corrected imaging parameter set comprises shifting the uncorrected imaging parameters for each image between 0% and 200% of the difference between the uncorrected imaging parameter set and the corresponding corrected imaging parameter set for each image, based on the user-selected value from the master input.
- 19. The system of claim 17, wherein the adjusted display imaging parameter set for each image is set to the uncorrected imaging parameter set when the user-selected value from the master input corresponds to the first uncorrected position.
- 20. The system of claim 17, wherein the adjusted display imaging parameter set for each image is set to the corresponding corrected imaging parameter set when the user-selected value from the master input corresponds to the second auto-corrected position.
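The 0%–200% master-control scaling recited in the claims amounts to a linear blend between each image's uncorrected and auto-corrected parameter sets, driven by one shared value. The Python below is an illustrative sketch only; the function and parameter names (`adjust_params`, `master`, the brightness/contrast values) are hypothetical and not taken from the patent.

```python
def adjust_params(uncorrected, corrected, master):
    """Blend each imaging parameter between its uncorrected and
    auto-corrected value.

    master: user-selected master-input value in [0.0, 2.0]:
      0.0 -> uncorrected position (0% of the difference)
      1.0 -> auto-corrected position (100% of the difference)
      2.0 -> overcorrected position (200% of the difference)
    """
    return {
        name: uncorrected[name] + master * (corrected[name] - uncorrected[name])
        for name in uncorrected
    }

# Each image (e.g., a white-light image and an NIR image) keeps its own
# uncorrected and corrected parameter sets, but a single master value
# drives both adjustments simultaneously.
master = 1.0  # auto-corrected position
white_light = adjust_params({"brightness": 0.40, "contrast": 1.00},
                            {"brightness": 0.55, "contrast": 1.20},
                            master)
nir = adjust_params({"brightness": 0.30, "contrast": 0.90},
                    {"brightness": 0.62, "contrast": 1.35},
                    master)
```

Because the blend is recomputed per image against that image's own corrected set, each modality converges toward its own auto-correction as the single control moves, which is the behavior the claims describe.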
Description
Image-based longitudinal analysis

Priority

The present patent application claims priority from U.S. provisional patent application No. 63/519,222, entitled "IMAGE-BASED LONGITUDINAL ANALYSIS," filed August 11, 2023, and from U.S. provisional patent application No. 63/566,208, entitled "APPARATUSES AND METHODS FOR ANALYSIS OF INTRAORAL SCANS," filed on day 15, 2024, each of which is incorporated herein by reference in its entirety.

Background

Many dental and orthodontic procedures can benefit from accurate imaging (including 2D and 3D) of the dentition and intraoral cavity of a subject (e.g., a patient). It would be helpful to provide accurate visual depictions of one or more regions of a tooth taken at different times to improve the diagnosis and/or treatment of the tooth. In particular, it would be extremely useful to provide a description of the tooth surface (and in some cases the internal structure of the tooth), including a description of enamel and dentin, of caries, and of structures that may be present on the teeth, such as dental appliances, fillings, and prostheses (e.g., pontics), over time, including over the course of treatment. Dental imaging using intraoral scanning has become increasingly popular in recent years, and patient records now provide detailed descriptions of a patient's teeth at particular points in time. However, even if images of the patient's teeth are taken at a later time using the same intraoral scanner or the same type of intraoral scanner, it may be difficult or impossible to provide longitudinal images of the same area of the teeth.
For example, it has proven very difficult to generate accurate and smooth time-lapse images/videos from patient scans taken by an intraoral scanner, because the scanner is hand-held and may capture many images at different positions (e.g., different distances and angles) relative to the patient's teeth, making it almost impossible to show the same view in the same orientation. This makes proper comparison very difficult, especially when comparing two-dimensional (2D) images. Prior techniques for generating a time-lapse image or representation are algorithmically and computationally intensive. Furthermore, it would be useful to provide a longitudinal comparison between two or more 2D images (e.g., a comparison over time) and/or between one or more 2D images and a 3D model, where the images and models may be obtained at different times. It would therefore be of great benefit to provide methods and apparatus for directly comparing corresponding regions of one or more of a subject's teeth over time (e.g., longitudinally) in an efficient and cost-effective manner. Methods and apparatus are described herein that can address these issues.

Disclosure of Invention

Described herein are methods and apparatus (including devices and systems, such as intraoral scanners and software) for comparing images of the same tooth or tooth region (in some examples, across different times). The methods and apparatus may be used to compare any one or more imaging types, including but not limited to visible light, infrared, fluorescence, and the like. Comparison of the same tooth or tooth region at different times may be referred to as longitudinal analysis. In general, longitudinal follow-up is an important part of dental practice. These methods and apparatus allow for easy comparison of dental images taken at different points in time.
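One standard way to bring two 2D views of the same tooth region into a common view and orientation, as motivated above, is a perspective (homography) warp estimated from corresponding points. The patent does not specify any particular algorithm; the pure-NumPy sketch below of the classic direct linear transform (DLT) is illustrative only, and the example point correspondences are invented.

```python
import numpy as np

def find_homography(src_pts, dst_pts):
    """Estimate the 3x3 homography mapping src_pts onto dst_pts
    from four or more point correspondences (direct linear transform)."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, i.e. the right singular
    # vector associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pt):
    """Map a 2D point through H using homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Hypothetical example: corners of a tooth ROI as seen in one scan (src)
# and where the same corners appear in a later scan (dst).
src = [(0, 0), (100, 0), (100, 80), (0, 80)]
dst = [(5, 8), (98, 2), (110, 90), (-2, 85)]
H = find_homography(src, dst)
```

Once `H` is known, warping one image with it renders the ROI as if both images had been taken from the same camera position, enabling side-by-side longitudinal comparison; production systems would typically add automatic feature matching and robust (e.g., RANSAC-based) estimation on top of this geometric core.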
The images may be visible light images showing the tooth surface, or Near-Infrared (NIR) images, which penetrate the enamel and may show carious lesions. Since lesions can be detected by NIR even at an early stage, the disclosed invention allows for close follow-up of early lesions. Thus, the methods and apparatus described herein allow for comparison of images of the same region of the jaw over time. In some cases, once a region of interest (ROI) in the jaw is selected (manually or automatically), the methods and apparatus may select, from each time point, the best image showing the ROI. The images may be transformed to appear to have been taken from the same camera position. For example, the images may also be straightened and perspective-corrected to bring the ROI into the same orientation/view angle in all images, allowing for easy comparison. The comparison may be between 3D scan images (e.g., from an intraoral scanner), such as between different intraoral scans taken at different times; between one or more 2D images (taken with a camera, including a patient's camera, such as a smartphone) and one or more 3D intraoral scans; and/or between two or more 2D images taken at different times (taken with a camera, including a patient's camera, such as a smartphone), which may use the