
EP-4742264-A1 - METHOD AND APPARATUS FOR PRESENTING ORAL HEALTH EXAMINATION INFORMATION, AND DEVICE AND STORAGE MEDIUM

EP 4742264 A1

Abstract

The present application provides a method, an apparatus, a device, and a storage medium for displaying oral health examination information. The method includes: displaying a plurality of tooth images on a visualization interface, wherein a tooth image represents a two-dimensional image of a three-dimensional model from one of the perspectives of the three-dimensional model; in response to an operation instruction of a user for a specified tooth image, rotating the three-dimensional model to a perspective corresponding to the specified tooth image and displaying the three-dimensional model on the visualization interface; wherein the specified tooth image includes a tooth image displaying a suspected lesion area, and the perspective corresponding to the specified tooth image is determined by pose parameters of the three-dimensional model bound to the tooth image. Through the above method, a patient can directly see the position of the suspected lesion area on the three-dimensional model from the visualization interface, thereby allowing the patient to intuitively understand the oral health examination information through the visualization interface.
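The abstract's core mechanism is binding each 2D screenshot to the 3D model's pose parameters at capture time, so that selecting the image later restores the matching viewpoint. The following is a minimal illustrative sketch of that idea, not the patent's implementation; all class and method names (`Pose`, `ToothImage`, `Viewer`, `capture`, `show_image`) are hypothetical.

```python
# Illustrative sketch only: bind a screenshot to the model's current pose
# so the viewer can later rotate the model back to that exact perspective.
from dataclasses import dataclass

@dataclass
class Pose:
    yaw: float    # rotation about the vertical axis, in degrees
    pitch: float  # rotation about the horizontal axis, in degrees
    zoom: float   # camera distance scale

@dataclass
class ToothImage:
    pixels: bytes          # the captured two-dimensional screenshot
    pose: Pose             # pose parameters bound at capture time
    has_lesion: bool = False  # marks an image showing a suspected lesion area

class Viewer:
    def __init__(self) -> None:
        self.pose = Pose(0.0, 0.0, 1.0)
        self.images: list[ToothImage] = []

    def capture(self, pixels: bytes, has_lesion: bool = False) -> ToothImage:
        # Copy the current pose so later rotations do not alter the binding.
        bound = Pose(self.pose.yaw, self.pose.pitch, self.pose.zoom)
        img = ToothImage(pixels, bound, has_lesion)
        self.images.append(img)
        return img

    def show_image(self, img: ToothImage) -> Pose:
        # Rotate the model to the perspective bound to the selected image.
        self.pose = img.pose
        return self.pose
```

Because the pose is stored per image, any number of screenshots can each restore a distinct viewpoint independently of the model's current orientation.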

Inventors

  • ZHAO, Xiaobo
  • NI, Kaijia
  • YAN, Kexin
  • MA, Chao
  • ZHANG, Huiquan
  • CHEN, Xiaojun
  • LU, Yan
  • JIANG, Tengfei

Assignees

  • Shining 3D Tech Co., Ltd.

Dates

Publication Date
2026-05-13
Application Date
2024-07-05

Claims (20)

  1. A method for displaying oral health examination information, characterized in that the method comprises: displaying a plurality of tooth images on a visualization interface, the tooth image representing a two-dimensional image of a three-dimensional model from one of perspectives of the three-dimensional model; in response to an operation instruction of a user for a specified tooth image, rotating the three-dimensional model to a perspective corresponding to the specified tooth image and displaying the three-dimensional model on the visualization interface; wherein the specified tooth image comprises a tooth image displaying a suspected lesion area, and the perspective corresponding to the specified tooth image is determined by pose parameters of the three-dimensional model bound to the tooth image.
  2. The method according to claim 1, wherein at least one control is displayed on the visualization interface, and the at least one control comprises: a first control, for enabling the user to trigger an operation to display a plurality of disease labels on the three-dimensional model; wherein the plurality of disease labels are configured to indicate a plurality of suspected lesion areas on the three-dimensional model, and information of the disease label represents an oral disease name corresponding to the suspected lesion area; or, a second control, for enabling the user to trigger an operation to switch between display states of the three-dimensional model; the display states comprising an open state and a closed state; or, a third control, for enabling the user to trigger an operation to display a plurality of orientation labels on the three-dimensional model; wherein the plurality of orientation labels are configured to indicate a plurality of tooth areas on the three-dimensional model, and information of the orientation label represents a name of a tooth area; or, a fourth control, for enabling the user to trigger an operation to edit oral disease information corresponding to the suspected lesion area; wherein a sub-control comprised in the fourth control is configured to trigger an operation to update tooth position number information of the suspected lesion area; a tooth number represents a position of a tooth in the three-dimensional model.
  3. The method according to claim 2, further comprising: in response to a user's rotation operation on the three-dimensional model, adjusting a pose of labels on the three-dimensional model according to a change amount of pose parameters during a rotation of the three-dimensional model, so that relative positions between the labels and the three-dimensional model remain consistent; wherein the labels at least comprise disease labels and orientation labels; after the rotation operation on the three-dimensional model is completed, in response to determining, based on a current coordinate point set of the three-dimensional model and a current coordinate point set of the labels, that a label is occluded by the three-dimensional model, reducing an opacity of the label and/or adjusting a pose of the label.
  4. The method according to claim 2, wherein the disease labels are pre-bound to tooth images displaying suspected lesion areas, and the method further comprises: in response to an operation instruction of the user for a disease label, displaying oral disease information corresponding to the suspected lesion area, and/or selecting the tooth image bound to the disease label, and ranking a display order of the tooth image bound to the disease label before other displayed tooth images; wherein the oral disease information comprises at least any one of: an oral disease name, tooth position number information of the suspected lesion area, a symptom description, a treatment plan, and an educational video.
  5. The method according to claim 1, wherein a fifth control is displayed on the visualization interface to switch to a mode that allows the user to make a mark on the three-dimensional model; the specified tooth image further comprises a tooth image displaying the mark; the method further comprises: under a condition that the user has marked the three-dimensional model, in response to a user's rotation operation on the three-dimensional model, taking a screenshot of the marked three-dimensional model before rotation and saving the screenshot as a tooth image displaying the mark; wherein the tooth image displaying the mark is bound with pose parameters and mark information of the marked three-dimensional model before rotation; when responding to the user's operation instruction for the specified tooth image, the method further comprises: determining that the specified tooth image is a tooth image displaying mark information; rotating the three-dimensional model to the perspective corresponding to the specified tooth image and displaying the three-dimensional model on the visualization interface further comprises: displaying the mark information on the visualization interface.
  6. The method according to claim 1, further comprising: classifying multi-source data and displaying the multi-source data on the visualization interface; wherein the multi-source data is imported by the user and/or obtained from a server, and categories of the multi-source data comprise at least any one of: the three-dimensional model, screenshots of the three-dimensional model, texture images, oral photographs, facial photographs, near-infrared images, X-ray images, computed tomography (CT) images, and cone beam computer tomography (CBCT) images.
  7. The method according to claim 1, applied to a client, further comprising: in response to a template acquisition instruction carrying a template ID, acquiring a target oral health examination report template from a server based on the template ID, the server being configured to provide an oral health examination report template set, the oral health examination report template set comprising a plurality of oral health examination report templates generated for a same oral disease, the oral health examination report template defining a candidate name for an oral disease, the candidate name being a generic name or a custom name; acquiring oral health examination information, the oral health examination information being obtained based on a result of recognizing the three-dimensional model by a pre-trained oral detection model, and/or based on a result of recognizing the tooth image by a user, the oral health examination information comprising at least one suspected lesion area and an oral disease name corresponding to each suspected lesion area; generating an oral health examination report on the target oral health examination report template based on the oral health examination information, to display the oral disease name as the candidate name defined in the target oral health examination report template.
  8. The method according to claim 7, wherein acquiring the oral health examination information based on the result of recognizing the three-dimensional model by the pre-trained oral detection model comprises: identifying, based on the pre-trained oral detection model, at least one lesion area of the three-dimensional model, and obtaining a confidence set corresponding to several oral diseases for each suspected lesion area; for each suspected lesion area, based on the disease confidence set, or based on the disease confidence set and an oral disease classification order preset in the target oral health examination report template, determining a candidate oral disease list corresponding to the suspected lesion area; determining the oral disease name corresponding to the suspected lesion area from the candidate oral disease list according to a preset rule and/or in response to a user's selection operation; wherein acquiring the oral health examination information based on the result of recognizing the tooth image by the user comprises: determining at least one suspected lesion area annotated by the user on the tooth image, and determining the oral disease name corresponding to each suspected lesion area according to an oral disease selected by the user from a preset oral disease list; wherein the preset oral disease list is determined according to an oral disease classification order preset in the target oral health examination report template; the tooth image comprises at least any one of: a screenshot of a three-dimensional tooth model in the three-dimensional model, a tooth texture image, a tooth near-infrared image, a tooth X-ray image, a CT image, an oral CBCT image, an oral photograph, a facial photograph, a facial texture image, and a screenshot of a three-dimensional facial model in the three-dimensional model.
  9. The method according to claim 8, wherein determining the candidate oral disease list corresponding to the suspected lesion area based on the disease confidence set and the oral disease classification order preset in the target oral health examination report template comprises: from the disease confidence set, determining several oral diseases corresponding to top N highest disease confidences as elements of the candidate oral disease list; wherein N is a positive integer and is less than a preset length of the candidate oral disease list; based on the oral disease classification order preset in the target oral health examination report template, determining several oral diseases of a same category corresponding to a first candidate oral disease as elements of the candidate oral disease list; wherein the first candidate oral disease is the oral disease corresponding to the Nth highest disease confidence in the disease confidence set.
  10. The method according to claim 7, wherein the oral health examination report template further defines oral disease information bound to the candidate name; generating the oral health examination report on the target oral health examination report template based on the oral health examination information comprises: determining the candidate name corresponding to the oral disease name in the target oral health examination report template, and acquiring the oral disease information bound to the candidate name; generating the oral health examination report on the target oral health examination report template based on the at least one suspected lesion area, the oral disease name corresponding to each suspected lesion area, and the oral disease information.
  11. The method according to claim 8, further comprising: uploading the oral health examination report to the server, so that the user can perform a desensitization processing on patient information in the oral health examination report on the server, analyze the oral health examination information in the oral health examination report, and/or train the oral detection model based on the oral health examination information.
  12. The method according to claim 1, applied to a server, further comprising: receiving a template acquisition instruction carrying a template ID; sending a target oral health examination report template to a client based on the template ID, so that the client generates an oral health examination report on the target oral health examination report template based on oral health examination information; wherein the oral health examination information comprises at least one suspected lesion area and an oral disease name corresponding to each suspected lesion area; the target oral health examination report template belongs to an oral health examination report template set, the oral health examination report template set comprising a plurality of oral health examination report templates generated for a same oral disease, the oral health examination report template defining a candidate name for an oral disease, the candidate name being a generic name or a custom name; the oral disease name is displayed in the oral health examination report as the candidate name defined in the target oral health examination report template.
  13. The method according to claim 12, wherein the oral health examination report template further defines oral disease information bound to the candidate names; generating the oral health examination report on the target oral health examination report template based on the oral health examination information comprises: determining the candidate name corresponding to the oral disease name in the target oral health examination report template, and acquiring the oral disease information bound to the candidate name; generating the oral health examination report on the target oral health examination report template based on the at least one suspected lesion area, the oral disease name corresponding to each suspected lesion area, and the oral disease information.
  14. The method according to claim 12, wherein the oral health examination information is obtained based on a result of recognizing the three-dimensional model by a pre-trained oral detection model, and/or based on a result of recognizing the tooth image by a user; the method further comprises: receiving the oral health examination report uploaded by the client, performing a desensitization processing on patient information in the oral health examination report, analyzing the oral health examination information in the oral health examination report, and/or training the oral detection model based on the oral health examination information.
  15. The method according to claim 1, wherein before responding to the user's operation instruction for the specified tooth image, the method further comprises: in response to a screenshot instruction issued by a user, capturing an image of a specified area in the visualization interface, the specified area comprising at least a portion of the three-dimensional model; binding the image of the specified area with pose parameters of the three-dimensional model, and saving the image of the specified area into an image list; the image list comprises a plurality of images, and the image of the specified area represents a two-dimensional image of the three-dimensional model from one of perspectives of the three-dimensional model.
  16. The method according to claim 15, wherein after capturing the image of the specified area in the visualization interface, the method further comprises: shrinking the captured image and displaying the captured image in a preset area of the visualization interface; wherein the operation instruction for the captured image is an operation instruction triggered by the user for the shrunken captured image within a predetermined time.
  17. The method according to claim 15, wherein the specified tooth image comprises several images displaying areas of interest; a capture timing for an image displaying an area of interest comprises a moment at which it is determined, based on a semantic recognition result of a communication record regarding the three-dimensional model, that the area of interest exists in the three-dimensional model.
  18. The method according to claim 17, further comprising: based on the semantic recognition result, determining description information of the area of interest; based on the image displaying the area of interest and the corresponding description information of the area of interest, generating an examination report.
  19. The method according to claim 15, further comprising: in response to an operation instruction of the user for the captured image, displaying at least one of following controls on the visualization interface: an eleventh control, for enabling the user to trigger an operation for a label management on the captured image; a twelfth control, for enabling the user to trigger an operation to switch to a mode that allows the user to make a mark on the captured image; a thirteenth control, for enabling the user to trigger an operation to switch a display state of the image list; wherein the twelfth control comprises at least one of following sub-controls: a first sub-control, for enabling the user to trigger an operation to adjust a color of the mark; a second sub-control, for enabling the user to trigger an operation to adjust a width of the mark; a third sub-control, for enabling the user to trigger an operation to adjust a timeliness of the mark.
  20. An apparatus for displaying oral health examination information, characterized in that the apparatus comprises: an image display module, configured to display a plurality of tooth images on a visualization interface, wherein the tooth image represents a two-dimensional image of a three-dimensional model from one of perspectives of the three-dimensional model; a model rotation module, configured to, in response to an operation instruction of a user for a specified tooth image, rotate the three-dimensional model to a perspective corresponding to the specified tooth image and display the three-dimensional model on the visualization interface; wherein the specified tooth image comprises a tooth image displaying a suspected lesion area, and the perspective corresponding to the specified tooth image is determined by pose parameters of the three-dimensional model bound to the tooth image.
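Claim 9 describes a two-step construction of the candidate oral disease list: take the top-N diseases by confidence, then extend the list with diseases in the same preset category as the Nth-ranked (lowest) candidate. The sketch below is one illustrative reading of that rule, not the patent's implementation; the function name, data shapes, and sample disease names are hypothetical.

```python
# Illustrative sketch only: build a candidate oral disease list per claim 9.
def candidate_list(confidences: dict[str, float],
                   categories: dict[str, str],
                   n: int) -> list[str]:
    """confidences: disease -> model confidence for one suspected lesion area.
    categories: disease -> category from the report template's preset
    oral disease classification order. n: number of top candidates to keep."""
    # Step 1: the N diseases with the highest confidences.
    ranked = sorted(confidences, key=confidences.get, reverse=True)
    top_n = ranked[:n]
    # Step 2: the "first candidate oral disease" is the Nth-highest one;
    # add the other diseases sharing its preset category.
    nth = top_n[-1]
    same_category = [d for d in categories
                     if categories[d] == categories[nth] and d not in top_n]
    return top_n + same_category
```

The oral disease name shown in the report would then be chosen from this list by a preset rule or by the user's selection, as claim 8 describes.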

Description

The present application claims priority to Chinese Patent Application No. 202310838151.3, filed with the China National Intellectual Property Administration on July 7, 2023, entitled "METHOD AND APPARATUS FOR GENERATING ORAL EXAMINATION REPORT, COMPUTER DEVICE"; Chinese Patent Application No. 202310838383.9, filed with the China National Intellectual Property Administration on July 7, 2023, entitled "METHOD AND APPARATUS FOR DISPLAYING ORAL EXAMINATION INFORMATION, COMPUTER DEVICE"; Chinese Patent Application No. 202311296139.0, filed with the China National Intellectual Property Administration on September 28, 2023, entitled "METHOD, APPARATUS, DEVICE, AND MEDIUM FOR DISPLAYING ORAL HEALTH EXAMINATION INFORMATION"; Chinese Patent Application No. 202311289263.4, filed with the China National Intellectual Property Administration on September 28, 2023, entitled "METHOD AND APPARATUS FOR GENERATING ORAL HEALTH EXAMINATION REPORT, COMPUTER DEVICE"; and Chinese Patent Application No. 202311789536.1, filed with the China National Intellectual Property Administration on December 22, 2023, entitled "SCREENSHOT MANAGEMENT METHOD, APPARATUS, COMPUTER DEVICE AND MEDIUM". The entire contents of all the aforementioned applications are incorporated herein by reference.

TECHNICAL FIELD

The present application relates to the field of oral medicine technology, and in particular to a method, an apparatus, a device, and a storage medium for displaying oral health examination information.

BACKGROUND

In some scenarios, users communicate on the basis of two-dimensional images. When one party lacks the relevant professional knowledge, it is difficult for that party to understand explanations of the two-dimensional images, resulting in low communication efficiency. For example, in medical scenarios, doctors typically communicate with patients using paper-based medical images. However, because patients lack certain professional knowledge, they have difficulty understanding the doctor's explanation of the medical images, and doctors in turn cannot fully understand the patient's needs and expectations or help the patient understand treatment plans and expected outcomes. Effective doctor-patient communication helps doctors better understand the patient's needs and expectations, and also helps patients better understand treatment plans and expected outcomes. In related technologies, when doctors discuss the condition with patients after performing an oral examination, patients, lacking certain professional knowledge, have difficulty understanding oral health examination information from the doctor's verbal description alone.

Furthermore, generating an oral health examination report after performing an oral health examination for a patient is an important medical service provided by medical institutions. The oral health examination report usually records the patient's basic information and condition information. Doctors can use it to track and manage the patient's condition and to better understand the patient's condition and treatment effectiveness, in order to develop more effective treatment plans. In related technologies, an oral health examination report is generated from a fixed oral health examination report template. The names of the various oral diseases displayed in such a report are all generic names, which cannot meet the customized needs of medical institutions in different regions.

SUMMARY

In view of the above, the present application provides a method for displaying oral health examination information, an apparatus, a device, and a storage medium to address deficiencies in related technologies.

A first aspect of the present application provides a method for displaying oral health examination information. The method includes: displaying a plurality of tooth images on a visualization interface, the tooth image representing a two-dimensional image of a three-dimensional model from one of perspectives of the three-dimensional model; in response to an operation instruction of a user for a specified tooth image, rotating the three-dimensional model to a perspective corresponding to the specified tooth image and displaying the three-dimensional model on the visualization interface; wherein the specified tooth image comprises a tooth image displaying a suspected lesion area, and the perspective corresponding to the specified tooth image is determined by pose parameters of the three-dimensional model bound to the tooth image. The method further includes: in response to a template acquisition instruction carrying a template ID, acquiring a target oral health examination report template from a server based on the template ID, the server being configured to provide an oral health examination report template set, the oral health examination report template set comprising a plurality of oral