
US-20260123998-A1 - IMAGE-BASED SURGICAL GUIDANCE

US 20260123998 A1

Abstract

A process includes graphically guiding a surgical workflow of a surgical procedure. The graphically guiding includes tracking object(s) by obtaining point cloud data based on imaging field(s) of view during the surgical procedure, applying an artificial intelligence model to the point cloud data, and recognizing (i) object(s) in the field(s) of view and (ii) positioning of the object(s) in the field(s) of view. The object(s) include an object to be placed as part of the surgical procedure. The process displays graphical element(s) corresponding to the object on a display device and in a desired position relative to patient anatomy displayed as a live view of the patient anatomy and/or as a digital model of the patient anatomy on the display device, and updates properties of the graphical element(s), where the updating is based on detected positioning of the object in the field(s) of view, as informed by the tracking.
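The tracking-and-update loop summarized above can be sketched as follows. This is a minimal illustrative sketch only: the centroid-based position recognizer, the 2 mm tolerance, and the green/red color scheme are assumptions standing in for the patent's artificial intelligence model and graphical-element properties, not details taken from the disclosure.

```python
import math

def recognize_position(point_cloud):
    """Stand-in for the AI model: report the cloud's centroid as the
    detected position of the tracked object (illustrative only)."""
    n = len(point_cloud)
    return tuple(sum(p[i] for p in point_cloud) / n for i in range(3))

def update_element(point_cloud, desired_position, tolerance_mm=2.0):
    """Return updated graphical-element properties for one imaging frame:
    color the element green when the detected position is within an
    assumed tolerance of the desired position, red otherwise."""
    detected = recognize_position(point_cloud)
    error = math.dist(detected, desired_position)
    return {
        "detected_position": detected,
        "error_mm": error,
        "color": "green" if error <= tolerance_mm else "red",
    }

# Four sample points imaged in the field of view, centered on (10, 0, 0).
cloud = [(9.5, 0.0, 0.0), (10.5, 0.0, 0.0), (10.0, 1.0, 0.0), (10.0, -1.0, 0.0)]
props = update_element(cloud, desired_position=(10.0, 0.0, 0.0))
```

In a real system the recognizer would be the trained model applied to the point cloud, and the property update would run continuously as new frames arrive.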

Inventors

  • Gokce Yildirim

Assignees

  • VENT CREATIVITY CORPORATION

Dates

Publication Date
2026-05-07
Application Date
2026-01-05

Claims (20)

  1. A computer-implemented method including: graphically guiding a surgical workflow of a surgical procedure, the graphically guiding including: tracking one or more objects by: obtaining point cloud data based on imaging one or more fields of view during the surgical procedure; and applying an artificial intelligence model to the obtained point cloud data, and recognizing, based on the applying, one or more objects in the one or more fields of view, and positioning of the one or more objects in the one or more fields of view, the one or more objects including an object to be placed as part of the surgical procedure; displaying at least one graphical element corresponding to the object on a display device and in a desired position relative to patient anatomy that is displayed as a live view of the patient anatomy and/or as a digital model of the patient anatomy on the display device; and updating properties of the at least one graphical element, wherein the updating is based on detected positioning of the object in the one or more fields of view, as informed by the tracking.
  2. The method of claim 1, wherein the object includes an implant to be placed relative to the patient anatomy.
  3. The method of claim 1, wherein the object includes a surgical alignment guide.
  4. The method of claim 3, wherein the surgical alignment guide is a cut guide, wherein the at least one graphical element includes at least one representation of the cut guide, and wherein the method further includes displaying, on the display device, additional graphical elements that correspond to cut planes associated with cuts to be made.
  5. The method of claim 1, wherein the object includes a surgical tool used in the surgical procedure.
  6. The method of claim 1, further including displaying, on the display device, an additional graphical element that corresponds to a drill hole location or screw placement location, wherein the displaying the additional graphical element includes positioning the additional graphical element in a desired position for the drill hole or screw placement relative to the patient anatomy that is displayed as a live view and/or as a digital model on the display device.
  7. The method of claim 1, wherein the display device is a display device of smart glasses through which a user has a field of view of the one or more fields of view.
  8. The method of claim 1, wherein the properties of the at least one graphical element include one or more colors or shading of the at least one graphical element.
  9. The method of claim 1, wherein the patient anatomy includes bone and/or soft tissue having varying densities or tension at different portions of the patient anatomy, and wherein the method further includes displaying an additional graphical element representing the patient anatomy, wherein different portions of the additional graphical element correspond to the different portions of the patient anatomy, and wherein displaying the additional graphical element varies properties of the additional graphical element at the different portions of the additional graphical element according to the varying densities or tension at the corresponding different portions of the patient anatomy.
  10. The method of claim 1, further including training the AI model by an object calibration approach that includes: pre-operatively imaging the object to be placed from varying angles and capturing a point cloud of the object to be placed; and training the AI model using the point cloud to recognize the object to be placed during the surgical procedure.
  11. A computer system including: a memory; and a processing circuit in communication with the memory, wherein the computer system is configured to perform: graphically guiding a surgical workflow of a surgical procedure, the graphically guiding including: tracking one or more objects by: obtaining point cloud data based on imaging one or more fields of view during the surgical procedure; and applying an artificial intelligence model to the obtained point cloud data, and recognizing, based on the applying, one or more objects in the one or more fields of view, and positioning of the one or more objects in the one or more fields of view, the one or more objects including an object to be placed as part of the surgical procedure; displaying at least one graphical element corresponding to the object on a display device and in a desired position relative to patient anatomy that is displayed as a live view of the patient anatomy and/or as a digital model of the patient anatomy on the display device; and updating properties of the at least one graphical element, wherein the updating is based on detected positioning of the object in the one or more fields of view, as informed by the tracking.
  12. The computer system of claim 11, wherein the object includes: an implant to be placed relative to the patient anatomy; or a surgical tool used in the surgical procedure.
  13. The computer system of claim 11, wherein the object includes a surgical alignment guide, wherein the surgical alignment guide is a cut guide, wherein the at least one graphical element includes at least one representation of the cut guide, and wherein the computer system is further configured to perform displaying, on the display device, additional graphical elements that correspond to cut planes associated with cuts to be made.
  14. The computer system of claim 11, wherein the patient anatomy includes bone and/or soft tissue having varying densities or tension at different portions of the patient anatomy, and wherein the computer system is further configured to perform displaying an additional graphical element representing the patient anatomy, wherein different portions of the additional graphical element correspond to the different portions of the patient anatomy, and wherein displaying the additional graphical element varies properties of the additional graphical element at the different portions of the additional graphical element according to the varying densities or tension at the corresponding different portions of the patient anatomy.
  15. The computer system of claim 11, wherein the computer system is further configured to perform training the AI model by an object calibration approach that includes: pre-operatively imaging the object to be placed from varying angles and capturing a point cloud of the object to be placed; and training the AI model using the point cloud to recognize the object to be placed during the surgical procedure.
  16. A computer program product including: a computer readable storage medium readable by a processing circuit and storing instructions for execution by the processing circuit to perform: graphically guiding a surgical workflow of a surgical procedure, the graphically guiding including: tracking one or more objects by: obtaining point cloud data based on imaging one or more fields of view during the surgical procedure; and applying an artificial intelligence model to the obtained point cloud data, and recognizing, based on the applying, one or more objects in the one or more fields of view, and positioning of the one or more objects in the one or more fields of view, the one or more objects including an object to be placed as part of the surgical procedure; displaying at least one graphical element corresponding to the object on a display device and in a desired position relative to patient anatomy that is displayed as a live view of the patient anatomy and/or as a digital model of the patient anatomy on the display device; and updating properties of the at least one graphical element, wherein the updating is based on detected positioning of the object in the one or more fields of view, as informed by the tracking.
  17. The computer program product of claim 16, wherein the object includes: an implant to be placed relative to the patient anatomy; or a surgical tool used in the surgical procedure.
  18. The computer program product of claim 16, wherein the object includes a surgical alignment guide, wherein the surgical alignment guide is a cut guide, wherein the at least one graphical element includes at least one representation of the cut guide, and wherein the instructions for execution by the processing circuit are further to perform displaying, on the display device, additional graphical elements that correspond to cut planes associated with cuts to be made.
  19. The computer program product of claim 16, wherein the patient anatomy includes bone and/or soft tissue having varying densities or tension at different portions of the patient anatomy, and wherein the instructions for execution by the processing circuit are further to perform displaying an additional graphical element representing the patient anatomy, wherein different portions of the additional graphical element correspond to the different portions of the patient anatomy, and wherein displaying the additional graphical element varies properties of the additional graphical element at the different portions of the additional graphical element according to the varying densities or tension at the corresponding different portions of the patient anatomy.
  20. The computer program product of claim 16, wherein the instructions for execution by the processing circuit are further to perform training the AI model by an object calibration approach that includes: pre-operatively imaging the object to be placed from varying angles and capturing a point cloud of the object to be placed; and training the AI model using the point cloud to recognize the object to be placed during the surgical procedure.
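The object calibration approach of claims 10, 15, and 20 (pre-operatively imaging the object from varying angles, capturing a point cloud per view, and training the model to recognize the object intra-operatively) can be illustrated with a deliberately simplistic stand-in for the AI model: here "training" stores a rotation-invariant shape signature (sorted pairwise distances) per object, and "recognition" matches an observed cloud against the stored signatures. All names and the signature technique are assumptions for illustration, not the patent's method.

```python
import math

def signature(point_cloud):
    """Rotation- and translation-invariant shape signature:
    the sorted list of pairwise point distances."""
    return sorted(math.dist(a, b)
                  for i, a in enumerate(point_cloud)
                  for b in point_cloud[i + 1:])

def rotate_z(points, angle):
    """Simulate imaging the object from a different angle about z."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y, s * x + c * y, z) for x, y, z in points]

def train(objects, angles):
    """'Calibration': capture each object from several angles and
    store one signature per captured view."""
    return {name: [signature(rotate_z(pts, a)) for a in angles]
            for name, pts in objects.items()}

def recognize(model, observed):
    """Return the trained object whose stored signatures best match
    the observed point cloud's signature."""
    sig = signature(observed)
    def score(sigs):
        return min(sum(abs(u - v) for u, v in zip(sig, s)) for s in sigs)
    return min(model, key=lambda name: score(model[name]))

# Hypothetical objects described by small point clouds.
objects = {
    "cut_guide": [(0, 0, 0), (4, 0, 0), (0, 2, 0)],
    "implant":   [(0, 0, 0), (3, 0, 0), (0, 3, 0)],
}
model = train(objects, angles=[0.0, math.pi / 4, math.pi / 2])

# An intra-operative view of the cut guide at a previously unseen angle.
view = rotate_z(objects["cut_guide"], 0.3)
```

A production system would instead train a learned model on dense scans, but the workflow shape (capture varied views pre-operatively, then match intra-operatively) is the same.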

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation under 35 U.S.C. 111(a) of International Application Number PCT/US2025/047532, entitled “IMAGE-BASED SURGICAL GUIDANCE”, filed Sep. 23, 2025, the entire contents of which are hereby incorporated by reference herein, and claims the priority benefit of U.S. Provisional Application No. 63/699,456, filed Sep. 26, 2024.

BACKGROUND

Surgical plans and surgical cutting guides are often used by surgeons to place (for instance: position, move, navigate, etc.) implants, instruments, and other three-dimensional (3D) objects in a surgical site for various purposes. The lack of accuracy and subjectivity of this process is well documented. Whether in a manual surgery or a navigated setting, surgeons place instruments based largely on experience or other subjective considerations. In any case, the placement of objects facilitates actions related to the surgery, for instance to execute cuts and place appropriate implants to improve patient conditions.

The placement of these objects is based on various plan options. However, in some cases, these are led primarily by relatively simplistic visual guides like two-dimensional (2D) x-ray images. Using the example of a knee surgery, a single coronal plane x-ray image, or a combination of coronal plane x-ray image and sagittal plane x-ray image, might be relied upon by a surgeon to assess generally where the implant is to be positioned. Often a calibrated image of the implant, such as a transparent sheet with an outline of the implant, is placed over the x-ray(s) to determine an approximate desired implant location. However, there are inherent problems when positioning a 3D object based on a 2D representation of the anatomy. The limited information provided by the simplistic visual guide, coupled with limited or no ability to exactly recreate the intended surgical approach, can be problematic for delivery of the desired surgical outcome.
Even in robotic surgical scenarios in which precise robotic cutting is employed, the robots are not used to place implants or other objects. As these tasks remain for the surgeon and/or other medical practitioners to perform, they remain susceptible to error.

SUMMARY

Shortcomings of the prior art are overcome and additional advantages are provided. In one or more embodiments, a computer-implemented method is provided that includes graphically guiding a surgical workflow of a surgical procedure. The graphically guiding includes tracking one or more objects by: obtaining point cloud data based on imaging one or more fields of view during the surgical procedure; and applying an artificial intelligence model to the obtained point cloud data, and recognizing, based on the applying, one or more objects in the one or more fields of view, and positioning of the one or more objects in the one or more fields of view, the one or more objects including an object to be placed as part of the surgical procedure. The graphically guiding further includes displaying at least one graphical element corresponding to the object on a display device and in a desired position relative to patient anatomy that is displayed as a live view of the patient anatomy and/or as a digital model of the patient anatomy on the display device, and updating properties of the at least one graphical element, wherein the updating is based on detected positioning of the object in the one or more fields of view, as informed by the tracking. Additionally or alternatively, in one or more embodiments the object includes an implant to be placed relative to the patient anatomy. Additionally or alternatively, in one or more embodiments, the object includes a surgical alignment guide.
In one or more embodiments the surgical alignment guide is a cut guide, the at least one graphical element includes at least one representation of the cut guide, and the method further includes displaying, on the display device, additional graphical elements that correspond to cut planes associated with cuts to be made. Additionally or alternatively, in one or more embodiments the object includes a surgical tool used in the surgical procedure. Additionally or alternatively, in one or more embodiments, the method further includes displaying, on the display device, an additional graphical element that corresponds to a drill hole location or screw placement location, where the displaying the additional graphical element includes positioning the additional graphical element in a desired position for the drill hole or screw placement relative to the patient anatomy that is displayed as a live view and/or as a digital model on the display device. Additionally or alternatively, in one or more embodiments, the display device is a display device of smart glasses through which a user has a field of view of the one or more fields of view. Additionally or alternatively, in one or more embodiments, the properties of the at least one graphical element include one or more colors or shading of the at least one graphical element.
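The density-aware display described in claim 9 (varying a graphical element's properties across portions of the anatomy according to varying bone or soft-tissue density) can be sketched as a simple mapping from density to shading. The density values, the assumed density range, and the linear grayscale mapping below are all illustrative assumptions, not values from the disclosure.

```python
def shade_for_density(density, lo=0.2, hi=1.8):
    """Map an assumed density value (clamped to [lo, hi], nominally
    g/cm^3) to a 0-255 gray level: denser tissue renders brighter."""
    t = (min(max(density, lo), hi) - lo) / (hi - lo)
    return round(255 * t)

def shade_portions(portions):
    """portions: {portion name: density} -> {portion name: gray level}
    for the different portions of the overlay graphical element."""
    return {name: shade_for_density(d) for name, d in portions.items()}

# Hypothetical per-portion densities for one anatomy overlay.
overlay = shade_portions({
    "cortical_bone": 1.8,    # densest portion -> brightest shading
    "cancellous_bone": 0.9,
    "soft_tissue": 0.2,      # least dense portion -> darkest shading
})
```

The same per-portion mapping could drive color, transparency, or other element properties instead of grayscale.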