US-12621420-B2 - Augmented reality guidance for spinal surgery

US12621420B2

Abstract

Embodiments disclose a real-time surgery method and apparatus for displaying a stereoscopic augmented view of a patient from a static or dynamic viewpoint of the surgeon, employing real-time three-dimensional surface reconstruction for preoperative and intraoperative image registration. Stereoscopic cameras provide real-time images of the scene, including the patient. Through a stereoscopic see-through display, the surgeon sees a graphical representation of the preoperative or intraoperative images blended stereoscopically with the video images.
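As an illustrative aside (not part of the patent disclosure), the stereoscopic blending described above ultimately requires projecting each virtual 3D point into separate left- and right-eye views. A minimal sketch using an idealized pinhole model with an interpupillary baseline — all function names and parameter values here are hypothetical:

```python
import numpy as np

def project_point(point_3d, eye_offset_x, focal_length=1.0):
    """Project a 3D point (camera coordinates, z > 0) onto the image
    plane of one eye, shifted horizontally by half the baseline."""
    x, y, z = point_3d
    u = focal_length * (x - eye_offset_x) / z
    v = focal_length * y / z
    return np.array([u, v])

def stereo_project(point_3d, ipd=0.063, focal_length=1.0):
    """Return (left, right) image coordinates for a stereoscopic display.
    ipd is the interpupillary distance in meters (63 mm is typical)."""
    left = project_point(point_3d, -ipd / 2, focal_length)
    right = project_point(point_3d, +ipd / 2, focal_length)
    return left, right

# A point half a meter in front of the viewer: the horizontal disparity
# between the two eye images (ipd * f / z) is what encodes its depth.
left, right = stereo_project(np.array([0.0, 0.0, 0.5]))
disparity = left[0] - right[0]
```

In a real see-through headset the per-eye intrinsics and distortion come from calibration rather than an ideal pinhole, but the disparity relationship is the same.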

Inventors

  • Carlos Quiles Casas

Assignees

  • ONPOINT MEDICAL, INC.

Dates

Publication Date
2026-05-05
Application Date
2025-01-16

Claims (20)

  1. A system comprising: a stereoscopic optical see-through head mounted display; at least one computing system; and at least one camera, at least one 3D scanner, or at least one camera and at least one 3D scanner, wherein the at least one computing system is configured to track at least a portion of a physical spine of a patient in a coordinate system using the at least one camera, the at least one 3D scanner, or the at least one camera and the at least one 3D scanner, wherein the at least one computing system is configured to receive preoperative image information of the physical spine of the patient in supine position, wherein the at least one computing system is configured to generate two or more individually rendered 3D surface representations of each of two or more individual vertebrae from the preoperative image information, wherein the at least one computing system is configured to translate the supine position of the spine of the patient in the preoperative image information to a prone position of the physical spine of the patient by determining registrations for each of the two or more individually rendered 3D surface representations with two or more physical individual vertebrae of the tracked physical spine in the coordinate system, wherein the at least one computing system is configured to generate a 3D stereoscopic view, displayed by the stereoscopic optical see-through head mounted display, the 3D stereoscopic view comprising the two or more individually rendered 3D surface representations, wherein the at least one computing system is configured to superimpose the 3D stereoscopic view comprising the two or more individually rendered 3D surface representations on the two or more physical individual vertebrae, and wherein the at least one computing system is configured to adjust in real time the 3D stereoscopic view responsive to movement of the stereoscopic optical see-through head mounted display.
  2. The system of claim 1, wherein the preoperative image information comprises a preoperative CT scan, a preoperative MRI scan, or a combination thereof.
  3. The system of claim 1, wherein positions of the two or more physical individual vertebrae in the coordinate system are determined based on two-dimensional (2D) intraoperative images.
  4. The system of claim 3, wherein the two-dimensional intraoperative images comprise an ultrasound, an x-ray, or a combination thereof.
  5. The system of claim 4, wherein the x-ray comprises a lateral projection, an anteroposterior projection, or a combination thereof.
  6. The system of claim 1, wherein the system is configured to track at least a portion of a physical instrument, at least a portion of a physical device, or a combination thereof in the coordinate system using the at least one camera, the at least one 3D scanner, or the at least one camera and the at least one 3D scanner.
  7. The system of claim 6, wherein the at least one computing system is configured to generate a three-dimensional (3D) surface representation of the at least portion of the tracked physical instrument, at least portion of the tracked physical device, or a combination thereof.
  8. The system of claim 7, wherein the 3D stereoscopic view comprises the 3D surface representation of the at least portion of the tracked physical instrument, at least portion of the tracked physical device, or combination thereof.
  9. The system of claim 2, wherein the preoperative CT scan, preoperative MRI scan, or combination thereof comprises 2D slice images, a 3D image dataset, or a combination thereof.
  10. The system of claim 2, wherein the at least one computing system is configured to generate at least one graphical representation based on the preoperative CT scan, preoperative MRI scan, or combination thereof, or wherein the at least one computing system is configured to generate at least one graphical representation of a virtual pin, a virtual screw, a virtual nail, a virtual plate, or a combination thereof.
  11. The system of claim 3, wherein the registrations comprise a 2D-3D registration of a preoperative CT scan, a preoperative MRI scan, or a combination thereof with the 2D intraoperative images.
  12. The system of claim 1, wherein the at least one camera is head mounted with the stereoscopic optical see-through head mounted display, or wherein the at least one 3D scanner is head mounted with the stereoscopic optical see-through head mounted display, or wherein the at least one camera and the at least one 3D scanner are head mounted with the stereoscopic optical see-through head mounted display.
  13. The system of claim 1, wherein the at least one camera is separate from the stereoscopic optical see-through head mounted display, or wherein the at least one 3D scanner is separate from the stereoscopic optical see-through head mounted display, or wherein the at least one camera and the at least one 3D scanner are separate from the stereoscopic optical see-through head mounted display.
  14. The system of claim 1, wherein the system comprises at least one camera and/or 3D scanner head mounted with the stereoscopic optical see-through head mounted display and at least one camera and/or 3D scanner separate from the stereoscopic optical see-through head mounted display.
  15. The system of claim 14, wherein the at least one camera and/or 3D scanner head mounted with the stereoscopic optical see-through head mounted display and the at least one camera and/or 3D scanner separate from the stereoscopic optical see-through head mounted display are configured for tracking at least a portion of a physical instrument, at least a portion of a physical device, or a combination thereof in the coordinate system.
  16. The system of claim 6, wherein the at least one computing system is configured to generate a 2D or a 3D graphical representation of the at least portion of the tracked physical instrument, at least portion of the tracked physical device or a combination thereof, and wherein the at least one computing system is configured to generate a view comprising the 2D or 3D graphical representation of the at least portion of the tracked physical instrument, the at least portion of the tracked physical device or combination thereof, wherein the 2D or 3D graphical representation comprises a virtual trajectory for the at least portion of the tracked physical instrument, a virtual trajectory for the at least portion of the tracked physical device, a virtual template of the at least portion of the tracked physical instrument, a virtual template of the at least portion of the tracked physical device, or a combination thereof.
  17. The system of claim 1, further comprising one or more markers, wherein the one or more markers comprise a marker configured to be attached to a physical instrument, a marker configured to be attached to a physical device, a marker configured to be attached to a bony structure of the physical spine, a marker configured to be attached to the stereoscopic optical see-through head mounted display or a combination thereof, and wherein the at least one computing system is configured to track the one or more markers using the at least one camera, the at least one 3D scanner or the at least one camera and the at least one 3D scanner, or further comprising one or more markers, wherein the one or more markers comprise a marker configured to be attached to a physical instrument, a marker configured to be attached to a physical device, a marker configured to be attached to a bony structure of the physical spine, a marker configured to be attached to the stereoscopic optical see-through head mounted display or a combination thereof, and wherein the at least one computing system is configured to track the one or more markers using the at least one camera, the at least one 3D scanner or the at least one camera and the at least one 3D scanner, wherein the one or more markers comprise a color marker, a reflective marker, a passive marker, an active marker, or a combination thereof.
  18. The system of claim 4, wherein the system comprises at least one radiopaque marker attached to a bony structure, wherein the at least one radiopaque marker is included in the 2D intraoperative images.
  19. The system of claim 18, wherein the at least one computing system is configured to register the 2D intraoperative images with the positions of the two or more physical individual vertebrae in the coordinate system using the at least one radiopaque marker.
  20. The system of claim 1, wherein the at least one computing system comprises the computing system configured to track the at least portion of the physical spine of the patient, the computing system configured to receive preoperative image information of the physical spine of the patient, the computing system configured to generate the two or more individually rendered 3D surface representations of each of the two or more individual vertebrae from the preoperative image information, the computing system configured to translate the supine position of the spine of the patient in the preoperative image information to the prone position, the computing system configured to generate the 3D stereoscopic view, the computing system configured to superimpose the 3D stereoscopic view comprising the two or more individually rendered 3D surface representations on the two or more physical individual vertebrae, and wherein the at least one computing systems are the same or are different.
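The per-vertebra registration recited in claim 1 — aligning each individually rendered 3D surface representation with its tracked physical counterpart so that a supine preoperative scan can be re-posed onto the prone intraoperative spine — can be sketched as a rigid point-set alignment. The patent does not disclose a specific algorithm; the Kabsch (SVD-based) solution below is one standard choice, shown purely for illustration with hypothetical function names:

```python
import numpy as np

def rigid_register(source_pts, target_pts):
    """Kabsch algorithm: find rotation R and translation t minimizing
    the least-squares error of R @ p + t over paired points
    (both arguments are N x 3 arrays with corresponding rows)."""
    src_c = source_pts.mean(axis=0)
    tgt_c = target_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (source_pts - src_c).T @ (target_pts - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper rotation (reflection) if one appears.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

def register_vertebrae(preop_models, tracked_surfaces):
    """One independent rigid transform per vertebra, so the spine's
    intraoperative curvature is captured segment by segment."""
    return [rigid_register(m, s) for m, s in zip(preop_models, tracked_surfaces)]
```

In practice the correspondences would come from surface matching (e.g., an ICP-style loop around this closed-form step) rather than being given directly; determining them is the harder part of the problem.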

Description

CROSS-REFERENCE TO RELATED APPLICATIONS This application is a continuation of U.S. application Ser. No. 18/533,283, filed Dec. 8, 2023, which is a continuation of U.S. application Ser. No. 18/352,778, filed Jul. 14, 2023, now U.S. Pat. No. 12,010,285, which is a continuation of U.S. application Ser. No. 18/048,942, filed Oct. 24, 2022, now U.S. Pat. No. 11,750,788, which is a continuation of U.S. application Ser. No. 17/748,614, filed May 19, 2022, now U.S. Pat. No. 11,483,532, which is a continuation of U.S. application Ser. No. 17/667,671, filed Feb. 9, 2022, now U.S. Pat. No. 11,350,072, which is a continuation of U.S. application Ser. No. 17/496,312, filed Oct. 7, 2021, now U.S. Pat. No. 11,272,151, which is a continuation of U.S. application Ser. No. 17/332,149, filed May 27, 2021, now U.S. Pat. No. 11,153,549, which is a continuation of U.S. application Ser. No. 17/166,440, filed Feb. 3, 2021, now U.S. Pat. No. 11,050,990, which is a continuation of U.S. application Ser. No. 17/065,911, filed Oct. 8, 2020, now U.S. Pat. No. 10,951,872, which is a continuation of U.S. application Ser. No. 16/919,639, filed Jul. 2, 2020, now U.S. Pat. No. 10,841,556, which is a continuation of U.S. application Ser. No. 16/822,062, filed Mar. 18, 2020, now U.S. Pat. No. 10,742,949, which is a continuation of U.S. application Ser. No. 16/598,697, filed Oct. 10, 2019, now U.S. Pat. No. 10,602,114, which is a continuation of U.S. application Ser. No. 16/518,426, filed Jul. 22, 2019, now U.S. Pat. No. 10,511,822, which is a continuation of U.S. application Ser. No. 16/240,937, filed Jan. 7, 2019, which is a continuation of U.S. application Ser. No. 15/972,649, filed May 7, 2018, now U.S. Pat. No. 10,194,131, which is a continuation of U.S. application Ser. No. 14/753,705, filed Jun. 29, 2015, now U.S. Pat. No. 10,154,239, which claims the benefit of and priority to provisional application No. 62/097,771, filed Dec. 30, 2014. 
The entirety of each of these applications is hereby incorporated herein by reference.

BACKGROUND INFORMATION

Field of the Invention

Embodiments are directed toward image-guided surgery, and more particularly CT-guided, MR-guided, fluoroscopy-based, or surface-based image-guided surgery, in which images of a portion of a patient are taken in the preoperative or intraoperative setting and used during surgery for guidance.

Background

In the practice of surgery, an operating surgeon is generally required to look back and forth between the patient and a monitor displaying patient anatomical information for guidance. The surgeon thus builds a mental mapping to locate the target structures; this mapping is difficult, has a steep learning curve, and compromises the accuracy of the information used. Many companies have developed equipment providing intraoperative interactive surgery planning and display systems, mixing live video of the external surface of the patient with interactive computer-generated models of internal anatomy obtained from the patient's medical diagnostic imaging data. The computer images and the live video are coordinated and displayed to the surgeon in real time during surgery, allowing the surgeon to view internal and external structures and the relationship between them simultaneously, and to adjust the surgery accordingly. In conventional surgery navigation systems, preoperative or intraoperative image registration with surface reconstruction has been performed either with a single 3D scanner device that also functions as a video camera (e.g., a time-of-flight camera) displaying the surgeon's main viewpoint, or with a video camera or stereoscopic video cameras that serve both as the surgeon's viewpoint and as the input for surface reconstruction.
These conventional systems may enhance the surface reconstruction or image registration with other techniques, such as optical or infrared techniques, markers, etc. However, these systems are limited in the availability of precise 3D surfaces, in the precision and speed with which preoperative or intraoperative images are registered to those surfaces, and in blending the registered images with the surgeon's viewpoint. Accordingly, needs exist for more effective systems and methods that combine real-time preoperative images with virtual graphics associated with those images, wherein the combination of the preoperative images and virtual graphics is displayed on a stereoscopic, see-through, head-mounted display.

SUMMARY OF THE INVENTION

Embodiments disclosed here describe a real-time surgery navigation method and apparatus for displaying an augmented view of the patient from the preferred static or dynamic viewpoint of the surgeon. Embodiments utilize a surface image, a graphical representation of the internal anatomic structure of the patient processed from preoperative or intraoperative images, and a computer registering b
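As a further illustrative aside (again, not from the patent itself): the real-time adjustment of the overlay responsive to headset movement amounts to re-expressing registered, world-space anatomy in the headset's coordinate frame each frame, using the inverse of the tracked head pose. A minimal sketch with hypothetical names:

```python
import numpy as np

def pose_matrix(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and
    translation t (3,), mapping headset coordinates to world coordinates."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def world_to_display(points_world, hmd_pose):
    """Re-express world-space anatomy points in the headset's frame by
    applying the inverse of the tracked head pose, so the rendered
    overlay stays locked to the patient as the surgeon moves."""
    inv = np.linalg.inv(hmd_pose)
    homog = np.hstack([points_world, np.ones((len(points_world), 1))])
    return (homog @ inv.T)[:, :3]

# If the headset sits 1 m to the +x side of the world origin, a world
# point moves 1 m in -x when expressed in the headset frame.
hmd = pose_matrix(np.eye(3), np.array([1.0, 0.0, 0.0]))
display_pts = world_to_display(np.array([[1.0, 0.0, 2.0]]), hmd)
```

Production systems would invert a rigid pose analytically (transpose the rotation, negate the rotated translation) rather than calling a general matrix inverse, but the transform chain is the same.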