US-20260124003-A1 - SURGICAL OBJECT TRACKING TEMPLATE GENERATION FOR COMPUTER ASSISTED NAVIGATION DURING SURGICAL PROCEDURE
Abstract
A camera tracking system for computer assisted navigation during surgery. The camera tracking system includes a processor operative to receive streams of video frames from tracking cameras which image a plurality of physical objects arranged as a reference array. For each of the physical objects imaged in a sequence of the video frames, the processor determines a set of coordinates for the physical object over the sequence of the video frames. For each of the physical objects, the processor generates an arithmetic combination of the set of coordinates for the physical object. The processor generates an array template identifying coordinates of the physical objects based on the arithmetic combinations of the sets of coordinates for the physical objects, and tracks pose of the physical objects of the reference array over time based on comparison of the array template to the reference array imaged in the streams of video frames.
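The template-generation flow summarized in the abstract (per-fiducial coordinate sets combined arithmetically into a template, with claims 3–4 naming averaging and a standard-deviation accuracy rule) can be sketched in a few lines. This is an illustrative sketch only, not the patented implementation: the `(frames, N, 3)` input layout, the `accuracy_mm` threshold, and the function name are assumptions introduced here.

```python
import numpy as np

def build_array_template(frames, accuracy_mm=0.1):
    """Build an array template from per-frame fiducial detections.

    frames: list of (N, 3) arrays, one per video frame, each row holding
    the detected 3-D coordinates of one of N fiducials (a hypothetical
    input layout; the patent does not fix a data format).

    Returns the running template (per-fiducial mean, i.e. the
    "arithmetic combination") and whether the per-fiducial spread
    satisfies the assumed accuracy rule.
    """
    stack = np.stack(frames)           # shape (F, N, 3): F frames, N fiducials
    template = stack.mean(axis=0)      # arithmetic combination: mean per fiducial
    spread = stack.std(axis=0).max()   # worst-case standard deviation across frames
    converged = bool(spread <= accuracy_mm)  # "defined accuracy rule" (assumed form)
    return template, converged
```

Once `converged` is true, a tracker in the spirit of claim 4 would stop refining the template and start using it for pose tracking.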
Inventors
- Neil R. Crawford
- Thomas Calloway
Assignees
- GLOBUS MEDICAL, INC.
Dates
- Publication Date
- 2026-05-07
- Application Date
- 2025-12-30
Claims (18)
- 1 . A method comprising: receiving streams of video frames from tracking cameras which image a plurality of physical objects arranged as a reference array; for each of the physical objects which is imaged in a sequence of the video frames, determining a set of coordinates for the physical object over the sequence of the video frames; for each of the physical objects, generating an arithmetic combination of the set of coordinates for the physical object; generating an array template identifying coordinates of the physical objects based on the arithmetic combinations of the sets of coordinates for the physical objects; and tracking pose of the physical objects of the reference array over time based on comparison of the array template to the reference array imaged in the streams of video frames, wherein a user performs surgery using gesture controls monitored by an extended reality headset to control robotic movements of a surgical robot.
- 2 . The method of claim 1 , wherein the physical objects are circular fiducials spaced apart in an arrangement that identifies the reference array, and the method further comprising: selecting a nominal array template from among a set of initial array templates stored in memory of the camera tracking system based on coordinates of the circular fiducials of the reference array determined in the video frames of the streams from the tracking cameras; and generating the array template based on comparison of the nominal array template to the arithmetic combinations of the sets of coordinates for the physical objects.
- 3 . The method of claim 1 , further comprising generating the arithmetic combination of the set of coordinates for the physical object based on averaging the coordinates of the set for the physical object.
- 4 . The method of claim 3 , further comprising ceasing generation of the array template and beginning use of the array template to track pose of the physical objects of the reference array when a standard deviation computed from at least one of the averages of the coordinates of the sets for the physical objects satisfies a defined accuracy rule.
- 5 . The method of claim 1 , wherein for each of the physical objects which is imaged in a sequence of the video frames, the determination of the set of coordinates for the physical object over the sequence of the video frames comprises sorting the determined coordinates of the physical object to be included within one of the sets of coordinates based on how similar the determined coordinates of the physical object are to other coordinates of the physical object that are listed in each of the other sets.
- 6 . The method of claim 1 , further comprising for each one of a plurality of different perspectives between the tracking cameras and the reference array, repeating for each of the physical objects which is imaged in the sequence of the video frames, the determination of a set of coordinates for each of the physical objects over the sequence of the video frames for the one of the perspectives between the tracking cameras and the reference array, and repeating for each of the physical objects, the generation of an arithmetic combination of the set of coordinates for the physical object for the one of the perspectives between the tracking cameras and the reference array; and generating the array template based on the arithmetic combinations of the sets of coordinates of the physical objects which are generated for the plurality of different perspectives between the tracking cameras and the reference array.
- 7 . The method of claim 6 , further comprising receiving the streams of video frames from a plurality of sets of tracking cameras, wherein the tracking cameras of one of the sets are spaced apart on an auxiliary camera bar and the tracking cameras of another one of the sets are spaced apart on an extended reality headset; and the set of tracking cameras on the auxiliary camera bar are spaced apart from the set of tracking cameras on the extended reality headset to provide the plurality of different perspectives.
- 8 . The method of claim 7 , further comprising storing, in a memory of the camera tracking system, the video frames in the streams received from the set of tracking cameras on the auxiliary camera bar with a time synchronized relationship to the video frames in the streams received from the set of tracking cameras on the extended reality headset; correlating in time the arithmetic combinations of the sets of coordinates of the physical objects which are generated for the plurality of different perspectives between the tracking cameras and the reference array; and generating the array template based on combining the correlated in time arithmetic combinations.
- 9 . The method of claim 6 , further comprising establishing a reference snapshot of coordinates of the physical objects determined from at least one video frame of each of the streams from the tracking cameras, wherein the determination of the set of coordinates for the physical object is performed following establishment of the reference snapshot; and for each of the physical objects which is imaged in the sequence of the video frames, transforming the determined set of coordinates of the physical object over the sequence of the video frames based on their differences from the coordinates of the physical objects in the reference snapshot.
- 10 . The method of claim 9 , further comprising initiating a generation of the array template responsive to a determination that the arithmetic combinations of the sets of coordinates for the physical objects have been performed for at least a threshold number of different perspectives between the tracking cameras and the reference array having at least a threshold angular offset between the plurality of different perspectives.
- 11 . The method of claim 10 , further comprising providing a graphical indication of programmatic progress to register the reference array with the camera tracking system based on how many of determinations of the arithmetic combinations of the sets of coordinates for the physical objects for different perspectives between the tracking cameras and the reference array have been performed relative to the threshold number of different perspectives having at least the threshold angular offset.
- 12 . The method of claim 1 , further comprising determining a six degree of freedom pose of the reference array based on comparison of the coordinates of the physical objects determined in the video frames of the streams to a nominal array template defining coordinates of the physical objects; retrieving from memory a three dimensional model of an instrument connected to the reference array, the three dimensional model defining coordinates of an optically identifiable feature of the instrument relative to the physical objects of the reference array; identifying regions of interest within the video frames enclosing the physical objects of the reference array and the optically identifiable feature, based on the six degree of freedom pose of the reference array and the three dimensional model of the instrument; for the optically identifiable feature and the physical objects which are imaged in the sequence of the video frames, determining the sets of coordinates for the optically identifiable feature and the physical objects over the sequence of the video frames based on locations of the regions of interest within the video frames in the sequence; for the optically identifiable feature and the physical objects, generating the arithmetic combinations of the sets of coordinates for the optically identifiable feature and the physical objects; generating the array template identifying coordinates of the optically identifiable feature and the physical objects based on the arithmetic combinations of the sets of coordinates for the optically identifiable feature and the physical objects; and tracking pose of the optically identifiable feature and the physical objects over time based on comparison of the array template to the reference array imaged in the streams of video frames.
- 13 . The method of claim 12 , further comprising generating a set of rotated three dimensional models of the instrument connected to the reference array, each of the rotated three dimensional models being generated based on rotation of the three dimensional model retrieved from memory to correspond to different ones of a plurality of perspectives between the tracking cameras and the reference array, wherein the identification of the regions of interest is repeated for each of the streams of video frames using the one of the rotated three dimensional models corresponding to the perspective between the tracking camera from which the stream is received and the reference array.
- 14 . The method of claim 12 , further comprising for each of the optically identifiable feature and the physical objects, generating the arithmetic combinations of the sets of coordinates for the optically identifiable feature and the physical objects based on averaging the coordinates of the set for the one of the optically identifiable feature and the physical objects.
- 15 . The method of claim 14 , further comprising: ceasing generation of the array template and beginning use of the array template to track pose of the optically identifiable feature and the physical objects over time when a standard deviation computed from at least one of the averages of the coordinates of the sets for the optically identifiable feature and the physical objects satisfies a defined accuracy rule.
- 16 . The method of claim 12 , further comprising outputting an estimate of how accurately the camera tracking system tracks pose of the optically identifiable feature and the reference array.
- 17 . The method of claim 1 , further comprising: tracking pose of an instrument attached to the reference array based on the pose of the physical objects of the reference array and a three dimensional model of the instrument connected to the reference array; and generating steering information based on comparison of the pose of the instrument to a planned pose of the instrument, wherein the steering information indicates where the instrument needs to be moved and angularly oriented to become aligned with the planned pose when performing a surgical procedure.
- 18 . The method of claim 1 , further comprising: generating a safety notification to a user based on a determination of at least a threshold deviation between the array template identifying coordinates of the physical objects and coordinates of the physical objects that are determined in the video frames.
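The safety check of claim 18 amounts to comparing the registered template against the fiducial coordinates currently observed in the video frames and notifying the user when they diverge. A minimal sketch follows; the `(N, 3)` coordinate layout, the assumption that both point sets are already expressed in a common frame (i.e. that pose alignment has been done by the tracker), and the `threshold_mm` value are hypothetical choices, not taken from the claims.

```python
import numpy as np

def check_template_deviation(template, observed, threshold_mm=0.5):
    """Return (notify, worst_deviation) for a claim-18-style safety check.

    template, observed: (N, 3) arrays of fiducial coordinates in the
    same coordinate frame; threshold_mm is an assumed accuracy bound.
    """
    deviations = np.linalg.norm(observed - template, axis=1)  # per-fiducial error
    return bool(np.any(deviations >= threshold_mm)), float(deviations.max())
```

In practice the notification branch would drive a warning in the navigation UI; here the boolean stands in for that side effect.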
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 18/357,470 filed on Jul. 24, 2023, which is a continuation of U.S. patent application Ser. No. 17/009,841 filed on Sep. 2, 2020, all of which are incorporated by reference in their entireties herein for all purposes.

FIELD

The present disclosure relates to medical devices and systems, and more particularly, to camera tracking systems used for computer assisted navigation during surgery.

BACKGROUND

Computer assisted navigation during surgery can provide a surgeon with computerized visualization of the present pose of a surgical tool relative to medical images of a patient's anatomy. Camera tracking systems for computer assisted navigation use one or more stereo camera systems to track a set of fiducials attached to a surgical tool which is being positioned by a surgeon or other user during surgery. The set of fiducials, also referred to as a dynamic reference array, allows the camera tracking system to determine a pose of the surgical tool relative to anatomical structure within a medical image and relative to a patient for display to the surgeon. The surgeon can thereby use the real-time pose feedback to navigate the surgical tool during a surgical procedure. Navigated surgery procedures using existing navigation systems are prone to events triggering intermittent pauses when tracked objects are moved outside a tracking area of the camera system or become obstructed from camera view by intervening personnel and/or equipment. There is also a need to improve the tracking accuracy of navigation systems.

SUMMARY

Some embodiments are directed to a camera tracking system for computer assisted navigation during surgery. The camera tracking system includes a processor operative to receive streams of video frames from tracking cameras which image a plurality of physical objects (fiducials) arranged as a reference array.
For each of the physical objects which is imaged in a sequence of the video frames, the processor determines a set of coordinates for the physical object over the sequence of the video frames. For each of the physical objects, the processor generates an arithmetic combination of the set of coordinates for the physical object. The processor generates an array template identifying coordinates of the physical objects based on the arithmetic combinations of the sets of coordinates for the physical objects, and tracks pose of the physical objects of the reference array over time based on comparison of the array template to the reference array imaged in the streams of video frames.

In some further embodiments, the processor generates the arithmetic combination of the set of coordinates for the physical object based on averaging the coordinates of the set for the physical object. The processor may be operative to cease generation of the array template and begin using the array template to track pose of the physical objects of the reference array when a standard deviation computed from at least one of the averages of the coordinates of the sets for the physical objects satisfies a defined accuracy rule.

Some other embodiments are directed to a surgical system including a camera tracking system having a camera bar, first and second tracking cameras attached at spaced apart locations on the camera bar, and a processor.
The processor is operative to receive streams of video frames from the first and second tracking cameras which image a plurality of physical objects arranged as a reference array; for each of the physical objects which is imaged in a sequence of the video frames, determine a set of coordinates for the physical object over the sequence of the video frames; for each of the physical objects, generate an arithmetic combination of the set of coordinates for the physical object; generate an array template identifying coordinates of the physical objects based on the arithmetic combinations of the sets of coordinates for the physical objects; and track pose of the physical objects of the reference array over time based on comparison of the array template to the reference array imaged in the streams of video frames.

In some further embodiments, the processor tracks pose of an instrument attached to the reference array based on the pose of the physical objects of the reference array and a three dimensional model of the instrument connected to the reference array. The processor generates steering information based on comparison of the pose of the instrument to a planned pose of the instrument. The steering information indicates where the instrument needs to be moved and angularly oriented to become aligned with the planned pose when performing a surgical procedure.

Other camera tracking systems and surgical systems will be or become apparent to one with skill in the art upon review of the following drawings and detailed description. It is intended that all such camera tracking systems and surgical systems be included within this description and be protected by the accompanying claims.
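The steering information of claim 17 and the summary above (how far the instrument must move and rotate to reach its planned pose) can be illustrated with a short pose-difference computation. This is a sketch under assumptions: poses are represented here as 4x4 homogeneous transforms and the angular offset is recovered from the rotation trace, while the patent leaves the pose parameterization and the steering-output format open.

```python
import numpy as np

def steering_info(current_pose, planned_pose):
    """Compute steering information from a tracked instrument pose.

    current_pose, planned_pose: 4x4 homogeneous transforms (assumed
    representation). Returns the translation the instrument must make,
    expressed in the current instrument frame, and the remaining
    angular offset in degrees.
    """
    # Relative transform taking the current pose to the planned pose.
    delta = np.linalg.inv(current_pose) @ planned_pose
    translation = delta[:3, 3]  # required movement (same units as the poses)
    # Rotation angle from the trace of the 3x3 rotation block:
    # trace(R) = 1 + 2*cos(angle) for a rotation matrix R.
    cos_angle = (np.trace(delta[:3, :3]) - 1.0) / 2.0
    angle_deg = float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
    return translation, angle_deg
```

A navigation display would render `translation` as a directional cue and `angle_deg` as the remaining reorientation, updating both each frame as the tracked pose changes.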