CN-121987357-A - Obstacle avoidance techniques for surgical navigation
Abstract
The present disclosure relates to obstacle avoidance techniques for surgical navigation. Systems and methods are described herein in which a locator is configured to detect a position of a first object and a vision device is configured to generate a depth map of a surface in proximity to the first object. A virtual model corresponding to the first object is accessed and a positional relationship between the locator and the visual device in a common coordinate system is identified. An expected depth map of the visual device is then generated based on the detected position of the first object, the virtual model, and the positional relationship. A portion of the actual depth map that fails to match the expected depth map is identified, and a second object is identified based on the identified portion.
Inventors
- DW Waldemar Malak Butterworth base
- J Bowes
- R. T. DeLuca
Assignees
- Stryker Corporation
Dates
- Publication Date: 2026-05-08
- Application Date: 2020-07-02
- Priority Date: 2019-07-03
Claims (20)
- 1. A surgical navigation system, comprising: a locator configured to detect a position of a surgical object in a coordinate system of the locator; a vision device configured to generate an actual depth map of the surgical object and a surface in the vicinity of the surgical object in a coordinate system of the vision device; and a controller coupled to the locator and the vision device, the controller configured to: determine a positional relationship between the locator and the vision device; access a virtual model corresponding to the surgical object; identify a position of the virtual model in the coordinate system of the vision device based on the position of the surgical object detected by the locator and the positional relationship between the locator and the vision device; and crop the actual depth map generated by the vision device based on the identified position of the virtual model in the coordinate system of the vision device.
- 2. The surgical navigation system of claim 1, wherein the controller is configured to crop the actual depth map by removing an area of the actual depth map that is greater than a threshold distance from the position of the virtual model in the coordinate system of the vision device.
- 3. The surgical navigation system of claim 1, wherein the controller is configured to crop the actual depth map by: centering a shape on the identified position of the virtual model in the actual depth map; and removing areas of the actual depth map outside the shape.
- 4. The surgical navigation system of claim 3, wherein the shape is user-selected.
- 5. The surgical navigation system of claim 3, wherein the shape is procedure-specific.
- 6. The surgical navigation system of claim 1, wherein the controller clips the actual depth map to a region of interest (ROI) for a surgical procedure.
- 7. The surgical navigation system of claim 1, wherein the surgical object is a target site to be treated.
- 8. The surgical navigation system of claim 7, wherein the controller is configured to generate a virtual boundary based on a position of the virtual model in a coordinate system of the visual device, the virtual boundary defining a constraint associated with treatment of the target site.
- 9. The surgical navigation system of claim 8, further comprising a robotic manipulator configured to support and move a surgical tool for treating the target site, wherein the robotic manipulator is controlled to limit movement of the surgical tool relative to the virtual boundary.
- 10. The surgical navigation system of claim 1, wherein the surgical object is a surgical instrument.
- 11. The surgical navigation system of claim 1, wherein the controller is configured to crop the actual depth map to reduce computations involved in processing the actual depth map.
- 12. The surgical navigation system of claim 1, wherein the controller is configured to obtain successive actual depth maps from the vision device and compare the successive actual depth maps to track movement of the surgical object.
- 13. The surgical navigation system of claim 12, wherein the controller is configured to: generate a virtual boundary for the surface in the vicinity of the surgical object in the coordinate system of the vision device; compare the successive actual depth maps to track movement of the surface in the vicinity of the surgical object; and update the virtual boundary based on the tracked movement of the surface.
- 14. The surgical navigation system of claim 12, wherein the controller is configured to detect an inconsistency between the successive actual depth maps and tracking data from the locator to identify that the locator has moved.
- 15. The surgical navigation system of claim 1, wherein the controller is configured to reconstruct a surface topography of the surgical object from the actual depth map.
- 16. The surgical navigation system of claim 1, wherein a tracker is rigidly coupled to the surgical object, and the controller is configured to: detect, via the locator, a position of the tracker in the coordinate system of the locator; and identify a position of the virtual model in the coordinate system of the locator based on the detected position of the tracker and a fixed positional relationship between the tracker and the surgical object.
- 17. The surgical navigation system of claim 1, wherein, to determine the positional relationship between the locator and the vision device, the controller is configured to: project a light pattern onto a surface visible to both the locator and the vision device; identify a position of the projected light pattern in the coordinate system of the locator; identify a position of the projected light pattern in the coordinate system of the vision device; and determine the positional relationship based on the position of the projected light pattern in the coordinate system of the locator and the position of the projected light pattern in the coordinate system of the vision device.
- 18. The surgical navigation system of claim 1, wherein: the locator is configured to operate in a first spectral band; and the vision device is configured to operate in a second spectral band different from the first spectral band.
- 19. The surgical navigation system of claim 1, wherein the vision device generates an actual depth map by being configured to project a structured light pattern onto the surgical object and a surface in proximity to the surgical object and to detect distortion of the structured light pattern.
- 20. The surgical navigation system of claim 1, wherein the vision device is configured to generate the actual depth map using time-of-flight measurements of invisible light.
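The depth-map cropping recited in claims 1-6 can be illustrated with a short sketch. This is illustrative only, not the patented implementation: it assumes NumPy, a depth map already expressed as a range image in the vision device's coordinate system, and hypothetical parameter names (`half_size_px`, `depth_threshold`) standing in for the claimed "shape" (claim 3) and "threshold distance" (claim 2).

```python
import numpy as np

def crop_depth_map(depth_map, model_center_px, model_center_depth,
                   half_size_px=64, depth_threshold=0.10):
    """Crop an actual depth map around the projected virtual-model position.

    depth_map:          H x W array of range values in meters.
    model_center_px:    (row, col) of the virtual model's center projected
                        into the vision device's image plane.
    model_center_depth: expected range (m) of the model center.

    Pixels outside a square window centered on the model (claim 3), or
    farther than depth_threshold from the model's expected depth
    (claim 2), are discarded (set to NaN).
    """
    r, c = model_center_px
    h, w = depth_map.shape
    r0, r1 = max(0, r - half_size_px), min(h, r + half_size_px)
    c0, c1 = max(0, c - half_size_px), min(w, c + half_size_px)

    cropped = np.full_like(depth_map, np.nan)
    window = depth_map[r0:r1, c0:c1]
    keep = np.abs(window - model_center_depth) <= depth_threshold
    cropped[r0:r1, c0:c1] = np.where(keep, window, np.nan)
    return cropped
```

Discarding far-field pixels in this way is one concrete reading of claim 11's goal of reducing the computation involved in processing the actual depth map.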
Description
Obstacle avoidance techniques for surgical navigation

The present application is a divisional application of application No. 202080054277.6, entitled "Obstacle avoidance technique for surgical navigation," having an application date of July 2, 2020.

Cross Reference to Related Applications

The present application claims priority to and the benefit of U.S. provisional patent application No. 62/870,284, filed on July 3, 2019, the contents of which are hereby incorporated by reference in their entirety.

Technical Field

The present disclosure relates generally to surgical navigation systems.

Background

Surgical navigation systems facilitate positioning of a surgical instrument relative to a target volume of patient tissue for treatment. During a surgical procedure, the target volume to be treated is often located near sensitive anatomical structures and surgical tools that should be avoided. Tracking these adjacent anatomical structures using attached trackers is often difficult due to the flexible nature of the structures. Furthermore, attaching a tracker to each object adjacent to the target volume crowds the surgical workspace and increases the cost and complexity of the surgical navigation system.
Disclosure of Invention

In a first aspect, a navigation system is provided that includes a locator configured to detect a position of a first object, a vision device configured to generate an actual depth map of a surface in proximity to the first object, and a controller coupled to the locator and the vision device, the controller configured to: access a virtual model corresponding to the first object; identify a positional relationship between the locator and the vision device in a common coordinate system; generate an expected depth map for the vision device based on the detected position of the first object, the virtual model, and the positional relationship; identify a portion of the actual depth map that fails to match the expected depth map; and identify a second object based on the identified portion.

In a second aspect, a robotic manipulator is utilized with the navigation system of the first aspect, wherein the robotic manipulator supports a surgical tool and includes a plurality of links and a plurality of actuators configured to move the links to move the surgical tool, and wherein the robotic manipulator is controlled to avoid the second object.
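Identifying the positional relationship between the locator and the vision device in a common coordinate system can be sketched as a rigid-point-set registration, for instance using features of a projected light pattern seen by both devices as in claim 17. The sketch below is an assumption-laden illustration, not the patented method: it uses the well-known Kabsch/SVD algorithm, NumPy, and a hypothetical function name.

```python
import numpy as np

def register_locator_to_vision(pts_locator, pts_vision):
    """Estimate the rigid transform (R, t) mapping locator coordinates to
    vision-device coordinates from N >= 3 corresponding 3-D points (N x 3
    arrays), e.g. features of a light pattern projected onto a surface
    visible to both devices. Kabsch/SVD method."""
    ca = pts_locator.mean(axis=0)
    cb = pts_vision.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (pts_locator - ca).T @ (pts_vision - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t  # x_vision = R @ x_locator + t
```

With (R, t) in hand, a position detected by the locator can be expressed in the vision device's coordinate system, which is what allows an expected depth map to be rendered from the virtual model's tracked pose.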
In a third aspect, a method of operating a navigation system is provided, the navigation system comprising a locator configured to detect a position of a first object, a vision device configured to generate an actual depth map of a surface in proximity to the first object, and a controller coupled to the locator and the vision device, the method comprising: accessing a virtual model corresponding to the first object; identifying a positional relationship between the locator and the vision device in a common coordinate system; generating an expected depth map for the vision device based on the detected position of the first object, the virtual model, and the positional relationship; identifying a portion of the actual depth map that fails to match the expected depth map; and identifying a second object based on the identified portion.

In a fourth aspect, a computer program product is provided comprising a non-transitory computer-readable medium having instructions stored thereon, which when executed by one or more processors are configured to implement the method of the third aspect.

According to one implementation of any of the above aspects, the locator may be an optical locator configured to detect an optical feature associated with the first object, an electromagnetic locator configured to detect an electromagnetic feature associated with the first object, an ultrasound locator configured to detect the first object with or without any tracker, an inertial locator configured to detect an inertial feature associated with the first object, or any combination of the foregoing.
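The core comparison of the third aspect — flag the portion of the actual depth map that fails to match the expected depth map, and treat it as a second (untracked) object — can be sketched as follows. This is a minimal illustration assuming NumPy, an already-rendered expected range image, and hypothetical parameter names (`tol`, `min_pixels`); a real system would render the expected map from the virtual model's tracked pose and use more robust segmentation.

```python
import numpy as np

def find_unexpected_object(actual, expected, tol=0.02, min_pixels=50):
    """Flag depth-map regions that fail to match the expected depth map.

    actual, expected: H x W range images (meters) in the vision device's
    coordinate system; `expected` is rendered from the tracked pose of
    the first object's virtual model. Pixels whose actual range deviates
    from the expected range by more than `tol` form the candidate mask
    for a second object; small speckle is rejected via `min_pixels`.
    Returns a bounding box (r0, c0, r1, c1) of the mismatching portion,
    or None if no second object is detected.
    """
    mismatch = np.abs(actual - expected) > tol
    if mismatch.sum() < min_pixels:
        return None  # actual map matches expectations; nothing to avoid
    rows, cols = np.nonzero(mismatch)
    return (int(rows.min()), int(cols.min()), int(rows.max()), int(cols.max()))
```

In the second aspect, a region returned by such a comparison would then be fed to the robotic manipulator's motion control as an obstacle to avoid.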
According to one implementation of any of the above aspects, the first object may be any of the anatomy or bone of a patient, equipment in an operating room such as, but not limited to, a robotic manipulator, a hand-held instrument, an end effector or tool attached to the robotic manipulator, a mobile cart, an operating table on which a patient may be placed, an imaging system, a retractor, or any combination of the foregoing. According to one implementation of any of the above aspects, the vision device is coupled to any of the locator, a unit separate from the locator, a camera unit of the navigation system, an adjustable arm, the robotic manipulator, an end effector, a hand tool, a surgical boom system such as a ceiling-mounted boom, a limb fixation device, or any combination of the foregoing. According to one implementation of any of the above aspects, the surface in the vicinity of the first object may be a surface adjacent to the first object, a surface spaced a distance from