US-12626422-B2 - Systems and methods for clinical workspace simulation
Abstract
A computer-implemented method for clinical workspace simulation includes capturing a real-world environment by an imaging device of an augmented reality headset and generating a composite view by rendering a first virtual object relative to a surgical table in the real-world environment. The captured real-world environment and the rendered first virtual object are combined in the composite view, which is displayed on a display of the augmented reality headset worn by a user.
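For orientation, below is a minimal sketch of the compositing step summarized in the abstract, assuming the headset supplies an RGB camera frame and a pre-rendered RGBA layer containing the virtual object; the function and variable names are illustrative and are not taken from the disclosure.

```python
import numpy as np

def compose_view(camera_frame: np.ndarray, virtual_layer: np.ndarray) -> np.ndarray:
    """Blend a rendered virtual-object layer (RGBA) over a captured camera frame (RGB).

    camera_frame:  H x W x 3 uint8 image from the headset's imaging device.
    virtual_layer: H x W x 4 uint8 render of the virtual object(s); the alpha channel
                   marks where virtual content should occlude the real scene.
    """
    alpha = virtual_layer[..., 3:4].astype(np.float32) / 255.0
    rgb_virtual = virtual_layer[..., :3].astype(np.float32)
    rgb_real = camera_frame.astype(np.float32)
    composite = alpha * rgb_virtual + (1.0 - alpha) * rgb_real
    return composite.astype(np.uint8)
```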
Inventors
- Max L. Balter
- Michael A. Eiden
- William J. Peine
- Unnas W. Hussain
- Justin R. Chen
Assignees
- COVIDIEN LP
Dates
- Publication Date: 2026-05-12
- Application Date: 2024-03-15
Claims (19)
- 1 . A method for setting up a surgical robotic system, the method comprising: capturing a real-world environment including an operating room using an imaging device of an augmented reality headset; detecting a patient in the real-world environment by the imaging device; generating a composite view by: rendering a plurality of robotic arms; determining a plurality of surgical port entry points in the patient in the real-world environment based on the composite view; generating an optimized placement location for each robotic arm of the plurality of robotic arms based on the plurality of surgical port entry points in the patient in the real-world environment; and combining the captured real-world environment and the rendered plurality of robotic arms; and displaying the composite view on a display of the augmented reality headset including the plurality of robotic arms each of which is displayed at its respective optimized placement location.
- 2 . The method according to claim 1 , wherein each robotic arm of the plurality of robotic arms is movable based on user input.
- 3 . The method according to claim 2 , further comprising: analyzing each robotic arm of the plurality of robotic arms for potential collision with one or more other robotic arms of the plurality of robotic arms.
- 4 . The method according to claim 3 , further comprising: automatically adjusting at least one of position or orientation of at least one robotic arm of the plurality of robotic arms in response to detecting the potential collision.
- 5 . The method according to claim 3 , further comprising: providing at least one corrective step for adjusting at least one of position or orientation of at least one robotic arm of the plurality of robotic arms in response to detecting the potential collision.
- 6 . The method according to claim 1 , further comprising: rendering the plurality of surgical port entry points in the patient.
- 7 . The method according to claim 1 , wherein the optimized placement location for each robotic arm of the plurality of robotic arms is based on one surgical port entry point of the plurality of surgical port entry points.
- 8 . An augmented reality headset for setting up a surgical robotic system, the augmented reality headset comprising: an imaging device configured to capture images of a real-world environment; a display configured to display a composite view; a processor; and a memory, including instructions stored thereon, which, when executed by the processor, cause the augmented reality headset to: capture the real-world environment including an operating room using the imaging device of an augmented reality headset; detect a patient in the real-world environment by the imaging device; generate a composite view by: rendering a plurality of robotic arms; determining a plurality of surgical port entry points in the patient in the real-world environment based on the composite view; generating an optimized placement location for each robotic arm of the plurality of robotic arms based on the plurality of surgical port entry points in the patient in the real-world environment; and combining the captured real-world environment and the rendered plurality of robotic arms; and display the composite view on the display of the augmented reality headset including the plurality of robotic arms each of which is displayed at its respective optimized location.
- 9 . The augmented reality headset according to claim 8 , wherein each robotic arm of the plurality of robotic arms is movable based on user input.
- 10 . The augmented reality headset according to claim 9 , wherein the instructions, when executed by the processor, further cause the augmented reality headset to: analyze each robotic arm of the plurality of robotic arms for potential collision with one or more other robotic arms of the plurality of robotic arms.
- 11 . The augmented reality headset according to claim 10 , wherein the instructions, when executed by the processor, further cause the augmented reality headset to: automatically adjust at least one of position or orientation of at least one robotic arm of the plurality of robotic arms in response to detecting the potential collision.
- 12 . The augmented reality headset according to claim 10 , wherein the instructions, when executed by the processor, further cause the augmented reality headset to: provide at least one corrective step for adjusting at least one of position or orientation of at least one robotic arm of the plurality of robotic arms in response to detecting the potential collision.
- 13 . The augmented reality headset according to claim 8 , wherein the instructions, when executed by the processor, further cause the augmented reality headset to: render the plurality of surgical port entry points in the patient.
- 14 . The augmented reality headset according to claim 8 , wherein the optimized placement location for each robotic arm of the plurality of robotic arms is based on one surgical port entry point of the plurality of surgical port entry points.
- 15 . A surgical augmented reality generator comprising non-transitory computer-readable medium storing instructions which, when executed by a processor, cause the processor to perform a method comprising: capturing a real-world environment including an operating room using an imaging device of an augmented reality headset; detecting a patient in the real-world environment by the imaging device; generating a composite view by: rendering a plurality of robotic arms; determining a plurality of surgical port entry points in the patient in the real-world environment based on the composite view; generating an optimized placement location for each robotic arm of the plurality of robotic arms based on the plurality of surgical port entry points in the patient in the real-world environment; and combining the captured real-world environment and the rendered plurality of robotic arms; and displaying the composite view on a display of the augmented reality headset including the plurality of robotic arms each of which is displayed at its respective optimized location.
- 16 . The surgical augmented reality generator according to claim 15 , wherein each robotic arm of the plurality of robotic arms is movable based on user input.
- 17 . The surgical augmented reality generator according to claim 16 , wherein the instructions, when executed by the processor, further cause the processor to: analyze each robotic arm of the plurality of robotic arms for potential collision with one or more other robotic arms of the plurality of robotic arms.
- 18 . The surgical augmented reality generator according to claim 17 , wherein the instructions, when executed by the processor, further cause the processor to: automatically adjust at least one of position or orientation of at least one robotic arm of the plurality of robotic arms in response to detecting the potential collision.
- 19 . The surgical augmented reality generator according to claim 17 , wherein the instructions, when executed by the processor, further cause the processor to: provide at least one corrective step for adjusting at least one of position or orientation of at least one robotic arm of the plurality of robotic arms in response to detecting the potential collision.
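The claims do not recite a particular optimization algorithm. As one illustrative reading of the "optimized placement location" step in claims 1, 8, and 15, the sketch below assigns each robotic arm's base location to a surgical port entry point by exhaustive search over candidate assignments; the candidate-base representation and the reach-distance cost are assumptions, not the patented method.

```python
import numpy as np
from itertools import permutations

def optimize_arm_placement(port_points: np.ndarray, candidate_bases: np.ndarray) -> dict[int, int]:
    """Assign one candidate base location to each surgical port entry point.

    port_points:     N x 3 array of port entry points on the patient (headset frame).
    candidate_bases: M x 3 array of candidate robotic-arm base locations (M >= N).
    Returns a mapping {port_index: base_index} minimizing total reach distance.
    Exhaustive search; intended only for the small arm/port counts of a setup simulation.
    """
    assert len(candidate_bases) >= len(port_points), "need at least one base per port"
    n = len(port_points)
    best_cost, best_assign = float("inf"), None
    for combo in permutations(range(len(candidate_bases)), n):
        cost = sum(np.linalg.norm(port_points[i] - candidate_bases[b]) for i, b in enumerate(combo))
        if cost < best_cost:
            best_cost, best_assign = cost, combo
    return {i: b for i, b in enumerate(best_assign)}
```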
Description
CROSS-REFERENCE TO RELATED APPLICATION
This application is a continuation of U.S. patent application Ser. No. 17/735,604, filed on May 3, 2022, which claims the benefit of and priority to U.S. Provisional Patent Application No. 63/194,211, filed on May 28, 2021. The entire disclosures of the foregoing applications are incorporated by reference herein.
BACKGROUND
Technical Field
The disclosure generally relates to systems and methods for clinical workspace simulations. In particular, the present disclosure is directed to a virtual or augmented reality simulated setup of surgical robotic systems.
Background of Related Art
Surgical robotic systems are currently used in minimally invasive medical procedures. Some surgical robotic systems include a surgical console controlling a surgical robotic arm and a surgical instrument having an end effector (e.g., forceps or a grasping instrument) coupled to and actuated by the robotic arm. In operation, the robotic arm is moved to a position over a patient and then guides the surgical instrument into a small incision, via a surgical port or a natural orifice of the patient, to position the end effector at a worksite within the patient's body. The setup process for robotic surgical systems can be lengthy and may not account for potential collisions between robotic arms during surgery. Thus, there is a need for systems that determine initial placement of robotic system components.
SUMMARY
In accordance with aspects of the disclosure, a computer-implemented method for clinical workspace simulation is presented. The method includes capturing a real-world environment by an imaging device of an augmented reality headset and generating a composite view. The composite view is generated by rendering a first virtual object relative to a surgical table in the real-world environment and combining the captured real-world environment and the rendered first virtual object. The method further includes displaying the composite view on a display of the augmented reality headset.
In an aspect of the disclosure, the method may further include rendering a second virtual object in the composite view and detecting a potential collision with the second virtual object. In another aspect of the disclosure, the second virtual object may include a virtual robotic arm, the surgical table, a control tower, and/or a console. In yet another aspect of the disclosure, the method may further include displaying, on the display, an indication providing the user with a suggestion for avoiding the potential collision based on the detection of the potential collision.
In a further aspect of the disclosure, the method may further include detecting a patient in the real-world environment by the imaging device, displaying the detected patient on a display of the augmented reality device, determining a surgical port entry point in an abdominal portion of the displayed patient based on the composite view, and rendering the surgical port entry point in the abdominal portion of the displayed patient. In yet a further aspect of the disclosure, the method may further include generating an optimized robotic arm placement location based on the surgical port entry point. In an aspect of the disclosure, the surgical port entry point may be further based on a body habitus of the patient. In yet a further aspect of the disclosure, the method may further include rendering a visual overlay on the patient and/or the first virtual object.
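As a rough illustration of the collision check discussed in the summary (detecting a potential collision between the first virtual object and a second virtual object), the following sketch approximates each virtual object by a bounding sphere and flags near-contact; the object representation, the safety margin, and the printed suggestion are assumptions for illustration only.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class VirtualObject:
    name: str
    center: np.ndarray   # 3D position in the composite-view frame
    radius: float        # bounding-sphere radius approximating the object's extent

def detect_potential_collision(a: VirtualObject, b: VirtualObject, margin: float = 0.05) -> bool:
    """Flag a potential collision when the bounding spheres come within `margin` meters."""
    return float(np.linalg.norm(a.center - b.center)) < a.radius + b.radius + margin

# Example: a virtual robotic arm versus a virtual control tower.
arm = VirtualObject("robotic_arm_1", np.array([0.4, 0.1, 1.2]), radius=0.35)
tower = VirtualObject("control_tower", np.array([0.7, 0.0, 1.3]), radius=0.30)
if detect_potential_collision(arm, tower):
    print("Potential collision: suggest repositioning robotic_arm_1")
```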
In another aspect of the disclosure, the method may further include capturing an arm of a user, displaying the arm of the user, detecting a spatial location of the displayed arm of the user, and determining an interaction between the user and the first virtual object. In yet a further aspect of the disclosure, the method may further include moving the location of the first virtual object in the composite view based on the interaction between the user and the first virtual object.
In accordance with aspects of the disclosure, a system for clinical workspace simulation includes an augmented reality headset including an imaging device configured to capture images of a real-world environment, a display configured to display a composite view, a processor, and a memory. The memory includes instructions stored thereon which, when executed by the processor, cause the system to capture a real-world environment by the imaging device of the augmented reality headset and generate a composite view by rendering a first virtual object relative to a surgical table in the real-world environment and combining the captured real-world environment and the rendered first virtual object. The instructions, when executed by the processor, further cause the system to display the composite view on the display of the augmented reality headset.
In yet another aspect of the disclosure, the instructions, when executed by the processor, may further cause the system to render a second virtual object in the composite view and detect a potential collision with the second virtual object.
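To make the interaction aspect concrete (detecting the spatial location of the user's displayed arm and moving the first virtual object based on that interaction), here is a minimal grab-and-drag sketch; the proximity threshold and the choice to snap the object to the tracked hand position are illustrative assumptions rather than details of the disclosure.

```python
import numpy as np

GRAB_DISTANCE = 0.10  # meters; assumed threshold for "touching" a virtual object

def update_object_position(hand_position: np.ndarray,
                           object_position: np.ndarray,
                           grabbing: bool) -> tuple[np.ndarray, bool]:
    """Move the first virtual object with the user's tracked hand while it is grabbed.

    hand_position:   3D location of the user's tracked hand/arm in the composite view.
    object_position: current 3D location of the virtual object.
    grabbing:        whether the object was already grabbed on a previous frame.
    Returns the new object position and the updated grab state.
    Release handling (letting go of the object) is omitted for brevity.
    """
    if not grabbing and np.linalg.norm(hand_position - object_position) < GRAB_DISTANCE:
        grabbing = True                        # hand is close enough to pick up the object
    if grabbing:
        object_position = hand_position.copy() # object follows the hand while grabbed
    return object_position, grabbing
```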