CN-115281841-B - Robot autonomous cutting method and system oriented to in-vivo flexible dynamic environment
Abstract
The invention provides a robot autonomous cutting method oriented to an in-vivo flexible dynamic environment, and relates to the field of robot autonomous cutting. In the complex dynamic environment inside the body, the method accurately tracks the cutting path and the key tissue area and ensures cutting accuracy. The method comprises: constructing a general linear control system and establishing a state equation; designing a target controller for each target realization requirement in the laparoscopic surgery scene; calculating, for each controller, the target gradient of the accumulated value of its target evaluation function at the current moment; nesting and fusing the target gradient values in order from low to high according to a preset weight function and a weight hierarchy sequence, and adding the fused result to the total control input of the system; and converting the fused movement speed and speed of posture change into joint angles of the surgical robot. The method realizes autonomous cutting by the robot, meets the requirements of multi-target control, ensures the safety, accuracy and efficiency of surgical cutting, and removes the dependence of the traditional cutting mode on the surgeon's experience and operating skill.
Inventors
- LI XIAOJIAN
- XIAO XILIN
- TANG HUA
- OUYANG BO
- FANG JIN
- LI LING
Assignees
- 合肥工业大学 (Hefei University of Technology)
- 上海长征医院 (Shanghai Changzheng Hospital)
Dates
- Publication Date
- 20260505
- Application Date
- 20220704
Claims (9)
- 1. An in-vivo flexible dynamic environment-oriented robot autonomous cutting method, characterized by comprising the following steps: S1, reading a laparoscopic image, acquiring a two-dimensionally marked cutting path and a cutting-target end point on the image frame according to the surgeon's selection, and locating the cutting path and the cutting-target end point in a three-dimensional image; S2, constructing a general linear control system, establishing a state equation, and designing a target controller for each target realization requirement in the laparoscopic surgery scene, the target controllers comprising a planned path tracking controller, a target guiding controller, a cutting depth limiting controller and a collision avoidance controller; S3, establishing a corresponding motion control prediction model and a corresponding target evaluation function for each of the planned path tracking controller, the target guiding controller, the cutting depth limiting controller and the collision avoidance controller, estimating the motion state over a future prediction interval on the basis of the system motion state at the current moment, and calculating the corresponding accumulated value of each target evaluation function; S4, calculating the target gradient of each controller's accumulated target evaluation function at the current moment, nesting and fusing the target gradient values corresponding to the controllers in order from low to high according to a preset weight function and a weight hierarchy sequence, and adding the fused result to the total control input of the system; S5, converting the fused movement speed and the speed of posture change into joint angles of the surgical robot. Step S3 comprises: S31, establishing a corresponding motion control prediction model for each of the planned path tracking controller, the target guiding controller, the cutting depth limiting controller and the collision avoidance controller, wherein each model predicts, for its controller, the motion state at time t in the prediction interval, the movement speed at time t in the prediction interval and the control target at time t in the prediction interval; A, B, C, D are the calculation parameters of the state equation; and each controller's input is a function of the total control output of the system at time t and the desired control target; S32, establishing target evaluation functions for the planned path tracking controller, the target guiding controller, the cutting depth limiting controller and the collision avoidance controller respectively, and, combining the corresponding motion control prediction model at the current moment, estimating the motion state over a future prediction interval of length T on the basis of the system motion state and calculating the corresponding accumulated value of each target evaluation function, wherein the accumulated value of the i-th controller's target evaluation function at the current moment is the sum, over the prediction interval, of the target evaluation function values at the single time points within the interval, each evaluated against the expected value of the control target at time t. Step S4 comprises: S41, calculating, in an optimized manner, the descent gradient of each controller's target evaluation function at the current moment, namely the current-moment descent gradients of the planned path tracking controller, the target guiding controller, the cutting depth limiting controller and the collision avoidance controller; S42, inputting the parameters of each control target and the importance degree of each control target, and sorting the controllers by target importance to determine the priority of each controller, thereby obtaining the weight hierarchy sequence of the target controllers; S43, performing the nested calculation of the fused target gradient values in order from low to high weight level, using a normalization function of the gradients together with the hierarchy parameters, to obtain the target gradient value after the 4 controllers are nested and fused; S44, adding the nested and fused target gradient value, scaled by a scaling factor K, to the fused controllers, thereby realizing the motion fusion of the multiple motion controllers with different targets.
- 2. The robotic autonomous cutting method of claim 1, wherein S1 specifically comprises: S11, reading a laparoscopic image and marking a planned cutting path S, a cutting-target end point O and a critical tissue area A to be avoided on the initial frame image; S12, locating the two-dimensionally marked cutting path S, cutting-target end point O and key tissue area A in the three-dimensional image by a three-dimensional curve tracking method, obtaining a planned trajectory Sd that changes dynamically with the image, a cutting-target end point Od and a key tissue area Ad; S13, setting a cutting depth limiting plane Dd on the plane of the planned trajectory according to the surgeon's selection; S14, measuring a first transformation matrix from the instrument tip to the robot coordinate system with the optical locator, measuring a second transformation matrix from the laparoscopic three-dimensional image to the robot coordinate system from the camera model and the optical locator, and unifying all points into the robot base coordinate system through the first and second coordinate transformation matrices, thereby obtaining the planned trajectory, the cutting-target end point, the key tissue area and the cutting depth limiting plane expressed in the robot base coordinate system.
- 3. The robotic autonomous cutting method of claim 2, wherein S2 comprises: S21, constructing a general linear control system and establishing a state equation, the state equation relating the motion state of the system at time t, the movement speed of the system at time t, the total control input of the system at time t and the total control output of the system at time t, with A, B, C, D being the calculation parameters of the state equation; S22, designing a target controller for each target realization requirement in the laparoscopic surgery scene, the target controllers comprising a planned path tracking controller, a target guiding controller, a cutting depth limiting controller and a collision avoidance controller, wherein each controller's input is a function of the total control output of the system at time t and the desired value of its control target at time t.
- 4. The robotic autonomous cutting method of claim 3, wherein: the planned path tracking controller specifically refers to a PD control law, with proportional and differential coefficients, computing its input from the motion state of the planned path tracking controller at the current moment and the deviation between the expected position on the planned path and the total control output of the system; the target guiding controller specifically refers to a PD control law, with proportional and differential coefficients, computing its input from the position of the cutting-target end point, the speed at which the object approaches the target, and the time in which the controller is expected to complete the cutting task; the cutting depth limiting controller specifically refers to a PD control law, with proportional and differential coefficients, computing its input from the point on the cutting depth limiting plane closest to the total control output, the deviation between the cutting depth limiting plane and the total control output of the system, and the included angle between the corresponding vectors; the collision avoidance controller specifically refers to a PD control law, with proportional and differential coefficients, computing its input from the position of the center point of the key tissue region at time t, the radius of the spherical collision detection area centered on the obstacle center point, the radius of the spherical collision avoidance area centered on the obstacle center point, the distance cd(t) between the total control output of the system and the obstacle center point, and a very small constant.
- 5. The robotic autonomous cutting method of claim 4, wherein in S32: a target evaluation function is established for the trajectory planning tracking control target to evaluate the effectiveness of the planned path tracking controller over the prediction interval, its accumulated value at the current moment being the sum of the single-time-point target evaluation function values, from the current moment to the current moment plus T, of the predicted control output of the planned path tracking target at each time t in the prediction interval; a target evaluation function is established for the target guidance control target to evaluate the effectiveness of the target guiding controller over the prediction interval, defined analogously on the predicted control output of the target guiding target at each time t in the prediction interval; a target evaluation function is established for the cutting depth limiting control target to evaluate the effectiveness of the cutting depth limiting controller over the prediction interval, defined analogously on the predicted control output of the cutting depth limiting target at each time t in the prediction interval; and a target evaluation function is established for the collision avoidance control target to evaluate the effectiveness of the collision avoidance controller over the prediction interval, defined analogously on the predicted control output of the collision avoidance target at each time t in the prediction interval and including a very small constant.
- 6. The robotic autonomous cutting method according to any one of claims 3 to 5, wherein converting the fused movement speed and speed of posture change into the joint angles of the surgical robot in S5 specifically means: constructing, from the fixed RCM point and the X-axis, Y-axis and Z-axis vectors of the pose matrix, the posture of the robot motion; converting the attitude rotation matrix into Euler angles in a Cartesian coordinate system; and mapping the rate of change of the Euler angles to the rotation speeds of the robot joint angles through the Jacobian matrix.
- 7. An in-vivo flexible dynamic environment-oriented robotic autonomous cutting system, configured to perform the robotic autonomous cutting method of any of claims 1-6, comprising: a marking module for reading the laparoscopic image, acquiring a two-dimensionally marked cutting path and a cutting-target end point on the image frame according to the surgeon's selection, and locating the cutting path and the cutting-target end point in the three-dimensional image; a design module for constructing a general linear control system, establishing a state equation, and designing a target controller for each target realization requirement in the laparoscopic surgery scene, the target controllers comprising a planned path tracking controller, a target guiding controller, a cutting depth limiting controller and a collision avoidance controller; a prediction module for establishing a corresponding motion control prediction model and a corresponding target evaluation function for each of the planned path tracking controller, the target guiding controller, the cutting depth limiting controller and the collision avoidance controller, estimating the motion state over a future prediction interval on the basis of the system motion state at the current moment, and calculating the corresponding accumulated value of each target evaluation function; a fusion module for calculating the target gradient of each controller's accumulated target evaluation function at the current moment, nesting and fusing the target gradient values corresponding to the controllers in order from low to high according to a preset weight function and a weight hierarchy sequence, and adding the fused result to the total control input of the system; and a conversion module for converting the fused movement speed and the speed of posture change into joint angles of the surgical robot.
- 8. A storage medium, characterized in that it stores a computer program for robot autonomous cutting oriented to an in-vivo flexible dynamic environment, wherein the computer program causes a computer to execute the robotic autonomous cutting method of any one of claims 1 to 6.
- 9. An electronic device, comprising: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs comprising instructions for performing the robotic autonomous cutting method of any one of claims 1 to 6.
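The nested gradient fusion of steps S43 and S44 in claim 1 can be sketched in code. The patent's actual weight function, normalization function and hierarchy parameters are given by formulas not reproduced in this text, so `norm_fn` and the attenuation scheme below are illustrative assumptions only:

```python
import numpy as np

def fuse_gradients(grads, alphas, k=1.0):
    """Nested fusion of per-controller descent gradients, lowest priority first.

    grads:  list of gradient vectors ordered from lowest to highest priority
    alphas: per-level hierarchy parameters (hypothetical)
    k:      scaling factor applied to the fused result (K in claim 1, S44)
    """
    def norm_fn(g):
        # hypothetical normalization: squashes the higher-priority gradient's
        # magnitude into (0, 1] to attenuate lower-priority contributions
        return 1.0 / (1.0 + np.linalg.norm(g))

    fused = np.asarray(grads[0], dtype=float)
    for g, a in zip(grads[1:], alphas):
        g = np.asarray(g, dtype=float)
        # nest: higher-priority gradient dominates; the previously fused
        # lower-priority term is scaled by the normalization of g
        fused = g + a * norm_fn(g) * fused
    return k * fused
```

Under this scheme a large high-priority gradient (e.g. an imminent collision) naturally dominates the fused control input, while low-priority objectives act only when higher ones are near zero.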
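Step S14 of claim 2 unifies all marked points into the robot base coordinate system through two transformation matrices. A minimal sketch of such a frame change using a 4x4 homogeneous transform (the transforms themselves would come from the optical locator and camera model, which are not specified here):

```python
import numpy as np

def to_base_frame(p_cam, T_cam_to_base):
    """Express a 3-D point given in the laparoscope (camera) frame
    in the robot base frame via a 4x4 homogeneous transform."""
    ph = np.append(np.asarray(p_cam, dtype=float), 1.0)  # homogeneous coords
    return (T_cam_to_base @ ph)[:3]
```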
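The "general linear control system" with state-equation parameters A, B, C, D in claim 3 is conventionally written as x(t+1) = A x(t) + B u(t), y(t) = C x(t) + D u(t); that standard discrete-time form is an assumption in this sketch, since the patent's own equations are not reproduced in the text:

```python
import numpy as np

def step(A, B, C, D, x, u):
    """One step of a discrete-time linear state-space system:
    returns the next state and the current output."""
    x_next = A @ x + B @ u   # state update
    y = C @ x + D @ u        # output equation
    return x_next, y
```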
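Each of the four target controllers in claim 4 is described as a PD law acting on a controller-specific deviation. A minimal PD controller sketch (the gains and time step are illustrative; the patent's coefficients are not reproduced):

```python
class PDController:
    """Simple PD law: u = Kp * e + Kd * de/dt."""

    def __init__(self, kp, kd, dt):
        self.kp, self.kd, self.dt = kp, kd, dt
        self.prev_err = None  # no derivative term on the first call

    def update(self, err):
        d = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.kd * d
```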
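The accumulated target evaluation values of claim 5 sum a per-time-point cost of the predicted control output against the desired target over the prediction interval. A sketch, assuming a quadratic stage cost (the patent's actual evaluation functions are not reproduced in this text):

```python
def accumulated_cost(predicted_outputs, desired, stage_cost):
    """Sum a single-time-point evaluation over the prediction interval,
    comparing each predicted control output with the desired target."""
    return sum(stage_cost(y, desired) for y in predicted_outputs)

# hypothetical quadratic stage cost
quad = lambda y, yd: (y - yd) ** 2
```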
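Claim 6 converts the attitude rotation matrix into Euler angles in a Cartesian coordinate system before mapping Euler-angle rates to joint-angle speeds through the Jacobian. A sketch of the matrix-to-Euler step, assuming the Z-Y-X convention (the patent does not state which convention is used), valid away from the pitch singularity:

```python
import math

def rot_to_euler_zyx(R):
    """Convert a 3x3 rotation matrix R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
    to (roll, pitch, yaw); non-degenerate case only (|pitch| < pi/2)."""
    yaw = math.atan2(R[1][0], R[0][0])
    pitch = math.asin(-R[2][0])
    roll = math.atan2(R[2][1], R[2][2])
    return roll, pitch, yaw
```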
Description
Robot autonomous cutting method and system oriented to in-vivo flexible dynamic environment Technical Field The invention relates to the technical field of robot autonomous cutting, and in particular to a robot autonomous cutting method, system, storage medium and electronic device oriented to an in-vivo flexible dynamic environment. Background In minimally invasive robotic surgery, the surgeon operates a master manipulator and a slave robotic arm executes the surgical cutting task. Taking the da Vinci surgical robot as an example, the device provides a direct-view three-dimensional high-definition field of view in which the surgical field image can be magnified 10 to 15 times; it adopts intuitive control technology that maps the surgeon's motions to the surgical instrument; its sensing system automatically filters tremor, preserving the surgeon's operating stability; and it offers 7 degrees of freedom, exceeding the limits of the human hand. In the surgical cutting task of minimally invasive robotic surgery, the surgeon moves the slave manipulator through the master hand and operates the electrocoagulation switch through a foot pedal, so that the surgical instrument at the manipulator tip completes the cutting task. The prior art proposes an ophthalmic microscope system with a monocular camera for capturing the surgical scene, determining the position of the robot and estimating depth information. Kinematics with a Remote Center of Motion (RCM) is designed for a multi-axis robot to execute the incision path, completing the autonomous cutting task by PID trajectory tracking control. Experimental results on isolated pig eyes show that the autonomous clear corneal incision has a stricter three-plane structure and is closer to an ideal incision than one made by a surgeon.
However, owing to the dynamic complexity of in-vivo scenes and the marked differences in in-vivo environments between individuals, fully autonomous surgery free of human control remains difficult to realize; most existing research explores only semi-autonomy for specific scenes, and in particular the tissue deformation caused by the cutting task affects the accuracy with which the robot tracks target points. Disclosure of the Invention (1) Technical problem to be solved Aiming at the defects of the prior art, the invention provides a robot autonomous cutting method and system, a storage medium and an electronic device oriented to an in-vivo flexible dynamic environment, solving the technical problem that tissue deformation caused by the cutting task affects the accuracy with which the robot tracks a target point. (2) Technical solution To achieve the above purpose, the invention is realized by the following technical scheme: a robot autonomous cutting method oriented to an in-vivo flexible dynamic environment comprises the following steps: S1, reading a laparoscopic image, acquiring a two-dimensionally marked cutting path and a cutting-target end point on the image frame according to the surgeon's selection, and locating the cutting path and the cutting-target end point in a three-dimensional image; S2, constructing a general linear control system, establishing a state equation, and designing a target controller for each target realization requirement in the laparoscopic surgery scene, the target controllers comprising a planned path tracking controller, a target guiding controller, a cutting depth limiting controller and a collision avoidance controller; S3, establishing a corresponding motion control prediction model and a corresponding target evaluation function for each of the planned path tracking controller, the target guiding controller, the cutting depth limiting controller and the collision avoidance controller, estimating the motion state over a future prediction interval on the basis of the system motion state at the current moment, and calculating the corresponding accumulated value of each target evaluation function; S4, calculating the target gradient of each controller's accumulated target evaluation function at the current moment, nesting and fusing the target gradient values corresponding to the controllers in order from low to high according to a preset weight function and a weight hierarchy sequence, and adding the fused result to the total control input of the system; S5, converting the fused movement speed and the speed of posture change into joint angles of the surgical robot. Preferably, S1 specifically includes: S11, reading a laparoscopic image and marking a planned cutting path S, a cutting-target end point O and a critical tissue area A to be avoided on the initial frame image; S12, positioning a two-dimensional marked cutting path S, an end point O of a cutting target an