Search

CN-122004931-A - Visual servo-based ultrasonic positioning method, system and storage medium

CN122004931ACN 122004931 ACN122004931 ACN 122004931ACN-122004931-A

Abstract

The invention provides an ultrasonic positioning method, an ultrasonic positioning system and a storage medium based on visual servo, wherein the method comprises the steps of acquiring RGB images and depth images at the current moment, contact force data at the current moment and the current actual pose of an ultrasonic probe; the method comprises the steps of obtaining a current target pose of an ultrasonic probe through a pre-trained multi-mode feature fusion pose estimation model based on an RGB image and a depth image at the current moment, generating a basic pose correction error of the ultrasonic probe according to pose errors of the current actual pose and the current target pose, calculating impedance compensation according to contact force data at the current moment and a preset reference contact force threshold value, and generating a target correction instruction according to the basic pose correction error to control the ultrasonic probe to reach the target pose. The invention effectively solves the technical problems of low positioning precision and low efficiency of the ultrasonic probe in the prior art.

Inventors

  • CHEN PENG
  • LIANG CHEN
  • CHEN XINYUAN
  • CHEN KUI
  • ZHANG ZHEMING
  • FAN PEIHUA
  • ZHANG BO

Assignees

  • 无锡艾米特智能医疗科技有限公司

Dates

Publication Date
20260512
Application Date
20251223

Claims (10)

  1. 1. An ultrasonic positioning method based on visual servoing, comprising: acquiring an RGB image and a depth image at the current moment, and acquiring contact force data at the current moment, wherein the current actual pose of an ultrasonic probe; Based on the RGB image and the depth image at the current moment, obtaining the current target pose of the ultrasonic probe through a pre-trained multi-mode feature fusion pose estimation model; Generating a basic pose correction error of the ultrasonic probe according to the pose errors of the current actual pose and the current target pose; And calculating impedance compensation according to the contact force data at the current moment and a preset reference contact force threshold value, and generating a target correction instruction by combining the basic pose correction error so as to control the ultrasonic probe to reach the target pose.
  2. 2. The visual servoing-based ultrasound positioning method of claim 1 wherein, The multi-mode feature fusion pose estimation model adopts a double-flow neural network structure; Wherein the RGB branch adopts a structure of removing a full connection layer ResNet-18, comprises a convolution layer, a pooling layer and four residual blocks, and outputs 512-dimensional feature vectors 。 The depth branch adopts PointNet ++ structure, which comprises a sampling layer, a grouping layer and a multi-layer perception layer, and outputs 512-dimensional feature vectors ; A feature fusion layer for fusing the feature vectors And the feature vector The formula is: Wherein, the , Is a learnable parameter; fused feature vectors After the position and pose regression is processed by position and pose regression, position translation vectors are respectively output And attitude rotation quaternion vector And estimating confidence ; The loss function of the pose regression processing is as follows: Wherein, the As a result of the true position, In order to be in a true posture, 、 、 Is a weight coefficient.
  3. 3. The visual servo-based ultrasonic positioning method according to claim 2, wherein the formula for generating the basic pose correction error of the ultrasonic probe is: Wherein, the As a result of the location of the object, For the pose of the object to be aimed, As the current location is to be determined, For the current pose to be the one, The representation takes the imaginary part of the quaternion. As an alternative embodiment, the formula for calculating the impedance compensation is: Wherein, the For the preset reference contact force threshold value, As a Z-axis measurement of the contact force at the current moment, In order to provide a difference in the contact force, As a function of the force gain factor, , Compensating for the speed of the impedance.
  4. 4. A visual servoing-based ultrasound positioning method according to claim 3, wherein said formula for calculating impedance compensation is: Wherein, the For the preset reference contact force threshold value, As a Z-axis measurement of the contact force at the current moment, In order to provide a difference in the contact force, As a function of the force gain factor, , Compensating for the speed of the impedance.
  5. 5. The visual servo-based ultrasound positioning method of claim 4, wherein the generating a target correction command by combining the basic pose correction error comprises the following formula: Wherein, the Is the pseudo-inverse of the jacobian matrix of the mechanical arm, In the form of a proportional gain matrix, To control the velocity vector.
  6. 6. The visual servoing-based ultrasound positioning method of claim 5 and further comprising: the visual servo-based ultrasonic positioning method further comprises the following steps: Detecting whether the contact force at the current moment exceeds a preset safety threshold, if yes, starting a gradient rollback mechanism, and linearly damping the control speed according to the proportion of the contact force difference value, wherein the formula is as follows: Wherein, the A start threshold value for a gradient rollback mechanism; and monitoring whether the linear speed and the angular speed in the control speed vector exceed a preset threshold value, performing amplitude limiting processing on the exceeded speed components, and synchronously monitoring acceleration change.
  7. 7. The method of claim 6, further comprising reserving a last time target pose if the estimated confidence level is below a preset threshold.
  8. 8. An ultrasonic positioning system based on visual servoing, comprising: the acquisition module is used for acquiring the RGB image and the depth image at the current moment, the contact force data at the current moment and the current actual pose of the ultrasonic probe; The estimation module is used for obtaining the current target pose of the ultrasonic probe through a pre-trained multi-mode feature fusion pose estimation model based on the RGB image and the depth image at the current moment; the basic module is used for generating a basic pose correction error of the ultrasonic probe according to the pose errors of the current actual pose and the current target pose; and the compensation module is used for calculating impedance compensation according to the contact force data at the current moment and a preset reference contact force threshold value, and generating a target correction instruction by combining the basic pose correction error so as to control the ultrasonic probe to reach the target pose.
  9. 9. A computer device comprising a memory and a processor, said memory and said processor being communicatively coupled to each other, said memory having stored therein computer instructions, said processor implementing the visual servoing based ultrasound positioning method of any of claims 1-7 by executing said computer instructions.
  10. 10. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the visual servo based ultrasound positioning method according to any of claims 1-7.

Description

Visual servo-based ultrasonic positioning method, system and storage medium Technical Field The invention relates to the field of medical control, in particular to an ultrasonic positioning method, an ultrasonic positioning system and a storage medium based on visual servoing. Background Position-based visual servoing (Position-Based Visual Servoing, PBVS) is a technique for controlling the motion of a robot by visual information, and the core is to convert image features into three-dimensional spatial information, thereby controlling the end effector of the robot to achieve a target pose. The inventors have found that conventional position-based visual servoing typically relies on three-dimensional reconstruction of image feature points for localization in medical robotic applications. However, the surface of the soft tissue of the human body lacks stable physical characteristics such as bone edges and rigid edges, and tissues such as skin, muscle and the like of the contact area of the ultrasonic probe are easy to deform when being stressed, so that the characteristic points drift, and the positioning accuracy is affected. The above problems are to be solved. Disclosure of Invention In view of the above, the invention provides an ultrasonic positioning method, an ultrasonic positioning system and a storage medium based on visual servoing, so as to solve the technical problems of low positioning precision and low efficiency of an ultrasonic probe in the prior art. In a first aspect, the present invention provides an ultrasonic positioning method based on visual servoing, comprising: acquiring an RGB image and a depth image at the current moment, and acquiring contact force data at the current moment, wherein the current actual pose of an ultrasonic probe; Based on the RGB image and the depth image at the current moment, obtaining the current target pose of the ultrasonic probe through a pre-trained multi-mode feature fusion pose estimation model; Generating a basic pose correction error of the ultrasonic probe according to the pose errors of the current actual pose and the current target pose; And calculating impedance compensation according to the contact force data at the current moment and a preset reference contact force threshold value, and generating a target correction instruction by combining the basic pose correction error so as to control the ultrasonic probe to reach the target pose. As an optional implementation manner, the multi-mode feature fusion pose estimation model adopts a double-flow neural network structure; Wherein the RGB branch adopts a structure of removing a full connection layer ResNet-18, comprises a convolution layer, a pooling layer and four residual blocks, and outputs 512-dimensional feature vectors 。 The depth branch adopts PointNet ++ structure, which comprises a sampling layer, a grouping layer and a multi-layer perception layer, and outputs 512-dimensional feature vectors; A feature fusion layer for fusing the feature vectorsAnd the feature vectorThe formula is: Wherein, the ,Is a learnable parameter; fused feature vectors After the position and pose regression is processed by position and pose regression, position translation vectors are respectively outputAnd attitude rotation quaternion vectorAnd estimating confidence; The loss function of the pose regression processing is as follows: Wherein, the As a result of the true position,In order to be in a true posture,、、Is a weight coefficient. As an optional implementation manner, the formula for generating the basic pose correction error of the ultrasonic probe is as follows: Wherein, the As a result of the location of the object,For the pose of the object to be aimed,As the current location is to be determined,For the current pose to be the one,The representation takes the imaginary part of the quaternion. As an alternative embodiment, the formula for calculating the impedance compensation is: Wherein, the For the preset reference contact force threshold value,As a Z-axis measurement of the contact force at the current moment,In order to provide a difference in the contact force,As a function of the force gain factor,,Compensating for the speed of the impedance. As an optional implementation manner, the generating a target correction instruction by combining the basic pose correction error comprises the following formula: Wherein, the Is the pseudo-inverse of the jacobian matrix of the mechanical arm,In the form of a proportional gain matrix,To control the velocity vector. As an optional embodiment, the visual servo-based ultrasonic positioning method further includes: Detecting whether the contact force at the current moment exceeds a preset safety threshold, if yes, starting a gradient rollback mechanism, and linearly damping the control speed according to the proportion of the contact force difference value, wherein the formula is as follows: Wherein, the A start threshold value for a gradient rollback mechanism