CN-120949243-B - Underwater robot underwater positioning system based on man-machine interaction
Abstract
The application provides an underwater positioning system for an underwater robot based on man-machine interaction. The method comprises: responding to a positioning request from a user by constructing a multi-dimensional sensor network for the underwater robot; dynamically collecting multi-dimensional monitoring data of a target space along the robot's moving track with the multi-dimensional sensor network; preprocessing the multi-dimensional monitoring data with a multi-branch strategy network to obtain multi-dimensional data to be processed; generating a sparse feature map with ORB-SLAM3 and a local dense map with a point cloud registration ICP algorithm; fusing the sparse feature map, the local dense map and the pose features in the third data to be processed through an EKF to obtain a local scene map; optimizing the local scene map in segments according to the motion mode with a segmented bundle adjustment algorithm; and outputting positioning information of the underwater robot to the user in real time based on the optimized local scene map. Both the positioning accuracy and the positioning efficiency of the underwater robot can thereby be improved.
Inventors
- ZHENG CHENGDONG
- WANG YONGJIE
Assignees
- 佛山市顺德区一拓电气有限公司
Dates
- Publication Date
- 20260508
- Application Date
- 20250630
Claims (9)
- 1. An underwater positioning method for an underwater robot based on man-machine interaction, characterized by comprising the following steps: responding to a positioning request of a user, constructing a multi-dimensional sensor network of the underwater robot, and dynamically acquiring multi-dimensional monitoring data of a target space along the moving track of the underwater robot by means of the multi-dimensional sensor network, wherein the multi-dimensional monitoring data at least comprise sonar echo signals aimed at dynamic objects, the pool wall and/or the pool bottom, spatial visual information acquired by a visual sensor, and high-frequency motion data acquired by an inertial sensor, the sonar echo signals being acquired by a high-frequency sonar module; preprocessing the multi-dimensional monitoring data with a multi-branch strategy network to obtain multi-dimensional data to be processed, wherein in a first branch a Brown-Conrady model maps distorted pixel coordinates in the spatial visual information back to ideal coordinates through polynomial-fitted distortion coefficients to realize de-distortion, and the de-distorted spatial visual information is subjected to dynamic target detection through a lightweight YOLOv8 network to obtain first data to be processed, the first data to be processed comprising the de-distorted spatial visual information and a dynamic mask calibrating the position of each dynamic target; in a second branch, multipath suppression is performed on the sonar echo signals with a time-sharing strategy network to obtain second data to be processed; and in a third branch, motion distortion in the high-frequency motion data is processed with quaternion integral compensation and position offsets in the high-frequency motion data are removed with a Kalman-filtering-based sensor noise reduction model to obtain third data to be processed, the multi-dimensional data to be processed thus at least comprising the first data to be processed obtained from the spatial visual information, the second data to be processed obtained from the sonar echo signals, and the third data to be processed obtained from the high-frequency motion data; generating a sparse feature map from the first data to be processed with ORB-SLAM3, and generating a local dense map from the second data to be processed with a point cloud registration ICP algorithm; fusing the sparse feature map, the local dense map and the pose features in the third data to be processed through an extended Kalman filter (EKF) to obtain a local scene map containing the pose state of the underwater robot; and optimizing the local scene map in segments according to the motion mode with a segmented bundle adjustment algorithm (SEGMENTED BA), and outputting positioning information of the underwater robot to the user in real time based on the optimized local scene map.
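The de-distortion step in the first branch can be illustrated with a minimal sketch of the Brown-Conrady model. The model maps ideal coordinates to distorted ones; mapping distorted pixel coordinates "back to ideal coordinates" is therefore an inversion, done here by fixed-point iteration. The coefficient values and the iteration count are illustrative assumptions, not values from the patent.

```python
def brown_conrady_distort(x, y, k1, k2, p1, p2):
    """Forward Brown-Conrady model: ideal normalized coordinates -> distorted
    coordinates, with radial (k1, k2) and tangential (p1, p2) terms."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

def undistort_point(xd, yd, k1, k2, p1, p2, iters=20):
    """Invert the model by fixed-point iteration: recover the ideal
    coordinates from the distorted ones (the 'mapping back' step)."""
    x, y = xd, yd  # initial guess: distortion is small
    for _ in range(iters):
        r2 = x * x + y * y
        radial = 1.0 + k1 * r2 + k2 * r2 * r2
        dx = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
        dy = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
        x = (xd - dx) / radial
        y = (yd - dy) / radial
    return x, y
```

For mild distortion the round trip distort-then-undistort returns the original point to within numerical tolerance, which is the property the first branch relies on.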
- 2. The underwater positioning method for an underwater robot based on man-machine interaction according to claim 1, wherein in the second branch, performing multipath suppression on the sonar echo signals with the time-sharing strategy network to remove multipath artifacts and obtain the second data to be processed comprises: predicting the density of moving objects in the target area in the current period based on the number of entrances of the target area, the current period belonging to a peak period if the density is greater than a set threshold and to a flat-peak period otherwise; for the peak period, separating the direct signal and the interference signal from the sonar echo signals with a wavelet-transform-based graph convolutional network (GCN) hybrid network, and taking the direct signal as the second data to be processed; and for the flat-peak period, using a physics-constrained generative adversarial network (PC-GAN), in which a generator with a U-Net architecture removes the noise contained in the sonar echo signals and a GAN discriminator incorporating a physical prior model of sonar propagation judges the authenticity of the generator's output signal, so as to obtain the second data to be processed.
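The time-sharing dispatch described in claim 2 can be sketched as a small routing function. The per-entrance density rate and the threshold are illustrative placeholders (the patent names a "set threshold" but gives no value), and the two processing paths are stubbed.

```python
def classify_period(entrance_count, rate_per_entrance=2.5, threshold=20.0):
    """Predict moving-object density from the number of entrances and
    classify the current period as peak or flat-peak. rate_per_entrance
    and threshold are assumed illustrative values."""
    density = entrance_count * rate_per_entrance
    return "peak" if density > threshold else "flat_peak"

def route_echo(entrance_count, echo):
    """Time-sharing dispatch: peak periods go to the wavelet+GCN separator,
    flat-peak periods to the PC-GAN denoiser (both stubbed here)."""
    if classify_period(entrance_count) == "peak":
        return ("wavelet_gcn", echo)   # separate direct vs. multipath signal
    return ("pc_gan", echo)            # adversarial denoising path
```

The point of the split is that multipath interference from many swimmers is a structured separation problem, while quiet-period noise is closer to a denoising problem.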
- 3. The underwater positioning method for an underwater robot based on man-machine interaction according to claim 2, wherein separating the direct signal and the interference signal from the sonar echo signals with the wavelet-transform-based GCN hybrid network and taking the direct signal as the second data to be processed comprises: decomposing the sonar echo signals into multiple frequency sub-bands by wavelet transformation; modeling the sonar matrix corresponding to the low-frequency sub-band with the GCN to extract the geometric topological relations in the sonar matrix, obtaining low-frequency spatial features that capture the spatial consistency of the direct signal; extracting transient features in the high-frequency sub-band through a 1D-CNN to obtain high-frequency time-frequency features that capture the pulse fluctuations caused by random multipath interference; and fusing and splicing the low-frequency spatial features and the high-frequency time-frequency features, classifying the spliced features with a classifier to separate the direct signal from the interference signal, and taking the direct signal as the second data to be processed.
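The sub-band split that feeds the two branches in claim 3 can be sketched with a one-level Haar wavelet transform (the patent does not name a wavelet family; Haar is chosen here only for brevity), followed by the fusion splicing of the two feature vectors.

```python
def haar_dwt(signal):
    """One-level Haar wavelet decomposition of an even-length 1-D signal
    into a low-frequency (approximation) sub-band and a high-frequency
    (detail) sub-band -- the split feeding the GCN and 1D-CNN branches."""
    s = 2 ** -0.5
    low = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return low, high

def fuse_features(low_feat, high_feat):
    """Fusion splicing: concatenate low-frequency spatial features and
    high-frequency time-frequency features before classification."""
    return list(low_feat) + list(high_feat)
```

A slowly varying (spatially consistent) echo lands almost entirely in the low band, while impulsive multipath glitches show up in the high band, which is what lets the classifier separate direct from interference signals.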
- 4. The underwater positioning method for an underwater robot based on man-machine interaction according to claim 1, wherein generating the sparse feature map from the first data to be processed with ORB-SLAM3 comprises: converting each frame of original image in the first data to be processed into the ORB-SLAM3 processing format, calibrating the camera parameters of the visual sensor, and eliminating fisheye or pinhole distortion with an undistort function to obtain intermediate images; detecting the image corner points of each intermediate frame with the FAST-9 algorithm, computing the gradient direction of each corner point with the gray centroid method to enhance rotation invariance, and generating 256-bit binary BRIEF descriptors from the results to serve as the robust feature points of each intermediate frame; when the ghosting error between consecutive intermediate frames exceeds a threshold or the motion amplitude of the underwater robot exceeds a threshold, selecting key frames from the consecutive intermediate frames at preset intervals and establishing associations among the key frames; clustering the robust feature points of the current frame into visual words through a bag-of-words model, matching them against the historical visual words of associated historical key frames stored in a hash table, and selecting a preset number of candidate map feature points in descending order of matching degree; and estimating the relative pose between the current frame and the historical key frame through the EPnP algorithm, computing the three-dimensional coordinates of the selected candidate map feature points under epipolar geometric constraints, and projecting them into the global coordinate system to obtain the three-dimensional coordinates of the target map feature points, so as to construct the sparse feature map in a preset data structure.
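The gray (intensity) centroid step in claim 4 can be sketched directly: the patch moments m10 and m01 give a dominant direction atan2(m01, m10), which is assigned to the FAST corner so that its BRIEF descriptor can be steered for rotation invariance. The square-patch representation below is a simplification of the circular patch ORB actually uses.

```python
import math

def intensity_centroid_angle(patch):
    """Gray centroid method: compute the first-order patch moments about
    the patch center and return the orientation atan2(m01, m10)."""
    h, w = len(patch), len(patch[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    m10 = m01 = 0.0
    for y in range(h):
        for x in range(w):
            m10 += (x - cx) * patch[y][x]  # x moment about center
            m01 += (y - cy) * patch[y][x]  # y moment about center
    return math.atan2(m01, m10)
```

A patch whose mass sits to the right of center yields angle 0; mass below center yields pi/2, so rotating the patch rotates the reported angle accordingly.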
- 5. The underwater positioning method for an underwater robot based on man-machine interaction according to claim 1, wherein generating the local dense map from the second data to be processed with the point cloud registration ICP algorithm comprises: for each frame of sonar point cloud in the second data to be processed, removing the points whose sonar echo intensity is below a dynamic noise threshold from the current frame; computing the normal direction of each point in the current frame and matching similar regions between the current frame and historical frames through the normal included angle; extracting the local curvature of the current frame as a texture feature for distinguishing smooth regions from edge regions of the target space; accelerating the nearest-neighbor search with a KD-Tree and selecting point pairs with similar features from the current and historical frames according to the normal directions and texture features; and aligning the current frame with the historical frames using the screened point pairs, and constructing the local dense map from the aligned point clouds.
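The core alignment step of claim 5 can be sketched as one point-to-point ICP iteration, shown in 2-D for brevity. The sketch uses a brute-force nearest-neighbor search (the KD-Tree in the claim accelerates exactly this step) and omits the intensity, normal, and curvature screening; the rigid transform is solved with an SVD of the cross-covariance.

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration: match each source point to its
    nearest destination point, then solve the best rigid transform (R, t)
    mapping src onto the matches via SVD (Kabsch algorithm)."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]          # nearest neighbors (brute force)
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

With correct correspondences the transform is recovered in a single step; in practice the step is iterated until the alignment residual converges.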
- 6. The underwater positioning method for underwater robots based on man-machine interaction according to claim 4 or 5, characterized in that the sparse feature map indicates the global pose reference and position reference of the underwater robot, while the local dense map supplements local position details of the area where the underwater robot is located; when the underwater robot is in motion, the dynamic weight of the local dense map in the EKF fusion is higher than that of the sparse feature map; when the underwater robot is stationary, the dynamic weight of the local dense map in the EKF fusion is lower than that of the sparse feature map.
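The motion-dependent weighting of claim 6 can be illustrated with a toy blend of the two map-derived position estimates. The weight value 0.7 is an assumed placeholder; the patent only states which source dominates in each state, not by how much.

```python
def fusion_weights(moving, w_hi=0.7):
    """Weight schedule from claim 6: while moving, the local dense map
    dominates the fusion; at rest, the sparse feature map (the global
    reference) dominates. w_hi is an illustrative value."""
    w_dense = w_hi if moving else 1.0 - w_hi
    return w_dense, 1.0 - w_dense

def fuse_position(dense_pos, sparse_pos, moving):
    """Blend the two position estimates with the state-dependent weights
    (a stand-in for the full EKF measurement update)."""
    wd, ws = fusion_weights(moving)
    return [wd * d + ws * s for d, s in zip(dense_pos, sparse_pos)]
```

The rationale is that while moving, fresh local detail tracks fast pose change better, whereas at rest the drift-free global reference is the more trustworthy anchor.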
- 7. The underwater positioning method for an underwater robot based on man-machine interaction according to claim 1, wherein optimizing the local scene map in segments according to the motion mode with the segmented bundle adjustment algorithm (SEGMENTED BA) comprises: identifying, from the motion mode parameters of the underwater robot, whether its current motion state is a straight segment or a turning segment; and, for a turning segment, optimizing the relative poses of all frames and all map feature points with the robust kernel function of dense BA, so as to complete the segmented optimization of the local scene map.
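The straight/turning segmentation of claim 7 can be sketched as a simple classifier on the inertial yaw rate. The threshold value is an assumption for illustration; the patent names the two segment types but not the decision rule.

```python
def motion_mode(yaw_rates, turn_threshold=0.15):
    """Classify the current trajectory segment as 'straight' or 'turning'
    from the mean absolute yaw rate in rad/s. turn_threshold is an
    assumed illustrative value, not taken from the patent."""
    mean_rate = sum(abs(r) for r in yaw_rates) / len(yaw_rates)
    return "turning" if mean_rate > turn_threshold else "straight"
```

Turning segments accumulate orientation error fastest, which is why they receive the heavier dense-BA optimization with a robust kernel.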
- 8. The underwater positioning method for an underwater robot based on man-machine interaction according to claim 7, wherein outputting the positioning information of the underwater robot to the user in real time based on the optimized local scene map comprises: determining whether the currently requested positioning data is short-term or long-term; if short-term data is requested, extracting the real-time position coordinates and real-time pose of the underwater robot from the optimized local scene map with a particle-filter resampling strategy; and if long-term data is requested, extracting the globally corrected position coordinates and weighted-average pose of the underwater robot from the optimized local scene map with a factor-graph optimization strategy.
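The particle-filter resampling strategy mentioned for short-term queries can be illustrated with systematic resampling, one common variant (the patent names the strategy but not the variant): N evenly spaced positions with a single random offset are walked along the cumulative weight distribution, so high-weight particles are duplicated and low-weight ones are dropped.

```python
import random

def systematic_resample(weights, rng=random.Random(0)):
    """Systematic resampling: return, for each of len(weights) draws, the
    index of the particle selected by walking the weight CDF with evenly
    spaced sample positions."""
    n = len(weights)
    total = sum(weights)
    positions = [(i + rng.random()) / n for i in range(n)]
    indexes, cum, j = [], weights[0] / total, 0
    for p in positions:
        while p > cum:          # advance along the CDF
            j += 1
            cum += weights[j] / total
        indexes.append(j)
    return indexes
```

A particle holding all the weight is selected every time, which is the degenerate case the resampling step exists to exploit.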
- 9. An underwater positioning system for an underwater robot based on man-machine interaction, the system comprising a construction module, a preprocessing module, a prediction module and an output module, wherein: the construction module is used for responding to a positioning request of a user and constructing a multi-dimensional sensor network of the underwater robot; the multi-dimensional data to be processed at least comprise first data to be processed obtained from the spatial visual information, second data to be processed obtained from the sonar echo signals, and third data to be processed obtained from the high-frequency motion data; the preprocessing module is specifically used for, when obtaining the multi-dimensional data to be processed, mapping distorted pixel coordinates in the spatial visual information back to ideal coordinates through polynomial-fitted distortion coefficients in a first branch to realize de-distortion, performing dynamic target detection on the de-distorted spatial visual information through a lightweight YOLOv8 network in the first branch to obtain the first data to be processed, which comprises the de-distorted spatial visual information and a dynamic mask calibrating the position of each dynamic target, performing multipath suppression on the sonar echo signals through a time-sharing strategy network in a second branch to obtain the second data to be processed, and, in a third branch, performing quaternion integral compensation on the motion distortion in the high-frequency motion data and removing position offsets in the high-frequency motion data through a Kalman-filtering-based sensor noise reduction model to obtain the third data to be processed; the prediction module is used for generating a sparse feature map from the first data to be processed with ORB-SLAM3 and generating a local dense map from the second data to be processed with a point cloud registration ICP algorithm; and the output module is used for optimizing the local scene map in segments according to the motion mode with the segmented bundle adjustment algorithm (SEGMENTED BA), and outputting positioning information of the underwater robot to the user in real time based on the optimized local scene map.
Description
Underwater robot underwater positioning system based on man-machine interaction

Technical Field

The embodiments of the application relate to the field of data processing, and in particular to an underwater positioning system for an underwater robot based on man-machine interaction.

Background

An underwater robot is an intelligent device capable of executing specific tasks in an underwater environment, either autonomously or through remote control; its core technology integrates multiple disciplines such as mechanical engineering, electronic communication, artificial intelligence and ocean science. Taking a swimming pool as an example: as a highly dynamic environment with limited space, a pool's unique spatial dimensions and activity characteristics impose more stringent requirements on the positioning system of an underwater robot. The space of a swimming pool is usually limited, typically 20-50 meters long and 10-25 meters wide, yet within this range the underwater robot must complete fine operations such as edge cleaning, obstacle recognition and fixed-point inspection. Such a confined space places extremely high demands on positioning accuracy. The underwater robot must maintain accurate perception of its position when close to the pool wall, close to other equipment such as underwater lamps and water outlets, or coexisting with other dynamic objects (such as swimmers); otherwise, positioning deviations may cause task failures or safety accidents such as collisions.

Disclosure of Invention

In this context, the embodiments of the application provide an underwater positioning system for an underwater robot based on man-machine interaction, which can improve both the positioning accuracy and the positioning efficiency of the underwater robot.
In a first aspect of the embodiments of the present application, there is provided an underwater positioning method for an underwater robot based on man-machine interaction, including: responding to a positioning request of a user, constructing a multi-dimensional sensor network of the underwater robot, and dynamically acquiring multi-dimensional monitoring data of a target space along the moving track of the underwater robot by means of the multi-dimensional sensor network, wherein the multi-dimensional monitoring data at least comprise sonar echo signals aimed at dynamic objects, the pool wall and/or the pool bottom, spatial visual information acquired by a visual sensor, and high-frequency motion data acquired by an inertial sensor, the sonar echo signals being acquired by a high-frequency sonar module; the multi-dimensional data to be processed at least comprise first data to be processed obtained from the spatial visual information, second data to be processed obtained from the sonar echo signals, and third data to be processed obtained from the high-frequency motion data; generating a sparse feature map from the first data to be processed with ORB-SLAM3, and generating a local dense map from the second data to be processed with a point cloud registration ICP algorithm; fusing the sparse feature map, the local dense map and the pose features in the third data to be processed through an extended Kalman filter (EKF) to obtain a local scene map containing the pose state of the underwater robot; and optimizing the local scene map in segments according to the motion mode with a segmented bundle adjustment algorithm (SEGMENTED BA), and outputting positioning information of the underwater robot to the user in real time based on the optimized local scene map.
In a second aspect of the embodiments of the present application, there is provided an underwater positioning system for an underwater robot based on man-machine interaction, including a construction module, a preprocessing module, a prediction module and an output module, wherein: the construction module is used for responding to a positioning request of a user and constructing a multi-dimensional sensor network of the underwater robot; the multi-dimensional data to be processed at least comprise first data to be processed obtained from the spatial visual information, second data to be processed obtained from the sonar echo signals, and third data to be processed obtained from the high-frequency motion data; the prediction module is used for generating a sparse feature map from the first data to be processed with ORB-SLAM3, and generating a local dense map from the second data to be processed with a point cloud registration ICP algorithm.