
CN-122008270-A - Robot multi-source data fusion sensing method, chip and electronic equipment

CN122008270A

Abstract

The invention relates to the technical field of robot perception, and in particular to a robot multi-source data fusion perception method, a chip, and an electronic device. The method comprises: collecting surrounding-environment data through a multi-modal sensor array and preprocessing it into standardized environment data; quantifying the confidence of the standardized environment data; marking data whose confidence exceeds the corresponding standard confidence threshold as valid data; performing multi-source data fusion on the valid data to obtain optimal state data; calculating a comprehensive deviation value from the optimal state data; and judging whether the comprehensive deviation value lies within a deviation threshold range: if so, the result is confirmed as valid and output to the robot motion controller; if not, a feedback adjustment mechanism is triggered. The invention effectively compensates for the limitations of a single sensor, greatly reduces the mean error of the fused environment data, significantly improves pose estimation accuracy, and can accurately identify obstacles, target objects, and environmental changes in complex scenes.

Inventors

  • GU YUCONG
  • ZHUANG YETAO

Assignees

  • 杭州智芯科微电子科技有限公司

Dates

Publication Date
2026-05-12
Application Date
2026-02-06

Claims (10)

  1. A robot multi-source data fusion perception method, characterized by comprising the following steps: S1, collecting the surrounding environment through a multi-modal sensor array to obtain raw data, and preprocessing the raw data to obtain standardized environment data; S2, quantifying the confidence of the standardized environment data through a constructed multi-dimensional data quality evaluation model to obtain a confidence G_i; S3, judging whether each confidence G_i is greater than the corresponding standard confidence threshold g_i; if so, calibrating the standardized environment data as valid data, and if not, judging the standardized environment data to be low-quality data; S4, performing multi-source data fusion on the valid data to obtain optimal state data; S5, performing a consistency check between the fused optimal state data and historical data, comparing continuous multi-frame data through a sliding-window mechanism, and calculating a comprehensive deviation value; S6, judging whether the comprehensive deviation value is within a deviation threshold range; if so, confirming the result as valid, and if not, triggering a feedback adjustment mechanism.
  2. The method according to claim 1, wherein the sensor array comprises a vision sensor, an inertial measurement unit, a lidar, and/or a force sensor.
  3. The method according to claim 1, wherein the preprocessing of the raw data comprises time synchronization, spatial alignment, noise filtering, and data normalization.
  4. The method according to claim 1, wherein the evaluation dimensions of the multi-dimensional data quality evaluation model include data integrity, signal-to-noise ratio, feature discrimination, and time-series stability.
  5. The robot multi-source data fusion perception method, characterized in that the multi-dimensional data quality evaluation model first assigns a weight to each dimension based on the sensor characteristics and the application scene, and then calculates an acquisition coefficient G_ij for each dimension, wherein i is the standardized environment data number and j is the dimension number.
  6. The robot multi-source data fusion perception method according to claim 1, characterized in that the standard confidence threshold g_i is obtained as follows: a reference threshold g_i' is preset according to the operating scene, and a complexity coefficient is calculated from environment characteristics; fusion results with a confidence of at least 0.8 within the last 100 frames are selected as a reference frame set S; the mean of all environment state dimensions in S is calculated as a calibration reference value Xr; the fusion results of the last 100 frames are denoted Xm, with m being the data frame number; the relative deviation between each frame and the calibration reference value is calculated as the fusion deviation δt; if δt is greater than 3%, the preset reference threshold g_i' is raised by 0.05; if δt is less than or equal to 3%, the fusion accuracy meets the standard and the current threshold is maintained; if δt is less than 1%, the preset reference threshold g_i' is lowered; the calibrated standard confidence value, limited to the range [0.4, 0.9], is set as the standard confidence threshold g_i.
  7. The robot multi-source data fusion perception method according to claim 6, characterized in that acquiring the complexity coefficient comprises: selecting the standardized environment data of the previous 3 frames; selecting the core dimensions that characterize the environment; calculating, for each core dimension, the variance over the previous 3 frames of data; performing a weighted summation of the variances of all core dimensions to obtain the comprehensive fluctuation variance of the previous 3 frames of data; and mapping the comprehensive fluctuation variance to the interval [-0.2, 0.3] to obtain the complexity coefficient.
  8. The robot multi-source data fusion perception method according to claim 1, characterized in that the optimal state data is obtained by a local-feature-fusion and global-state-fusion method; the local feature fusion layer comprises (1) vision-lidar feature fusion, (2) IMU-odometer feature fusion, and (3) force-sense feature extraction; the global state fusion layer constructs a global environment state vector, wherein each local fusion result is integrated, a state vector X is defined, and a contribution degree is calculated based on the confidence; the current frame state X_k|k-1 and a prediction covariance matrix P_k|k-1 are predicted from the previous frame's optimal state data X_k-1 through a state transition matrix A; each local fusion result is taken as an observation value Z_k, the observation matrix H is determined in combination with the contribution degree, and an intermediate coefficient (gain) K_k is calculated; finally the optimal state data X_k is obtained by updating.
  9. A chip for performing the robot multi-source data fusion perception method according to any one of claims 1 to 8.
  10. An electronic device, characterized in that the electronic device comprises the chip of claim 9.
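The global state fusion described in claim 8 follows a standard Kalman-filter predict/update cycle (state transition A, observation matrix H, observation Z_k, and a gain as the intermediate coefficient). Below is a minimal one-dimensional sketch, not the patented implementation: the function name `kalman_fuse`, its default parameters, and the noise terms `q` and `r` are illustrative assumptions.

```python
def kalman_fuse(x_prev, p_prev, z, a=1.0, h=1.0, q=0.01, r=0.1):
    """One scalar Kalman predict/update step (illustrative sketch)."""
    # Predict: propagate the previous optimal state through transition A.
    x_pred = a * x_prev
    p_pred = a * p_prev * a + q          # prediction covariance
    # Intermediate coefficient: the Kalman gain.
    k = p_pred * h / (h * p_pred * h + r)
    # Update: correct with the local fusion result as observation Z_k.
    x_opt = x_pred + k * (z - h * x_pred)
    p_opt = (1 - k * h) * p_pred
    return x_opt, p_opt

# Feed a few noisy local fusion results around a true value of 1.0.
x, p = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 0.95]:
    x, p = kalman_fuse(x, p, z)
```

After the four updates, the state estimate converges toward 1.0 and the covariance shrinks well below its initial value, which is the behavior the claim relies on for stable optimal state data.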

Description

Robot multi-source data fusion sensing method, chip and electronic equipment

Technical Field

The invention relates to the technical field of robot perception, and in particular to a robot multi-source data fusion perception method, a chip, and an electronic device.

Background

With the rapid development of robot technology, robots are widely applied in fields such as industrial manufacturing, smart homes, outdoor exploration, and service delivery. While executing tasks, a robot needs to acquire environmental information through various sensors in order to realize functions such as autonomous navigation, target recognition, and obstacle avoidance; the accuracy and real-time performance of environment perception directly determine the robot's agility of action and its environmental adaptability.

Existing robot perception systems mainly acquire environmental data with a single sensor or by simple data superposition, which has obvious limitations: a vision sensor is easily affected by factors such as illumination changes and occlusion, distorting target feature extraction; an inertial measurement unit (IMU) accumulates errors, and pose drift occurs when it works alone for a long time; although lidar offers high ranging accuracy, feature-point matching is difficult in environments with complex textures and the data redundancy is high; and a force sensor can only feed back the contact state and cannot acquire global environmental information. These single-sensor limitations make the robot prone to perception deviations in complex dynamic environments, leading to problems such as stalled actions and decision errors, and severely restrict the application of robots in high-precision, high-dynamic scenarios.
Disclosure of the Invention

To solve the above technical problems, the invention provides a robot multi-source data fusion perception method that, by constructing an adaptive fusion framework, realizes accurate synchronization, quality evaluation, and intelligent fusion of data from multiple sensor types, outputs high-precision environment data, provides reliable support for robot action planning, and significantly improves the robot's agility and environmental adaptability.

The robot multi-source data fusion perception method comprises the following steps:

S1, collecting the surrounding environment through a multi-modal sensor array to obtain raw data, and preprocessing the raw data to obtain standardized environment data.

S2, quantifying the confidence of the standardized environment data through a constructed multi-dimensional data quality evaluation model to obtain a confidence value.

S3, judging whether each confidence G_i is greater than the corresponding standard confidence threshold g_i; if so, calibrating the standardized environment data as valid data and marking a weight coefficient; if not, judging the standardized environment data to be low-quality data and reducing its weight (tentatively treating it as valid data) or temporarily masking that channel's data, so as to avoid error amplification.

S4, performing multi-source data fusion on the valid data to obtain optimal state data.

S5, performing a consistency check between the fused optimal state data and historical data, comparing continuous multi-frame data through a sliding-window mechanism, and calculating a comprehensive deviation value.

S6, judging whether the comprehensive deviation value is within a deviation threshold range; if so, confirming the result as valid and outputting it to the robot motion controller; if not, triggering a feedback adjustment mechanism.
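The steps S1-S6 above can be sketched as a single per-frame routine. This is an illustrative simplification, not the patent's exact formulas: the function and field names, the confidence-weighted mean used for the S4 fusion, and the fixed 5-frame window for the S5 check are all assumptions.

```python
def fuse_frame(frames, history, g_std, dev_threshold=0.05):
    """frames: standardized readings as dicts {sensor, value, confidence}."""
    # S2-S3: keep only readings whose confidence exceeds the per-sensor threshold.
    valid = [f for f in frames if f["confidence"] > g_std[f["sensor"]]]
    if not valid:
        return None, "feedback"          # nothing trustworthy this frame
    # S4: confidence-weighted fusion of the valid readings (assumed scheme).
    total = sum(f["confidence"] for f in valid)
    fused = sum(f["confidence"] * f["value"] for f in valid) / total
    # S5: sliding-window consistency check against recent fused outputs.
    window = history[-4:] + [fused]
    deviation = max(abs(fused - x) for x in window)
    # S6: within tolerance -> output to the motion controller, else feedback.
    return fused, ("ok" if deviation <= dev_threshold else "feedback")

g_std = {"camera": 0.6, "imu": 0.5}
frames = [{"sensor": "camera", "value": 1.00, "confidence": 0.9},
          {"sensor": "imu", "value": 1.04, "confidence": 0.4}]  # imu rejected
fused, status = fuse_frame(frames, [1.01, 0.99, 1.02], g_std)
```

In this toy run the low-confidence IMU reading is discarded in S3, the fused value comes from the camera alone, and the sliding-window deviation stays within tolerance, so the result is accepted.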
Preferably, the sensor array comprises vision sensors, inertial measurement units, lidar, and/or force sensors.

Preferably, the raw-data preprocessing includes time synchronization, spatial alignment, noise filtering, and data normalization.

Preferably, the time synchronization adopts a hardware-interrupt trigger mechanism that generates unified microsecond-level timestamps and calibrates the sampling moments of the multi-modal sensor array so that they are synchronized, eliminating timing deviations.

Preferably, the spatial alignment comprises constructing a robot base coordinate system, obtaining the coordinate system in which the raw data resides, and then transforming the coordinate systems of the different sensors to the robot base coordinate system based on a preset extrinsic parameter matrix of the multi-modal sensor array, thereby achieving spatial position matching.

Preferably, the noise filtering adopts an algorithm matched to each sensor's characteristics: visual data is denoised with Gaussian filtering, and IMU data uses moving-average filtering to suppress vibration interference.
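The spatial-alignment step described above can be illustrated with a homogeneous extrinsic transform that maps a sensor-frame point into the robot base frame. The 4x4 matrix below is a hypothetical mounting offset chosen for illustration, not a calibrated value from the patent.

```python
def to_base_frame(point, extrinsic):
    """Transform a sensor-frame point (x, y, z) into the robot base frame
    using a preset 4x4 homogeneous extrinsic matrix (rotation + translation)."""
    x, y, z = point
    p = (x, y, z, 1.0)  # homogeneous coordinates
    return tuple(sum(extrinsic[i][j] * p[j] for j in range(4)) for i in range(3))

# Hypothetical extrinsic: lidar mounted 0.2 m ahead of and 0.1 m above the base,
# with no rotation relative to the base frame.
T = [[1.0, 0.0, 0.0, 0.2],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.1],
     [0.0, 0.0, 0.0, 1.0]]
base_pt = to_base_frame((1.0, 2.0, 0.5), T)  # -> (1.2, 2.0, 0.6)
```

Once all sensors' readings are expressed in the common base frame this way, the spatial position matching required before fusion is achieved.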