CN-121982251-A - Oversized target reconstruction and robot AI sorting system based on multi-view 3D point cloud fusion

CN 121982251 A

Abstract

The invention discloses an oversized target reconstruction and robot AI sorting system based on multi-view 3D point cloud fusion, in the technical field at the intersection of industrial automation and three-dimensional machine vision. Synchronized acquisition from a surrounding multi-sensor array eliminates viewing-angle occlusion and achieves full coverage of the oversized target. Point clouds are fused dynamically according to local density and confidence, improving data completeness in reflective and weakly textured regions. Segmented components are reconstructed in parallel and then stitched together, reducing the computational complexity of very large models. Coarse registration combined with multi-modal attention fine correction balances speed with sub-centimetre positioning accuracy. A timing model compensates sensor drift online, enhancing stability in vibrating and temperature-varying environments. Trajectories are planned with real-time collision detection against the dense map, and force feedback enables compliant grasping, completing accurate sorting of irregular oversized workpieces.

Inventors

  • ZHANG YUJIN
  • MA XINLEI
  • XU JIARUI
  • LI XINHENG
  • ZENG XIAOHAN
  • YE SHAOXIANG
  • LI SHIYUE
  • ZHANG MINGLIANG
  • LUO WEICHENG
  • ZHONG QIUSHENG

Assignees

  • 广州职业技术大学

Dates

Publication Date
2026-05-05
Application Date
2026-01-19

Claims (9)

  1. An oversized target reconstruction and robot AI sorting system based on multi-view 3D point cloud fusion, characterized by comprising: a multi-view 3D camera array module, arranged in a ring around the target, for synchronously acquiring multi-view depth images and generating local point clouds; a point cloud fusion processing module, for performing spatio-temporal alignment, weighted fusion, and noise filtering on the local point clouds to generate a global high-density point cloud; a three-dimensional reconstruction module, for performing layered reconstruction and detail enhancement on the oversized target based on the high-density point cloud and outputting a complete three-dimensional model; an AI positioning and sorting decision module, for performing target detection, pose estimation, and semantically guided grasp-point prediction on the three-dimensional model; and a robot execution module, for performing path planning and adaptive grasping control according to the grasp-point and pose information.
  2. The oversized target reconstruction and robot AI sorting system based on multi-view 3D point cloud fusion of claim 1, wherein the point cloud fusion processing module comprises: a spatio-temporal synchronization unit, which aligns multi-sensor timestamps using a hardware trigger signal and the PTP protocol; an adaptive weighted fusion unit, which dynamically adjusts fusion weights according to the local density and confidence of each sensor's point cloud; and a noise filtering unit, which removes dynamic interference points in real time based on spatial-trajectory fluctuation indices and texture change rates.
  3. The oversized target reconstruction and robot AI sorting system based on multi-view 3D point cloud fusion of claim 1, wherein the three-dimensional reconstruction module employs a hierarchical reconstruction strategy comprising: a base-layer reconstruction unit, which generates a basic mesh model using Poisson reconstruction or voxelization; a detail enhancement unit, which performs texture refinement and hole repair using a neural radiance field (NeRF); and a local-global optimization unit, which divides the target into multiple sub-modules for parallel reconstruction and then stitches them together via feature matching.
  4. The oversized target reconstruction and robot AI sorting system based on multi-view 3D point cloud fusion of claim 1, wherein the AI positioning and sorting decision module comprises: a multi-task 3D detection network based on a sparse convolution architecture, which outputs a 3D bounding box and a grasp-point heatmap for the target; a semantic guidance unit, which accepts text or voice instruction input and dynamically adjusts sorting priorities and strategies; and a real-time pose correction unit, which applies a cross-modal attention mechanism to refine the initial pose to sub-centimetre accuracy.
  5. The oversized target reconstruction and robot AI sorting system based on multi-view 3D point cloud fusion of claim 1, wherein the robot execution module comprises: a collision-aware path planning unit, which generates collision-free trajectories based on a Euclidean Signed Distance Field (ESDF) and the RRT algorithm; and a dynamic re-planning unit, which updates the motion path in real time when the environment changes or the target moves.
  6. The oversized target reconstruction and robot AI sorting system based on multi-view 3D point cloud fusion of claim 1, wherein the point cloud fusion step comprises: coarse registration using the Super4PCS algorithm; fine registration combining the ICP algorithm with normal-vector constraints; and dynamic adjustment of the fusion weights of RGB-D and laser point clouds based on local point cloud density.
  7. The oversized target reconstruction and robot AI sorting system based on multi-view 3D point cloud fusion of claim 1, wherein the three-dimensional reconstruction step comprises: performing component-level segmentation of the target to generate multiple sub-modules; performing Poisson reconstruction on each sub-module in parallel; and stitching the sub-modules into a complete model through feature matching and surface optimization.
  8. The oversized target reconstruction and robot AI sorting system based on multi-view 3D point cloud fusion of claim 1, wherein the AI positioning step comprises: adopting a hierarchical positioning architecture that performs coarse positioning via point cloud registration and fine positioning via a cross-modal attention mechanism; and predicting sensor drift with an LSTM network to adapt to vibration and temperature-change environments.
  9. The oversized target reconstruction and robot AI sorting system based on multi-view 3D point cloud fusion of claim 1, further comprising: a substrate and a plurality of cones distributed on the substrate in an array, the cones being in a regular grid layout, wherein the horizontal spacing between adjacent cones is 200 mm ± 1 mm and the ratio of each cone's height to its base diameter is a preset optimal value, so that the 3D cameras can stably identify the apex features of the cones.
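The adaptive weighted fusion of claims 2 and 6 (weights driven by local point cloud density and per-sensor confidence) can be sketched as below. This is a minimal illustration, not the patented implementation: the helper names, the brute-force k-nearest-neighbour density estimate, and the simple density-times-confidence weighting rule are all assumptions made for the sketch.

```python
import numpy as np

def local_density(points: np.ndarray, k: int = 4) -> np.ndarray:
    """Inverse mean distance to the k nearest neighbours (brute force,
    suitable only for small demo clouds)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    knn = d[:, 1:k + 1].mean(axis=1)          # skip the zero self-distance
    return 1.0 / (knn + 1e-9)

def fuse_point_clouds(clouds, confidences):
    """Concatenate per-sensor clouds with per-point fusion weights
    proportional to local density * sensor confidence (an assumed rule)."""
    pts, wts = [], []
    for cloud, conf in zip(clouds, confidences):
        pts.append(cloud)
        wts.append(local_density(cloud) * conf)
    pts = np.vstack(pts)
    wts = np.concatenate(wts)
    return pts, wts / wts.sum()               # normalised fusion weights
```

In a production system the density estimate would use a spatial index (k-d tree or voxel hashing) rather than an all-pairs distance matrix, and the weights would feed a voxel-level merge rather than a plain concatenation.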

Description

Oversized target reconstruction and robot AI sorting system based on multi-view 3D point cloud fusion

Technical Field

The invention relates to the technical field at the intersection of industrial automation and three-dimensional machine vision, in particular to an oversized target reconstruction and robot AI sorting system based on multi-view 3D point cloud fusion.

Background

The field of three-dimensional machine vision and industrial automation covers techniques that acquire three-dimensional information about an object using optical sensors and guide a robot to complete tasks such as grasping and assembly. Its core concern is recovering the three-dimensional geometric structure and spatial pose of a target object from two-dimensional image or depth data, and converting that information into motion-control instructions the robot can execute. An oversized target reconstruction and robot AI sorting system based on multi-view 3D point cloud fusion is a technical scheme for automated handling of irregular physical objects larger than one metre. The technical problems it addresses cover the synchronicity and completeness of multi-view three-dimensional data acquisition, fast registration and fusion of massive point cloud data, efficient construction and detail recovery of very large three-dimensional models, robust estimation of the target's precise pose in complex scenes, and generation of safe, collision-free robot trajectories. The approach arranges an annular sensor array and uses hardware triggering for microsecond-level synchronous acquisition; dynamically computes weights from local point cloud density and sensor ranging confidence for multi-source data fusion; divides the target into sub-components for parallel mesh reconstruction followed by feature matching and stitching; computes the target's position and orientation with a two-stage process of geometric coarse registration and visual-feature fine correction; and computes the manipulator's motion path in a known environment map using random-sampling search with real-time collision detection.

However, traditional monocular and binocular systems have limited fixed viewing angles, leaving occlusion blind spots when observing an oversized target, so the acquired point cloud data is incomplete and the reconstructed model contains holes. Existing point cloud fusion mostly uses static parameters, adapts poorly to data-quality fluctuations caused by illumination changes and surface reflections, and is prone to fusion distortion or residual noise. Because the computational load of whole-object one-pass reconstruction grows with the cube of the target size, processing delays make it hard to keep pace with the production-line takt time. Most positioning methods rely on a single geometric or image modality without effective complementarity, limiting pose estimation accuracy. Systems often ignore the slow drift of sensors during long-term operation and lack online compensation, so accuracy degrades over time. Robot path planning is typically based on a static environment model and cannot respond to dynamic scenes, and grasping strategies neglect object deformation and force feedback.

Disclosure of Invention

The invention mainly aims to provide an oversized target reconstruction and robot AI sorting system based on multi-view 3D point cloud fusion that effectively solves the problems described in the background.
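The two-stage pose computation described in the background (geometric coarse registration followed by fine correction) conventionally uses ICP for the fine stage, as claim 6 does. The sketch below is a minimal point-to-point ICP, assuming a coarse alignment (e.g. Super4PCS) has already been applied; the normal-vector constraint and the cross-modal attention correction from the claims are omitted.

```python
import numpy as np

def icp(src: np.ndarray, dst: np.ndarray, iters: int = 20):
    """Estimate rotation R and translation t aligning src onto dst,
    using brute-force nearest-neighbour correspondences and the
    SVD-based (Kabsch) rigid-alignment step."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        # nearest-neighbour correspondences (brute force, demo-sized only)
        idx = np.linalg.norm(moved[:, None] - dst[None], axis=-1).argmin(axis=1)
        p, q = moved, dst[idx]
        mp, mq = p.mean(axis=0), q.mean(axis=0)
        H = (p - mp).T @ (q - mq)
        U, _, Vt = np.linalg.svd(H)
        # reflection guard keeps R_step a proper rotation
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mq - R_step @ mp
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```

Real pipelines replace the brute-force correspondence search with a k-d tree and add outlier rejection; the structure of the iteration is unchanged.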
To achieve the above purpose, the invention adopts the following technical scheme. The oversized target reconstruction and robot AI sorting system based on multi-view 3D point cloud fusion is configured with: a multi-view 3D camera array module, arranged in a ring around the target, for synchronously acquiring multi-view depth images and generating local point clouds; a point cloud fusion processing module, for performing spatio-temporal alignment, weighted fusion, and noise filtering on the local point clouds to generate a global high-density point cloud; a three-dimensional reconstruction module, for performing layered reconstruction and detail enhancement on the oversized target based on the high-density point cloud and outputting a complete three-dimensional model; an AI positioning and sorting decision module, for performing target detection, pose estimation, and semantically guided grasp-point prediction on the three-dimensional model; and a robot execution module, for performing path planning and adaptive grasping control according to the grasp-point and pose information. Preferably, the point cloud fusion processing
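The robot execution module's collision check against a distance field (claim 5) can be illustrated on a toy occupancy grid. This sketch uses an unsigned Euclidean distance field computed by brute force rather than a true incrementally built ESDF, and the grid size, resolution, and safety margin are illustrative values, not figures from the invention.

```python
import numpy as np

def distance_field(occupancy: np.ndarray, resolution: float) -> np.ndarray:
    """Euclidean distance (in metres) from each cell to the nearest
    occupied cell. Brute force; fine for tiny demo grids only."""
    occ = np.argwhere(occupancy)
    grid = np.indices(occupancy.shape).reshape(occupancy.ndim, -1).T
    d = np.linalg.norm(grid[:, None, :] - occ[None, :, :], axis=-1).min(axis=1)
    return d.reshape(occupancy.shape) * resolution

def path_is_safe(path_cells, dist_field, margin: float) -> bool:
    """A waypoint list is collision-free if every cell keeps the margin."""
    return all(dist_field[c] >= margin for c in path_cells)
```

An RRT planner would call a check like `path_is_safe` on each candidate edge; the dynamic re-planning unit of claim 5 corresponds to recomputing the field and re-querying whenever the map updates.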