CN-122023509-A - Weld feature point extraction method based on structured light camera
Abstract
The invention discloses a weld feature point extraction method based on a structured light camera, belonging to the technical field of industrial robot automation and machine vision. The method comprises: obtaining, through hand-eye calibration, the spatial pose transformation matrix between the industrial robot base coordinate system, the end effector coordinate system and the structured light camera coordinate system, thereby unifying the coordinate systems of the robot and the camera; controlling the industrial robot to adjust the pose of the end effector according to a preset path and collecting three-dimensional point cloud data of the workpiece surface; preprocessing the point cloud data; performing plane fitting and cluster segmentation with the random sample consensus (RANSAC) method to obtain the main geometric planes of the workpiece; solving their intersection point and intersection lines; extracting candidate weld feature points by combining a distance threshold with local point cloud features; determining the weld feature start point and end point by a projection method; and finally converting the weld feature point coordinates into the base coordinate system with the spatial pose transformation matrix to obtain complete weld feature point position information. The system comprises an industrial robot, a structured light camera mounted on the end effector, and an upper computer, and can realize high-precision, automated extraction of weld feature points while ensuring the accuracy and reliability of weld positioning.
Inventors
- PAN HAIHONG
- ZHANG WEI
- CHEN LIN
- LI LULU
Assignees
- Guangxi University (广西大学)
Dates
- Publication Date: 2026-05-12
- Application Date: 2025-08-29
Claims (7)
- 1. A method for extracting weld feature points based on a structured light camera, characterized by at least comprising the following steps: Step 1, obtaining spatial pose transformation matrices (T) among the industrial robot base coordinate system, the end effector coordinate system and the surface structured light camera coordinate system through a hand-eye calibration method, so that the coordinate systems of the robot and the camera are unified, providing a foundation for accurate positioning of the weld feature points; Step 2, controlling the industrial robot to adjust the pose of the end effector according to a preset camera photographing path, so that the surface structured light camera is held at a suitable observation angle and working distance and the field of view of the camera covers the workpiece surface; Step 3, the upper computer extracts feature points from the collected point cloud data, comprising the following substeps: Step 3.1, point cloud preprocessing: removing noise and outliers through statistical outlier filtering to improve the continuity of the point cloud data; cleaning discrete points through neighborhood density methods such as radius filtering to highlight the surface structural features of the workpiece; and finally applying adaptive downsampling to reduce the point cloud density while preserving the main geometric features, improving subsequent processing efficiency; Step 3.2, geometric model fitting: performing plane fitting and cluster segmentation on the workpiece point cloud based on the random sample consensus (RANSAC) method, extracting the three main geometric planes of the workpiece surface and solving the mathematical equation of each plane; Step 3.3, determining the weld feature start point: jointly solving the three plane equations obtained in step 3.2 to obtain the three-plane intersection point, defining this intersection point as the weld feature start point, and calculating its three-dimensional coordinates in the surface structured light camera coordinate system; Step 3.4, determining the target weld line: calculating the intersection equations between the three planes of step 3.2 to obtain the intersection lines between the planes, taking an intersection line as the target weld line to obtain a geometric description of the workpiece weld, and determining the three-dimensional coordinates of the target weld line in the camera coordinate system; Step 3.5, determining candidate weld feature points: setting a distance threshold θ near the intersection line, extracting the local point cloud satisfying this condition and constructing a local point cloud bounding box; calculating the principal eigenvalues of the constructed local point cloud in the X, Y and Z directions, analyzing their variation, and thereby identifying regions of the workpiece point cloud with pronounced curvature change as candidate weld feature points; Step 3.6, determining the weld feature end point: from the candidate weld feature points, screening out the point farthest from the three-plane intersection point by a distance-based projection method, determining this point as the weld feature end point, and calculating its three-dimensional coordinates in the structured light camera coordinate system, thereby completing the extraction of the weld feature start point and end point; Step 4, calculating the three-dimensional position information of the weld feature points from the extracted weld feature point information combined with the spatial pose transformation matrix (T), namely converting the extracted weld feature points into the base coordinate system to realize coordinate conversion of the weld feature points; Step 5, repeating steps 2 to 4 until the industrial robot has completed extraction of the weld feature points of the entire workpiece, obtaining a complete set of weld feature point position information.
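Step 3.3 of claim 1 obtains the weld feature start point by jointly solving the three fitted plane equations. As an illustrative sketch only (the patent gives no implementation; the function name and the plane representation (a, b, c, d) with ax + by + cz + d = 0 are assumptions here), the joint solution is a 3x3 linear system in the plane normals:

```python
import numpy as np

def three_plane_intersection(planes):
    """Solve n_i . p = -d_i for the common point of three planes,
    each given as coefficients (a, b, c, d) of ax + by + cz + d = 0."""
    planes = np.asarray(planes, dtype=float)
    normals = planes[:, :3]   # stack of the three plane normals
    rhs = -planes[:, 3]
    # A unique intersection exists only when the normals are linearly
    # independent (no pair of parallel or degenerate planes).
    return np.linalg.solve(normals, rhs)

# The planes x = 1, y = 2 and z = 3 meet at the point (1, 2, 3).
start = three_plane_intersection([(1, 0, 0, -1),
                                  (0, 1, 0, -2),
                                  (0, 0, 1, -3)])
print(start)  # [1. 2. 3.]
```

For step 3.4, the same plane coefficients yield the pairwise intersection lines: each line direction is the cross product of the two plane normals involved.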
- 2. The method for extracting weld feature points based on a structured light camera according to claim 1, wherein in step 2, the preset camera photographing path is planned in advance according to the shape and size of the workpiece, ensuring that the working range of the structured light camera covers the entire workpiece surface and that complete three-dimensional point cloud data are collected.
- 3. The method for extracting weld feature points based on a structured light camera according to claim 1, wherein in step 3.2, when plane fitting is performed based on the random sample consensus (RANSAC) method, the number of iterations is optimized by adaptive iteration to improve the accuracy and stability of the plane fitting.
- 4. The method for extracting weld feature points based on a structured light camera according to claim 1, wherein in step 3.5, the size of the local point cloud bounding box is determined dynamically according to the geometric features of the weld and the preset distance threshold, ensuring that the candidate weld feature points can be extracted accurately.
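The eigenvalue analysis that step 3.5 performs on the bounding-box contents is commonly implemented as the "surface variation" of the local covariance matrix: the smallest eigenvalue divided by the eigenvalue sum is near zero on a flat patch and grows where the surface bends, e.g. along a weld groove. The patent gives no formula, so the following NumPy sketch shows one standard choice, not the patented computation:

```python
import numpy as np

def surface_variation(local_points):
    """lambda_min / (lambda_1 + lambda_2 + lambda_3) of the local
    covariance; ~0 for a planar patch, larger where curvature changes."""
    pts = np.asarray(local_points, dtype=float)
    cov = np.cov(pts.T)                      # 3x3 covariance of the patch
    eig = np.sort(np.linalg.eigvalsh(cov))   # ascending eigenvalues
    total = eig.sum()
    return float(eig[0] / total) if total > 0 else 0.0

# A flat patch scores ~0; a patch folded along a crease scores higher.
x, y = np.meshgrid(np.linspace(-1, 1, 11), np.linspace(-1, 1, 11))
flat = np.column_stack([x.ravel(), y.ravel(), np.zeros(x.size)])
creased = np.column_stack([x.ravel(), y.ravel(), np.abs(x.ravel())])
```

Regions whose score exceeds a chosen threshold would then be kept as candidate weld feature points.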
- 5. The method for extracting weld feature points based on a structured light camera according to claim 1, wherein in step 3.6, the distance-based projection method is performed by calculating the Euclidean distances between the candidate feature points and the three-plane intersection point, and selecting the point with the largest distance as the weld feature end point.
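Selecting the weld feature end point as in claim 5 reduces to an argmax over Euclidean distances. A one-function sketch (the function name and calling convention are assumptions for illustration):

```python
import numpy as np

def weld_end_point(candidates, start_point):
    """Return the candidate weld feature point with the largest
    Euclidean distance from the three-plane intersection point."""
    cand = np.asarray(candidates, dtype=float)
    dists = np.linalg.norm(cand - np.asarray(start_point, dtype=float),
                           axis=1)
    return cand[int(np.argmax(dists))]

# Of three candidates along the weld, the one at x = 0.9 is farthest
# from the start point at the origin.
end = weld_end_point([(0.1, 0.0, 0.0), (0.5, 0.02, 0.0), (0.9, -0.01, 0.0)],
                     start_point=(0.0, 0.0, 0.0))
```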
- 6. The method for extracting weld feature points based on a structured light camera according to claim 1, wherein in step 4, the calculation accuracy of the spatial pose transformation matrix (T) directly affects the positioning accuracy of the weld feature points in the base coordinate system, and a high-accuracy hand-eye calibration algorithm and optimization method are adopted to ensure that the calculation error of the spatial pose transformation matrix (T) is within an allowable range.
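Step 4 (and claim 6) applies the hand-eye calibration result T to move feature points from the camera frame into the robot base frame. Assuming T is a 4x4 homogeneous transform mapping camera coordinates to base coordinates (the patent does not fix the convention), the conversion is a single matrix product:

```python
import numpy as np

def camera_to_base(points_cam, T_base_cam):
    """Apply a 4x4 homogeneous transform (base <- camera) to Nx3 points."""
    pts = np.asarray(points_cam, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    return (T_base_cam @ homo.T).T[:, :3]

# Pure translation by (1, 2, 3): the camera-frame origin maps to (1, 2, 3).
T = np.eye(4)
T[:3, 3] = [1.0, 2.0, 3.0]
print(camera_to_base([[0.0, 0.0, 0.0]], T))  # [[1. 2. 3.]]
```

When the camera is mounted on the end effector, T itself is the chain of the base-to-effector pose (from the robot controller) and the effector-to-camera transform (from hand-eye calibration).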
- 7. A weld feature point extraction system based on a structured light camera, characterized by at least comprising an industrial robot, a surface structured light camera and an upper computer; the industrial robot is used to adjust the pose of the end effector according to a preset path; the surface structured light camera is mounted on the end effector of the industrial robot and is used to collect three-dimensional point cloud data of the workpiece surface; the upper computer is used to receive the point cloud data transmitted by the structured light camera, execute the weld feature point extraction method according to any one of claims 1 to 6, and obtain the spatial position information of the weld feature points.
Description
Weld feature point extraction method based on structured light camera

Technical Field

The invention belongs to the technical field of industrial robot automation and machine vision, and particularly relates to a weld feature point extraction method based on a structured light camera for accurately identifying and locating a weld path. The method acquires point cloud information of the workpiece surface through a three-dimensional structured light camera and extracts the weld feature points through point cloud data processing, thereby providing reliable input data for automated welding and machining path planning of the industrial robot.

Background

In modern manufacturing, especially in the field of welding automation, industrial robots have been widely used for welding tasks with high repeatability, high risk and high precision requirements, in order to improve production efficiency, guarantee welding quality and reduce manual labor intensity. The core of high-quality automated welding is precise path planning of the robot, which relies first of all on accurate identification and localization of weld feature points. At present, the main ways of acquiring weld feature points include offline programming, manual teaching, and methods based on two-dimensional and three-dimensional vision. In recent years, three-dimensional structured light camera technology has been increasingly applied in the welding field because it can reconstruct a three-dimensional point cloud of the workpiece surface by projecting a grating pattern and analyzing phase information, providing richer three-dimensional spatial information than two-dimensional vision.
However, existing application schemes based on three-dimensional structured light still have several limitations. First, some methods process point cloud data too generically, lack extraction algorithms specially designed for the geometric features of welds (such as intersection lines and corner points), and have difficulty directly and stably outputting the feature points required for path planning. Second, these methods are highly susceptible to interference from the field environment: noise in the point cloud, uneven point density, and changes in workpiece surface reflectivity can cause misidentification of feature points or reduced positioning accuracy. Therefore, an efficient and interference-resistant weld feature point extraction method is urgently needed in the field, to solve the technical problem of automatically extracting weld feature points from three-dimensional point clouds with high accuracy and high robustness in complex and harsh industrial field environments.

Disclosure of Invention

To solve the problems in the background art and improve the accuracy and robustness of weld feature point extraction in complex point cloud environments, the invention provides a weld feature point extraction method based on a structured light camera.
A weld feature point extraction method based on a structured light camera is characterized in that the acquisition and processing of weld feature point information are realized by a three-dimensional structured light camera, and the method at least comprises the following steps. Step 1, unifying the coordinate systems: calculating the spatial pose transformation matrices (T) among the industrial robot base coordinate system, the end effector coordinate system and the surface structured light camera coordinate system by a hand-eye calibration method, and establishing the mapping relation between the surface structured light camera coordinate system and the industrial robot coordinate system. Step 2, collecting point clouds: controlling the industrial robot to adjust the pose of the end effector according to a preset camera photographing path, so that the surface structured light camera mounted at the end of the industrial robot is kept at a suitable observation angle and working distance and the field of view of the camera covers the workpiece surface. The surface structured light camera continuously collects three-dimensional point cloud data of the workpiece surface during the robot's motion and transmits it to the upper computer in real time through the data interface, ensuring the integrity and timeliness of the data. The point cloud data contains geometric information of the workpiece surface and provides a rich data source for the subsequent extraction of the weld feature points. Step 3, extracting the weld feature points. After the