CN-116740141-B - Machine vision-based weld joint positioning system and method for small preceding assembly
Abstract
The invention discloses a machine vision-based weld positioning system and method for a preceding small assembly. The method comprises: hoisting a camera above a welding platform by a tool, fixing the relative position, and making the center line of the camera perpendicular to the plane of the workpiece to be measured; calibrating the internal parameters of the camera, and performing hand-eye calibration between the camera and the welding robot or between the line laser sensor and the welding robot; feeding the workpiece and shooting scene pictures; calculating the workpiece feature point set from the shot pictures by gray-level segmentation and Harris corner detection, and, for a scene that must be shot by a plurality of cameras, performing image stitching based on a calibration plate guide chart before the feature point set calculation; locating the feature point set and calculating the weld end points; and tracking the weld while correcting the weld data in real time to obtain the actual weld data. The invention can effectively improve the welding efficiency and degree of automation of the small-assembly units of a shipyard.
Inventors
- FANG ZICHEN
- MA TAO
- WANG KANGJIE
- LI QIMING
- CHEN WEIBIN
- ZHANG BENSHUN
- ZHANG LELE
- FENG YUSHENG
Assignees
- 中船重工信息科技有限公司
- 中国船舶集团有限公司第七一六研究所
- 江苏杰瑞科技集团有限责任公司
Dates
- Publication Date
- 20260508
- Application Date
- 20230619
Claims (8)
- 1. A machine vision-based method for positioning the weld joint of a preceding small assembly, characterized by comprising the following steps: hoisting the camera above the welding platform by a tool, fixing its position, and making the center line of the camera perpendicular to the plane of the workpiece to be measured; performing camera internal parameter calibration, and performing camera-welding robot hand-eye calibration or line laser sensor-welding robot hand-eye calibration; feeding a workpiece and shooting a scene picture; calculating the workpiece feature point set from the shot pictures by gray segmentation and Harris corner detection, and, for a scene that needs to be shot by a plurality of cameras, performing image stitching based on a calibration plate guide chart before the feature point set calculation; locating the feature point set and calculating the weld joint end points; performing weld tracking and correcting the weld data in real time to obtain the actual weld data; wherein the workpiece is a rectangular bottom plate, and the feature points are selected as two points on the two short edges of the bottom plate and one point on the toggle plate; and wherein the image stitching based on the calibration plate guide chart specifically comprises: step 1, manufacturing a double calibration plate by fixing two identical calibration plates on the same long plate, wherein the center lines of the two calibration plates lie on the same straight line, and measuring the center-to-center distance, the height difference of the calibration surface of the double calibration plate relative to the platform to be measured, and the pixel accuracy in the world coordinate system; step 2, placing the double calibration plate under two cameras, wherein the two calibration plates lie respectively in the fields of view of the two cameras, denoted the left camera C1 and the right camera C2, respectively acquiring the guide images ImgGuide1 and ImgGuide2 and the images Img1 and Img2 to be stitched, measuring the overlap proportion of the two guide images in the column direction and the stagger ratio in the row direction, and, based on the calibrated camera internal parameters and taking the centers of the calibration plates in the guide images ImgGuide1 and ImgGuide2 as the origins of coordinates, obtaining the corresponding external parameters of the left camera C1 and the right camera C2, each comprising a translation amount, a rotation amount, and a rotation direction taking the value 1 or 0; step 3, performing coordinate conversion based on the external parameters of the two cameras to determine new external parameters, and stitching the images Img1 and Img2 to be stitched in the row direction according to the new external parameters to obtain the stitched result image ImgRes.
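Setting aside the calibration-plate bookkeeping, the row-direction splice of step 3 can be sketched as a crop-and-concatenate on two equal-size images, given the measured column-direction overlap proportion and row-direction stagger ratio. This is a minimal illustrative sketch, not the patented procedure; all names are assumptions.

```python
import numpy as np

def stitch_row_direction(img1, img2, overlap_ratio, stagger_ratio=0.0):
    """Stitch two same-size images side by side: crop the measured
    column-direction overlap from the left edge of img2 and shift it
    vertically by the measured row-direction stagger before joining."""
    h, w = img1.shape[:2]
    overlap_px = int(round(w * overlap_ratio))   # duplicated columns
    stagger_px = int(round(h * stagger_ratio))   # vertical misalignment
    img2_cropped = img2[:, overlap_px:]          # drop the overlap
    img2_shifted = np.roll(img2_cropped, stagger_px, axis=0)
    return np.concatenate([img1, img2_shifted], axis=1)
```

In the claimed setup the overlap and stagger come from the double calibration plate measurement rather than from image matching, which is what makes the guide-chart approach robust to featureless workpiece scenes.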
- 2. The machine vision-based positioning method of a weld joint of a preceding sub-assembly of claim 1, wherein a nine-point calibration method is used for camera-welding robot hand-eye calibration, and a TCP calibration method is used for line laser sensor-welding robot hand-eye calibration.
- 3. The machine vision-based method for positioning the weld of a preceding small assembly according to claim 1, wherein the step 3 comprises: calculating, for each of the two cameras, the projection transformation matrix M from the pixel coordinate system to the world coordinate system, M = K·[R|t], wherein K is the internal parameter matrix of the camera and [R|t] is the external parameter matrix of the camera; converting pixel coordinates into the world coordinate system by s·[u, v, 1]^T = M·[X, Y, Z, 1]^T, wherein s is the camera depth, (u, v) are the column and row coordinates of the pixel point, and (X, Y, Z) are the world coordinates; cropping the redundant rows of ImgGuide1 to determine the new upper-left vertex, determining the new lower-right vertex from 0.5 times the measured overlap, and determining the world coordinates of the new upper-left and lower-right vertices; defining the new external parameters of C1 as RotNew, transNew, and DirectionNew, wherein RotNew is the new rotation amount of C1, transNew is the new translation amount of C1, and DirectionNew is the new direction of C1; and, for ImgGuide2, defining a transformation matrix, inverting it, converting the inverse into a 4×4 matrix, multiplying the matrices in sequence to obtain the new external parameters of C2, and finally converting the result into a 1×3 array.
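The pixel-to-world relation in claim 3, s·[u, v, 1]^T = K(R·Xw + t), can be inverted when the camera depth s is known, for example from the measured height of the workpiece plane below the fixed overhead camera. A sketch under that assumption (not the patent's implementation):

```python
import numpy as np

def pixel_to_world(K, R, t, uv, depth):
    """Back-project pixel (u, v) at known camera depth into world
    coordinates, inverting depth*[u,v,1]^T = K (R @ Xw + t)."""
    uv1 = np.array([uv[0], uv[1], 1.0])
    pc = depth * np.linalg.inv(K) @ uv1   # point in camera coordinates
    return R.T @ (pc - t)                 # back into world coordinates
```

With the camera center line perpendicular to the platform, every point on the bottom plate shares one depth value, which is why a single height measurement suffices.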
- 4. The machine vision-based method for positioning the weld joint of a preceding small assembly according to claim 3, wherein the workpiece feature point set calculation by gray segmentation and Harris corner detection based on the shot pictures is specifically as follows: step 1, accurately extracting the bottom plate area of the toggle plate piece: first performing median filtering and morphological opening and closing operations on the picture to be processed to filter noise and remove regions smaller than a set minimum area, then extracting the bottom plate area of the toggle plate piece by threshold segmentation; step 2, performing Harris corner detection on the outer contour of the bottom plate area of the toggle plate piece to obtain a corner set VectorC, calculating the slope between every two adjacent points in VectorC to obtain a slope set VectorK, determining a screening threshold, and removing from VectorC the points whose slopes in VectorK are smaller than the threshold to obtain the workpiece feature point set VectorF; and step 3, determining the positions of the feature points from the shape of the workpiece bottom plate based on the set VectorF, and solving the feature point set.
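The slope screening of step 2 can be sketched as follows: walk the ordered contour corners, compute the slope of each adjacent segment, and discard corners whose segment slope falls below the threshold. This is a minimal reading of the claim, assuming VectorC is ordered along the contour; the function name is illustrative.

```python
import numpy as np

def filter_corners_by_slope(corners, k_threshold):
    """Keep corners from an ordered contour corner set (VectorC) whose
    incoming segment slope (VectorK) reaches the screening threshold;
    near-vertical segments are treated as infinite slope."""
    corners = np.asarray(corners, dtype=float)
    dx = np.diff(corners[:, 0])
    dy = np.diff(corners[:, 1])
    slopes = np.abs(np.divide(dy, dx,
                              out=np.full_like(dy, np.inf),
                              where=np.abs(dx) > 1e-9))
    kept = [corners[0]]
    for i, k in enumerate(slopes):
        if k >= k_threshold:          # small-slope points are removed
            kept.append(corners[i + 1])
    return np.array(kept)
```

The effect is to suppress spurious Harris responses along nearly straight contour runs while keeping the corners where the contour direction genuinely changes.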
- 5. A machine vision-based weld positioning system for a preceding small assembly, characterized by comprising a camera unit, a welding robot unit, an information processing unit, and a welding platform, wherein the camera unit comprises a camera and an illumination light source and is used for shooting pictures of the workpieces in the scene; the welding robot unit comprises a welding robot and a line laser sensor and is used for fine positioning of the weld; the information processing unit is connected to the camera and to a light source controller, and is used for controlling the light source through the light source controller, controlling the camera to shoot pictures of the workpieces in the scene, calculating the workpiece feature point sets, locating the feature point sets and calculating the weld end points, and transmitting the feature point set data to the welding robot unit; and the welding platform is used for placing the toggle plate pieces to be welded.
- 6. The machine vision-based weld positioning system for a preceding small assembly according to claim 5, wherein the camera is a CCD/CMOS camera suspended above the welding platform by a fixture, with its center line perpendicular to the upper surface of the welding platform, and the workpiece to be welded is placed on the welding platform with the toggle plate facing upward.
- 7. The machine vision-based weld positioning system for a preceding small assembly according to claim 5, wherein the plurality of cameras are fixed in relative position with respect to each other.
- 8. The machine vision-based weld positioning system for a preceding small assembly according to claim 5, wherein the information processing unit is an industrial personal computer.
Description
Machine vision-based weld joint positioning system and method for small preceding assembly
Technical Field
The invention belongs to the technical field of machine vision, and particularly relates to a machine vision-based weld joint positioning system and method for a preceding small assembly.
Background
Preceding small assembly is a stage in the manufacture of ship sections and generally refers to the processing of the simplest hull structural members of a ship at a fixed site, such as the mounting of reinforcing ribs on assembled T-profiles, frames, and the like. At present, in domestic shipbuilding enterprises, preceding small-assembly welding is performed mainly by hand: the degree of automation is not high, intelligent line production has in most cases not been established, and the waste of human resources and energy is serious. Under this situation, speeding up the popularization and application of hull preceding small-assembly welding robots and realizing the intelligent manufacture of preceding small assemblies is urgent. Welding is an important link in the assembly of the hull and a key factor determining hull quality; the various processes and stations in ship manufacture are realized by means of welding. According to the relevant data, welding accounts for roughly 30% to 40% of the total hull construction workload, welding cost accounts for roughly 30% to 50% of the total hull construction cost, and the man-hours required for welding account for roughly 40% of the total hull construction man-hours. At present, ship welding in China is performed basically by manual welding carried out by workers, and the degree of automation is only about 20% to 30%. Chinese Patent Publication No.
CN 113369761A discloses a weld positioning method and system based on a vision-guided robot, comprising: obtaining image information of the workpiece to be welded, determining the weld start position of the workpiece according to the image information, and guiding the welding robot to weld from the determined start position. That patent is a combined manual-visual detection scheme in which the workpiece position information must be input in advance, so the working time is long.
Disclosure of Invention
The invention aims to provide a machine vision-based weld positioning system and method for a preceding small assembly, which can effectively improve the welding efficiency and degree of automation of the small-assembly units of a shipyard, and which has high robustness and strong immunity to ambient light. The technical scheme for achieving the purpose of the invention is a machine vision-based weld joint positioning method for a preceding small assembly, comprising the following steps: hoisting the camera above the welding platform by a tool, fixing the relative position, and making the center line of the camera perpendicular to the plane of the workpiece to be measured; performing camera internal parameter calibration, and performing camera-welding robot hand-eye calibration or line laser sensor-welding robot hand-eye calibration; feeding a workpiece and shooting a scene picture; calculating the workpiece feature point set from the shot pictures by gray segmentation and Harris corner detection, and, for a scene that needs to be shot by a plurality of cameras, performing image stitching based on a calibration plate guide chart before the feature point set calculation; locating the feature point set and calculating the weld joint end points; and performing weld tracking and correcting the weld data in real time to obtain the actual weld data.
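Viewed end to end, the claimed method is a capture, feature extraction, endpoint computation, and tracking pipeline. A schematic skeleton with the hardware-dependent stages injected as callables (all function names here are illustrative placeholders, not the patent's implementation):

```python
def locate_weld(capture, detect_features, compute_endpoints, track):
    """Orchestrate the claimed workflow: shoot scene pictures, extract
    the workpiece feature point set, compute the weld end points, then
    track the weld and return the real-time-corrected weld data."""
    images = capture()                       # shoot scene picture(s)
    features = detect_features(images)       # gray segmentation + Harris
    endpoints = compute_endpoints(features)  # locate weld end points
    return track(endpoints)                  # corrected actual weld data
```

Separating the stages this way mirrors the system claim: the camera unit supplies `capture`, the information processing unit supplies the two middle stages, and the welding robot unit with its line laser sensor supplies `track`.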
Further, a nine-point calibration method is adopted for the camera-welding robot hand-eye calibration, and a TCP calibration method is adopted for the line laser sensor-welding robot hand-eye calibration. Further, the workpiece is a rectangular bottom plate, and the selected feature points are two points on the two short edges of the bottom plate and one point on the toggle plate. Further, the image stitching based on the calibration plate guide chart specifically comprises: step 1, manufacturing a double calibration plate by fixing two identical calibration plates on the same long plate, wherein the center lines of the two calibration plates lie on the same straight line, and measuring the center-to-center distance, the height difference of the calibration surface of the double calibration plate relative to the platform to be measured, and the pixel accuracy in the world coordinate system; step 2, placing the double calibration plate under two cameras, wherein the two calibration plates lie respectively in the fields of view of the two cameras and are respectively recorded as a le