CN-121544712-B - Automatic component positioning method based on vision

CN121544712B

Abstract

The invention discloses a vision-based automatic component positioning method. The method comprises: affixing checkerboard feature points at the ball head and the ball socket; fitting the sphere center of the ball-socket inner surface and measuring the ball-socket checkerboard; photographing the in-position ball head and ball socket and storing the three-dimensional feature-point data at that position; constructing pose transformation matrices from the camera to the laser-tracker assembly coordinate system and from the laser-tracker assembly coordinate system to the positioner; photographing the to-be-positioned ball head and ball socket with the camera and storing the three-dimensional feature-point data; constructing a pose transformation matrix between the to-be-positioned and in-position states; mapping the to-be-positioned checkerboard feature points and the sphere center of the ball-socket inner surface to obtain their relative distance in the camera coordinate system; and solving the travel of the positioner through the camera-to-laser-tracker-to-positioner pose transformation matrices. The method allows both the pose of the ball-head component and the pose of the camera to change between shots, and achieves automatic, rapid positioning of aircraft components.
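The "ball center fitting" step summarized above is, in practice, a linear least-squares sphere fit over the measured inner-surface points. A minimal sketch in NumPy, not taken from the patent itself (function and variable names are illustrative):

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit.

    Each point satisfies x^2 + y^2 + z^2 = 2ax + 2by + 2cz + d,
    where (a, b, c) is the center and d = r^2 - a^2 - b^2 - c^2,
    so the center can be solved as a linear system.
    """
    P = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])   # [2x, 2y, 2z, 1]
    f = (P ** 2).sum(axis=1)                         # x^2 + y^2 + z^2
    sol, *_ = np.linalg.lstsq(A, f, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius
```

Given points sampled from the ball-socket inner surface, `fit_sphere` recovers the sphere-center coordinates used later as the docking reference.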

Inventors

  • WANG LUYAO
  • ZHU GUORONG
  • WANG ZHIJUN
  • CHEN LINGKAI
  • WEI YIXIONG
  • FU YUN
  • ZHOU PING
  • HOU SHANG
  • GE HAIWEN

Assignees

  • Zhejiang Lab (之江实验室)

Dates

Publication Date
2026-05-08
Application Date
2026-01-19

Claims (10)

  1. A vision-based automatic component positioning method, comprising: installing checkerboards at the ball head and at the ball socket respectively; measuring, with a laser tracker, the corner data within the checkerboard at the to-be-positioned ball socket, collecting the inner-spherical-surface data of the ball socket, and fitting the sphere-center coordinates; moving the positioner so that the ball socket and the ball head mate in place, and measuring, with the laser tracker, the checkerboard feature-point data at the in-position ball head and ball socket; photographing at a predetermined position with a camera to obtain the three-dimensional data of the corners within the checkerboards at the in-position ball head and ball socket, and computing a first pose transformation matrix from the camera to the laser-tracker assembly coordinate system; fixing a target ball of the laser tracker on the positioner, moving each axis of the positioner, recording the data of the positioner and the laser tracker, and constructing a second pose transformation matrix from the laser-tracker assembly coordinate system to the positioner; photographing the to-be-positioned ball socket with the camera and extracting the three-dimensional checkerboard feature-point data at the to-be-positioned ball socket; constructing a third pose transformation matrix of the component pose from the three-dimensional checkerboard feature-point data at the to-be-positioned ball socket and the three-dimensional corner data within the checkerboard at the to-be-positioned ball socket; applying the third pose transformation matrix to the three-dimensional checkerboard feature-point data at the to-be-positioned ball socket to obtain mapped data; constructing a fourth pose transformation matrix and a fifth pose transformation matrix based on the corner data within the checkerboard at the to-be-positioned ball socket, the checkerboard feature-point data at the in-position ball socket, and the mapped data; applying the fourth and fifth pose transformation matrices respectively to the sphere-center coordinates to calculate a relative displacement; multiplying the relative displacement in sequence by the first pose transformation matrix and the second pose transformation matrix to obtain a movement command value; and sending the movement command value to the positioner to complete in-position docking of the ball socket.
  2. The vision-based automatic component positioning method according to claim 1, wherein the checkerboards are installed on the same side faces of the ball head and the ball socket, and part or all of the checkerboard inner corners are selected as feature points.
  3. The vision-based automatic component positioning method according to claim 1, wherein the sphere-center coordinates of the ball-socket inner surface are fitted using a least-squares method.
  4. The vision-based automatic component positioning method according to claim 1, wherein photographing at a predetermined position with a camera to obtain the three-dimensional data of the corners within the checkerboard at the in-position ball socket comprises: controlling the camera via a console to reach the predetermined position and photographing to obtain two-dimensional RGB image data and depth image data of the in-position ball socket; detecting the two-dimensional RGB image with OpenCV library functions, and masking the checkerboard region in the two-dimensional RGB image, to obtain two-dimensional floating-point coordinates of the checkerboard inner corners at the in-position ball socket; and obtaining the three-dimensional data of the checkerboard inner corners at the in-position ball socket by two-dimensional linear interpolation, combining the pixel coordinates of the two-dimensional RGB image with the depth image data.
  5. The vision-based automatic component positioning method according to claim 1, wherein computing the first pose transformation matrix from the camera to the laser tracker comprises: performing decentering on the three-dimensional corner data within the checkerboard at the in-position ball socket and on the checkerboard feature-point data at the in-position ball socket; constructing a covariance matrix; performing singular value decomposition on the covariance matrix; and computing the first pose transformation matrix from the resulting left and right singular vector matrices, wherein the pose transformation matrix comprises a rotation matrix and a translation matrix.
  6. The vision-based automatic component positioning method according to claim 1, wherein a target ball of the laser tracker is mounted on the positioner, each axis of the positioner is moved in sequence via the console, the three-coordinate data of the positioner and the measurement data of the laser tracker are recorded respectively, and the second pose transformation matrix from the laser-tracker assembly coordinate system to the positioner is constructed.
  7. The vision-based automatic component positioning method according to claim 1, wherein constructing the fourth and fifth pose transformation matrices based on the corner data within the checkerboard at the to-be-positioned ball socket, the checkerboard feature-point data at the in-position ball socket, and the mapped data, and applying them respectively to the sphere-center coordinates to calculate the relative displacement, comprises: constructing the fourth pose transformation matrix and the fifth pose transformation matrix respectively based on the corner data within the checkerboard at the to-be-positioned ball socket, the checkerboard feature-point data at the in-position ball socket, and the mapped three-dimensional checkerboard feature-point data; mapping the sphere-center coordinates with the fourth and fifth pose transformation matrices respectively; and taking the difference of the two mapped results to obtain the relative displacement.
  8. A vision-based automatic component positioning device, comprising one or more processors configured to implement the vision-based automatic component positioning method according to any one of claims 1-7.
  9. An electronic device, comprising a memory and a processor, wherein the memory is coupled to the processor; the memory is configured to store program data, and the processor is configured to execute the program data to implement the vision-based automatic component positioning method according to any one of claims 1-7.
  10. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the vision-based automatic component positioning method according to any one of claims 1-7.
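The decentering, covariance, and singular-value-decomposition construction of claim 5 is the standard Kabsch/Procrustes estimate of a rigid transform between two matched point sets. A minimal NumPy sketch (function and variable names are illustrative, not from the patent):

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate R, t minimizing sum_i ||R @ src[i] + t - dst[i]||^2
    via decentering + covariance + SVD (the Kabsch algorithm)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    X, Y = src - c_src, dst - c_dst              # decentering
    H = X.T @ Y                                  # 3x3 covariance matrix
    U, _, Vt = np.linalg.svd(H)                  # left/right singular vectors
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T      # rotation matrix
    t = c_dst - R @ c_src                        # translation
    return R, t
```

Applied to the checkerboard corners measured by the laser tracker and the same corners seen by the camera, this yields the rotation and translation making up a pose transformation matrix such as claim 5's camera-to-laser-tracker transform.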

Description

Automatic component positioning method based on vision

Technical Field

The invention relates to the technical field of aircraft component assembly, in particular to a vision-based automatic component positioning method.

Background

In digital assembly of aircraft components, the component is typically transported to a station by an automated guided vehicle (AGV) or a crane; an operator then manually controls an operating panel to move a plurality of positioners in sequence so that the ball sockets on the positioners dock with the ball heads on the component, after which the control panel commands each positioner to lift the component and the AGV or crane withdraws. In this process, the operator manually moves multiple positioners, visually checks the ball-head and ball-socket docking, and corrects it repeatedly; this consumes manpower and time, slows the workflow, and involves many uncontrollable factors. The invention accounts for the working condition of the aircraft component entering the station, comprehensively considers the pose change of the component after entering the station and the pose change of the camera during installation and photographing, adopts a vision scheme to capture the relevant pose changes, calculates the relative distance between the ball head and the ball socket, and automatically controls the positioner to complete docking, thereby saving labor, shortening time, and simplifying the operation.

Disclosure of Invention

The invention aims to overcome the defects of the prior art and provides a vision-based automatic component positioning method.
In order to achieve the above object, the invention provides a vision-based automatic component positioning method, comprising: installing checkerboards at the ball head and at the ball socket respectively; measuring, with a laser tracker, the corner data within the checkerboard at the to-be-positioned ball socket, collecting the inner-spherical-surface data of the ball socket, and fitting the sphere-center coordinates; moving the positioner so that the ball socket and the ball head mate in place, and measuring, with the laser tracker, the checkerboard feature-point data at the in-position ball socket; photographing at a predetermined position with a camera to obtain the three-dimensional coordinates of the corners within the checkerboard at the in-position ball socket, and computing a first pose transformation matrix from the camera to the laser-tracker assembly coordinate system; fixing a target ball of the laser tracker on the positioner, moving each axis of the positioner, recording the data of the positioner and the laser tracker, and constructing a second pose transformation matrix from the laser-tracker assembly coordinate system to the positioner; photographing the to-be-positioned ball socket with the camera and extracting the three-dimensional checkerboard feature-point data at the to-be-positioned ball socket; constructing a third pose transformation matrix of the component pose from the to-be-positioned and in-position ball-socket data; applying the third pose transformation matrix to the three-dimensional checkerboard feature-point data at the to-be-positioned ball socket to obtain mapped data; constructing a fourth pose transformation matrix and a fifth pose transformation matrix based on the corner data within the checkerboard at the ball socket, the checkerboard feature-point data at the in-position ball socket, and the mapped data; applying the fourth and fifth pose transformation matrices respectively to the sphere-center coordinates to calculate a relative displacement; multiplying the relative displacement in sequence by the first and second pose transformation matrices to obtain a movement command value; and sending the movement command value to the positioner to complete in-position docking of the ball socket.

Furthermore, the checkerboards are installed on the same side faces of the ball head and the ball socket, and part or all of the checkerboard inner corners are selected as feature points.

Further, the sphere-center coordinates of the ball-socket inner surface are fitted by a least-squares method.

Further, the photographing at a predetermined position with a camera to obtain the three-dimensional coordinates of the corners within the checkerboard at the in-position ball socket comprises: controlling the camera via a console to reach the predetermined position and photographing to obtain two-dimensional RGB image data and depth image data of the in-position ball socket; detecting the two-dimensional RGB image using OpenCV library
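The two-dimensional linear interpolation step of claim 4, which lifts sub-pixel checkerboard corners to 3-D using the depth image, can be sketched as follows. This is pure NumPy; in practice the sub-pixel corner coordinates would come from OpenCV's `cv2.findChessboardCorners`, and the pinhole intrinsics `fx, fy, cx, cy` here are illustrative assumptions, not values from the patent:

```python
import numpy as np

def sample_depth(depth, u, v):
    """Bilinearly interpolate the depth image at a sub-pixel corner (u, v)."""
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    du, dv = u - u0, v - v0
    return (depth[v0, u0]         * (1 - du) * (1 - dv)
          + depth[v0, u0 + 1]     * du       * (1 - dv)
          + depth[v0 + 1, u0]     * (1 - du) * dv
          + depth[v0 + 1, u0 + 1] * du       * dv)

def backproject(u, v, z, fx, fy, cx, cy):
    """Pinhole back-projection of pixel (u, v) at depth z to camera coordinates."""
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
```

Each detected corner is thus turned into a 3-D point in the camera coordinate system, which is the input to the pose-transformation constructions of claims 1 and 5.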