CN-116503478-B - Visual positioning control method based on label pattern
Abstract
The invention discloses a visual positioning control method based on a tag pattern, comprising: step A, preprocessing an image acquired by a camera of a robot and finding the graphic attributes and vertices of the tag pattern in the preprocessed image; step B, calculating, based on the graphic attributes and vertices of the tag pattern, the distance between the tag pattern and the camera and the deflection angle of the tag pattern relative to the camera by using a monocular ranging principle; step C, judging whether the distance between the tag pattern and the camera and the deflection angle of the tag pattern relative to the camera meet preset positioning conditions; if so, determining that the visual positioning of a device to be positioned is finished; otherwise, executing step D, in which the robot moves to a predicted position point according to the distance between the tag pattern and the camera and the deflection angle of the tag pattern relative to the camera, acquires an image of the tag pattern with the camera, and then executes steps A to C again.
Inventors
- ZHAO YUBIN
- ZHOU HEWEN
- DU YUYANG
- QU SONGSONG
- YOU SIXIA
- HUANG HUIBAO
Assignees
- SUN YAT-SEN UNIVERSITY (中山大学)
- ZHUHAI AMICRO SEMICONDUCTOR CO., LTD. (珠海一微半导体股份有限公司)
Dates
- Publication Date
- 20260505
- Application Date
- 20230413
Claims (20)
- 1. A visual positioning control method based on a label pattern, characterized by comprising the following steps: step A, preprocessing an image acquired by a camera of a robot, and finding the graphic attributes and vertices of a label pattern in the preprocessed image, wherein the label pattern is arranged on the surface of a device to be positioned; step B, calculating, based on the graphic attributes and vertices of the label pattern, the distance between the label pattern and the camera and the deflection angle of the label pattern relative to the camera by using a monocular ranging principle; step C, judging whether the distance between the label pattern and the camera and the deflection angle of the label pattern relative to the camera meet preset positioning conditions; if so, determining that the visual positioning of the device to be positioned is finished, so that the robot is aligned with or in contact with the corresponding label pattern on the device to be positioned; otherwise, executing step D; step D, moving the robot to a predicted position point according to the distance between the label pattern and the camera and the deflection angle of the label pattern relative to the camera, acquiring an image of the label pattern with the camera, and then executing steps A to C. In step B, the method for calculating the distance between the label pattern and the camera and the deflection angle of the label pattern relative to the camera by using the monocular ranging principle comprises: the robot sets the target straight-line direction as the extension direction, on the horizontal plane, of the plane of the object to be measured, and sets the target straight-line direction to be parallel to the walking plane of the robot; the robot sets two vertices of one label pattern that are distributed along the target straight-line direction as two adjacent target detection points in the plane of the object to be measured, wherein the distance Ux between the two adjacent target detection points in the plane of the object to be measured is obtained in advance; of the two adjacent target detection points, the robot marks one as a first detection point and calculates, by using the pinhole imaging model, the distance dx1 from the edge to be detected where the first detection point is located to the camera; based on the angle relationships of the triangle, the deflection angle of the label pattern relative to the camera is calculated by formulas (not reproduced in the source text), wherein: the angle, on the side opposite to the target straight-line direction, between the perpendicular segment from the camera to the edge to be detected where the first detection point is located and the pinhole plane of the camera is denoted bx11, and the corresponding angle for the edge to be detected where the second detection point is located is denoted bx21; the robot sets the included angle formed, in the target straight-line direction, between the plane of the object to be measured and the pinhole plane of the camera as the inclination angle of the plane of the object to be measured, denoted ax; the deflection angle of the label pattern relative to the camera comprises the included angle bx12 between the perpendicular segment from the camera to the edge to be detected where the first detection point is located and the optical axis, and the included angle bx22 between the perpendicular segment from the camera to the edge to be detected where the second detection point is located and the optical axis; the distance between the label pattern and the camera in step B comprises the distance from the edge to be detected where the first detection point is located to the camera and the distance from the edge to be detected where the second detection point is located to the camera, and the monocular ranging principle in step B comprises the pinhole imaging model.
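The claim's own deflection-angle formulas are not reproduced in the source text. As a minimal illustrative sketch of the "angle relationships of the triangle" it invokes, one can assume the camera and the two target detection points form a triangle with sides dx1, dx2 and base Ux and apply the law of cosines (this reconstruction and the function name are assumptions, not the patent's formulas):

```python
import math

def triangle_angles(dx1: float, dx2: float, ux: float) -> tuple[float, float]:
    """Angles (radians) at the two detection points of the
    camera/detection-point triangle, via the law of cosines.
    dx1, dx2: distances from the camera to the two detection points.
    ux: known separation of the detection points along the target line.
    Illustrative only; the claim's exact formulas are not in the source."""
    # Angle at detection point 1, between the base ux and the side dx1.
    a1 = math.acos((ux**2 + dx1**2 - dx2**2) / (2 * ux * dx1))
    # Angle at detection point 2, between the base ux and the side dx2.
    a2 = math.acos((ux**2 + dx2**2 - dx1**2) / (2 * ux * dx2))
    return a1, a2

# When dx1 == dx2, the object plane faces the camera squarely and the
# two base angles are equal (isosceles triangle).
a1, a2 = triangle_angles(2.0, 2.0, 1.0)
```

From such base angles, the inclination angle ax of the object plane relative to the pinhole plane could then be derived; the patent's bx11/bx21/bx12/bx22 definitions suggest exactly this kind of decomposition.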
- 2. The visual positioning control method according to claim 1, wherein in step D: if the label patterns participating in the judgment of step C are all positioning graphic labels, and the distance between a positioning graphic label and the camera or the deflection angle of the positioning graphic label relative to the camera is judged not to meet the preset positioning conditions, the robot sets the predicted position point according to the distance between the positioning graphic label and the camera and the deflection angle of the positioning graphic label relative to the camera, and moves to the predicted position point, wherein the robot updates the predicted position point each time step D is executed; when step C judges that the distance between the positioning graphic label and the camera meets a preliminary positioning condition, the robot, after finishing step D, starts to identify the target graphic label at the most recently set predicted position point, and then configures the target graphic label as the label pattern in steps B to D; the determination that the visual positioning of the device to be positioned is completed comprises the completion of the visual positioning of the corresponding label pattern on the device to be positioned, wherein the label pattern is a target graphic label or a positioning graphic label, the corresponding label pattern on the device to be positioned is the target graphic label, and the target graphic label is used for representing an assembly port of the device to be positioned dedicated to contact by the robot.
- 3. The visual positioning control method according to claim 2, wherein before identifying the target graphic label, the robot sequentially traverses each identified positioning graphic label in the process of executing steps A to D, so as to guide the robot to move from the identified positioning graphic labels on both sides toward the unidentified area in the middle; the robot identifies, at the most recently determined predicted position point, the rectangular label at the center position of the target graphic label, determines that the distance between the rectangular label and the camera meets the preset positioning conditions, thereby ensuring that the target graphic label is determined to exist in the unidentified area in the middle, and determines that a plurality of positioning graphic labels are distributed on the two sides of the target graphic label, wherein the target graphic label is formed by an arrangement of a plurality of rectangular labels; the device to be positioned is a charging seat, the target graphic label is used for representing a label of a charging interface of the charging seat, the direction perpendicular to the plane where the target graphic label is located represents the docking direction of the charging interface, and the assembly port of the device to be positioned dedicated to contact by the robot is the charging interface of the charging seat.
- 4. The visual positioning control method according to claim 3, wherein the label size of one positioning graphic label on the surface of the charging seat is larger than the label size of any rectangular label included in the target graphic label on the surface of the charging seat, such that: when the robot recognizes the positioning graphic label, it does not recognize the rectangular labels and therefore does not recognize the target graphic label, and at that moment the robot is at a first predicted position point; when the robot recognizes a rectangular label, it does not recognize the positioning graphic label, and at that moment the robot is at a second predicted position point; the distance between the first predicted position point and the charging interface is larger than the distance between the second predicted position point and the charging interface, and the positioning graphic label does not comprise a rectangular label.
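The size ordering in claim 4 implies a simple two-stage inference: far from the charging interface only the larger positioning label resolves, while close up only the smaller rectangular labels do. A minimal sketch of that inference (function and state names are hypothetical, not from the patent):

```python
def positioning_stage(sees_positioning_label: bool,
                      sees_rectangular_label: bool) -> str:
    """Infer the coarse/fine positioning stage from which label type the
    camera currently resolves, per the size ordering of claim 4.
    Hypothetical helper; the patent does not name these states."""
    if sees_positioning_label and not sees_rectangular_label:
        return "first_predicted_point"   # still far from the charging interface
    if sees_rectangular_label and not sees_positioning_label:
        return "second_predicted_point"  # near the charging interface
    return "undetermined"
```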
- 5. The visual positioning control method according to claim 3, wherein, in step A, the robot identifies the positioning graphic labels and/or the target graphic label from the plurality of label patterns at one time based on the graphic attributes of the label patterns, so as to obtain the vertices and graphic attributes of the identified label patterns; each time a plurality of positioning graphic labels are identified, the distance between each positioning graphic label and the camera and the deflection angle of each positioning graphic label relative to the camera are calculated in step B by using the monocular ranging principle, based on the graphic attributes and vertices of each positioning graphic label; in step C, when the robot judges that the distances between the camera and the two closest positioning graphic labels that are located on the two sides of the middle position and have different placement forms are not the two groups of distances with the smallest values among the distances between each positioning graphic label and the camera, it determines that the distances between the currently identified label patterns and the camera do not meet the preset positioning conditions.
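The step-C check of claim 5 amounts to testing whether the two labels flanking the middle gap hold the two smallest camera distances. A sketch under assumed data layout (one scalar distance per label; names hypothetical):

```python
def centered_on_gap(distances: dict[str, float],
                    left_label: str, right_label: str) -> bool:
    """True when the two positioning labels flanking the middle gap have
    the two smallest distances to the camera, i.e. the robot faces the
    gap where the target graphic label sits. Sketch of the claim-5 check;
    the dict layout is an assumption, not the patent's data structure."""
    two_smallest = sorted(distances, key=distances.get)[:2]
    return {left_label, right_label} == set(two_smallest)
```

If the check fails, step D would move the robot toward a new predicted position point and the A-to-C cycle repeats.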
- 6. The visual positioning control method according to claim 5, wherein, by executing step D, the robot sets the predicted position point in front of the two closest positioning graphic labels that are located on the two sides of the separated middle position and have different placement forms, then adjusts its pose according to the deflection angle of the currently set predicted position point relative to the camera, and moves to the currently set predicted position point so as to shorten the distance between the robot and the target graphic label; the robot then repeatedly executes steps A to D until the distances between the camera and the two closest positioning graphic labels that are located on the two sides of the middle position and have different placement forms are judged to be the two groups of distances with the smallest values among the distances between each positioning graphic label and the camera, whereupon the distance between each positioning graphic label and the camera is determined to meet the preliminary positioning condition, though the distance between each currently identified label pattern and the camera does not yet meet the preset positioning conditions; before the robot identifies the target graphic label in step A, the distance between each currently identified label pattern and the camera is not allowed to meet the preset positioning conditions.
- 7. The visual positioning control method according to claim 6, wherein the robot, after recognizing the target graphic label in step A, recognizes each rectangular label constituting the target graphic label; then, in step B, based on the graphic attributes and vertices of each rectangular label, the distance between each rectangular label and the camera and the deflection angle of each rectangular label relative to the camera are calculated by using the monocular ranging principle; the distances between each recognized rectangular label and the camera are then traversed in turn; in step C, when the robot judges that the distance between the camera and the rectangular label at the center position of the target graphic label is not the group of distances with the smallest value among the recognized distances between the rectangular labels and the camera, it determines that the distance between the currently recognized target graphic label and the camera does not meet the preset positioning conditions; then, in step D, the robot moves in the direction of the rectangular label near the center position of the target graphic label until the distance between the camera and the rectangular label at the center position of the target graphic label is the group of distances with the smallest value among the recognized distances between each rectangular label and the camera, whereupon the distance between the currently recognized target graphic label and the camera is determined to meet the preset positioning conditions, the currently reached position point being the most recently determined predicted position point; then, at the most recently determined predicted position point, the moving direction of the robot is adjusted to be parallel to the perpendicular direction of the rectangular label at the center position of the target graphic label, and it is determined that the distance between the currently identified target graphic label and the camera and the deflection angle of the currently identified target graphic label relative to the camera meet the preset positioning conditions.
- 8. The visual positioning control method according to claim 3, wherein the robot recognizes a positioning graphic label as being composed of a triangle label and determines the side lengths of the triangle label through its vertices, wherein the placement form of the positioning graphic label set on one side of the target graphic label is different from the placement form of the positioning graphic label set on the other side of the target graphic label; the robot recognizes a target graphic label as being composed of an array of a plurality of identical rectangular labels and determines the side lengths of a rectangular label through its vertices, wherein the target graphic label is a regular polygon arranged with one rectangular label at its center position, and different numbers of rectangular labels are distributed in the two areas adjacent to the center position within the target graphic label so as to distinguish the two sides of the target graphic label.
- 9. The visual positioning control method according to claim 8, wherein three rows of rectangular labels are arranged in the target graphic label, and three rectangular labels exist in the row passing through the center position, of which one rectangular label fills the center position of the target graphic label and is parallel to the central axis of the charging seat, and the other two rectangular labels are located on the two sides of the center position; the row passing through the center position is denoted the middle row; two rectangular labels are arranged in the row above the middle row, in the same columns as the rectangular label at the center position and the rectangular label on one side of the middle row, respectively; two rectangular labels are arranged in the row below the middle row, in the same columns as the rectangular label at the center position and the rectangular label on the other side of the middle row, respectively; the right and left directions of the target graphic label are thereby respectively indicated.
- 10. The visual positioning control method according to claim 8, wherein the method of identifying the positioning graphic labels and/or the target graphic label from the plurality of label patterns at one time based on the graphic attributes of the label patterns comprises: the robot detects the number of edges enclosing a closed figure when searching for the graphic attributes and vertices of a label pattern in the preprocessed image, wherein the graphic attributes of the label pattern are the edge-line characteristics of the closed figure, and the edge-line characteristics of the closed figure comprise the number of edges enclosing the closed figure and the number of vertices of the closed figure; when the robot detects that the number of edges enclosing the closed figure is 3, the currently detected closed figure is identified as a triangle label, and a positioning graphic label is determined to be identified; and if the number of rectangular labels detected by the robot in the same frame of image accumulates to the total number of rectangular labels required by the target graphic label, wherein the accumulated rectangular labels are arranged symmetrically about one rectangular label at the center position and the numbers of rectangular labels distributed in the neighborhoods on the two sides of the center position are different, it is determined that one target graphic label is identified.
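The edge-count classification and the accumulation test of claim 10 can be sketched as follows. The total of 7 rectangles is inferred from the layout in claim 9 (one center row of three plus two rows of two), not stated in claim 10 itself, and the function names are hypothetical:

```python
def classify_closed_figure(num_edges: int) -> str:
    """Identify a tag by the number of edges enclosing its closed
    contour, as in claim 10: 3 edges -> triangle positioning label."""
    if num_edges == 3:
        return "positioning_label"
    if num_edges == 4:
        return "rectangular_label"
    return "unknown"

def target_label_found(rect_count: int, required: int = 7) -> bool:
    """The target graphic label counts as recognized once all of its
    rectangular sub-labels are accumulated in the same frame.
    required=7 matches the claim-9 layout (an assumption here)."""
    return rect_count == required
```

In practice the edge count would come from a polygon approximation of each detected contour in the preprocessed image.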
- 11. The visual positioning control method according to claim 10, wherein, in a label pattern, an edge having a certain inclination angle with respect to the target straight-line direction is recorded as an edge to be detected of the label pattern; the method for calculating the distance between the label pattern and the camera by using the pinhole imaging model comprises the following steps: obtaining in advance the lens focal length f of the camera, the side length w of the edge to be detected, and the pixel width p formed by the edge to be detected in the imaging plane of the camera; the distance d between the edge to be detected and the camera is calculated by the similar-triangle relationship of the pinhole model, d = f·w/p; if one end point of the edge to be detected is the first detection point, the edge to be detected is the edge to be detected where the first detection point is located, and d is set equal to the distance dx1 from the camera to the edge to be detected where the first detection point is located; if one end point of the edge to be detected is the second detection point, the edge to be detected is the edge to be detected where the second detection point is located, and d is set equal to the distance dx2 from the camera to the edge to be detected where the second detection point is located; when the plane of the object to be measured is not parallel to the pinhole plane of the camera, dx1 is not equal to dx2, and the line of intersection of the plane of the object to be measured and the pinhole plane of the camera is perpendicular to the target straight-line direction.
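The pinhole-model range computation of claim 11 reduces to the standard similar-triangle relation between the known side length, its imaged pixel width, and the focal length:

```python
def pinhole_distance(f: float, w: float, p: float) -> float:
    """Monocular pinhole-model range for an edge to be detected.
    f: lens focal length (in the same units as p, e.g. pixels),
    w: real side length of the edge (world units),
    p: pixel width of the edge in the imaging plane.
    Similar triangles give d = f * w / p, the relation implied by the
    quantities listed in claim 11."""
    return f * w / p
```

For example, with f = 800 px, a 5 cm edge (w = 0.05 m) imaged at p = 40 px gives a range of 1 m; applying this to the edges holding the first and second detection points yields dx1 and dx2.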
- 12. The visual positioning control method according to claim 11, wherein, when the label pattern is a triangle label, the two adjacent target detection points are the two vertices of the base of the triangle label, wherein the target detection points corresponding to each triangle label are distributed in the plane of the object to be measured along the target straight-line direction; the line connecting two vertices of a triangle label that are distributed at a first included angle to the target straight-line direction is recorded as the edge to be detected where the first detection point is located, and the line connecting two vertices of the same triangle label that are distributed at a second included angle to the target straight-line direction is recorded as the edge to be detected where the second detection point is located, wherein the sum of the second included angle and the first included angle equals 180 degrees; when the label pattern comprises a rectangular label, the two adjacent target detection points are the two vertices of a side of the rectangular label parallel to the target straight-line direction, and the two sides of the rectangular label perpendicular to the target straight-line direction are the edge to be detected where the first detection point is located and the edge to be detected where the second detection point is located, wherein both of these edges to be detected are parallel to the pinhole plane of the camera.
- 13. The visual positioning control method according to claim 12, wherein, in step D, the robot selects, from the included angles formed by the perpendicular segments from the camera to the edges to be detected of the recognized triangle labels with different placement forms on the two sides of the unrecognized region, the region formed by the smallest included angle, and sets the predicted position point in the region formed by the currently selected smallest included angle, so that at the predicted position point it is judged by step C that the distances between the camera and the two closest triangle labels that are located on the two sides of the middle position and have different placement forms are the two groups of distances with the smallest values among the distances between each recognized triangle label and the camera; each time step D is executed, the smallest included angle selected by the robot is updated, so that the predicted position point is updated and the robot is guided to approach the unidentified area in the middle from the recognized triangle labels on the two sides; the distance between one triangle label and the camera comprises the distance between the camera and the edge to be detected where the first detection point of the triangle label is located and the distance between the camera and the edge to be detected where the second detection point of the triangle label is located, which together form one group of distances for the triangle label.
- 14. The visual positioning control method according to claim 12, wherein, in the course of the robot moving in the direction of the rectangular label at the center position of the target graphic label by executing step D after recognizing the rectangular labels in the target graphic label: if the robot is detected, according to the deflection angle of the currently recognized rectangular label with respect to the camera, to be on the left side of the rectangular label at the center position of the target graphic label, the robot moves rightward to the new predicted position point; or, if the robot is detected, according to the deflection angle of the currently recognized rectangular label with respect to the camera, to be on the right side of the rectangular label at the center position of the target graphic label, the robot moves leftward to the new predicted position point; this continues until it is judged at the most recently determined predicted position point that the distance between the camera and the rectangular label at the center position of the target graphic label is the smallest group among the distances between the recognized rectangular labels and the camera, whereupon it is determined that the distance between the currently recognized target graphic label and the camera satisfies the preset positioning conditions; the distance between one rectangular label and the camera comprises the distance between the camera and the edge to be detected where the first detection point of the rectangular label is located and the distance between the camera and the edge to be detected where the second detection point of the rectangular label is located, which together form one group of distances for the rectangular label; the distance between the target graphic label and the camera includes the distances between the camera and all the rectangular labels required to compose the target graphic label.
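The centering behavior of claim 14 is a bang-bang correction plus a stop test. A minimal sketch (function names and the side encoding are assumptions):

```python
def homing_step(robot_side: str) -> str:
    """One step of the claim-14 centering behavior: if the robot is left
    of the center rectangle it moves right, otherwise left. Sketch only;
    the side would be inferred from the deflection angle's sign."""
    return "move_right" if robot_side == "left" else "move_left"

def center_reached(distances: list[float], center_index: int) -> bool:
    """Claim-14 stop test: the center rectangle's distance is the
    smallest among all recognized rectangles' distances."""
    return distances[center_index] == min(distances)
```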
- 15. The vision positioning control method according to claim 14, wherein after the robot determines that the distance between the currently recognized target graphic label and the camera satisfies the preset positioning condition, the robot adjusts the optical axis of the camera to be perpendicular to the plane of the object to be measured in a rotating manner, so that the moving direction of the robot becomes parallel to the perpendicular direction of the rectangular label at the center position of the target graphic label, and further determines that the deflection angle of the currently recognized target graphic label with respect to the camera satisfies the preset positioning condition.
- 16. The visual positioning control method according to claim 14, wherein, if the robot detects that one side of the label pattern is not parallel to the pinhole plane of the camera, it determines that the side not parallel to the pinhole plane of the camera is distorted in the camera, and further determines that the label pattern is not parallel to the pinhole plane of the camera and that the label pattern is distorted in the camera; if the robot detects that one side of the label pattern is parallel to the pinhole plane of the camera, it confirms that the side parallel to the pinhole plane of the camera does not produce distortion in the camera.
- 17. The visual positioning control method according to claim 16, wherein, when the robot recognizes the two adjacent target detection points in the plane of the object to be measured, it calculates the product Ux·sin(ax) of the distance Ux between the two adjacent target detection points in the plane of the object to be measured and the sine of the inclination angle ax of the plane of the object to be measured, and sets Ux·sin(ax) as the ranging error produced by the distortion, in the camera, of the line connecting the two adjacent target detection points.
- 18. The visual positioning control method according to claim 17, wherein, when the plane of the object to be measured is not parallel to the pinhole plane of the camera and the robot detects that the edges to be detected of the rectangular label perpendicular to the target straight-line direction are parallel to the pinhole plane of the camera, the robot determines that the ranging error produced by the distortion, in the camera, of the side of the rectangular label parallel to the target straight-line direction is not equal to 0, and that the ranging error produced by the distortion, in the camera, of the edges to be detected of the rectangular label is equal to 0; the edges to be detected of the rectangular label comprise the edge to be detected where the first detection point is located and the edge to be detected where the second detection point is located.
- 19. The visual positioning control method according to claim 16, wherein, when the distance between the two adjacent target detection points obtained by the robot is less than 0.5·(dx1+dx2), the robot sets the ranging error produced by the distortion, in the camera, of the line connecting the two adjacent target detection points equal to the value 0.
- 20. The visual positioning control method according to claim 16, wherein, if the two vertices of the edge to be detected where the first detection point is located are updated to be the two adjacent target detection points, an edge of the same triangle label originally distributed along the target straight-line direction is set as the edge to be detected where one updated target detection point is located, and the edge to be detected where the original second detection point was located is set as the edge to be detected where the other updated target detection point is located, so that the target straight-line direction is updated to be parallel to the edge to be detected where the original first detection point was located; when the distance between the two updated adjacent target detection points is less than 0.5·(dx1+dx2), the robot sets the ranging error produced by the distortion, in the camera, of the line connecting the two updated adjacent target detection points equal to the value 0, wherein the preset distortion error value is equal to 0.5·(dx1+dx2).
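The distortion-error model of claims 17 to 20 can be summarized in two small functions: the error itself, Ux·sin(ax), and the threshold test that zeroes it when the detection-point separation is below 0.5·(dx1+dx2). Function names are illustrative:

```python
import math

def ranging_error(ux: float, ax: float) -> float:
    """Claim-17 distortion error for the line joining two adjacent target
    detection points: Ux * sin(ax), where ax is the inclination angle of
    the object plane (radians here; the patent does not fix the unit)."""
    return ux * math.sin(ax)

def error_negligible(ux: float, dx1: float, dx2: float) -> bool:
    """Claims 19/20: when the detection-point separation Ux is below the
    preset distortion threshold 0.5 * (dx1 + dx2), the ranging error is
    set equal to 0."""
    return ux < 0.5 * (dx1 + dx2)
```

Note that when the object plane is parallel to the pinhole plane, ax = 0 and the error vanishes, consistent with claim 16's parallel-side case.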
Description
Visual positioning control method based on label pattern
Technical Field
The application relates to the technical field of monocular vision pose measurement, and in particular to a visual positioning control method based on a label pattern.
Background
Visual ranging is one of the important technologies in the field of robotics, and is widely applied in visual positioning, target tracking, visual obstacle avoidance and the like. A common visual ranging method is monocular ranging, which has a simple structure and a high operation speed and therefore has wide application prospects. The monocular ranging technology disclosed in the prior art generally simplifies a monocular vision system into a camera projection model, establishes a ranging model by deriving geometric relations, obtains the conversion relation between image coordinates and the world coordinate system, and finally performs calculation through geometric relations (including the pinhole imaging model and the geometric proportions of similar triangles) to measure the distance to an obstacle.
Disclosure of Invention
The application provides a visual positioning control method based on a label pattern, the specific technical scheme of which is as follows: a visual positioning control method based on a label pattern comprises: step A, preprocessing an image acquired by a camera of a robot, and finding the vertices and graphic attributes of the label pattern in the preprocessed image, wherein the label pattern is arranged on the surface of a device to be positioned; step B, calculating, based on the graphic attributes and vertices of the label pattern, the distance between the label pattern and the camera and the deflection angle of the label pattern relative to the camera by using a monocular ranging principle; step C, judging whether the distance between the label pattern and the camera and the deflection angle of the label pattern relative to the camera meet preset positioning conditions; if so, determining that the visual positioning of the device to be positioned is completed, so that the robot is aligned with or contacts the corresponding label pattern on the device to be positioned; otherwise, executing step D, in which the robot moves to a predicted position point according to the distance between the label pattern and the camera and the deflection angle of the label pattern relative to the camera, acquires an image of the label pattern with the camera, and then executes steps A to C.
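The A-to-D cycle described above can be sketched as a control loop. The `robot` methods used here (`preprocess_and_find_tag`, `range_tag`, `meets_conditions`, `move_to_predicted_point`) are hypothetical names standing in for the patent's steps, not an API the patent defines:

```python
def visual_positioning_loop(robot, max_iters: int = 50) -> bool:
    """Run steps A-D of the disclosed method until the preset positioning
    conditions hold or the iteration budget runs out. Sketch only."""
    for _ in range(max_iters):
        tag = robot.preprocess_and_find_tag()      # step A: find tag attributes/vertices
        dist, angle = robot.range_tag(tag)         # step B: monocular ranging
        if robot.meets_conditions(dist, angle):    # step C: check positioning conditions
            return True                            # visual positioning finished
        robot.move_to_predicted_point(dist, angle) # step D: move, then re-acquire
    return False
```

Each pass through the loop uses a single freshly acquired frame, matching the single-frame ranging emphasized in the advantages below.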
Compared with the prior art, the robot uses the vertices of the identified label patterns to calculate the distance and angle information of the corresponding label relative to the same camera, and thereby estimates the relative positional relationship between the various label patterns and that camera (which may be at different positions). This provides accurate positioning information for the robot to navigate back to the device to be positioned and make docking contact, without spending excessive time collecting multiple frames of images at the same position: the distance and angle information of the identified label patterns is calculated from a single collected frame by the monocular ranging principle. Preset positioning conditions are introduced, and in the process of continuously calculating and updating the distance between the label patterns and the camera and the deflection angle of the label patterns relative to the camera, the heading of the robot is corrected toward matching the label patterns, so that the robot approaches the device to be positioned. This improves the visual positioning accuracy of the camera, suppresses the influence of the camera's ranging error, improves navigation accuracy, and reduces the amount of real-time positioning computation.
Further, in step D, if, in the case that the label patterns judged in step C are all positioning graphic labels, the distance between the positioning graphic label and the camera or the deflection angle of the positioning graphic label relative to the camera does not meet the conditions, the robot sets the predicted position point according to the distance between the positioning graphic label and the camera and the deflection angle of the positioning graphic label relative to the camera, and moves to the predicted position point, wherein the robot updates the predicted position point each time it executes step D; after step C judges that the distance between the positioning graphic label and the camera satisfies the preliminary positioning condition, the robot starts to identify the target graphic label at the most recently set predicted position point, and then configures the target graphic label as the label pattern in steps B to D, wherein step B is to determine that the visual positioning of the device to be positioned