CN-116105715-B - Method and system for vision-laser fusion mapping and positioning
Abstract
The invention provides a method and a system for vision-laser fusion mapping and positioning. The method comprises: S1, obtaining first pose data of a laser device and first visual data of a visual device, and constructing and updating a visual map according to the first pose data and the first visual data; S2, obtaining second pose data of the laser device and second visual data of the visual device, and performing joint positioning according to the second pose data, the second visual data and the visual map. The computing-power requirement of this scheme is significantly lower than that of existing methods, it is easy to deploy, and it improves the success rate of existing laser positioning.
Inventors
- LI XIN
- ZENG LINGBING
- ZHU YOUJI
- FENG ZIJIAN
- SHI XUESONG
- QIN BAOXING
- CHENG HAOTIAN
Assignees
- Shanghai Gaussian Automation Technology Development Co., Ltd. (上海高仙自动化科技发展有限公司)
Dates
- Publication Date: 2026-05-05
- Application Date: 2023-02-10
Claims (7)
- 1. A mapping and positioning method of vision-laser fusion, applied to a robot, characterized by comprising the following steps: S1, acquiring first pose data of a laser device and first visual data of a visual device, and constructing and updating a visual map according to the first pose data and the first visual data; S2, acquiring second pose data of the laser device and second visual data of the visual device, and performing joint positioning according to the second pose data, the second visual data and the visual map; wherein step S1 comprises: S11, associating the first pose data with the first visual data; and S12, determining the grid where the robot is currently located according to the first pose data, and storing the first visual data in that grid; wherein step S12 comprises: S121, judging whether visual data already exist in the current grid; if so, turning to S122, otherwise turning to S123; S122, updating the visual data with the first visual data; and S123, adding the first visual data to the grid; wherein step S122 comprises: S1221, judging whether the covariance of the first pose data is less than 1; if so, turning to S1222, otherwise turning to S1226; S1222, judging whether the current first visual data detects a loop closure; if so, turning to S1223, otherwise turning to S1227; S1223, judging whether the difference between the current first pose data and the third pose data of the loop closure is greater than a threshold; if so, turning to S1224, otherwise turning to S1225; S1224, judging that the current first visual data and the third visual data of the loop closure are not at the same position, and storing the first visual data into the grid; S1225, judging that the current first visual data closes the loop successfully, and updating the timestamp of the first visual data in the grid to the current time; S1226, judging that the current first pose data is not trustworthy, and not adding the first visual data to the grid; and S1227, determining that the current first visual data is an image not yet in the grid, and adding the current first visual data to the grid.
- 2. The mapping and positioning method of vision-laser fusion according to claim 1, wherein the grid stores image global feature point descriptors, image local feature point descriptors, and the first pose data associated with the first visual data.
- 3. The mapping and positioning method of vision-laser fusion according to claim 1, wherein in step S2, performing joint positioning according to the second pose data, the second visual data and the visual map comprises: S21, extracting a visual pose from the second visual data, and calculating the confidence of the visual pose; S22, when the confidence meets a threshold condition, taking the visual pose as an initial value, and performing matching calculation between the second pose data and the visual pose; and S23, when the matching calculation result meets a threshold condition, performing back-end graph optimization.
- 4. The mapping and positioning method of vision-laser fusion according to claim 3, wherein in step S21, extracting the visual pose from the second visual data and calculating the confidence of the visual pose comprises: S211, extracting global feature point descriptors and local feature point descriptors of the second visual data, determining a plurality of candidate image frames according to the global feature point descriptors, and calculating feature point matching rates of the local feature point descriptors against the candidate image frames; S212, if the feature point matching rate meets a threshold condition, calculating an Essential matrix and calculating the epipolar inlier rate; S213, decomposing the Essential matrix to recover the true rotation; and S214, when three-dimensional feature points exist in the current grid map, calculating the visual pose from the three-dimensional information of the three-dimensional feature points via PnP, and calculating the confidence of the visual pose according to the feature point matching rate, the epipolar inlier rate and the global feature point descriptor distance.
- 5. A mapping and positioning system of vision-laser fusion, comprising a mapping module and a positioning module; the mapping module is used for acquiring first pose data of a laser device and first visual data of a visual device, and constructing and updating a visual map according to the first pose data and the first visual data; the positioning module is used for acquiring second pose data of the laser device and second visual data of the visual device, and performing joint positioning according to the second pose data, the second visual data and the visual map; the mapping module comprises a data association sub-module and a data storage sub-module; the data association sub-module is used for associating the first pose data with the first visual data; the data storage sub-module is used for determining the grid where the robot is currently located according to the first pose data, and storing the first visual data in that grid; the data storage sub-module comprises a visual data judging unit, a visual data updating unit and a visual data adding unit; the visual data judging unit is used for judging whether visual data already exist in the current grid; if so, turning to the visual data updating unit, otherwise turning to the visual data adding unit; the visual data updating unit is used for updating the visual data with the first visual data; the visual data adding unit is used for adding the first visual data to the grid; the visual data updating unit comprises a first judging subunit, a second judging subunit, a third judging subunit, a visual data storage subunit, a timestamp updating subunit, a visual data excluding subunit and a visual data adding subunit; the first judging subunit is configured to judge whether the covariance of the first pose data is less than 1; if so, turn to the second judging subunit, otherwise turn to the visual data excluding subunit; the second judging subunit is configured to judge whether the current first visual data detects a loop closure; if so, turn to the third judging subunit, otherwise turn to the visual data adding subunit; the third judging subunit is configured to judge whether the difference between the current first pose data and the third pose data of the loop closure is greater than a threshold; if so, turn to the visual data storage subunit, otherwise turn to the timestamp updating subunit; the visual data storage subunit is configured to determine that the current first visual data and the third visual data of the loop closure are not at the same position, and store the first visual data into the grid; the timestamp updating subunit is configured to determine that the current first visual data closes the loop successfully, and update the timestamp of the first visual data in the grid to the current time; the visual data excluding subunit is configured to determine that the current first pose data is not trustworthy, and not add the first visual data to the grid; and the visual data adding subunit is configured to determine that the current first visual data is an image not yet in the grid, and add the current first visual data to the grid.
- 6. An electronic device, comprising: a memory storing executable program code; and a processor coupled to the memory; wherein the processor invokes the executable program code stored in the memory to perform the method according to any one of claims 1 to 4.
- 7. A computer storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, performs the method according to any one of claims 1 to 4.
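The map-update decision flow of claim 1 (steps S121–S1227) can be sketched as a single branching function. This is an illustrative reading only: the grid cell size, the 0.5 m distance threshold, the helper names, and the per-cell storage layout are all assumptions not fixed by the claims.

```python
import math
import time

GRID_SIZE = 1.0  # cell edge length in metres (illustrative assumption)

def cell_of(pose):
    # Quantise the (x, y) part of the laser pose into a grid-cell key.
    x, y = pose[0], pose[1]
    return (math.floor(x / GRID_SIZE), math.floor(y / GRID_SIZE))

def update_grid(grid, pose, cov, visual, loop_pose=None,
                dist_thresh=0.5, now=None):
    """Return a tag naming which branch of claim 1 was taken.

    grid      : dict mapping cell key -> list of (pose, visual, timestamp)
    pose, cov : first pose data and its covariance (scalar stand-in)
    visual    : first visual data (e.g. descriptors for one image)
    loop_pose : third pose data of a detected loop closure, or None
    """
    now = time.time() if now is None else now
    cell = cell_of(pose)
    if cell not in grid:                      # S121 -> S123: cell is empty
        grid[cell] = [(pose, visual, now)]
        return "added-empty-cell"
    if not (cov < 1.0):                       # S1221 -> S1226: pose untrusted
        return "pose-untrusted"
    if loop_pose is None:                     # S1222 -> S1227: no loop closure
        grid[cell].append((pose, visual, now))
        return "added-new-view"
    dx, dy = pose[0] - loop_pose[0], pose[1] - loop_pose[1]
    if math.hypot(dx, dy) > dist_thresh:      # S1223 -> S1224: different place
        grid[cell].append((pose, visual, now))
        return "added-different-place"
    # S1223 -> S1225: successful loop closure, refresh stored timestamp only
    old_pose, old_visual, _ = grid[cell][-1]
    grid[cell][-1] = (old_pose, old_visual, now)
    return "timestamp-refreshed"
```

Keeping only descriptors plus a timestamp per cell, rather than a 3D point-cloud model, is what keeps the memory and compute footprint low in this scheme.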
Description
Method and system for vision-laser fusion mapping and positioning

Technical Field

The invention relates to the field of robots, and in particular to a method, a system, an electronic device and a computer storage medium for mapping and positioning with vision and laser fusion.

Background

Existing visual mapping and positioning technology needs to build a three-dimensional point-cloud model online or offline, which demands substantial computing power and is complex to deploy. Existing laser mapping and positioning technology is prone to positioning failure under conditions such as large initial position deviation, heavy pedestrian flow and structural environment change, and is therefore difficult to meet practical requirements.

Disclosure of Invention

To solve at least the technical problems in the background art, the invention provides a method, a system, an electronic device and a computer storage medium for mapping and positioning with vision and laser fusion, so as to reduce the computing-power requirement and deployment difficulty and to improve the success rate of existing laser positioning. The first aspect of the invention provides a vision-laser fusion mapping and positioning method, applied to a robot, comprising the following steps: S1, acquiring first pose data of a laser device and first visual data of a visual device, and constructing and updating a visual map according to the first pose data and the first visual data; and S2, acquiring second pose data of the laser device and second visual data of the visual device, and performing joint positioning according to the second pose data, the second visual data and the visual map.
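The joint-positioning gate of step S2 (detailed as S21–S23 in claim 3) can be sketched as a small control flow: extract a visual pose with a confidence, use it as an initial value for matching against the laser pose, and only then run back-end graph optimisation. The helper callables and both thresholds below are illustrative assumptions, not the patent's concrete algorithms.

```python
def joint_localize(second_pose, extract_visual_pose, match, optimize,
                   conf_thresh=0.6, match_thresh=0.7):
    """Gatekeeping flow of S21-S23 (illustrative sketch).

    extract_visual_pose : callable returning (visual_pose, confidence)  # S21
    match               : callable scoring laser pose vs. visual pose   # S22
    optimize            : back-end graph optimisation step              # S23
    """
    vis_pose, conf = extract_visual_pose()      # S21: visual pose + confidence
    if conf < conf_thresh:                      # S22 gate: confidence threshold
        return None                             # fall back to laser-only positioning
    score = match(second_pose, vis_pose)        # S22: match with visual pose
                                                #      as the initial value
    if score < match_thresh:                    # S23 gate: matching threshold
        return None
    return optimize(second_pose, vis_pose)      # S23: back-end graph optimisation
```

Returning `None` at either gate models the scheme's conservatism: the visual branch only corrects the laser estimate when both the visual confidence and the matching result clear their thresholds.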
Further, in step S1, constructing and updating the visual map according to the first pose data and the first visual data comprises: S11, associating the first pose data with the first visual data; and S12, determining the grid where the robot is currently located according to the first pose data, and storing the first visual data in that grid.

Further, in step S12, determining the grid where the robot is currently located according to the first pose data and storing the first visual data in that grid comprises: S121, judging whether visual data already exist in the current grid; if so, turning to S122, otherwise turning to S123; S122, updating the visual data with the first visual data; and S123, adding the first visual data to the grid.

Further, stored in the grid are an image global feature point descriptor, an image local feature point descriptor, and the first pose data associated with the first visual data.

Further, in step S122, updating the visual data with the first visual data comprises: S1221, judging whether the covariance of the first pose data is less than 1; if so, turning to S1222, otherwise turning to S1226; S1222, judging whether the current first visual data detects a loop closure; if so, turning to S1223, otherwise turning to S1227; S1223, judging whether the difference between the current first pose data and the third pose data of the loop closure is greater than a threshold; if so, turning to S1224, otherwise turning to S1225; S1224, judging that the current first visual data and the third visual data of the loop closure are not at the same position, and storing the first visual data into the grid; S1225, judging that the current first visual data closes the loop successfully, and updating the timestamp of the first visual data in the grid to the current time; S1226, judging that the current first pose data is not trustworthy, and not adding the first visual data to the grid; and S1227, determining that the current first visual data is an image not yet in the grid, and adding the current first visual data to the grid.

Further, in step S2, performing joint positioning according to the second pose data, the second visual data and the visual map comprises: S21, extracting a visual pose from the second visual data, and calculating the confidence of the visual pose; S22, when the confidence meets a threshold condition, taking the visual pose as an initial value, and performing matching calculation between the second pose data and the visual pose; and S23, when the matching calculation result meets a threshold condition, performing back-end graph optimization.

Further, in step S21, extracting the visual pose from the second visual data and calculating the confidence of the visual pose comprises: S211, extracting global feature point descriptors and local feature point descriptors of the second visual data, determining a plurality of candidate image frames according to the global feature point descriptors, and calculating feature point matching rates of the local feature point descriptors against the candidate image frames; S212, if the feature point matching rate meets the threshold condition, calculating an Essential matrix and calculating the epipolar inlier rate; S213, decomposing the Essential matrix to recover the true rotation.
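Step S214 of claim 4 combines three cues into a single confidence score: the feature-point matching rate, the epipolar inlier rate, and the global feature point descriptor distance. The patent only names these inputs; the weighted-sum form, the weights, and the distance-to-similarity mapping below are illustrative assumptions.

```python
def visual_pose_confidence(match_rate, inlier_rate, desc_dist,
                           weights=(0.4, 0.4, 0.2)):
    """Combine the three cues of S214 into a confidence in [0, 1].

    match_rate  : feature-point matching rate, in [0, 1]
    inlier_rate : epipolar inlier rate from the Essential-matrix check, in [0, 1]
    desc_dist   : global feature point descriptor distance, >= 0
    """
    # A smaller global-descriptor distance means a more similar place,
    # so map distance -> similarity in (0, 1].
    similarity = 1.0 / (1.0 + desc_dist)
    w_match, w_inlier, w_desc = weights
    return (w_match * match_rate
            + w_inlier * inlier_rate
            + w_desc * similarity)
```

Any monotone combination would serve the same role in S22: the score only needs to rank good retrievals above bad ones consistently so a single threshold can gate the subsequent matching calculation.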