CN-115930988-B - Visual odometer method, device, equipment and storage medium


Abstract

The invention discloses a visual odometry method, apparatus, device, and storage medium. The method extracts a plurality of line-segment information items from an image to be processed using the LSD algorithm; screens a plurality of strong-gradient line segments from that information with a quadtree homogenization screening method based on a gradient-strength grading principle; uniformly samples each strong-gradient line segment to obtain a plurality of sampling points; and back-projects the sampling points to obtain a set of spatial points whose covariance is then computed. During homogenization, the mean gradient strength of the pixels on each line segment is computed, so that segments providing strong linear constraints are selected for the point-line model constructed by the algorithm, improving the robustness gained from introducing the linear constraints.
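The screening step described in the abstract, keeping only segments whose mean gradient strength along the segment clears a preset threshold, can be sketched roughly as follows. This is a minimal illustration, not the patent's implementation; all function and parameter names are my own, and the patent's quadtree spatial partitioning is omitted here.

```python
import numpy as np

def mean_gradient_strength(grad_mag, p0, p1, n_samples=16):
    """Mean gradient magnitude sampled uniformly along one line segment.

    grad_mag : 2-D array of per-pixel gradient magnitudes
    p0, p1   : (x, y) endpoints of the segment
    """
    ts = np.linspace(0.0, 1.0, n_samples)
    xs = np.round(p0[0] + ts * (p1[0] - p0[0])).astype(int)
    ys = np.round(p0[1] + ts * (p1[1] - p0[1])).astype(int)
    return float(grad_mag[ys, xs].mean())

def screen_strong_segments(grad_mag, segments, strength_threshold):
    """Keep only segments whose mean gradient strength meets the threshold."""
    return [s for s in segments
            if mean_gradient_strength(grad_mag, s[0], s[1]) >= strength_threshold]
```

In the full method this filter would run per quadtree cell, so that the surviving segments are also distributed evenly over the image.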

Inventors

  • CHEN WEI
  • WU YONGCUN
  • HAO YUNGANG
  • ZENG KAN
  • ZHU SONGBAI
  • HU XIN
  • XIANG XUEFU
  • ZHANG ZHENYU
  • WANG HAN
  • TIAN RUIJUAN

Assignees

  • China South Industries Group Automation Research Institute Co., Ltd. (中国兵器装备集团自动化研究所有限公司)

Dates

Publication Date
2026-05-12
Application Date
2022-12-02

Claims (8)

  1. A method of visual odometry, comprising: extracting a plurality of line-segment information items contained in an image to be processed using the LSD algorithm; screening a plurality of strong-gradient line segments from the line-segment information by a quadtree homogenization screening method based on a gradient-strength grading principle, wherein the gradient-strength grading principle requires that the mean gradient strength of a line segment meet a preset strength threshold, and the quadtree homogenization screening method comprises: letting the set of line segments detected in the current image be given, preserving the image gradient information, and defining a gradient-value operation; at each screening pass, selecting a fixed number of points from each line segment and completing segment screening via a screening model; uniformly sampling each strong-gradient line segment to obtain a plurality of sampling points, and back-projecting the sampling points to obtain a set of spatial points so as to compute the covariance of that set; judging whether the covariance elements of the spatial point set corresponding to the sampling points of each strong-gradient line segment satisfy a preset condition, and fitting the spatial point sets corresponding to the sampling points of all strong-gradient line segments that satisfy the preset condition by a line-fitting technique to obtain a spatial line-segment model; and constructing a photometric error based on a photometric-invariance model, constructing a point-line error based on a linear-constraint model, and performing odometry calculation on a target error model, wherein the target error model comprises a photometric-error constraint, a first collinearity constraint, and a second collinearity constraint.
  2. The visual odometry method of claim 1, wherein the image to be processed is first subjected to photometric-distortion removal to obtain a target image, and the plurality of line-segment information items contained in the target image is extracted using the LSD algorithm.
  3. The method according to claim 1, wherein judging whether the covariance elements of the spatial point set corresponding to the sampling points contained in each strong-gradient line segment satisfy the preset condition comprises: obtaining three covariance elements of the spatial point set corresponding to the three sampling points contained in each strong-gradient line segment, and computing the ratio of one covariance element to the sum of the three covariance elements; and judging whether the ratio is greater than a preset coefficient, the preset condition being satisfied when the ratio is greater than that coefficient.
  4. A visual odometry method according to claim 3, wherein strong-gradient line segments whose ratio is not greater than the preset coefficient are eliminated.
  5. The visual odometry method of claim 1, wherein performing the odometry calculation on the target error model comprises: performing a least-squares solution of the target error model via the LM algorithm to obtain the pose transformation and the corresponding inverse depths of the pixel points.
  6. A visual odometry apparatus for performing the visual odometry method of any of claims 1-5, the apparatus comprising: a line-segment information extraction unit, configured to extract a plurality of line-segment information items contained in an image to be processed using the LSD algorithm; a strong-gradient line segment screening unit, configured to screen a plurality of strong-gradient line segments from the line-segment information by a quadtree homogenization screening method based on a gradient-strength grading principle; a sampling-point acquisition unit, configured to uniformly sample each strong-gradient line segment to obtain a plurality of sampling points, and to back-project the sampling points to obtain a set of spatial points so as to compute the covariance of that set; a spatial line-segment model fitting unit, configured to judge whether the covariance elements of the spatial point set corresponding to the sampling points of each strong-gradient line segment satisfy a preset condition, and to fit the spatial point sets of all strong-gradient line segments that satisfy the preset condition by a line-fitting technique to obtain a spatial line-segment model; and a solving unit, configured to construct a photometric error based on a photometric-invariance model, construct a point-line error based on a linear-constraint model, and perform odometry calculation on a target error model, wherein the target error model comprises a photometric-error constraint, a first collinearity constraint, and a second collinearity constraint.
  7. A visual odometry device, comprising a processor and a memory, wherein: the memory is configured to store program code and transmit the program code to the processor; and the processor is configured to perform the visual odometry method of any of claims 1-5 according to instructions in the program code.
  8. A computer-readable storage medium, wherein the computer-readable storage medium stores program code for performing the visual odometry method of any of claims 1-5.

Description

Visual odometry method, apparatus, device, and storage medium

Technical Field

The invention relates to the technical field of robot positioning and navigation, and in particular to a visual odometry method, apparatus, device, and storage medium.

Background

Simultaneous localization and mapping (SLAM) is one of the core technologies in such active fields as mobile robotics, autonomous driving, and virtual/augmented reality. SLAM technology can generally be classified into visual SLAM, which uses a camera as its main sensor, and laser SLAM, which uses a lidar. Lidar can directly acquire high-precision spatial point-cloud information, but the data lack environmental texture and high-precision lidar is expensive. In contrast, vision cameras are low-cost, low-power, easy to integrate, and capture rich image texture. Visual SLAM typically comprises the following modules: a front-end visual odometer, back-end optimization, loop-closure detection, and mapping. Visual odometry solves the camera's incremental motion directly from adjacent image frames and is the most critical link in visual SLAM. Visual odometry generally takes one of two forms, indirect or direct: the indirect method extracts image feature points and computes feature descriptors to perform feature matching and complete the data association between images, then builds a geometric reprojection-error model to solve for the incremental visual motion; the direct method directly compares the pixel intensity differences of two images, constructing a visual projection and photometric residual model under a brightness-constancy assumption and solving for the incremental visual motion.
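The direct method's photometric residual described above can be sketched as follows. This is a simplified illustration under the brightness-constancy assumption, with an affine brightness correction (exp(a), b) in the spirit of DSO's photometric model and a Huber-style robust weight; it is not the patent's exact formulation, and all names are mine.

```python
import numpy as np

def photometric_error(patch_ref, patch_cur, a=0.0, b=0.0, huber=9.0):
    """Sum of robustly weighted squared photometric residuals over a
    pixel patch: r = I_cur - (exp(a) * I_ref + b)."""
    r = np.asarray(patch_cur, float) - (np.exp(a) * np.asarray(patch_ref, float) + b)
    absr = np.abs(r)
    # Huber-style weight: quadratic inside the threshold, linear outside.
    w = np.where(absr <= huber, 1.0, huber / np.maximum(absr, 1e-12))
    return float(np.sum(w * r * r))
```

In a full direct-method odometer this error would be summed over a sparse set of high-gradient pixels and minimized jointly over the camera pose, the inverse depths, and the affine brightness parameters.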
The indirect method generally requires more computing resources and struggles in weak-texture environments where image features are hard to extract. The direct method, in contrast, skips per-pixel feature computation, can solve for incremental camera motion even in weak-texture environments, and its photometric information can be fused with edge-detection techniques to complete the visual-odometry solution more robustly. DSO (Direct Sparse Odometry) is a sparse, direct-method visual odometry scheme that matches or even exceeds the accuracy of traditional indirect methods while processing roughly five times faster, making it one of the mainstream direct-method visual SLAM schemes. DSO comprises a front-end tracking part and a back-end optimization part: the front end completes system initialization and tracking based on the direct method, while the back end depth-filters image points based on the front-end tracking result and constructs window constraints to optimize the system state variables. DSO differs from traditional direct-method schemes in that it introduces a photometric calibration model, which greatly reduces the photometric effects caused by illumination change and lens attenuation and thereby improves the robustness of the photometric-error model constructed by the direct method.
DSO is a sparse direct-method visual odometer whose solving model is highly efficient, but unlike the feature points exploited by the indirect method, the direct method uses no environmental structure information and solves the photometric-error model solely on a strongly relied-upon brightness-constancy assumption, so error accumulation is inevitable even with the aid of a photometric calibration model. How to overcome the insufficient robustness of solving visual incremental motion by the direct method in simultaneous localization and mapping (SLAM) is therefore a technical problem to be solved by those skilled in the art.

Disclosure of Invention

In view of the above, the present invention provides a visual odometry method, apparatus, device, and storage medium to overcome, or at least partially solve, the above problems. The aim is to perform real-time positioning calculation for a robot equipped with a monocular camera and to improve the robustness of monocular visual odometry in structured scenes. The invention provides the following scheme: A visual odometry method, comprising: extracting a plurality of line-segment information items contained in an image to be processed by using an LSD