CN-121120816-B - Visual image construction method in low-light environment
Abstract
The invention provides a visual mapping method in a low-light environment, relating to the technical field of visual image construction. The method separates the brightness channel of an input image and divides it into a plurality of tiles; calculates local statistics of each tile to adaptively generate contrast-limiting parameters and enhancement parameter vectors; performs CLAHE enhancement on the brightness channel based on these parameters; after reconstructing the image, extracts candidate points in one-to-one correspondence with the existing map feature points and calculates a comprehensive confidence score for each; and judges whether the distance between the enhancement parameter vectors of a candidate point and its corresponding map point is below a dynamic threshold, deciding in combination with the confidence score whether the candidate point is adopted as a new map point. Finally, the system dynamically optimizes the enhancement parameters according to the feature-point matching survival rate of each tile, continuously improving the quality and stability of map construction through iterative updating, so as to realize robust visual mapping in low-light environments.
Inventors
- ZOU JIEFENG
- ZHANG CHAOXIA
Assignees
- Foshan University (佛山大学)
Dates
- Publication Date: 2026-05-12
- Application Date: 2025-08-27
Claims (6)
- 1. A visual image construction method in a low-light environment, characterized by comprising the following specific steps:
Step 1: separate a brightness channel from the current input image in the HSV color space, divide the brightness channel into a plurality of tiles, calculate local statistics of the tiles, and generate a contrast-limiting parameter for each tile based on its local statistics.
Step 2: apply the CLAHE algorithm based on the contrast-limiting parameters to obtain an enhanced brightness channel, and combine it with the original chromaticity channel to reconstruct an enhanced image; from the enhanced image, screen candidate feature points in one-to-one correspondence with the feature points in the visual map; combine the contrast-limiting parameters and the local statistics to generate the enhancement parameter vector of the tile where each candidate feature point is located; and calculate a comprehensive confidence score for each candidate feature point.
Step 3: if any candidate feature point and its corresponding feature point satisfy the matching condition, adopt them as map points, wherein the matching condition comprises that the vector distance between the enhancement parameter vector of the tile where the candidate feature point is located and that of the tile where the corresponding feature point is located is below a dynamic threshold, and that the comprehensive confidence score of the candidate feature point exceeds a preset scoring threshold.
Step 4: generate a matching survival rate for each tile based on the map points and the candidate feature points, optimize the brightness channel of the current input image until the matching survival rate exceeds a survival-rate threshold, and replace the corresponding feature points in the visual map with the optimized candidate feature points, so as to obtain the final visual map.
The logic for generating the matching survival rate of each tile based on the map points and the candidate feature points is: let n_k be the total number of candidate feature points generated for tile k in the enhanced image, and m_k the number of those candidates successfully matched and adopted as map points; the matching survival rate S_k is defined as
S_k = m_k / n_k.
When S_k falls below the survival-rate threshold S_th, the contrast-limiting parameter of the tile is updated in attenuated form:
c_k' = γ · c_k,
wherein γ ∈ (0, 1) is the attenuation factor, c_k' is the contrast-limiting parameter after optimization, and c_k is the contrast-limiting parameter before optimization. Based on the optimized contrast-limiting parameter c_k', the CLAHE algorithm is re-executed to enhance the brightness channel of the current and subsequent input images, and Steps 2, 3 and 4 are re-executed until S_k exceeds S_th, generating a new set of candidate feature points. Replacing the optimized candidate feature points into the visual map comprises: once the matching survival rate has improved and exceeds the survival-rate threshold S_th, replacing the old feature points originally stored in the visual map with the new candidate feature points that satisfy the matching condition, and updating the descriptors and three-dimensional position information of the related map points, thereby completing the iterative optimization of the visual map.
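The per-tile survival-rate feedback of claim 1 (Step 4) can be sketched in pure Python; the function name, default threshold, and attenuation value below are illustrative assumptions, not reference values from the patent:

```python
def update_clip_limit(n_candidates, n_adopted, clip_limit,
                      survival_threshold=0.5, attenuation=0.8):
    """One survival-rate feedback iteration for a single tile.

    n_candidates: total candidate feature points in the tile (n_k)
    n_adopted:    candidates matched and adopted as map points (m_k)
    clip_limit:   current CLAHE contrast-limiting parameter (c_k)
    Returns (survival rate S_k, possibly attenuated clip limit).
    """
    survival_rate = n_adopted / n_candidates if n_candidates else 0.0
    if survival_rate < survival_threshold:
        # attenuated update: c_k' = gamma * c_k, with gamma in (0, 1)
        clip_limit = attenuation * clip_limit
    return survival_rate, clip_limit
```

In use, CLAHE is re-run with the attenuated clip limit and Steps 2 through 4 are repeated until every tile's survival rate clears the threshold.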
- 2. The visual mapping method in a low-light environment according to claim 1, wherein the logic for dividing the brightness channel into tiles and calculating their local statistics is as follows: the brightness-channel image is divided into M × N rectangular regions of equal size as tiles, with 0% overlap between tiles; for each tile, local statistics are calculated, comprising the gray-value mean and gray-value standard deviation of all pixels in the tile, which are used to characterize the brightness level and contrast fluctuation of the region.
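A minimal sketch of this tiling-and-statistics step, in pure Python over a grayscale image stored as a list of rows (the grid shape and function name are illustrative; a real implementation would use NumPy and handle image dimensions that are not exact multiples of the grid):

```python
def tile_stats(image, m, n):
    """Split a grayscale image (list of equal-length rows) into an
    m x n grid of non-overlapping tiles and return the (mean, std)
    local statistics per tile in row-major order."""
    h, w = len(image), len(image[0])
    th, tw = h // m, w // n  # tile height/width (assumes exact division)
    stats = []
    for i in range(m):
        for j in range(n):
            pixels = [image[r][c]
                      for r in range(i * th, (i + 1) * th)
                      for c in range(j * tw, (j + 1) * tw)]
            mu = sum(pixels) / len(pixels)
            var = sum((p - mu) ** 2 for p in pixels) / len(pixels)
            stats.append((mu, var ** 0.5))
    return stats
```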
- 3. The visual mapping method in a low-light environment according to claim 2, wherein the logic for applying the CLAHE algorithm based on the contrast-clipping parameters to obtain the enhanced brightness channel is as follows: the contrast-clipping parameters are generated from local noise levels calculated from the local statistics, the noise level of the k-th tile being defined as
η_k = σ_k / (μ_k + ε),
and the contrast-clipping parameter being defined by the specific calculation formula
c_k = clip(1 / (η_k + ε), c_min, c_max),
wherein μ_k and σ_k are the mean and standard deviation of the pixel values of the k-th tile, ε is a correction constant with a positive value, η_k is the noise level of the k-th tile, c_k is the contrast-clipping parameter of the k-th tile for CLAHE, and clip(·) is a limiting function that restricts c_k to the interval [c_min, c_max], i.e. makes the value range of c_k lie within [c_min, c_max]; k is the index of the tile.
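The noise-adaptive clip-limit rule of this claim can be sketched as follows. The exact functional form mapping noise level to clip limit is not fully recoverable from the translated text, so the inverse relation below (noisier tile → lower clip limit) and all default constants are assumptions:

```python
def clip_limit(mu, sigma, eps=1e-3, c_min=1.0, c_max=4.0):
    """Noise-adaptive CLAHE clip limit for one tile (assumed form):
    eta = sigma / (mu + eps); c = clamp(1 / (eta + eps), c_min, c_max).
    High local noise (large eta) yields a small clip limit, so noisy
    tiles receive gentler contrast amplification."""
    eta = sigma / (mu + eps)
    return max(c_min, min(c_max, 1.0 / (eta + eps)))
```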
- 4. The visual mapping method in a low-light environment according to claim 3, wherein the logic for screening, from the enhanced image, candidate feature points in one-to-one correspondence with feature points in the visual map is as follows: based on the contrast-clipping parameters c_k of the tiles, the CLAHE algorithm produces the enhanced brightness channel, which is combined with the original chromaticity channel to reconstruct the enhanced image; feature points are extracted from the original image in the visual map with the FAST feature-point detection algorithm to obtain a feature point set, and feature points are extracted from the enhanced image with the same FAST algorithm to obtain a candidate feature point set. For each feature point in the visual map, a spatially adjacent subset of candidate feature points is selected within a circular local search window of preset radius around the predicted projection position of the feature point in the enhanced image; the descriptor distance between the feature point and each point in the candidate subset is calculated; preliminary matching screening is performed with the nearest-neighbor to next-nearest-neighbor ratio method; and RANSAC-based fundamental-matrix geometric verification is performed on the descriptor matching pairs. Only after a candidate feature point passes the position constraint, descriptor matching, and geometric verification is a one-to-one correspondence with the original feature point established.
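The preliminary screening by the nearest-neighbor to next-nearest-neighbor ratio (Lowe's ratio test) can be sketched in pure Python; the descriptor representation, the Euclidean distance metric, and the 0.8 ratio are illustrative choices:

```python
def ratio_test_match(query_desc, candidate_descs, ratio=0.8):
    """Return the index of the best-matching candidate descriptor, or
    None when the nearest neighbor is not sufficiently better than the
    next-nearest (ambiguous match, rejected by the ratio test)."""
    def dist(a, b):  # Euclidean distance between descriptor vectors
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    d = sorted((dist(query_desc, c), i) for i, c in enumerate(candidate_descs))
    if not d:
        return None
    if len(d) >= 2 and d[0][0] >= ratio * d[1][0]:
        return None  # ambiguous: reject
    return d[0][1]
```

Matches surviving this screen would then go to the RANSAC-based fundamental-matrix verification, which is omitted here.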
- 5. The visual mapping method in a low-light environment according to claim 4, wherein the logic for calculating the comprehensive confidence score of each candidate feature point is: for a candidate feature point p, its comprehensive confidence score S(p) is obtained by a weighted combination of the normalized FAST corner response value and the normalized gradient magnitude, with the specific calculation formula
S(p) = w · R̂(p) + (1 − w) · Ĝ(p),
wherein R̂(p) is the FAST corner response value at the candidate feature point p after normalization, obtained by calculating the maximum gray difference between p and its surrounding pixels; Ĝ(p) is the image gradient magnitude at p after the predefined normalization, the gradient magnitude being synthesized from the gradient components in the x and y directions computed with the Sobel operator; w is a weight coefficient used to adjust the contribution ratio of the FAST response value and the gradient magnitude in the comprehensive score, with value range [0, 1]; and p is the index of the candidate feature point. The contrast-clipping parameter and the local statistics are combined to generate the enhancement parameter vector of the tile where each candidate feature point resides, i.e. the enhancement parameter vector of the k-th tile, comprising the contrast-clipping parameter of the k-th tile and its local statistics, is defined as
v_k = (c_k, μ_k, σ_k).
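The weighted score and the per-tile enhancement parameter vector can be sketched directly from the definitions in this claim; the default weight w = 0.6 is an illustrative assumption:

```python
def confidence_score(fast_resp_norm, grad_mag_norm, w=0.6):
    """S(p) = w * R_hat(p) + (1 - w) * G_hat(p), with both responses
    already normalized to [0, 1] and weight w in [0, 1]."""
    return w * fast_resp_norm + (1 - w) * grad_mag_norm

def enhancement_vector(clip_limit, mu, sigma):
    """v_k = (c_k, mu_k, sigma_k): the tile's contrast-clipping
    parameter together with its local statistics."""
    return (clip_limit, mu, sigma)
```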
- 6. The visual mapping method in a low-light environment according to claim 5, wherein, if any candidate feature point and its corresponding feature point satisfy the matching condition, the logic for adopting the candidate feature point as a map point is: when a candidate feature point satisfies the matching condition and is adopted as a map point, the enhancement parameter vector v_k of the tile where the candidate feature point is located and the comprehensive confidence score S(p) of the candidate feature point are bound to the map point and stored as metadata. The matching condition comprises that the vector distance between the enhancement parameter vector of the tile where the candidate feature point is located and that of the tile where the corresponding feature point is located is below a dynamic threshold, with the following specific logic: the weighted Euclidean distance d_k between the enhancement parameter vector of the tile where the current candidate feature point is located and the historical enhancement parameter vector stored in the metadata of the map point bound to the corresponding original feature point is calculated as
d_k = sqrt((v_k − v_k^m)^T · W · (v_k − v_k^m)),
and compared with a dynamic threshold τ_k to determine satisfaction, defined as
τ_k = τ_0 · (1 + λ · Φ_k),
wherein v_k is the enhancement parameter vector of the k-th tile where the current candidate feature point is located, v_k^m is the enhancement parameter vector of the tile where the corresponding map point is located, W is a diagonal weight matrix, τ_0 is a reference threshold, λ is a scaling coefficient, and Φ_k is the degree of fluctuation of the parameter vectors in the spatial neighborhood of the tile:
Φ_k = (1 / |N_k|) · Σ_{j ∈ N_k} ‖v_j − v̄‖,
wherein N_k is the set of spatial-neighborhood tile indices centered on the current k-th tile, |N_k| is the number of tiles in that set, v̄ is the arithmetic mean of all enhancement parameter vectors within the set, v_j is the enhancement parameter vector of the j-th tile in the set, and j is the index within the set. When the vector distance criterion d_k < τ_k holds and the comprehensive confidence score bound to the map point exceeds the preset scoring threshold, i.e. when both criteria are met simultaneously, the candidate feature point is adopted as a map point.
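The two quantities in this matching condition, the weighted Euclidean distance and the neighborhood-fluctuation dynamic threshold, can be sketched in pure Python (vectors as tuples, W represented by its diagonal; the defaults for τ_0 and λ are illustrative):

```python
def weighted_distance(v, v_hist, w_diag):
    """d = sqrt(sum_i w_i * (v_i - v_hist_i)^2): weighted Euclidean
    distance between two enhancement parameter vectors, with w_diag
    the diagonal of the weight matrix W."""
    return sum(wi * (a - b) ** 2
               for wi, a, b in zip(w_diag, v, v_hist)) ** 0.5

def dynamic_threshold(neighbor_vectors, tau0=1.0, lam=0.5):
    """tau = tau0 * (1 + lam * Phi): Phi is the mean Euclidean distance
    of the neighborhood tiles' enhancement vectors from their
    arithmetic mean, so the threshold loosens where parameters vary."""
    n = len(neighbor_vectors)
    dim = len(neighbor_vectors[0])
    mean = [sum(v[i] for v in neighbor_vectors) / n for i in range(dim)]
    phi = sum(sum((a - b) ** 2 for a, b in zip(v, mean)) ** 0.5
              for v in neighbor_vectors) / n
    return tau0 * (1 + lam * phi)
```

A candidate is adopted when its distance is below the tile's dynamic threshold and its confidence score also clears the scoring threshold.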
Description
Visual image construction method in low-light environment

Technical Field

The invention relates to the technical field of visual mapping, in particular to a visual mapping method in a low-light environment.

Background

In key applications such as autonomous navigation of unmanned systems, underground space exploration, and night security monitoring, visual mapping technology must run continuously and stably in low-illumination environments. In such environments, markedly insufficient illumination severely reduces the signal-to-noise ratio of the data captured by the image sensor. The feature-point detection module on which traditional visual mapping methods rely degrades rapidly because of the images' low overall contrast, blurred texture features, and aggravated noise interference, which in turn causes pose-estimation drift and can even interrupt the map construction process. A technology that can complete map construction stably and robustly under low illumination is therefore of significant practical importance for improving an agent's environmental perception and autonomous operation under complex lighting conditions.
At present, to address the reduced reliability of visual mapping in low-light environments, the prior art mainly adopts image enhancement algorithms as a preprocessing step at the visual front end. One common approach is contrast-limited adaptive histogram equalization or an improved form of it: the image is divided into blocks, equalization is executed within each block, and an upper limit on contrast improvement is enforced to balance local detail against noise suppression in the enhanced image, thereby improving the extractability of feature points. Another approach attempts end-to-end enhancement of low-light images, or direct extraction of feature representations insensitive to illumination change, by means of deep-learning models, so as to improve system performance under poor illumination.

However, these existing methods still have obvious limitations. First, they generally rely on preset, globally unified enhancement parameters, or, even when the image is partitioned, do not adjust dynamically according to the local illumination and noise characteristics of each region. Lacking per-region adaptivity, they struggle to achieve balanced and effective enhancement across the whole image: features in some regions still cannot be extracted because enhancement is insufficient, while excessive enhancement in others introduces substantial noise. The more fundamental shortcoming is that the prior art separates image enhancement from the subsequent visual mapping task, forming an open-loop system: the enhancement module takes only the quality of an intermediate image or the number of feature points as its optimization target, without considering the influence of its output on the final mapping precision and stability of the SLAM system. For example, some enhancement operations improve subjective image quality or the number of feature points while simultaneously reducing the feature-matching success rate and geometric consistency, thereby damaging map quality. Moreover, the prior art cannot continuously feed back and optimize the front-end enhancement strategy in real time according to the actual matching performance and map-consistency indices observed during mapping, making it difficult to provide reliable mapping capability in complex low-light environments.

The above information disclosed in the background section is only for enhancement of understanding of the background of the disclosure, and therefore it may include information that does not form the prior art already known to a person of ordinary skill in the art.

Disclosure of Invention

The present invention is directed to a visual mapping method in a low-light environment, so as to solve the problems set forth in the background art. To achieve the above purpose, the present invention provides the following technical solution: a visual image construction method in a low-light environment comprises the following specific steps: Step 1, separating a brightness channel from a current input image in the HSV color space, dividing the brightness channel into a plurality of tiles, calculating local statistics of the tiles, and generating a contrast-limiting parameter for each tile based on the local statistics; Step 2, applying the CLAHE algorithm based on the contrast-limiting parameters to obtain an enhanced brightness channel, combining the enhanced brightness channel with the original chromaticity channel to reconstruct an enhanced image, and screening candidate feature points which are in one-to-one correspondence with feature points in the visual map from the enhanced image