CN-121685285-B - Cross-spectrum fusion and recognition optimization method under condition of low infrared contrast at night
Abstract
The invention relates to the technical field of image fusion, and in particular to a cross-spectrum fusion and recognition optimization method under a night-time low infrared contrast condition. The method computes a confidence weight for each coordinate point from the edge intensity of the point on the infrared image, a motion-consistency measure derived from the motion information in the infrared image, and specific brightness attributes. Through a matching process, reference points in the infrared image are paired with optimal matching points in the visible light image to construct an initial geometric correction displacement field; filtering in the horizontal and vertical directions then yields a smooth geometric correction displacement field used for spatial reconstruction. Weighted fusion of the spatially corrected thermal image obtained by this reconstruction with the visible light image screens the characteristic information and produces a fused image, on the basis of which accurate and effective target recognition can be performed. The invention improves the accuracy of night-time target recognition through effective image fusion.
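The geometric-correction and fusion steps summarized in the abstract can be sketched as follows. This is a minimal, hypothetical Python/NumPy illustration, not the patented implementation: the box-filter kernel size, the nearest-neighbour warping (the patent interpolates over a neighbourhood), and the 0.5 preset fusion weight are all assumptions.

```python
import numpy as np

def smooth_displacement_field(disp, k=5):
    """Separable smoothing of the initial geometric-correction displacement
    field: a 1-D box filter applied horizontally, then vertically, over
    each of the (dx, dy) channels. disp has shape (H, W, 2)."""
    kern = np.ones(k) / k
    out = disp.astype(float).copy()
    for c in range(2):
        # horizontal pass (along axis 1), then vertical pass (along axis 0)
        out[..., c] = np.apply_along_axis(
            lambda r: np.convolve(r, kern, mode='same'), 1, out[..., c])
        out[..., c] = np.apply_along_axis(
            lambda col: np.convolve(col, kern, mode='same'), 0, out[..., c])
    return out

def warp_and_fuse(ir, vis, disp, w):
    """Backward-warp the infrared image through the smoothed displacement
    field (nearest-neighbour for brevity), then fuse with the visible image
    using per-pixel confidence weights w in [0, 1]."""
    H, W = ir.shape
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing='ij')
    src_y = np.clip(np.round(yy + disp[..., 1]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xx + disp[..., 0]).astype(int), 0, W - 1)
    corrected = ir[src_y, src_x]             # spatially corrected thermal image
    init = 0.5 * corrected + 0.5 * vis       # assumed preset fusion weight
    return w * init + (1.0 - w) * corrected  # claim 7's weighted summation
```

With a zero displacement field the warp is the identity, so the output reduces to the plain weighted fusion of the two inputs.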
Inventors
- Zhang Yu
- Xin Jinze
- Wang Chen
Assignees
- Xi'an Tianmao Intelligent Technology Co., Ltd. (西安天茂智能科技有限公司)
Dates
- Publication Date: 2026-05-12
- Application Date: 2026-02-10
Claims (8)
- 1. A method for optimizing cross-spectrum fusion and recognition under a low infrared contrast condition at night, the method comprising: for any coordinate point in the target area, obtaining a confidence weight for the coordinate point according to its edge intensity on the infrared image, its brightness on the visible light image, and its motion information confusion between adjacent frames of the infrared image; searching for an optimal matching point in the visible light image with the coordinate point in the infrared image as a reference point, wherein the matching process compares the motion characteristic difference and the edge information difference between the reference point and points in the visible light image as reflected in the image information, and obtains a matching cost by combining these differences with the confidence weight of the reference point; constructing an initial geometric correction displacement field from the coordinate deviations between all reference points and their corresponding optimal matching points, and applying horizontal filtering and vertical filtering to the initial geometric correction displacement field to obtain a smooth geometric correction displacement field; and fusing the spatially corrected thermal image with the visible light image based on the confidence weights of the coordinate points to obtain a fused image; wherein the edge intensity is the gradient magnitude of the coordinate point on the infrared image, and the motion information confusion is acquired by: in the infrared velocity field, calculating an average velocity vector within a preset neighborhood centered on the coordinate point, and computing the sum of squares of the Euclidean distance between the average velocity vector and the velocity vector of the coordinate point to obtain the motion information confusion.
- 2. The method for cross-spectrum fusion and recognition optimization under the condition of low infrared contrast at night according to claim 1, wherein obtaining the confidence weight of each coordinate point comprises: dividing the visible light image into a high-brightness region and a low-brightness region according to brightness information; setting a strong-light interference identification value to 1 if the coordinate point lies in the high-brightness region and its gray gradient magnitude in the visible light image is smaller than a preset texture threshold, and to 0 if the coordinate point lies in the low-brightness region; and obtaining the confidence weight from the strong-light interference identification value, the edge intensity, and the motion information confusion.
- 3. The method of claim 2, wherein obtaining the confidence weight from the strong-light interference identification value, the edge intensity, and the motion information confusion comprises: for any coordinate point, comparing its motion information confusion with the overall motion information confusion over all coordinate points to obtain a confusion significance; applying a negative-correlation mapping and normalization to the confusion significance to obtain a motion stability; taking 1 minus the strong-light interference identification value as an interference shielding term for the coordinate point; and taking the product of the edge intensity, the motion stability, and the interference shielding term as the confidence weight.
- 4. The method for cross-spectrum fusion and recognition optimization under the condition of low infrared contrast at night according to claim 1, wherein searching for the optimal matching point in the visible light image comprises: in the visible light image, constructing a search area of a preset size centered on the coordinates corresponding to the reference point; treating the pixel points within the search area as the points to be matched of the reference point; obtaining the matching cost between the reference point and each point to be matched; and selecting the point to be matched with the minimum matching cost as the optimal matching point.
- 5. The method for cross-spectrum fusion and recognition optimization under the condition of low infrared contrast at night according to claim 4, wherein the matching cost is obtained by: for any point to be matched, obtaining the horizontal and vertical coordinate differences between the point to be matched and the reference point; obtaining an infrared velocity field and a visible light velocity field from the image information between adjacent frames, obtaining the motion characteristic difference from the difference between the velocity vector of the reference point in the infrared velocity field and the velocity vector of the point to be matched in the visible light velocity field, and performing a weighted fusion of the edge information difference and the motion characteristic difference to obtain a characteristic information difference; and taking the confidence weight as the weight of the characteristic information difference, taking the negative-correlation mapping of the confidence weight as the weight of a displacement constraint term built from the coordinate differences, and performing a weighted summation of the characteristic information difference and the displacement constraint term to obtain the matching cost.
- 6. The method for cross-spectrum fusion and recognition optimization under the condition of low infrared contrast at night according to claim 1, wherein the spatially corrected thermal image is acquired by: for any pixel point on the infrared image, performing a translation search based on the displacement given by the smooth geometric correction displacement field to obtain the source coordinate of the pixel point, and interpolating over the pixel values within a preset neighborhood of the source coordinate to obtain a replacement pixel value for the pixel point; and replacing the original pixel value with the replacement pixel value for every pixel point on the infrared image to obtain the spatially corrected thermal image.
- 7. The method for cross-spectrum fusion and recognition optimization under the condition of low infrared contrast at night according to claim 1, wherein the fused image is acquired by: fusing the spatially corrected thermal image and the visible light image according to a preset fusion weight to obtain an initial fusion image; and, for each coordinate point, taking the confidence weight as the weight of the pixel value in the initial fusion image and the negative-correlation mapping of the confidence weight as the weight of the pixel value in the spatially corrected thermal image, and obtaining the fused pixel value of each pixel point by weighted summation to obtain the fused image.
- 8. The method of claim 1, wherein night target recognition is performed with a target detection network comprising three vision channels and one physical attention channel, the vision channels each being filled with the same fused image, and the physical attention channel being filled with a confidence weight distribution map formed from the confidence weights of all coordinate points.
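As an illustration of the confidence weight defined across claims 1–3, the computation might look like the following Python/NumPy sketch. The exponential negative-correlation mapping, the brightness and texture thresholds, and the neighbourhood radius `k` are assumptions introduced for illustration; the claims leave these choices open.

```python
import numpy as np

def edge_strength(ir):
    """Edge intensity of claim 1: gradient magnitude of the infrared image."""
    gy, gx = np.gradient(ir.astype(float))
    return np.hypot(gx, gy)

def motion_confusion(vel, k=1):
    """Motion information confusion of claim 1: for each point, the sum of
    squared differences between its velocity vector and the average velocity
    vector of a (2k+1)x(2k+1) neighbourhood in the infrared velocity field.
    vel has shape (H, W, 2)."""
    H, W, _ = vel.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            nb = vel[max(0, i - k):i + k + 1, max(0, j - k):j + k + 1]
            mean = nb.reshape(-1, 2).mean(axis=0)
            out[i, j] = ((vel[i, j] - mean) ** 2).sum()
    return out

def confidence_weights(ir, vis, vel, bright_thresh=180.0, texture_thresh=5.0):
    """Confidence weight of claims 2-3:
    edge strength * motion stability * (1 - strong-light interference flag)."""
    E = edge_strength(ir)
    C = motion_confusion(vel)
    # Claim 3: confusion significance relative to the overall level, then a
    # negative-correlation mapping (assumed exponential) and normalisation.
    salience = C / (C.mean() + 1e-9)
    stability = np.exp(-salience)
    stability = stability / stability.max()
    # Claim 2: flag = 1 in bright, texture-poor (halo-like) visible regions.
    gy, gx = np.gradient(vis.astype(float))
    flag = (vis >= bright_thresh) & (np.hypot(gx, gy) < texture_thresh)
    return E * stability * (1.0 - flag.astype(float))
```

Under zero motion and no glare the weight reduces to the edge strength alone, which matches the product form of claim 3.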
Description
Cross-spectrum fusion and recognition optimization method under condition of low infrared contrast at night
Technical Field
The invention relates to the technical field of image fusion, in particular to a cross-spectrum fusion and recognition optimization method under a night-time low infrared contrast condition.
Background
In application scenes such as night security monitoring, driver assistance, and search and rescue, the ambient illumination conditions are generally poor. To acquire complete target information, infrared thermal imaging and low-light visible imaging are often combined through cross-spectrum image fusion. Infrared thermal imaging forms images from differences in the thermal radiation of targets and can penetrate darkness and smoke, but under long-distance or low temperature-difference conditions it is affected by the thermal diffusion effect: target contours tend to blur and spread, and carry no texture detail. Low-light visible imaging is based on reflected light and can capture scene textures, but in a low-illumination environment the image is extremely susceptible to high-frequency photon noise, and highlight halation artifacts can appear around strong light sources such as headlights and street lamps. Due to the intrinsic differences in imaging mechanisms, the appearance of the same physical object in an infrared image differs significantly from its appearance in a visible image. For example, a thermal target in an infrared image typically appears as a divergent thermal mass, while the corresponding target in a visible image may appear as a noisy spot or be covered by a halo.
This cross-modal appearance discrepancy means the prior art cannot achieve accurate alignment through a direct rigid transformation: global rigid registration cannot correct the non-rigid contour deviation caused by the thermal diffusion effect, producing ghosting at the edges of the fused image; and during registration, in flat or high-noise areas lacking clear texture features, the prior art is easily pulled by local noise into converging on wrong extrema, causing distortion or tearing of the image. The result is a poor-quality fused image that interferes with subsequent target recognition.
Disclosure of Invention
In order to solve the technical problem in the prior art that the infrared image and the visible light image cannot be effectively fused, which interferes with the target recognition effect, the invention aims to provide a cross-spectrum fusion and recognition optimization method under the condition of low infrared contrast at night. The adopted technical scheme is as follows. The invention provides a cross-spectrum fusion and recognition optimization method under a night-time low infrared contrast condition, comprising: for any coordinate point in the target area, obtaining a confidence weight for the coordinate point according to its edge intensity on the infrared image, its brightness on the visible light image, and its motion information confusion between adjacent frames of the infrared image; searching for an optimal matching point in the visible light image with the coordinate point in the infrared image as a reference point, wherein the matching process compares the motion characteristic difference and the edge information difference reflected in the image information, and the matching cost is obtained by combining these differences with the confidence weight of the reference point; constructing an initial geometric correction displacement field from the coordinate deviations between all reference points and their corresponding optimal matching points, and applying horizontal filtering and vertical filtering to the initial geometric correction displacement field to obtain a smooth geometric correction displacement field; and fusing the spatially corrected thermal image with the visible light image based on the confidence weights of the coordinate points to obtain a fused image, and carrying out night target recognition using the fused image and a target detection network trained with the confidence weights. Further, the edge intensity is the gradient magnitude of the coordinate point on the infrared image. Further, the motion information confusion is acquired by: in the infrared velocity field, calculating an average velocity vector within a preset neighborhood centered on the coordinate point, and computing the sum of squares of the Euclidean distance between the average velocity vector and the velocity vector of the coordinate point to obtain the motion information confusion. Further, obtaining the confidence weight of each coordinate point includes: dividing a visible light image into a high-brightnes