
CN-122023475-A - Real-time registration and fusion method of visible light and infrared image based on SOC hardware

CN122023475A

Abstract

The invention belongs to the technical field of fusion of infrared images and visible light images, and discloses a real-time registration and fusion method of visible light and infrared images based on SOC hardware. The method registers a low-resolution infrared image to a high-resolution visible light image; after registration, wavelet decomposition and reconstruction are carried out in multiple scales and directions on the color space of the visible light image and on the grayed infrared image, completing the fusion of the infrared and visible light images. After the algorithm design, programming is completed in the CUDA C++ language, and the algorithm finally achieves real-time fusion for a binocular camera with parallax on a hardware platform.

Inventors

  • GAO WEI
  • WEI YOUNING
  • ZHANG YU
  • YANG PENG
  • MAO WEI
  • LI DIEGUO

Assignees

  • Southwest Institute of Technical Physics (西南技术物理研究所)

Dates

Publication Date
2026-05-12
Application Date
2025-12-21

Claims (10)

  1. A real-time registration and fusion method of visible light and infrared images based on SOC hardware, characterized by comprising the following steps:
     S1, respectively shoot an infrared image and a visible light image of a special calibration plate within a set error range and complete corner detection on each; the positional relation between the registration error e and the calibration plate is e = (f·b/d)·|1/L − 1/L0|, where f is the focal length of the camera, b is the baseline distance between the infrared camera and the visible light camera, d is the pixel size, L0 is the distance from the target object to the camera at which the ideal error is zero, and L is the distance from the calibration plate to the camera;
     S2, complete monocular calibration of the infrared camera and the visible light camera using the world-coordinate-system and image-coordinate-system positions of the calibration plate corner points, and calculate the internal parameters of the infrared lens, its external parameters (rotation and translation), the internal parameters of the visible light lens, its external parameters, and the respective focal lengths;
     S3, using the calibrated parameters of the infrared camera and the visible light camera together with the visible light and infrared corner points, eliminate the error influence, restore the corner coordinates, detect the common corner points, up-sample the infrared image, and calculate the optimal homography matrix H converting the infrared image into the visible light image from the detected common corner points combined with the RANSAC algorithm;
     S4, using the homography matrix H and taking the visible light image as the reference image, apply a perspective transformation to the infrared image, completing registration;
     S5, after registration, obtain an infrared image and a visible light image of the same resolution, and perform pyramid decomposition on the visible light image and on the infrared image after graying;
     S6, convert the visible light image into the three YUV channels, separate the Y channel, and perform one level of wavelet decomposition on the Y channel and on the grayed infrared image, obtaining the low-frequency and high-frequency information of the visible light Y channel and the low-frequency and high-frequency information of the grayed infrared image;
     S7, fuse the low-frequency information of the visible light Y channel with the low-frequency information of the grayed infrared image, and fuse the high-frequency information of the visible light Y channel with the high-frequency information of the grayed infrared image;
     S8, perform inverse wavelet transformation and normalization, merge the resulting Y channel with the U and V channels, and convert back to RGB, obtaining the fused image;
     S9, process each pyramid layer with the same operations and finally perform weighted fusion;
     S10, enhance the fused image using gamma correction.
  2. The real-time registration and fusion method of visible light and infrared images based on SOC hardware according to claim 1, wherein in step S1, pictures of a special calibration plate are shot to complete corner detection; the infrared calibration plate is made of a material with a pronounced thermal signature so that the infrared image and the visible light image can be shot at the same time; the infrared and visible light images are shot first, then binarization is performed, and finally corner detection is carried out.
  3. The real-time registration and fusion method of visible light and infrared images based on SOC hardware as set forth in claim 2, wherein the principle used in steps S2 and S3 to calculate the internal and external parameters of the infrared camera and the visible light camera from the corner detection results is the same for both cameras.
  4. The real-time registration and fusion method of visible light and infrared images based on SOC hardware as set forth in claim 3, wherein in steps S2 and S3 the internal and external parameters of the infrared camera and the visible light camera are calculated as follows:
     S2-1, let a world point on the calibration plate checkerboard be M = [X, Y, Z]^T, with Z = 0 on the checkerboard plane, and let its projection onto the image under the pinhole imaging model be m = [u, v]^T; using the homogeneous coordinates of the two, the imaging mathematical model is
        s·m~ = K [R t] M~    (1)
     where s is a non-zero scale factor, α and β are the scale factors of the u axis and v axis of the image coordinate system, γ (= 0) is the non-perpendicularity factor of the u and v axes, K is the internal reference matrix, and [R t] is the extrinsic matrix;
     S2-2, since Z = 0 on the checkerboard plane, formula (1) is written as
        s·m~ = K [r1 r2 t] [X Y 1]^T    (2)
     To obtain the internal and external reference matrices, first obtain the homography matrix H between M and m, so that formula (2) is written as
        s·m~ = H·M~,  H = K [r1 r2 t]    (3)
     Eliminating the scale factor s yields two linear equations per corner pair (4); the matrix H has eight degrees of freedom and is solved using at least 4 pairs of corner points;
     S2-3, obtain the internal reference matrix of the camera: write the calculated H = [h1 h2 h3] and substitute it into H = λ·K [r1 r2 t]; since the rotation matrix R is a unitary matrix, r1 and r2 are unit orthogonal, which yields
        h1^T K^(-T) K^(-1) h2 = 0    (5)
        h1^T K^(-T) K^(-1) h1 = h2^T K^(-T) K^(-1) h2    (6)
     Let B = K^(-T) K^(-1)    (7)
     where K is the internal reference matrix; substituting (7) into (5) and (6) gives homogeneous linear constraints on the elements of B (8)(9); stacking these constraints over all calibration images and solving yields B, from which the internal parameters v0, λ, α, β, γ and u0 are recovered in closed form in turn (10)-(14);
     S2-4, calculate the external parameters using H and the internal parameters: from H = λ·K [r1 r2 t] one obtains
        r1 = λ K^(-1) h1,  r2 = λ K^(-1) h2,  r3 = r1 × r2,  t = λ K^(-1) h3,  λ = 1 / ||K^(-1) h1||
     and the complete external parameter matrix is [R t] = [r1 r2 r3 t].
  5. The method for real-time registration and fusion of visible and infrared images based on SOC hardware according to claim 4, wherein in step S3 the errors in the obtained internal and external reference matrices are eliminated by iterative optimization using maximum likelihood estimation: suppose n images containing the checkerboard are acquired for calibration and each image contains m corner points of the checkerboard; let the corner points on the i-th image be m_ij, and let their projections under the computed camera matrix be m^(K, R_i, t_i, M_j), where R_i and t_i are the rotation vector and translation vector corresponding to the i-th image and K is the internal parameter matrix; the probability density function of the corner m_ij is then a Gaussian centered on its projection, and the likelihood function is constructed as the product of these densities over all corners; maximizing the likelihood is equivalent to minimizing
        Σ_{i=1..n} Σ_{j=1..m} || m_ij − m^(K, R_i, t_i, M_j) ||²
     which is a multi-parameter nonlinear optimization problem solved iteratively with the Levenberg-Marquardt algorithm.
  6. The method for real-time registration and fusion of visible light and infrared images based on SOC hardware according to claim 5, wherein in step S4 registration of the infrared image is completed using the homography matrix H as follows: first, H is used to construct a remapping table that maps each coordinate position (x, y) of the low-resolution infrared image to the target image coordinate position (x', y'); finally, registration of the low-resolution infrared image to the high-resolution visible light image is completed.
  7. The method for real-time registration and fusion of visible and infrared images based on SOC hardware as set forth in claim 6, wherein in step S6 the conversion of the visible light image into the three YUV channels is implemented by constructing an RGB-to-YUV lookup table: the lookup table is constructed once, and for successive input visible light frames the corresponding Y, U and V channel values are obtained by direct table lookup according to the pixel values; the infrared image is grayed according to its graying formula.
  8. The method for real-time registration and fusion of visible and infrared images based on SOC hardware as claimed in claim 7, wherein in step S6 the wavelet decomposition process is as follows: first, high-pass filtering and low-pass filtering are performed along the column direction of the input image, obtaining the low-frequency component and high-frequency component of the original image in the vertical direction; the high-frequency component is then low-pass and high-pass filtered along the horizontal direction to obtain the diagonal high-frequency component and the vertical high-frequency component, and the low-frequency component is high-pass and low-pass filtered along the horizontal direction to obtain the horizontal high-frequency component and the low-frequency component.
  9. The method for real-time registration and fusion of visible light and infrared images based on SOC hardware according to claim 8, wherein in step S8 the inverse wavelet transformation convolves the low-frequency and high-frequency components with the reconstruction low-pass filter and reconstruction high-pass filter respectively, then performs an interpolation (up-sampling) operation on the convolution results to restore the image.
  10. The method for real-time registration and fusion of visible and infrared images based on SOC hardware as set forth in claim 9, wherein the gamma enhancement in step S10 belongs to the power-law class of image enhancement; the power transformation formula is s = c·r^γ, where c and γ are positive constants; when γ < 1 the gray levels are raised and the image is lightened, and when γ > 1 the gray levels are lowered and the image darkens.
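
Claims 1, 4 and 6 center on a 3×3 homography H that maps low-resolution infrared pixel coordinates into the visible light frame. As a minimal illustration (plain C++ rather than the CUDA C++ the patent names as its implementation language; the matrix values below are invented for the example), applying H to a pixel is a matrix multiply in homogeneous coordinates followed by a perspective divide:

```cpp
#include <array>
#include <cmath>
#include <cstdio>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Map a source pixel (x, y) through homography H using homogeneous coordinates.
std::array<double, 2> warpPoint(const Mat3& H, double x, double y) {
    double X = H[0][0] * x + H[0][1] * y + H[0][2];
    double Y = H[1][0] * x + H[1][1] * y + H[1][2];
    double W = H[2][0] * x + H[2][1] * y + H[2][2];
    return {X / W, Y / W};  // perspective divide
}
```

A remapping table as in claim 6 is just this computation evaluated once per destination pixel and cached, so that per-frame warping of the infrared image reduces to table lookups.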
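
Claim 5 refines the calibration by minimizing the summed squared distance between detected corners and their projections under the current camera parameters. A sketch of that objective for a bare pinhole model (rotation and translation omitted for brevity, unlike the patent's full [R t] model; all names are chosen for the example):

```cpp
#include <cstddef>
#include <vector>

struct Pt2 { double u, v; };
struct Pt3 { double x, y, z; };

// Project a world point through a simple pinhole model:
// u = fx*x/z + cx, v = fy*y/z + cy.
Pt2 project(double fx, double fy, double cx, double cy, const Pt3& P) {
    return {fx * P.x / P.z + cx, fy * P.y / P.z + cy};
}

// Sum of squared reprojection errors: the quantity the maximum-likelihood
// refinement minimizes (iteratively, e.g. with Levenberg-Marquardt).
double reprojectionError(double fx, double fy, double cx, double cy,
                         const std::vector<Pt3>& world,
                         const std::vector<Pt2>& observed) {
    double err = 0.0;
    for (std::size_t j = 0; j < world.size(); ++j) {
        Pt2 p = project(fx, fy, cx, cy, world[j]);
        double du = p.u - observed[j].u, dv = p.v - observed[j].v;
        err += du * du + dv * dv;
    }
    return err;
}
```

An optimizer would perturb the camera parameters to drive this value toward zero; the function itself is only the objective, not the solver.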
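
Claim 7's lookup table trades per-pixel multiplications for table reads when converting RGB to YUV for every frame. This record does not reproduce the patent's exact conversion coefficients, so this sketch assumes the common BT.601 luminance weights and shows only the Y channel:

```cpp
#include <cmath>
#include <cstdint>

// Per-channel contributions to luminance Y = 0.299 R + 0.587 G + 0.114 B
// (BT.601 weights, an assumption here). Built once, reused for every frame.
struct YLut {
    double r[256], g[256], b[256];
    YLut() {
        for (int i = 0; i < 256; ++i) {
            r[i] = 0.299 * i;
            g[i] = 0.587 * i;
            b[i] = 0.114 * i;
        }
    }
    // Luminance of one pixel via three table reads and two additions.
    uint8_t y(uint8_t R, uint8_t G, uint8_t B) const {
        return static_cast<uint8_t>(std::lround(r[R] + g[G] + b[B]));
    }
};
```

The U and V channels would use two more tables of the same shape with their own weights.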
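
Claims 8 and 9 describe separable wavelet decomposition and reconstruction. The patent does not name a specific wavelet in this record; the Haar wavelet gives the shortest illustration of the analysis/synthesis pair. Applying the 1-D transform below first along the columns and then along the rows of an image yields the four sub-bands of claim 8:

```cpp
#include <vector>

// One level of a 1-D Haar transform: pairwise averages (low-pass) and
// pairwise differences (high-pass). Assumes an even-length input.
void haarForward(const std::vector<double>& x,
                 std::vector<double>& low, std::vector<double>& high) {
    low.clear(); high.clear();
    for (std::size_t i = 0; i + 1 < x.size(); i += 2) {
        low.push_back((x[i] + x[i + 1]) / 2.0);
        high.push_back((x[i] - x[i + 1]) / 2.0);
    }
}

// Inverse transform: interleave low+high and low-high, i.e. claim 9's
// reconstruction-filter-plus-interpolation step in its simplest form.
std::vector<double> haarInverse(const std::vector<double>& low,
                                const std::vector<double>& high) {
    std::vector<double> x;
    for (std::size_t i = 0; i < low.size(); ++i) {
        x.push_back(low[i] + high[i]);
        x.push_back(low[i] - high[i]);
    }
    return x;
}
```

Fusion per claim 7 happens between the forward and inverse passes, by combining the corresponding sub-bands of the visible light Y channel and the grayed infrared image before reconstruction.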
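
Claim 10's power-law enhancement s = c·r^γ is usually applied through a precomputed 256-entry table so that per-pixel cost is a single lookup. A sketch (the clamping and rounding choices here are assumptions, not taken from the patent):

```cpp
#include <cmath>
#include <cstdint>

// Build a gamma-correction lookup table for 8-bit intensities.
// gamma < 1 raises gray levels (brightens); gamma > 1 lowers them (darkens).
void buildGammaLut(double gamma, double c, uint8_t lut[256]) {
    for (int i = 0; i < 256; ++i) {
        double r = i / 255.0;               // normalize to [0, 1]
        double s = c * std::pow(r, gamma);  // power transform s = c * r^gamma
        if (s < 0.0) s = 0.0;               // clamp to the displayable range
        if (s > 1.0) s = 1.0;
        lut[i] = static_cast<uint8_t>(std::lround(s * 255.0));
    }
}
```

Enhancing the fused image is then one table lookup per pixel over the output of step S9.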

Description

Real-time registration and fusion method of visible light and infrared image based on SOC hardware

Technical Field

The invention belongs to the technical field of fusion of infrared images and visible light images, and relates to a real-time registration and fusion method of visible light and infrared images based on SOC hardware.

Background

The working wavelength range of a visible light sensor is 380 nm to 780 nm, the same band as human visual perception. A visible light image is formed from light reflected by the photographed object; its resolution is high, and the information it contains is rich and clear. Its disadvantage is that it is difficult to capture object information when the photographed object is in a low-light environment, such as at night, is occluded, or is in dense fog. A typical infrared sensor works at wavelengths of 8 to 14 μm, and an infrared image is obtained by converting the thermal radiation of the target object into gray values. The image information is strongly correlated with the temperature of the target object: a heat-source target has higher pixel values in the infrared image and is displayed with high brightness. An infrared sensor can see past obstructions to obtain information about the target object and can work in low-light environments. Its disadvantages are severe loss of the target object's appearance features in the image, blurred texture information, and high noise.
Because the two sensors image in different ways and suit different application scenes, fusing their images overcomes the limitations of a single-sensor system, yields complementary information and high-quality images, improves image understandability, and provides effective information for further environment perception and decision making. Infrared and visible light images are highly complementary, and effective sensor integration enables a more robust environment sensing system. As a result, registration and fusion of infrared and visible images has long been a hot research topic in academia and industry. Registration of the infrared and visible light images usually serves as a preprocessing step for their fusion, and registration performance has a great influence on the final fusion result. The general flow of infrared and visible image fusion is shown in fig. 1: two images shot of the same scene are first denoised, then registered, and finally the registered infrared and visible light images are combined by a fusion algorithm. In a binocular stereoscopic imaging system, the infrared lens and the visible lens cannot occupy the same physical position when shooting the same region at the same time, which leads to geometric distortion between their final images and makes model parameter estimation during alignment difficult. Registration techniques are typically developed for specific application environments and can be broadly divided into two categories: region (pixel) based registration and feature based registration.
Region-based registration uses the gray-level information of the images and has the advantage of simple implementation; its drawbacks are that the gray information of the whole image must participate in the computation, so the computational load and complexity are high. Representative methods include the cross-correlation method, the mutual information method, and transform-domain methods. Feature-based registration extracts common features, mainly point, line, and region features, from the infrared and visible light images respectively, and then searches for model parameters in the feature space; it has a small computational load and good adaptability. Feature-based methods can generally be divided into five steps:
(1) Extract features, often represented as multidimensional vectors.
(2) Describe the features, using descriptors to express the image information.
(3) Match the features, establishing correspondences between features with high similarity.
(4) Estimate parameters, using the established matches to estimate the spatial transformation parameters.
(5) Transform the image with the obtained spatial transformation parameters, completing registration.
The above methods are all aimed at image registration in specific scenes, and