CN-115587955-B - Image fusion method and device, storage medium and electronic device

Abstract

The application discloses an image fusion method and apparatus, a storage medium, and an electronic device. The method comprises: performing multi-directional image transformation on at least one image to be fused to obtain a low-frequency image group and a high-frequency image group corresponding to the at least one image, where the low-frequency image group comprises the low-frequency image corresponding to each image and the high-frequency image group comprises the high-frequency image corresponding to each image; partitioning the images in the high-frequency image group into blocks of a preset size to obtain a plurality of image block pairs corresponding to the high-frequency image group; performing image block fusion on the image blocks in each of the plurality of image block pairs to obtain a plurality of fused image blocks, and stitching the plurality of fused image blocks to obtain a high-frequency fused image; performing image fusion on the images in the low-frequency image group to obtain a low-frequency fused image; and performing inverse image transformation on the high-frequency fused image and the low-frequency fused image to obtain the fused image of the at least one image.

Inventors

  • YU YAN
  • YIN JUN

Assignees

  • Zhejiang Dahua Technology Co., Ltd.

Dates

Publication Date
2026-05-05
Application Date
2022-10-28

Claims (9)

  1. An image fusion method, comprising: performing multi-directional image transformation on at least one image to be fused to obtain a low-frequency image group and a high-frequency image group corresponding to the at least one image, wherein the low-frequency image group comprises the low-frequency image corresponding to each image, and the high-frequency image group comprises the high-frequency image corresponding to each image; partitioning the images in the high-frequency image group into blocks of a preset size to obtain a plurality of image block pairs corresponding to the high-frequency image group; performing image block fusion on the image blocks in each of the plurality of image block pairs to obtain a plurality of fused image blocks, and stitching the plurality of fused image blocks to obtain a high-frequency fused image; performing image fusion on the images in the low-frequency image group to obtain a low-frequency fused image; and performing inverse image transformation on the high-frequency fused image and the low-frequency fused image to obtain a fused image of the at least one image; wherein performing image block fusion on the image blocks in each of the plurality of image block pairs to obtain the plurality of fused image blocks comprises: multiplying a measurement matrix with the image blocks in each image block pair to obtain a plurality of observed image block pairs, wherein the number of columns of the measurement matrix equals the number of rows of the image blocks in each image block pair; fusing the pixel points of each of the plurality of observed image block pairs based on the matching degree between the region energies of the pixel points to obtain a plurality of fused measurement image blocks; and reconstructing each of the plurality of fused measurement image blocks using a perception matrix to obtain the plurality of fused image blocks, wherein the perception matrix is the product of the measurement matrix and a redundant dictionary matched with the measurement matrix.
  2. The method according to claim 1, wherein performing multi-directional image transformation on the at least one image to be fused to obtain the low-frequency image group and the high-frequency image group comprises: performing a single-level non-subsampled contourlet transform in four directions on the at least one image to be fused to obtain the low-frequency image group and the high-frequency image group.
  3. The method according to claim 1, wherein fusing the pixel points of each of the plurality of observed image block pairs based on the matching degree between the region energies of the pixel points to obtain the plurality of fused measurement image blocks comprises: taking each observed image block pair in turn as a current observed image block pair, and performing the following fusion operation with each pixel point in the current observed image block pair as a current pixel point, to obtain the plurality of fused measurement image blocks: determining the matching degree of the region energies of the current observed image block pair at the current pixel point to obtain a current matching degree; when the current matching degree is greater than or equal to a matching degree threshold, performing a weighted summation of the pixel values of the current observed image block pair at the current pixel point to obtain the pixel value of the current pixel point in the fused measurement image block corresponding to the current observed image block pair; and when the current matching degree is less than the matching degree threshold, determining the pixel value of a target observed image block at the current pixel point as the pixel value of the current pixel point in the fused measurement image block corresponding to the current observed image block pair, wherein the target observed image block is the observed image block in the current observed image block pair having the largest region energy at the current pixel point.
  4. The method according to claim 3, wherein before the weighted summation of the pixel values of the current observed image block pair at the current pixel point, the method further comprises: determining the product of a preset coefficient and the ratio of a first difference to a second difference as a first weight, and determining the difference between 1 and the first weight as a second weight, wherein the first difference is the difference between the current matching degree and the matching degree threshold, the second difference is the difference between 1 and the matching degree threshold, the first weight is the weighting coefficient of the observed image block in the current observed image block pair having the largest pixel value at the current pixel point, and the second weight is the weighting coefficient of the observed image block in the current observed image block pair having the smallest pixel value at the current pixel point.
  5. The method according to claim 1, wherein reconstructing each of the plurality of fused measurement image blocks using the perception matrix to obtain the plurality of fused image blocks comprises: performing, using the perception matrix, the following reconstruction operation on each column of each fused measurement image block, with each column in turn as a current column, to obtain the plurality of fused image blocks: constructing a sparse coefficient matched with the current column using the perception matrix; and multiplying the sparse coefficient by the redundant dictionary to obtain the corresponding column of the fused image block corresponding to each fused measurement image block.
  6. The method according to any one of claims 1 to 5, wherein performing image fusion on the images in the low-frequency image group to obtain the low-frequency fused image comprises: performing the following fusion operation with each pixel point of the images in the low-frequency image group as a current pixel point, to obtain the pixel value of the current pixel point in the low-frequency fused image: determining, in each image of the low-frequency image group, the variance of the pixel values of a group of pixel points within a preset neighborhood centered on the current pixel point, to obtain the neighborhood variance corresponding to each image; and determining the largest neighborhood variance among the neighborhood variances corresponding to the images as the pixel value of the current pixel point in the low-frequency fused image.
  7. An image fusion apparatus, comprising: a transformation unit, configured to perform multi-directional image transformation on at least one image to be fused to obtain a low-frequency image group and a high-frequency image group corresponding to the at least one image, wherein the low-frequency image group comprises the low-frequency image corresponding to each image, and the high-frequency image group comprises the high-frequency image corresponding to each image; a processing unit, configured to partition the images in the high-frequency image group into blocks of a preset size to obtain a plurality of image block pairs corresponding to the high-frequency image group; an execution unit, configured to perform image block fusion on the image blocks in each of the plurality of image block pairs to obtain a plurality of fused image blocks, and to stitch the plurality of fused image blocks to obtain a high-frequency fused image; a fusion unit, configured to perform image fusion on the images in the low-frequency image group to obtain a low-frequency fused image; and an inverse transformation unit, configured to perform inverse transformation on the high-frequency fused image and the low-frequency fused image to obtain a fused image of the at least one image; wherein the execution unit comprises: a first execution module, configured to multiply a measurement matrix with the image blocks in each image block pair to obtain a plurality of observed image block pairs, wherein the number of columns of the measurement matrix equals the number of rows of the image blocks in each image block pair; a fusion module, configured to fuse the pixel points of each of the plurality of observed image block pairs based on the matching degree between the region energies of the pixel points to obtain a plurality of fused measurement image blocks; and a reconstruction module, configured to reconstruct each of the plurality of fused measurement image blocks using a perception matrix to obtain the plurality of fused image blocks, wherein the perception matrix is the product of the measurement matrix and a redundant dictionary matched with the measurement matrix.
  8. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program, wherein the program, when run, performs the method according to any one of claims 1 to 6.
  9. An electronic device, comprising a memory and a processor, characterized in that a computer program is stored in the memory, and the processor is arranged to execute the method according to any one of claims 1 to 6 by means of the computer program.
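The per-pixel fusion rule of claims 3 and 4 can be illustrated concretely. The sketch below is a minimal pure-Python illustration, not the patented implementation: the 3x3 region window, the matching-degree formula M = 2·Σ(A·B)/(E_A+E_B) (a common definition in region-energy fusion), and the value of the preset coefficient k are all assumptions, since the claims do not fix them, and assigning the first weight to the higher-energy block is an interpretive reading of claim 4.

```python
# Minimal sketch of the per-pixel fusion rule in claims 3-4.
# Assumed (not fixed by the claims): 3x3 window, matching degree
# M = 2*sum(A*B)/(E_A+E_B), preset coefficient k.

def region_energy(img, r, c, win=1):
    """Sum of squared pixel values in a (2*win+1)^2 window, clipped at borders."""
    rows, cols = len(img), len(img[0])
    e = 0.0
    for i in range(max(0, r - win), min(rows, r + win + 1)):
        for j in range(max(0, c - win), min(cols, c + win + 1)):
            e += img[i][j] ** 2
    return e

def region_correlation(a, b, r, c, win=1):
    """Sum of products of corresponding pixels in the same window."""
    rows, cols = len(a), len(a[0])
    s = 0.0
    for i in range(max(0, r - win), min(rows, r + win + 1)):
        for j in range(max(0, c - win), min(cols, c + win + 1)):
            s += a[i][j] * b[i][j]
    return s

def fuse_blocks(a, b, threshold=0.6, k=0.5):
    """Fuse two observed image blocks pixel by pixel, claims 3-4 style."""
    rows, cols = len(a), len(a[0])
    fused = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            ea, eb = region_energy(a, r, c), region_energy(b, r, c)
            denom = ea + eb
            m = 1.0 if denom == 0 else 2.0 * region_correlation(a, b, r, c) / denom
            if m >= threshold:
                # Claim 4: first weight = k*(M - T)/(1 - T), second = 1 - first.
                # Giving the first weight to the higher-energy block is an
                # interpretive choice; the claim's wording is ambiguous.
                w1 = k * (m - threshold) / (1.0 - threshold)
                w2 = 1.0 - w1
                hi, lo = (a, b) if ea >= eb else (b, a)
                fused[r][c] = w1 * hi[r][c] + w2 * lo[r][c]
            else:
                # Claim 3: below the threshold, copy the pixel from the
                # block with the larger region energy.
                fused[r][c] = a[r][c] if ea >= eb else b[r][c]
    return fused
```

With k = 0.5 and two identical blocks, M = 1 and both weights become 0.5, so the fused block equals the input, which is a quick sanity check on the rule.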

Description

Image fusion method and device, storage medium and electronic device

Technical Field

The present application relates to the field of computers, and in particular to an image fusion method and apparatus, a storage medium, and an electronic apparatus.

Background

In the related art, multi-sensor image fusion (e.g., of visible-light images and infrared images) is widely applied to target recognition, machine vision, remote sensing, medical image processing, and the like. To address the large data volume and low efficiency of multi-scale transform fusion methods, multi-sensor image fusion can be performed by compressed sensing: compressed sensing compresses the data at the same time as it is sampled, which reduces the data size and improves processing efficiency.

For example, a single-level wavelet decomposition can be performed on a visible-light image and an infrared image; the low-frequency subband coefficients after the wavelet decomposition (each subband coefficient is an image) are fused to obtain fused low-frequency subband coefficients; a global weighted fusion is performed on the high-frequency subband coefficients to obtain fused high-frequency subband coefficients; and an inverse wavelet transform is applied to the fused low-frequency and high-frequency subband coefficients to obtain the fused image. However, because a global weighted fusion is applied to the high-frequency subband coefficients, the fusion computation has high complexity and a large computational cost, and image fusion is therefore slow. The image fusion methods in the related art thus suffer from low fusion efficiency caused by high computational complexity.
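The wavelet baseline described in the background can be sketched in a few lines. Below is a toy 1-D single-level Haar version (the scheme in the text is 2-D): the low-frequency coefficients are averaged, the high-frequency coefficients receive a single global weight, and the inverse transform yields the fused signal. The Haar filter choice and the 0.5 global weight are illustrative assumptions, not values taken from the patent.

```python
import math

SQRT2 = math.sqrt(2.0)

def haar_dwt(x):
    """Single-level 1-D Haar decomposition (x must have even length)."""
    low = [(x[2 * i] + x[2 * i + 1]) / SQRT2 for i in range(len(x) // 2)]
    high = [(x[2 * i] - x[2 * i + 1]) / SQRT2 for i in range(len(x) // 2)]
    return low, high

def haar_idwt(low, high):
    """Inverse single-level 1-D Haar transform."""
    x = []
    for a, d in zip(low, high):
        x.append((a + d) / SQRT2)
        x.append((a - d) / SQRT2)
    return x

def wavelet_fuse(x, y, w=0.5):
    """Baseline fusion: average the low band, globally weight the high band."""
    lx, hx = haar_dwt(x)
    ly, hy = haar_dwt(y)
    low = [(a + b) / 2.0 for a, b in zip(lx, ly)]
    high = [w * a + (1.0 - w) * b for a, b in zip(hx, hy)]
    return haar_idwt(low, high)
```

Fusing a signal with itself reproduces it exactly, which confirms the transform pair is perfectly reconstructing; the global high-band weight is precisely the step the patent replaces with block-wise, region-energy-driven fusion.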
Disclosure of Invention

The embodiments of the application provide an image fusion method and apparatus, a storage medium, and an electronic device, which at least solve the problem in the related art that image fusion efficiency is low due to high fusion computational complexity.

According to one aspect of the embodiments of the application, an image fusion method is provided, comprising: performing multi-directional image transformation on at least one image to be fused to obtain a low-frequency image group and a high-frequency image group corresponding to the at least one image, wherein the low-frequency image group comprises the low-frequency image corresponding to each image, and the high-frequency image group comprises the high-frequency image corresponding to each image; partitioning the images in the high-frequency image group into blocks of a preset size to obtain a plurality of image block pairs corresponding to the high-frequency image group; performing image block fusion on the image blocks in each of the plurality of image block pairs to obtain a plurality of fused image blocks, and stitching the plurality of fused image blocks to obtain a high-frequency fused image; performing image fusion on the images in the low-frequency image group to obtain a low-frequency fused image; and performing inverse transformation on the high-frequency fused image and the low-frequency fused image to obtain the fused image of the at least one image.
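The compressed-sensing step behind the method above (detailed in claims 1 and 5) can be sketched as follows: each block column x is observed as y = Phi·x, fusion happens in the observation domain, and reconstruction finds a sparse coefficient s for the perception matrix A = Phi·D, then returns the column D·s. The tiny matrices below, the greedy matching-pursuit solver, and the iteration count are all illustrative assumptions; the patent does not specify which sparse solver constructs the coefficients.

```python
# Sketch of the compressed-sensing reconstruction in claims 1 and 5:
# perception matrix A = Phi @ D, sparse coefficient s found greedily,
# column reconstructed as D @ s. Matching pursuit is an illustrative
# choice of solver; the claims do not name one.

def matvec(m_, v):
    return [sum(m_[i][j] * v[j] for j in range(len(v))) for i in range(len(m_))]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def matching_pursuit(a, y, n_iter=20):
    """Greedy sparse coding: repeatedly peel off the best-matching column of a."""
    m = len(a[0])
    cols = [[a[i][j] for i in range(len(a))] for j in range(m)]
    norms = [sum(c * c for c in col) for col in cols]
    s = [0.0] * m
    r = list(y)
    for _ in range(n_iter):
        # Column whose normalized correlation with the residual is largest.
        coeffs = [sum(ri * ci for ri, ci in zip(r, cols[j])) / norms[j]
                  for j in range(m)]
        j = max(range(m), key=lambda k: abs(coeffs[k]))
        s[j] += coeffs[j]
        r = [ri - coeffs[j] * ci for ri, ci in zip(r, cols[j])]
    return s

# Illustrative matrices: Phi is a 3x4 measurement matrix (columns = block
# rows, as claim 1 requires), D a 4x6 redundant dictionary.
PHI = [[1.0, 0.0, 1.0, 0.0],
       [0.0, 1.0, 0.0, 1.0],
       [1.0, 1.0, -1.0, 0.0]]
D = [[1.0, 0.0, 0.0, 0.0, 1.0, 0.0],
     [0.0, 1.0, 0.0, 0.0, 1.0, 1.0],
     [0.0, 0.0, 1.0, 0.0, 0.0, 1.0],
     [0.0, 0.0, 0.0, 1.0, 0.0, 0.0]]

def reconstruct_column(y):
    """Claim 5: build sparse coefficients against A = Phi @ D, then apply D."""
    a = matmul(PHI, D)
    s = matching_pursuit(a, y)
    return matvec(D, s)
```

Given an observation y = Phi·x of a column x, reconstruct_column(y) returns a candidate with measurements close to y; exact recovery of x additionally requires the standard compressed-sensing conditions (sparsity of x in D and incoherence of Phi with D).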
According to another aspect of the embodiments of the application, an image fusion apparatus is provided, comprising: a transformation unit, configured to perform multi-directional image transformation on at least one image to be fused to obtain a low-frequency image group and a high-frequency image group corresponding to the at least one image, wherein the low-frequency image group comprises the low-frequency image corresponding to each image, and the high-frequency image group comprises the high-frequency image corresponding to each image; a processing unit, configured to partition the images in the high-frequency image group into blocks of a preset size to obtain a plurality of image block pairs corresponding to the high-frequency image group; an execution unit, configured to perform image block fusion on the image blocks in each of the plurality of image block pairs to obtain a plurality of fused image blocks, and to stitch the plurality of fused image blocks to obtain a high-frequency fused image; a fusion unit, configured to perform image fusion on the images in the low-frequency image group to obtain a low-frequency fused image; and an inverse transformation unit, configured to perform inverse transformation on the high-frequency fused image and the low-frequency fused image. In an exemplary embodiment, the transformation unit comprises a transformation module, c