US-20260127716-A1 - LOW-LIGHT IMAGE ENHANCEMENT AND IMAGE RECTIFICATION SYSTEMS AND METHODS

US 20260127716 A1

Abstract

Techniques for image enhancement are disclosed. A method includes receiving an image pair of a scene, the image pair comprising a visible light (VIS) image of the scene captured using a VIS imaging device and an infrared (IR) image of the scene captured using an IR imaging device. The method may further include generating a combined image based on the image pair, wherein the combined image comprises one or more quality characteristics. The method may also include adjusting one or more components of at least a portion of the VIS image based on the one or more quality characteristics of the combined image. The method may also include generating an enhanced combined image based on at least the adjusted VIS image. Additional methods and systems are also provided.

Inventors

  • Martin Solli

Assignees

  • FLIR SYSTEMS AB

Dates

Publication Date
2026-05-07
Application Date
2025-12-30

Claims (20)

  1. A method comprising: receiving an image pair of a scene, the image pair comprising a visible light (VIS) image of the scene captured using a VIS imaging device and an infrared (IR) image of the scene captured using an IR imaging device; generating a combined image based on the image pair, wherein the combined image comprises one or more quality characteristics; adjusting one or more components of at least a portion of the VIS image based on the one or more quality characteristics of the combined image; and generating an enhanced combined image based on at least the adjusted VIS image.
  2. The method of claim 1, wherein the one or more quality characteristics of the combined image comprise a luminance characteristic and the one or more components of the portion of the VIS image comprise a contrast; and the method further comprising: comparing the luminance characteristic of the combined image to a luminance threshold; determining, if the luminance characteristic is outside of the luminance threshold, a luminance deviation element based on the comparing; and wherein the adjusting the one or more components includes increasing the contrast of the portion of the VIS image based on the luminance deviation element.
  3. The method of claim 2, wherein the luminance characteristic comprises a plurality of intensity values, wherein each intensity value of the plurality of intensity values is associated with a corresponding pixel of the combined image.
  4. The method of claim 2, wherein the adjusting the one or more components by increasing the contrast comprises: providing a training dataset comprising low-light image inputs correlated to contrast enhancement image outputs; training a contrast enhancement convolutional neural network (CNN) using the training dataset; and increasing, using the contrast enhancement CNN, the contrast of the at least the portion of the VIS image.
  5. The method of claim 2, wherein the one or more quality characteristics of the combined image further comprise a noise characteristic of the combined image; and the method further comprising: comparing the noise characteristic of the combined image to a noise threshold; determining, if the noise characteristic is outside of the noise threshold, a noise deviation element; and wherein the adjusting the one or more components comprises reducing a noise level of the at least a portion of the VIS image based on the noise deviation element.
  6. The method of claim 1, wherein the generating the enhanced combined image comprises: extracting high spatial frequency content from the adjusted VIS image, wherein the high spatial frequency content is associated with contours and/or edges within the VIS image; and combining the extracted high spatial frequency content from the VIS image with a corresponding portion of the IR image to obtain the enhanced combined image.
  7. The method of claim 1, further comprising selecting, by a user input on a user interface or a feature extraction CNN, the at least the portion of the VIS image to be adjusted for contrast; and wherein the generating the combined image comprises deriving color characteristics of the scene from the VIS image and the IR image.
  8. The method of claim 1, further comprising: identifying a first feature in the VIS image and a second feature in the IR image; calculating a spatial deviation based on the first feature and the second feature; generating, if the spatial deviation exceeds a predetermined threshold, rectification parameters based at least on the spatial deviation; and adjusting the combined image based on at least the rectification parameters.
  9. The method of claim 8, wherein the calculating the spatial deviation comprises comparing a position of the first feature to a position of the second feature, wherein the spatial deviation comprises a horizontal translation and/or a vertical translation.
  10. The method of claim 8, wherein the generating the combined image is further based on alignment parameters; and wherein the adjusting the combined image comprises: altering the alignment parameters based on the rectification parameters; and updating the combined image based on the altered alignment parameters.
  11. A system comprising: a logic device configured to: receive an image pair of a scene, the image pair comprising a visible light (VIS) image of the scene captured using a VIS imaging device and an infrared (IR) image of the scene captured using an IR imaging device; generate a combined image based on the image pair, wherein the combined image comprises one or more quality characteristics; adjust one or more components of at least a portion of the VIS image based on the one or more quality characteristics of the combined image; and generate an enhanced combined image based on at least the adjusted VIS image.
  12. The system of claim 11, wherein the one or more quality characteristics of the combined image comprise a luminance characteristic and the one or more components of the portion of the VIS image comprise a contrast; and wherein the logic device is further configured to: compare the luminance characteristic of the combined image to a luminance threshold; determine, if the luminance characteristic is outside of the luminance threshold, a luminance deviation element based on the comparison; and wherein the logic device is configured to adjust the one or more components by increasing the contrast of the portion of the VIS image based on the luminance deviation element.
  13. The system of claim 12, wherein the luminance characteristic comprises a plurality of intensity values, wherein each intensity value of the plurality of intensity values is associated with a corresponding pixel of the combined image.
  14. The system of claim 12, wherein the adjusting the one or more components by increasing the contrast comprises: providing a training dataset comprising low-light image inputs correlated to contrast enhancement image outputs; training a contrast enhancement convolutional neural network (CNN) using the training dataset; and increasing, using the contrast enhancement CNN, the contrast of the at least the portion of the VIS image.
  15. The system of claim 12, wherein: the one or more quality characteristics of the combined image further comprise a noise characteristic of the combined image; and the logic device is further configured to: compare the noise characteristic of the combined image to a noise threshold; determine, if the noise characteristic is outside of the noise threshold, a noise deviation element; and wherein the adjusting the one or more components comprises reducing a noise level of at least a portion of the VIS image based on the noise deviation element.
  16. The system of claim 11, wherein the logic device is further configured to generate the enhanced combined image by: extracting high spatial frequency content from the adjusted VIS image, wherein the high spatial frequency content is associated with contours and/or edges within the VIS image; and combining the extracted high spatial frequency content from the VIS image with a corresponding portion of the IR image to obtain the enhanced combined image.
  17. The system of claim 11, wherein the logic device is further configured to select, in response to a user input on a user interface or a selection by a feature extraction CNN, the at least the portion of the VIS image to be adjusted for contrast; and wherein the logic device is configured to generate the combined image by deriving color characteristics of the scene from the VIS image and the IR image.
  18. The system of claim 11, wherein the logic device is further configured to: identify a first feature in the VIS image and a second feature in the IR image; calculate a spatial deviation based on the first feature and the second feature; generate, if the spatial deviation exceeds a predetermined threshold, rectification parameters based at least on the spatial deviation; and adjust the combined image based on at least the rectification parameters.
  19. The system of claim 18, wherein the logic device is further configured to calculate the spatial deviation by comparing a position of the first feature to a position of the second feature, wherein the spatial deviation comprises a horizontal translation and/or a vertical translation.
  20. The system of claim 18, wherein the logic device is further configured to generate the combined image based on alignment parameters; and wherein the logic device is further configured to adjust the combined image by: altering the alignment parameters based on the rectification parameters; and updating the combined image based on the altered alignment parameters.
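
The high-spatial-frequency fusion recited in claims 6 and 16 can be sketched in a few lines of NumPy. This is an illustrative stand-in only: the claims do not prescribe a particular low-pass filter, so a separable box filter is assumed here, and the function names (`box_blur`, `fuse_high_frequency`) are hypothetical.

```python
import numpy as np

def box_blur(img, k=5):
    # Separable box-filter low-pass; a stand-in for any low-pass filter.
    pad = k // 2
    kernel = np.ones(k) / k
    padded = np.pad(img, pad, mode="edge")
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, blurred)

def fuse_high_frequency(vis, ir, k=5):
    # High spatial frequency content (contours/edges) is the VIS image
    # minus its low-pass version; it is added onto the corresponding
    # portion of the IR image and clipped to the 8-bit range.
    vis = vis.astype(np.float64)
    ir = ir.astype(np.float64)
    high = vis - box_blur(vis, k)
    return np.clip(ir + high, 0.0, 255.0)
```

In flat VIS regions the high-pass term vanishes and the fused result is simply the IR image; near VIS edges, the detail is superimposed on the IR data.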

Description

CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation-in-part of International Application No. PCT/US2025/046807 filed Sep. 17, 2025 and entitled “IMAGE RECTIFICATION SYSTEMS AND METHODS,” which claims priority to and the benefit of U.S. Provisional Patent Application No. 63/698,542 filed Sep. 24, 2024 and entitled “IMAGE RECTIFICATION SYSTEMS AND METHODS,” all of which are incorporated herein by reference in their entirety. This application also claims priority to and the benefit of U.S. Provisional Patent Application No. 63/740,629 filed Dec. 31, 2024 and entitled “LOW-LIGHT IMAGE ENHANCEMENT SYSTEMS AND METHODS,” which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

The present invention relates generally to imaging systems and, more particularly, to low-light image enhancement and image rectification systems and methods.

BACKGROUND

Visible spectrum cameras are used in a variety of imaging applications to capture color or monochrome images derived from visible light. Visible spectrum cameras are often used for daytime or other applications when there is sufficient ambient light or when image details are not obscured by smoke, fog, or other environmental conditions detrimentally affecting the visible spectrum. Infrared cameras are used in a variety of imaging applications to capture infrared (e.g., thermal) emissions from objects as infrared images. Thermal, or infrared (IR), images of scenes are often useful for monitoring, inspection, and/or maintenance purposes, and the like. Infrared cameras may be used for nighttime or other applications when ambient lighting is poor or when environmental conditions are otherwise non-conducive to visible spectrum imaging. Infrared cameras may also be used for applications in which additional non-visible-spectrum information about a scene is desired.
Imaging systems exist that use two or more separate imagers to capture two or more separate images or video streams of a target object or scene, which can be used to create a fusion image. For example, a multimodal imaging system (also referred to as a multispectral imaging system) that comprises at least two imaging modules configured to capture images in different spectra (e.g., visible light, infrared light, ultraviolet, and so on) is useful for analysis, inspection, or monitoring purposes, since the same object or scene can be captured in images of different spectra that can be compared, combined, or otherwise processed for a better understanding of the target object or scene. However, fusion images can often be difficult to interpret due to, for example, a lack of light, which may result in reduced resolution, lack of contrast between objects, and/or excess noise.

SUMMARY

Techniques are disclosed for systems and methods for generating an enhanced fusion image based on lighting conditions and/or ambient light availability. A method is provided for generating an enhanced combined image in low lighting. The method includes receiving an image pair of a scene, the image pair comprising a visible light (VIS) image of the scene captured using a VIS imaging device and an infrared (IR) image of the scene captured using an IR imaging device; generating a combined image having one or more quality characteristics based on the image pair; adjusting a contrast of at least a portion of the VIS image based on the combined image; and generating an enhanced combined image based on the adjusted VIS image and the IR image. In one or more embodiments, a method of low-light imaging enhancement is provided.
The method includes receiving an image pair of a scene, the image pair comprising a visible light (VIS) image of the scene captured using a VIS imaging device and an infrared (IR) image of the scene captured using an IR imaging device; generating a combined image based on the image pair, wherein the combined image comprises one or more quality characteristics; adjusting one or more components of at least a portion of the VIS image based on the one or more quality characteristics of the combined image; and generating an enhanced combined image based on at least the adjusted VIS image. In one or more embodiments, a system with low-light imaging enhancement is provided. The system includes a logic device configured to: receive an image pair of a scene, the image pair comprising a visible light (VIS) image of the scene captured using a VIS imaging device and an infrared (IR) image of the scene captured using an IR imaging device; generate a combined image based on the image pair, wherein the combined image comprises one or more quality characteristics; adjust one or more components of at least a portion of the VIS image based on the one or more quality characteristics of the combined image; and generate an enhanced combined image based on at least the adjusted VIS image. The scope of the invention is defined by the claims, which are incorporated into this section by reference. A more complete understanding of embo
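
The luminance-driven contrast adjustment summarized above can be sketched as follows. This is a hedged illustration, not the claimed implementation: the patent contemplates a CNN-based contrast enhancement, whereas this stand-in uses a simple linear stretch about the mean, and the threshold value, gain schedule, and function names are all assumptions made for the example.

```python
import numpy as np

def luminance_deviation(combined, threshold=90.0):
    # Compare the combined image's mean luminance to a threshold; if it
    # falls below, return the deviation element, otherwise zero
    # (no adjustment needed).
    mean_lum = float(combined.mean())
    return max(threshold - mean_lum, 0.0)

def stretch_contrast(vis, deviation, max_gain=2.0):
    # Linear contrast stretch about the mean, scaled by how far the
    # combined image's luminance fell below the threshold. A simple
    # stand-in for a learned contrast-enhancement network.
    if deviation <= 0.0:
        return vis
    gain = 1.0 + min(deviation / 255.0, 1.0) * (max_gain - 1.0)
    mean = vis.mean()
    return np.clip(mean + gain * (vis - mean), 0.0, 255.0)
```

A dark combined image thus yields a positive deviation, which widens the spread of VIS pixel values before the adjusted VIS image is fused with the IR image; a sufficiently bright combined image leaves the VIS image untouched.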