
CN-122023478-A - Image processing method and device, electronic equipment and storage medium

CN122023478A

Abstract

An image processing method, an image processing device, an electronic device, and a storage medium are disclosed, relating to the technical field of image processing. The method first performs feature matching on an infrared image and a color image acquired by a cooperating infrared camera and color camera, then computes a depth image, so that accurate depth information can be obtained in complex scenes. It then generates light rays according to the depth image, samples color information at the corresponding positions of the color image, and establishes a target light field, with color consistency corrected through a mapping relationship between infrared intensity and visible-light intensity. Finally, a target color image is obtained according to the depth image and the color image based on the target light field. The method solves the problem of poor image quality in complex scenes, and achieves the technical effects of reducing whole-vehicle cost and outputting high-precision color images.

Inventors

  • WU DI

Assignees

  • 奇瑞汽车股份有限公司 (Chery Automobile Co., Ltd.)

Dates

Publication Date
2026-05-12
Application Date
2026-01-06

Claims (10)

  1. An image processing method, applied to a first processing module, the first processing module including an infrared camera and a color camera arranged side by side, the method comprising: synchronously acquiring an infrared image and a color image; performing feature matching on the infrared image and the color image to obtain target features, wherein the target features indicate the outline and category of all objects included in the infrared image or the color image; obtaining a depth map according to the target features, the infrared image, and the color image, wherein the depth map indicates the outline, category, and depth information of all objects; on the basis of the depth map, establishing a target light field based on a first mapping relationship between the depth map and the color image, wherein the first mapping relationship comprises the depth information of the same object in the depth map and its color information in the color image; and obtaining a target color image according to the depth map and the color image based on the target light field.
  2. The method of claim 1, wherein before the synchronously acquiring an infrared image and a color image, the method further comprises: acquiring a plurality of groups of images, wherein each group of images comprises at least one infrared image and at least one color image; and performing parameter calibration on the infrared camera and the color camera according to the plurality of groups of images, determining the baseline length and the pixel focal length between the infrared camera and the color camera.
  3. The method of claim 2, wherein the obtaining a depth map according to the target features, the infrared image, and the color image comprises: matching all pixel points in the infrared image and the color image to obtain a first parallax; obtaining target depth information of each pixel point according to the first parallax, the baseline length, and the pixel focal length; and obtaining the depth map according to the target depth information of the pixel points and the target features.
  4. The method of claim 3, wherein a first network module is deployed in the first processing module, the first network module being configured to predict depth information of an infrared image and evaluate the confidence of the depth information, and the obtaining target depth information of each pixel point according to the first parallax, the baseline length, and the pixel focal length comprises: multiplying the baseline length by the pixel focal length and dividing the product by the first parallax to obtain first depth information; inputting the first depth information into the first network module to obtain a first confidence of the first depth information; inputting the infrared image into the first network module to obtain second depth information and a second confidence of the second depth information; and comparing the first confidence with the second confidence, and determining the depth information with the higher confidence as the target depth information of the pixel point.
  5. The method of claim 1, wherein before the establishing a target light field based on the first mapping relationship between the depth map and the color image on the basis of the depth map, the method comprises: determining the color depth of the object according to a second mapping relationship between the infrared image and the color image, wherein the second mapping relationship relates, for the same object, the infrared intensity in the infrared image to the color depth in the color image under visible-light intensity; determining color information of the object according to the color depth of the object and the color categories of all objects included in the color image; and determining the first mapping relationship between the depth map and the color image according to the depth map and the color information of the object.
  6. The method of claim 1, wherein the establishing a target light field based on the first mapping relationship between the depth map and the color image on the basis of the depth map comprises: determining light field information of the object according to the depth map, wherein the light field information comprises the position and direction of the light rays of the object; and establishing the target light field based on the first mapping relationship between the depth map and the color image according to the light field information.
  7. The method of claim 6, wherein the determining light field information of the object according to the depth map comprises: acquiring a plurality of viewpoints in a virtual viewpoint plane of the color camera; determining the light ray of each viewpoint according to the plurality of viewpoints and the depth map, wherein the light ray comprises the position and direction of the light ray of each viewpoint; and determining the light field information of the object according to the light rays of the viewpoints.
  8. An image processing apparatus, applied to a first processing module including an infrared camera and a color camera arranged side by side on the same horizontal line, the apparatus comprising: an acquisition module, configured to synchronously acquire an infrared image and a color image; a first processing module, configured to perform feature matching on the infrared image and the color image to obtain target features, wherein the target features indicate the outline and category of all objects included in the infrared image or the color image; a second processing module, configured to obtain a depth map according to the target features, the infrared image, and the color image, wherein the depth map indicates the outline, category, and depth information of all objects; a third processing module, configured to establish a target light field based on a first mapping relationship between the depth map and the color image on the basis of the depth map, wherein the first mapping relationship comprises the depth information of the same object in the depth map and its color information in the color image; and an output module, configured to obtain a target color image according to the depth map and the color image based on the target light field.
  9. An electronic device, comprising: a processor; and a memory having stored thereon computer-readable instructions which, when executed by the processor, implement the image processing method of any one of claims 1 to 7.
  10. A computer-readable storage medium, in which program code is stored, the program code being callable by a processor of an electronic device to perform the image processing method according to any one of claims 1 to 7.
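The depth computation and confidence-based selection of claim 4 can be sketched as follows. This is a minimal illustration of the stated arithmetic (depth = baseline × focal length ÷ parallax) and the per-pixel pick-the-higher-confidence rule, not the patent's implementation; all function and variable names are hypothetical, and the network module's predictions are represented here only as input arrays.

```python
import numpy as np

def disparity_depth(disparity, baseline_m, focal_px, eps=1e-6):
    """First depth information per claim 4: Z = (baseline * focal) / parallax.

    `eps` guards against division by zero where the parallax vanishes.
    """
    return (baseline_m * focal_px) / np.maximum(disparity, eps)

def fuse_depth(depth_stereo, conf_stereo, depth_net, conf_net):
    """Keep, per pixel, whichever depth estimate has the higher confidence."""
    return np.where(conf_stereo >= conf_net, depth_stereo, depth_net)

# Example: parallaxes of 10 and 20 px, baseline 0.12 m, pixel focal length 800 px
d = np.array([[10.0, 20.0]])
z_stereo = disparity_depth(d, 0.12, 800.0)   # -> [[9.6, 4.8]] metres
```

A larger parallax yields a smaller depth, matching the usual stereo geometry; the fusion step then lets the network's prediction override the stereo estimate only where the network is more confident.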

Description

Image processing method and device, electronic equipment and storage medium

Technical Field

The present application relates to the field of image processing technologies, and in particular to an image processing method, an image processing device, an electronic device, and a storage medium.

Background

With the continuous development of computational imaging technology, a new generation of imaging technology combining optical hardware, image sensors, and algorithm software is increasingly used. Thermal infrared image processing involves digital signal processing, image enhancement, and feature extraction on images acquired by a thermal infrared imager, so as to enable effective analysis and application. Thermal infrared image processing can enhance the recognizability of targets in scenes with poor lighting conditions, and has important applications in many fields, including industrial detection, medical diagnostics, security monitoring, and military reconnaissance. Since infrared images generally suffer from low contrast and high noise, efficient processing is required to improve image quality and analysis capability.

Disclosure of Invention

In view of this, the embodiments of the present application provide an image processing method, an apparatus, an electronic device, and a storage medium, which can meet recognition requirements in complex scenes, output a high-precision color image, and reduce vehicle cost. The application adopts the following technical scheme.
In a first aspect, an embodiment of the present application provides an image processing method, applied to a first processing module, where the first processing module includes an infrared camera and a color camera disposed side by side. The method comprises: synchronously acquiring an infrared image and a color image; performing feature matching on the infrared image and the color image to obtain target features, where the target features indicate the outline and category of all objects contained in the infrared image or the color image; obtaining a depth map according to the target features, the infrared image, and the color image, where the depth map indicates the outline, category, and depth information of all objects; establishing a target light field based on a first mapping relationship between the depth map and the color image on the basis of the depth map, where the first mapping relationship comprises the depth information of the same object in the depth map and its color information in the color image, and the target light field indicates the light field information of the object determined according to the depth map and the first mapping relationship; and obtaining a target color image according to the depth map and the color image based on the target light field. In some embodiments, before synchronously acquiring the infrared image and the color image, the method further comprises: acquiring a plurality of groups of images, each group comprising at least one infrared image and at least one color image; and performing parameter calibration on the infrared camera and the color camera according to the plurality of groups of images, determining the baseline length and the pixel focal length between the infrared camera and the color camera.
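The light-field step described above, generating a ray per pixel from the depth map and sampling the color image at the corresponding position, can be sketched with a standard pinhole back-projection. This is only an illustrative sketch under the usual pinhole-camera assumptions; the function names, intrinsics, and nearest-neighbour color sampling are all assumptions, not the patent's method.

```python
import numpy as np

def backproject(u, v, z, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) at depth z -> 3-D point in camera frame."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

def ray_for_viewpoint(viewpoint, point_3d):
    """A light-field ray: origin at the viewpoint, unit direction toward the 3-D point."""
    direction = point_3d - viewpoint
    return viewpoint, direction / np.linalg.norm(direction)

def sample_color(color_image, u, v):
    """Nearest-neighbour color sample at pixel (u, v) of an H x W x 3 image."""
    return color_image[int(round(v)), int(round(u))]

# Example: depth 2 m at the principal point (320, 240), pixel focal length 800 px
p = backproject(320, 240, 2.0, 800.0, 800.0, 320.0, 240.0)  # -> [0., 0., 2.]
origin, direction = ray_for_viewpoint(np.zeros(3), p)       # ray along +z
```

Repeating this for a grid of viewpoints in a virtual viewpoint plane, as in claim 7, yields one ray bundle per viewpoint, each carrying the color sampled from the color image.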
In some embodiments, the obtaining a depth map according to the target features, the infrared image, and the color image comprises: matching all pixel points in the infrared image and the color image to obtain a first parallax; obtaining target depth information of each pixel point according to the first parallax, the baseline length, and the pixel focal length; and obtaining the depth map according to the target depth information of the pixel points and the target features. In some embodiments, a first network module is deployed in the first processing module, the first network module being configured to predict depth information of an infrared image and evaluate the confidence of the depth information; obtaining the target depth information of a pixel point according to the first parallax, the baseline length, and the pixel focal length comprises: multiplying the baseline length by the pixel focal length and dividing the product by the first parallax to obtain first depth information; inputting the first depth information into the first network module to obtain a first confidence of the first depth information; inputting the infrared image into the first network module to obtain second depth information and a second confidence of the second depth information; and comparing the first confidence with the second confidence, and determining the depth information with the higher confidence as the target depth information of the pixel point. In some embodiments, before establishing the target light field based on the first mapping relationship between the depth map and the color image on the basis of the d