CN-121998863-A - Unmanned aerial vehicle aerial image deblurring method, device, equipment and program product

CN121998863A

Abstract

The invention discloses an unmanned aerial vehicle (UAV) aerial image deblurring method, device, equipment and program product, relating to the technical field of image processing. The method comprises: back-projecting target feature points into the camera coordinate system, based on the first target depth values at the pixel positions of the target feature points in a target depth map and the camera intrinsics, to obtain target three-dimensional coordinate points; re-projecting the target three-dimensional coordinate points onto the imaging plane according to a target camera discrete pose sequence, to obtain a target discrete projection point sequence; encoding each target discrete projection point in the target discrete projection point sequence to obtain a target trajectory point heatmap; and processing the target blurred image and the target trajectory point heatmap with a pre-trained image deblurring model to obtain a target deblurred image. The technical scheme of the embodiments of the invention realizes image deblurring in UAV aerial photography scenes, and improves the real-time performance and robustness of UAV aerial image deblurring.

Inventors

  • Liu Yu
  • Dong Shiwei
  • Li Aowei
  • Li Shuhua

Assignees

  • Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences (北京市农林科学院信息技术研究中心)

Dates

Publication Date
2026-05-08
Application Date
2026-01-21

Claims (10)

  1. An unmanned aerial vehicle aerial image deblurring method, the method comprising: acquiring a target blurred image captured by unmanned aerial vehicle aerial photography, target feature points in the target blurred image, a target camera discrete pose sequence of the camera in the world coordinate system within the exposure time interval of the target blurred image, and a target depth map; back-projecting the target feature points into the camera coordinate system, based on first target depth values corresponding to the same pixel positions as the target feature points in the target depth map and the camera intrinsics, to obtain target three-dimensional coordinate points; re-projecting the target three-dimensional coordinate points onto the imaging plane according to the target camera discrete pose sequence, to obtain a target discrete projection point sequence; encoding each target discrete projection point in the target discrete projection point sequence, to obtain a target trajectory point heatmap; and processing the target blurred image and the target trajectory point heatmap with a pre-trained image deblurring model, to obtain a target deblurred image.
  2. The method of claim 1, wherein processing the target blurred image and the target trajectory point heatmap with a pre-trained image deblurring model to obtain a target deblurred image comprises: performing multi-dimensional feature extraction and feature fusion on the target blurred image and the target trajectory point heatmap through a multi-scale feature fusion unit, to obtain target fusion feature maps of multiple dimensions; performing channel attention enhancement, spatial attention enhancement and vegetation-region attention enhancement on the target fusion feature map of each dimension through a multi-scale attention enhancement unit, to obtain target enhanced feature maps of multiple dimensions; and performing convolution reconstruction on the target enhanced feature maps of the multiple dimensions through an output unit, to obtain the target deblurred image.
  3. The method according to claim 2, wherein performing convolution reconstruction on the target enhanced feature maps of the multiple dimensions through the output unit, to obtain the target deblurred image, comprises: performing convolution reconstruction on a first target enhanced feature map of the topmost dimension through a first output subunit, to obtain a first target deblurred image; and screening a target bottom-layer dimension from the remaining dimensions through a second output subunit, and performing convolution reconstruction and up-sampling on a second target enhanced feature map of the target bottom-layer dimension, to obtain a second target deblurred image.
  4. The method according to claim 3, wherein the training process of the image deblurring model comprises: acquiring a sample blurred image, a sample trajectory point heatmap and a sample sharp image; performing multi-dimensional feature extraction and feature fusion on the sample blurred image and the sample trajectory point heatmap through the multi-scale feature fusion unit, to obtain sample fusion feature maps of multiple dimensions; performing channel attention enhancement, spatial attention enhancement and vegetation-region attention enhancement on the sample fusion feature map of each dimension through the multi-scale attention enhancement unit, to obtain sample enhanced feature maps of multiple dimensions; performing convolution reconstruction and up-sampling on a first sample enhanced feature map of the topmost dimension through the first output subunit, to obtain a first sample deblurred image; screening a sample bottom-layer dimension from the remaining dimensions through the second output subunit, and performing convolution reconstruction on a second sample enhanced feature map of the sample bottom-layer dimension, to obtain a second sample deblurred image; calculating a first difference between the first sample deblurred image and the sample sharp image; calculating a second difference between the second sample deblurred image and the sample sharp image; performing weighted summation of the first difference and the second difference, using vegetation-region enhancement loss weights of the pixel points in the second sample deblurred image, to obtain a multi-scale loss; and adjusting parameters of the multi-scale feature fusion unit, the multi-scale attention enhancement unit, the first output subunit and the second output subunit of the image deblurring model according to the multi-scale loss.
  5. The method according to claim 2, further comprising, before performing multi-dimensional feature extraction and feature fusion on the target blurred image and the target trajectory point heatmap through the multi-scale feature fusion unit to obtain the target fusion feature maps of multiple dimensions: decomposing the target blurred image through an illumination-adaptive unit, to obtain an illumination component and a reflectance component; applying uniformity correction to the illumination component through the illumination-adaptive unit; and reconstructing the corrected illumination component with the reflectance component through the illumination-adaptive unit, to obtain an illumination-enhanced target blurred image.
  6. The method of claim 1, wherein encoding each target discrete projection point in the target discrete projection point sequence to obtain the trajectory point heatmap comprises: dividing the target discrete projection point sequence by a preset duration, to obtain unit discrete projection point sequences; fitting each unit discrete projection point sequence to obtain a unit projection trajectory, and determining unit projection endpoints and unit projection control points based on the unit projection trajectory; and encoding the unit projection endpoints and the unit projection control points respectively, to obtain an endpoint heatmap and a control point heatmap; correspondingly, processing the target blurred image and the trajectory point heatmap with the pre-trained image deblurring model to obtain the target deblurred image comprises: processing the target blurred image, the endpoint heatmap and the control point heatmap with the pre-trained image deblurring model, to obtain the target deblurred image.
  7. The method of claim 1, further comprising, before encoding each target discrete projection point in the target discrete projection point sequence to obtain the trajectory point heatmap: calculating target pixel displacements between the target discrete projection points, according to the second target depth value of each target discrete projection point, the target camera translation vectors between the target discrete projection points, and the camera intrinsics; calculating a target depth weight corresponding to each target discrete projection point with a depth weight function, according to the second target depth value of each target discrete projection point in the target discrete projection point sequence; correcting each target pixel displacement according to the corresponding target depth weight; and correcting each target discrete projection point in the target discrete projection point sequence using the corrected target pixel displacements.
  8. An unmanned aerial vehicle aerial image deblurring device, the device comprising: a blurred image acquisition module for acquiring a target blurred image captured by unmanned aerial vehicle aerial photography, target feature points in the target blurred image, a target camera discrete pose sequence of the camera in the world coordinate system within the exposure time interval of the target blurred image, and a target depth map; a feature point back-projection module for back-projecting the target feature points into the camera coordinate system, based on first target depth values corresponding to the same pixel positions as the target feature points in the target depth map and the camera intrinsics, to obtain target three-dimensional coordinate points; a discrete projection point sequence generation module for re-projecting the target three-dimensional coordinate points onto the imaging plane according to the target camera discrete pose sequence, to obtain a target discrete projection point sequence; a trajectory heatmap generation module for encoding each target discrete projection point in the target discrete projection point sequence, to obtain a target trajectory point heatmap; and a deblurred image generation module for processing the target blurred image and the target trajectory point heatmap with a pre-trained image deblurring model, to obtain a target deblurred image.
  9. An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor, wherein the memory stores a computer program executable by the at least one processor, to enable the at least one processor to perform the unmanned aerial vehicle aerial image deblurring method of any one of claims 1-7.
  10. A computer program product, characterized in that the computer program product comprises a computer program which, when executed by a processor, implements the unmanned aerial vehicle aerial image deblurring method according to any one of claims 1-7.
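As an illustration of the back-projection and re-projection steps of claim 1, the following minimal sketch assumes a pinhole camera model with intrinsic matrix `K`, and represents each entry of the discrete pose sequence as a rotation–translation pair `(R, t)` relative to the reference exposure frame; the patent does not fix these conventions, so they are assumptions for illustration only.

```python
import numpy as np

def backproject(uv, depth, K):
    """Back-project a pixel (u, v) with depth d into the camera frame:
    P = d * K^-1 [u, v, 1]^T (first target depth value + camera intrinsics)."""
    u, v = uv
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return depth * ray  # target 3-D coordinate point in the camera frame

def reproject(P_cam, poses, K):
    """Re-project a 3-D point through each pose (R, t) of the discrete pose
    sequence and collect the resulting imaging-plane points."""
    pts = []
    for R, t in poses:
        P = R @ P_cam + t             # move the point into that pose's frame
        uvw = K @ P
        pts.append(uvw[:2] / uvw[2])  # perspective division
    return np.array(pts)              # target discrete projection point sequence
```

With the identity pose the re-projection reproduces the original pixel, which is a convenient sanity check for the conventions chosen above.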
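Claim 1 encodes the discrete projection points into a trajectory point heatmap but does not specify the encoding. One plausible sketch splats a 2-D Gaussian at each projection point and takes the per-pixel maximum; both the Gaussian kernel and the max-combination are assumptions, not the patent's stated method.

```python
import numpy as np

def encode_heatmap(points, shape, sigma=2.0):
    """Encode a sequence of (u, v) projection points as one heatmap channel:
    a 2-D Gaussian bump per point, combined by per-pixel maximum."""
    H, W = shape
    ys, xs = np.mgrid[0:H, 0:W]          # pixel coordinate grids
    heat = np.zeros((H, W))
    for u, v in points:
        g = np.exp(-((xs - u) ** 2 + (ys - v) ** 2) / (2.0 * sigma ** 2))
        heat = np.maximum(heat, g)       # keep the strongest response per pixel
    return heat
```

The resulting map peaks at 1.0 on each trajectory point and decays smoothly, which is the usual form fed to convolutional networks alongside the blurred image.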
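Claim 6 fits each unit discrete projection point sequence to obtain a unit projection trajectory with endpoints and a control point. A quadratic Bezier curve is one natural realization of a trajectory described by two endpoints and one control point; the Bezier form, the uniform parameterization, and the least-squares solve below are all assumptions for illustration.

```python
import numpy as np

def fit_quadratic_bezier(points):
    """Fit a quadratic Bezier B(t) = (1-t)^2 P0 + 2t(1-t) C + t^2 P2 to a unit
    sequence of projection points (at least 3 points, uniformly parameterized).
    Endpoints are taken as the first/last points; the control point C is
    solved in least squares."""
    pts = np.asarray(points, dtype=float)
    P0, P2 = pts[0], pts[-1]
    t = np.linspace(0.0, 1.0, len(pts))
    b0, b1, b2 = (1 - t) ** 2, 2 * t * (1 - t), t ** 2
    # Residual after removing the endpoint terms, then solve for C:
    # sum_i b1_i^2 * C = sum_i b1_i * (pts_i - b0_i*P0 - b2_i*P2)
    resid = pts - np.outer(b0, P0) - np.outer(b2, P2)
    C = (b1[:, None] * resid).sum(axis=0) / (b1 ** 2).sum()
    return P0, C, P2  # unit projection endpoints and control point
```

For collinear points the fitted control point lands at the midpoint of the chord, so the curve degenerates to the straight trajectory, as expected.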
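Claim 7's pixel-displacement and depth-weight computations are not given in closed form in the claims. A hypothetical first-order model is sketched below: translation `t` at scene depth `Z` shifts the projection by roughly `f * t / Z`, and a depth-weight function down-weights distant points. Both formulas (and the constant `Z0`) are invented for illustration, not taken from the patent.

```python
import numpy as np

def pixel_displacement(t, Z, K):
    """First-order image-plane displacement caused by a small camera
    translation t = (tx, ty, tz) at depth Z: du ~ fx*tx/Z, dv ~ fy*ty/Z.
    (Hypothetical model; the patent does not state the formula.)"""
    fx, fy = K[0, 0], K[1, 1]
    return np.array([fx * t[0] / Z, fy * t[1] / Z])

def depth_weight(Z, Z0=10.0):
    """Hypothetical depth-weight function: nearer points move more in the
    image, so weight falls off with depth as Z0 / (Z0 + Z)."""
    return Z0 / (Z0 + Z)

def correct_points(points, displacements, weights):
    """Shift each discrete projection point by its depth-weighted displacement."""
    return [p + w * d for p, d, w in zip(points, displacements, weights)]
```

Under this model a 0.1 m lateral translation seen at 5 m depth with a 500-pixel focal length shifts the projection by 10 pixels before weighting.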

Description

Unmanned aerial vehicle aerial image deblurring method, device, equipment and program product

Technical Field

The invention relates to the technical field of image processing, and in particular to an unmanned aerial vehicle aerial image deblurring method, device, equipment and program product.

Background

With the development of information technology, traditional field-survey practice is rapidly shifting toward intelligent and convenient modes. The development and coordination of new remote sensing and sensor technologies with observation platforms enable long-term continuous monitoring of agricultural land resources and related environmental factors, and are becoming an important means of agricultural land resource survey and monitoring. Unmanned aerial vehicles have the advantages of flexibility, low cost and high efficiency, and are widely applied in scenes such as agricultural land resource survey, characteristic crop growth monitoring, disaster assessment, environmental monitoring and fine-grained management. By carrying multi-source sensors such as a camera, an IMU (Inertial Measurement Unit), a GNSS (Global Navigation Satellite System) receiver and a lidar, an unmanned aerial vehicle can acquire large-scale, high-resolution aerial image data, providing important data support for characteristic agricultural land resource utilization and agricultural production monitoring. However, during actual flight the unmanned aerial vehicle is easily affected by factors such as attitude disturbance, airflow disturbance and platform vibration, and the camera may displace or shake during exposure, so the acquired aerial images exhibit motion blur.
Existing image deblurring methods are generally built on simplifying assumptions such as static scenes and spatially uniform convolution blur kernels, and perform deblurring with conventional deep learning methods. However, the blur caused by unmanned aerial vehicle motion is the combined result of three-dimensional motion, depth correlation and non-uniform blur. Existing image deblurring methods have limited restoration capability in complex unmanned aerial vehicle motion scenes and are not suitable for unmanned aerial vehicle aerial photography. Therefore, an image deblurring method adapted to the characteristics of unmanned aerial vehicle aerial photography scenes is needed.

Disclosure of Invention

The invention provides an unmanned aerial vehicle aerial image deblurring method, device, equipment and program product, which realize image deblurring in unmanned aerial vehicle aerial photography scenes and improve the real-time performance and robustness of unmanned aerial vehicle aerial image deblurring.
According to an aspect of the present invention, there is provided an unmanned aerial vehicle aerial image deblurring method, the method comprising: acquiring a target blurred image captured by unmanned aerial vehicle aerial photography, target feature points in the target blurred image, a target camera discrete pose sequence of the camera in the world coordinate system within the exposure time interval of the target blurred image, and a target depth map; back-projecting the target feature points into the camera coordinate system, based on first target depth values corresponding to the same pixel positions as the target feature points in the target depth map and the camera intrinsics, to obtain target three-dimensional coordinate points; re-projecting the target three-dimensional coordinate points onto the imaging plane according to the target camera discrete pose sequence, to obtain a target discrete projection point sequence; encoding each target discrete projection point in the target discrete projection point sequence, to obtain a target trajectory point heatmap; and processing the target blurred image and the target trajectory point heatmap with a pre-trained image deblurring model, to obtain a target deblurred image.
According to another aspect of the present invention, there is provided an unmanned aerial vehicle aerial image deblurring device, the device comprising: a blurred image acquisition module for acquiring a target blurred image captured by unmanned aerial vehicle aerial photography, target feature points in the target blurred image, a target camera discrete pose sequence of the camera in the world coordinate system within the exposure time interval of the target blurred image, and a target depth map; and a feature point back-projection module for back-projecting the target feature points into the camera coordinate system, based on first target depth values corresponding to the same pixel positions as the target feature points in the target depth map and the camera intrinsics, to obtain target three-dimensional coordinate points.