CN-121563855-B - Panoramic deep and high dynamic range imaging system and method based on compound eye structure
Abstract
The invention discloses a panoramic deep and high dynamic range imaging system and method based on a compound eye structure, relating to the technical fields of computational imaging and image processing. Synchronous acquisition and multi-scale motion estimation ensure temporal consistency among the sub-images, effectively reducing the stitching misalignment and ghosting artifacts caused by fast-moving or partially occluded objects and improving stability in dynamic scenes. An adaptive HDR fusion mechanism based on exposure differences and local features preferentially retains the details of high-contrast, unsaturated regions through intelligent weight assignment, overcoming overexposure and underexposure under backlighting or abrupt illumination changes, so that the output image transitions naturally and smoothly between bright and dark regions. In addition, multi-scale depth of field synthesis and edge-aware filtering work cooperatively to enhance depth estimation accuracy in low-texture regions and avoid depth-of-field jumps and edge blurring, ensuring the overall sharpness of the panoramic deep image, while rolling-shutter correction and spatio-temporal interpolation compensation further reduce dynamic ghosting and noise interference.
Inventors
- HE YONGGANG
- YANG JING
Assignees
- 保升(中国)科技实业有限公司
Dates
- Publication Date
- 20260508
- Application Date
- 20251117
Claims (10)
- 1. A panoramic deep and high dynamic range imaging method based on a compound eye structure, characterized by comprising the following steps: Step S1, synchronously acquiring a plurality of sub-images with a fly-eye lens array and an image sensor array under a unified clock/trigger, and recording the exposure parameters and timestamp of each sub-image; Step S2, performing geometric calibration and photometric calibration on the sub-images, comprising intrinsic and extrinsic parameter estimation, distortion correction, radiometric response estimation, and exposure normalization; Step S3, performing motion estimation based on the calibrated sub-images to obtain motion information among the sub-images; Step S4, constructing an occlusion mask from the motion information and distinguishing occluded regions from non-occluded regions; Step S5, in the non-occluded regions, constructing HDR fusion weights from the exposure level, local contrast, and saturation condition, and fusing the sub-images; and Step S6, performing depth of field synthesis on the fusion result to generate a panoramic deep image, and applying locally adaptive tone mapping to the panoramic deep image to obtain a full-depth-of-field high dynamic range output image.
- 2. The method for panoramic deep high dynamic range imaging based on a compound eye structure according to claim 1, wherein the motion estimation adopts multi-scale pyramid optical flow combined with a forward-backward consistency check to obtain pixel-level motion vectors and their confidences.
- 3. The method for panoramic deep high dynamic range imaging based on a compound eye structure according to claim 2, wherein the construction of the occlusion mask comprises: determining occlusion boundaries from discontinuities in the optical flow vectors, the forward-backward disparity, and low-confidence regions; and performing motion-compensated spatio-temporal interpolation on the occluded regions, filling the occluded parts of the current frame with the corresponding sub-images of adjacent frames.
- 4. The method for panoramic deep high dynamic range imaging based on a compound eye structure according to claim 3, wherein sub-images captured with rolling shutters undergo timing correction prior to motion estimation and are aligned to a common timestamp at fusion.
- 5. The compound eye structure-based panoramic deep high dynamic range imaging method of claim 4, wherein the HDR fusion weights are adaptively assigned based on exposure-normalized luminance, local contrast, gradient magnitude, and saturation detection; higher weight is given to high-contrast, unsaturated pixels, and the weight is reduced for low-contrast or near-saturated pixels. The adaptive assignment of the HDR fusion weights proceeds as follows. Step C1: on each sub-image, obtain the exposure-normalized luminance u_n(x) and compute the local contrast and gradient magnitude measures, with local statistics gathered over a window Ω(x, r) centered at x with radius r. Step C2: at pixel x, assign the fusion weight of the n-th sub-image as
  w_n(x) = [f_L(u_n(x))]^α [f_C(c_n(x))]^β [f_G(g_n(x))]^γ [P(u_n(x))]^λ / Σ_{m=1}^{N} [f_L(u_m(x))]^α [f_C(c_m(x))]^β [f_G(g_m(x))]^γ [P(u_m(x))]^λ,
  where w_n(x) denotes the normalized fusion weight of the n-th sub-image at pixel x, n and m index the sub-images, N is the number of sub-images, x is the pixel coordinate, u_n, c_n, g_n are the normalized luminance, contrast, and gradient quantities, f_L, f_C, f_G are the corresponding mapping functions, (α, β, γ) are the three exponent weights, P is the saturation penalty function, λ is the saturation penalty coefficient, and the sum runs over all sub-images. Step C3: the luminance normalization and exposure-suitability mapping use linear normalization and a Gaussian suitability:
  u_n(x) = clip_[0,1]((I_n(x) − I_b) / (I_w − I_b)),  f_L(u) = exp(−(u − μ_0)² / (2σ_0²)),
  where I_b and I_w are the black and white levels, μ_0 is the exposure-suitability center, σ_0 is the bandwidth, and clip_[0,1] is the operator that clamps to [0, 1]. The local contrast and its mapping use fractional compression:
  c_n(x) = std_{Ω(x,r)}(u_n),  f_C(c) = c / (c + κ_C),
  where c_n(x) is the standard deviation of the luminance within the window Ω(x, r), κ_C is the contrast compression constant, and u denotes the normalized luminance value. The gradient magnitude and its mapping are fractionally compressed in the same way:
  g_n(x) = ‖∇u_n(x)‖,  f_G(g) = g / (g + κ_G),
  where ∇ is a first-order difference operator and κ_G is the gradient compression constant. The saturation penalty uses a symmetric soft threshold:
  P(u) = [max(0, 4(u − τ_lo)(τ_hi − u) / (τ_hi − τ_lo)²)]^p,
  where τ_hi is the high-end threshold, τ_lo is the low-end threshold, and p is the penalty power.
- 6. The method for panoramic deep high dynamic range imaging based on a compound eye structure according to claim 5, wherein said locally adaptive tone mapping dynamically adjusts the compression strength according to per-block luminance histograms and a structure-fidelity constraint, and applies detail protection to high-gradient regions.
- 7. The method for panoramic deep high dynamic range imaging based on a compound eye structure according to claim 6, wherein said depth of field synthesis comprises: constructing a multi-scale pyramid of the sub-images, calculating a power (focus) metric at each scale to obtain an initial focus map, and generating a panoramic-depth weight map through inter-scale consistency constraints. The generation of the panoramic-depth weight map comprises the following steps. Step D1: construct a luminance pyramid with Gaussian kernels and record each scale; specifically, for the n-th sub-image:
  L_n^(s)(x) = (G_{σ_s} * L_n^(s−1))(x), s = 1, …, S,
  where L_n^(s) is the luminance of the n-th sub-image at scale s, x is the pixel coordinate, G_σ is a Gaussian kernel with standard deviation σ, * denotes convolution, s = 0 is the reference scale, and S is the number of scales. Step D2: take the weighted sum of gradient energy and Laplacian response as the power metric at each scale, and apply compression normalization to obtain comparable values:
  F_n^(s)(x) = ω_1 ‖∇L_n^(s)(x)‖² + ω_2 |ΔL_n^(s)(x)|,  F̂_n^(s)(x) = F_n^(s)(x) / (F_n^(s)(x) + κ_F),
  where F_n^(s) is the power at scale s, F̂_n^(s) is the normalized power, (ω_1, ω_2) are the two combination weights, ∇ is a first-order difference operator, ‖·‖ is the 2-norm, Δ is the discrete Laplace operator, and κ_F is a compression constant. Step D3: superpose the per-scale powers by weight, suppressing isolated peaks and non-unimodal responses with a soft penalty, to form a scale-independent power:
  F_n(x) = Σ_s a_s F̂_n^(s)(x) − η Σ_s [F̂_n^(s)(x) − max(F̂_n^(s−1)(x), F̂_n^(s+1)(x))]_+,
  where F_n(x) is the scale-independent power, a_s are the scale weights with Σ_s a_s = 1, η is the consistency penalty coefficient, and [·]_+ is the positive-part operator. Step D4: take the scale-independent power as the data term and introduce edge-aware total variation to obtain the panoramic-depth weight map W_n^d:
  W^d = argmin_W Σ_x Σ_{n=1}^{N} (W_n(x) − F_n(x))² + λ_s Σ_x Σ_{n=1}^{N} ψ(x)(|∂_h W_n(x)| + |∂_v W_n(x)|),  ψ(x) = exp(−‖∇L_ref(x)‖ / ε),
  where W_n^d is the panoramic-depth weight of the n-th sub-image, N is the number of sub-images, λ_s is the smoothing coefficient, ∂_h and ∂_v are the horizontal and vertical differences, ψ(x) are the edge-aware weights, ε is the edge adjustment factor, and L_ref is the reference luminance. Step D5: multiply the obtained W_n^d with the HDR weights w_n pixel by pixel and normalize to form the composite weights:
  ŵ_n(x) = M_n(x) W_n^d(x) w_n(x) / Σ_{m=1}^{N} M_m(x) W_m^d(x) w_m(x),
  where ŵ_n is the composite weight, M_n(x) is the occlusion mask of the n-th sub-image at pixel x (1 at retained positions; occluded positions set the corresponding term to zero and invoke the compensated pixels of step S4), W_n^d(x) denotes the panoramic-depth weight of the n-th sub-image at pixel x, and W_m^d(x) denotes that of the m-th sub-image.
- 8. The compound eye structure-based panoramic deep high dynamic range imaging method of claim 7, wherein edge-aware filtering is applied to the panoramic deep weight map to enhance local consistency and preferentially preserve high gradient regions.
- 9. A panoramic deep high dynamic range imaging system based on a compound eye structure, which implements the panoramic deep high dynamic range imaging method based on a compound eye structure as set forth in any one of claims 1 to 8, characterized by comprising: a fly-eye lens array and a corresponding image sensor array; a synchronous trigger and clock module for controlling the image sensor array to acquire synchronously and apply timestamps; a calibration module for performing geometric calibration and photometric calibration and outputting calibration parameters; and an image processing unit electrically connected with the image sensor array and configured to execute the panoramic deep high dynamic range imaging method based on the compound eye structure according to any one of claims 1 to 8 and output a full-depth-of-field high dynamic range image.
- 10. The compound eye structure-based panoramic deep high dynamic range imaging system of claim 9, wherein said image processing unit comprises at least: a motion estimation module, an occlusion detection and compensation module, an HDR weight construction and fusion module, a depth of field synthesis module, an edge-aware consistency module, and a tone mapping module, and is provided with buffering and timing management.
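The per-pixel weighting of claims 5 and 7 can be sketched in NumPy. This is a minimal illustration under stated assumptions, not the patented implementation: the function names (`hdr_weights`, `focus_measure`) and all default constants (μ_0 = 0.5, σ_0 = 0.2, the κ compression constants, the τ thresholds) are illustrative choices, and the symmetric soft-threshold penalty is one plausible form consistent with the claim's verbal description.

```python
import numpy as np

def hdr_weights(lum, alpha=1.0, beta=1.0, gamma=1.0, lam=1.0,
                mu0=0.5, sigma0=0.2, kc=0.05, kg=0.05,
                tau_lo=0.02, tau_hi=0.98, p=2.0, r=1):
    """Steps C1-C3 (sketch): normalized HDR fusion weights for a stack of
    exposure-normalized luminance images lum of shape (N, H, W) in [0, 1]."""
    # Exposure suitability: Gaussian centred on mid-grey (step C3).
    f_l = np.exp(-((lum - mu0) ** 2) / (2 * sigma0 ** 2))
    # Local contrast: std-dev inside a (2r+1)x(2r+1) window.
    pad = np.pad(lum, ((0, 0), (r, r), (r, r)), mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(
        pad, (2 * r + 1, 2 * r + 1), axis=(1, 2))
    f_c = win.std(axis=(-2, -1))
    f_c = f_c / (f_c + kc)                      # fractional compression
    # Gradient magnitude from first-order differences.
    gy, gx = np.gradient(lum, axis=(1, 2))
    g = np.hypot(gx, gy)
    f_g = g / (g + kg)
    # Symmetric soft-threshold saturation penalty, zero at tau_lo / tau_hi.
    pen = np.clip(4 * (lum - tau_lo) * (tau_hi - lum)
                  / (tau_hi - tau_lo) ** 2, 0, None) ** p
    wgt = (f_l ** alpha) * (f_c ** beta) * (f_g ** gamma) * (pen ** lam)
    return wgt / (wgt.sum(axis=0, keepdims=True) + 1e-12)   # step C2

def focus_measure(lum, w1=0.5, w2=0.5, kf=0.01):
    """Step D2 (sketch): weighted sum of gradient energy and absolute
    Laplacian response, fractionally compressed into [0, 1)."""
    gy, gx = np.gradient(lum)
    grad_energy = gx ** 2 + gy ** 2
    # Discrete 4-neighbour Laplacian (periodic boundary via roll).
    lap = (np.roll(lum, 1, 0) + np.roll(lum, -1, 0) +
           np.roll(lum, 1, 1) + np.roll(lum, -1, 1) - 4 * lum)
    f = w1 * grad_energy + w2 * np.abs(lap)
    return f / (f + kf)
```

In this sketch a near-saturated sub-image receives a lower weight than a well-exposed one through both the Gaussian suitability term and the saturation penalty, while a textureless region yields a zero focus response.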
Description
Panoramic deep and high dynamic range imaging system and method based on compound eye structure
Technical Field
The invention relates to the technical field of computational imaging and image processing, in particular to a panoramic deep high dynamic range imaging system and method based on a compound eye structure.
Background
When an unmanned aerial vehicle tracks a moving target in a complex environment such as a forest, the scene is often accompanied by high-speed motion, frequent occlusion, and strong, uneven illumination changes, and the imaging system must deliver low-latency, seamlessly stitched, high dynamic range imaging over a wide field of view and a large depth of field. In recent years, methods combining compound-eye multi-aperture camera arrays with feature registration/image stitching, multi-exposure fusion, computational refocusing, and the like have been applied to such tasks, achieving a wide field of view and extended depth of field through the synchronous acquisition of multiple sub-cameras. However, in outdoor scenes where high-speed motion and occlusion occur together, existing methods still suffer from unstable, mismatched features caused by occlusion and low texture, seam artifacts and ghosting caused by photometric inconsistency across cameras and exposures, and difficulty in adapting fusion weights under rapid illumination changes. Meanwhile, the synchronization and calibration of an array camera on a UAV platform are easily affected by vibration and temperature drift, and on-board compute and transmission bandwidth are limited, further constraining real-time output quality and stability.
Disclosure of Invention
The present invention has been made in view of the above-described problems in the prior art.
The invention provides a panoramic deep high dynamic range imaging system and method based on a compound eye structure, which solve the problems that existing compound eye systems are prone to stitching misalignment, uneven exposure, and depth-of-field estimation errors in high-speed, occlusion-heavy environments, limiting real-time imaging quality. To solve these technical problems, the invention provides the following technical scheme. In a first aspect, an embodiment of the present invention provides a panoramic deep high dynamic range imaging method based on a compound eye structure, including: Step S1, synchronously acquiring a plurality of sub-images with a fly-eye lens array and an image sensor array under a unified clock/trigger, and recording the exposure parameters and timestamp of each sub-image; Step S2, performing geometric calibration and photometric calibration on the sub-images, comprising intrinsic and extrinsic parameter estimation, distortion correction, radiometric response estimation, and exposure normalization; Step S3, performing motion estimation based on the calibrated sub-images to obtain motion information among the sub-images; Step S4, constructing an occlusion mask from the motion information and distinguishing occluded regions from non-occluded regions; Step S5, in the non-occluded regions, constructing HDR fusion weights from the exposure level, local contrast, and saturation condition, and fusing the sub-images; and Step S6, performing depth of field synthesis on the fusion result to generate a panoramic deep image, and applying locally adaptive tone mapping to the panoramic deep image to obtain a full-depth-of-field high dynamic range output image.
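The data flow of steps S2 and S5 can be illustrated with a compact exposure-fusion sketch. This is a simplified stand-in under stated assumptions, a stack of registered, occlusion-free sub-images: the Gaussian well-exposedness weight is a minimal proxy for the full weighting of step S5, and the function name and constants are illustrative, not taken from the patent.

```python
import numpy as np

def exposure_fusion_pipeline(stack, exposures):
    """Sketch of steps S2 + S5: exposure-normalize a stack of sub-images
    (N, H, W) by their exposure times, weight each pixel by a Gaussian
    well-exposedness measure, and fuse into a single radiance-like image."""
    # S2: exposure normalization (relative radiance estimate per sub-image).
    norm = stack / exposures[:, None, None]
    # S5: well-exposedness weights computed on the raw pixel values,
    # peaking at mid-grey and decaying toward under/over-exposure.
    w = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    w /= w.sum(axis=0, keepdims=True) + 1e-12
    return (w * norm).sum(axis=0)
```

For two consistent exposures of the same scene, the normalized radiance estimates agree, so the fused output matches them regardless of the weight split.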
As a preferable scheme of the panoramic deep high dynamic range imaging method based on the compound eye structure, the motion estimation adopts multi-scale pyramid optical flow combined with a forward-backward consistency check to obtain pixel-level motion vectors and their confidences. As a preferable scheme of the panoramic deep high dynamic range imaging method based on the compound eye structure, the construction of the occlusion mask comprises: determining occlusion boundaries from discontinuities in the optical flow vectors, the forward-backward disparity, and low-confidence regions; and performing motion-compensated spatio-temporal interpolation on the occluded regions, filling the occluded parts of the current frame with the corresponding sub-images of adjacent frames. As a preferable scheme of the panoramic deep high dynamic range imaging method based on the compound eye structure, sub-images captured with rolling shutters undergo timing correction before motion estimation and are aligned to a common timestamp at fusion. As a preferable scheme of the panoramic deep high dynamic range imaging method based on the compound eye structure, the HDR
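The forward-backward consistency check and the motion-compensated occlusion fill described above can be sketched as follows. This is a minimal nearest-neighbour version, assuming dense flow fields stored as (H, W, 2) arrays of (dx, dy) vectors; the function names and the threshold `tau` are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def fb_consistency(flow_fwd, flow_bwd, tau=1.0):
    """Forward-backward consistency check: a pixel is confident when the
    backward flow, sampled at the forward-warped position, cancels the
    forward flow. Returns a boolean (H, W) confidence mask."""
    h, w = flow_fwd.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Positions reached by the forward flow (nearest-neighbour, clamped).
    xt = np.clip(np.round(xs + flow_fwd[..., 0]).astype(int), 0, w - 1)
    yt = np.clip(np.round(ys + flow_fwd[..., 1]).astype(int), 0, h - 1)
    bwd_at_target = flow_bwd[yt, xt]
    # Consistent flow satisfies fwd + bwd(x + fwd) ~ 0.
    err = np.linalg.norm(flow_fwd + bwd_at_target, axis=-1)
    return err < tau

def fill_occlusions(cur, prev, flow_cur_to_prev, occ_mask):
    """Motion-compensated spatio-temporal fill: replace occluded pixels of
    the current frame with warped pixels from an adjacent frame.
    cur, prev: (H, W) images; occ_mask: True where cur is occluded."""
    h, w = cur.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xs_src = np.clip(np.round(xs + flow_cur_to_prev[..., 0]).astype(int), 0, w - 1)
    ys_src = np.clip(np.round(ys + flow_cur_to_prev[..., 1]).astype(int), 0, h - 1)
    out = cur.copy()
    out[occ_mask] = prev[ys_src, xs_src][occ_mask]
    return out
```

Low-confidence pixels from `fb_consistency` are natural candidates for the occlusion mask passed to `fill_occlusions`, mirroring the coupling of steps S3 and S4.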