CN-122027905-A - Image enhancement processing method and system for unmanned aerial vehicle monitoring video stream

CN 122027905 A

Abstract

In response to a trigger instruction indicating that the unmanned aerial vehicle has entered a hover-verification state, a current exposure line index is extracted and the target beam projection angle corresponding to that index is resolved. A scanning drive signal is generated from the linkage mapping between the current exposure line index and the target beam projection angle, and a progressive-scan exposure operation is executed according to the scanning drive signal. During the progressive-scan exposure, the spatial coverage of the physical projection beam is confined to the physical space slice corresponding to the pixel row currently in the exposure-on state, so that environmental near-field suspended particles outside that slice remain unilluminated. The optical signals reflected from the physical space slice are collected to generate enhanced video frames in which near-field scattering interference is suppressed. By scanning the onboard linear light source synchronously with the exposure line and strictly confining the beam to the physical space slice of the current exposure line, the application ensures that near-field dust is never illuminated and therefore produces no scattered light.
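The line-to-angle linkage described in the abstract can be sketched in code. This is an illustrative model only, not the patent's implementation; the linear mapping, the 1080-line resolution, and the angle range are assumptions, and a real system would use the lookup table built by the calibration procedure of claim 3.

```python
# Illustrative sketch (assumed parameters, not from the patent): map a
# rolling-shutter exposure line index to a target beam projection angle
# through a linear row-angle correspondence.

def target_projection_angle(line_index: int,
                            total_lines: int = 1080,
                            angle_min_deg: float = -20.0,
                            angle_max_deg: float = 20.0) -> float:
    """Return the beam projection angle (degrees) for an exposure line."""
    if not 0 <= line_index < total_lines:
        raise ValueError("exposure line index out of range")
    fraction = line_index / (total_lines - 1)
    return angle_min_deg + (angle_max_deg - angle_min_deg) * fraction
```

As the rolling shutter advances line by line, evaluating this function per line yields the angle sequence that keeps the beam locked to the currently exposing row.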

Inventors

  • JIAO DEQIANG
  • XU XIAOYUE
  • XU ZHIZHOU

Assignees

  • 江苏风磐科技有限公司
  • 南京瀚途睿通科技有限公司

Dates

Publication Date
2026-05-12
Application Date
2026-04-10

Claims (10)

  1. An image enhancement processing method for a monitoring video stream of an unmanned aerial vehicle, characterized by comprising the following steps: in response to a trigger instruction indicating that the unmanned aerial vehicle has entered a hover-verification state, extracting a current exposure line index and resolving a target beam projection angle corresponding to the current exposure line index; generating a scanning drive signal based on a linkage mapping relation between the current exposure line index and the target beam projection angle, and executing a progressive-scan exposure operation according to the scanning drive signal; during the progressive-scan exposure operation, confining the spatial coverage of a physical projection beam to a physical space slice corresponding to the pixel row currently in an exposure-on state, so that environmental near-field suspended particles outside the physical space slice remain unilluminated; and acquiring optical signals reflected from the physical space slice, and generating an enhanced video frame in which near-field scattering interference is suppressed.
  2. The image enhancement processing method for an unmanned aerial vehicle monitoring video stream according to claim 1, wherein extracting a current exposure line index and resolving a target beam projection angle corresponding to the current exposure line index comprises: determining the row address of the effective pixel row currently in a charge-integration state based on the rolling-shutter timing of an onboard image sensor, and taking the row address as the current exposure line index; establishing a row-angle correspondence between each pixel row of the onboard image sensor and the projection angle of an onboard linear light source according to the imaging geometric constraints of the hovering position of the unmanned aerial vehicle in the confined space; and mapping the current exposure line index to the target beam projection angle based on the row-angle correspondence.
  3. The image enhancement processing method for an unmanned aerial vehicle monitoring video stream according to claim 2, wherein establishing the row-angle correspondence between each pixel row of the onboard image sensor and the projection angle of the onboard linear light source comprises: at a calibrated hover height in the confined space, taking a wall surface of the confined space as a calibration reference surface; controlling the onboard linear light source to step its projection angle through a vertical scanning range at a calibration angle increment, the linear light spot forming a corresponding projection position on the calibration reference surface at each projection angle; acquiring, by the onboard image sensor, a calibration image frame corresponding to each projection angle, and extracting from each calibration image frame the index of the pixel row responding to the linear light-spot projection; and associating each projection angle with the corresponding responding pixel-row index, thereby establishing the row-angle correspondence.
  4. The image enhancement processing method for an unmanned aerial vehicle monitoring video stream according to claim 1, wherein generating the scanning drive signal based on the linkage mapping relation between the current exposure line index and the target beam projection angle comprises: determining the switching instant of the current exposure line index from the line-synchronization clock of the onboard image sensor; converting the target beam projection angle corresponding to the current exposure line index into a galvanometer deflection of the onboard linear light source at the switching instant of the current exposure line index; and packaging the galvanometer deflection and the switching instant of the current exposure line index into the scanning drive signal, the scanning drive signal being used to drive the galvanometer of the onboard linear light source to complete the beam deflection at the corresponding angle.
  5. The image enhancement processing method for an unmanned aerial vehicle monitoring video stream according to claim 4, wherein converting the target beam projection angle corresponding to the current exposure line index into a galvanometer deflection of the onboard linear light source comprises: determining an angle conversion ratio between the target beam projection angle and the galvanometer deflection according to the geometric magnification between the mechanical deflection angle of the galvanometer and the deflection angle of the outgoing beam of the onboard linear light source; and taking the ratio of the target beam projection angle to the angle conversion ratio as the galvanometer deflection corresponding to the current exposure line index.
  6. The image enhancement processing method for an unmanned aerial vehicle monitoring video stream according to claim 1, wherein confining the spatial coverage of the physical projection beam to the physical space slice corresponding to the pixel row currently in the exposure-on state comprises: determining the width of the illumination band formed by the physical projection beam on the wall surface of the confined space according to the beam divergence angle and the current projection direction of the onboard linear light source; determining the slice height span of the physical space slice corresponding to the current exposure line index at the wall surface of the confined space based on the field-of-view coverage corresponding to the current exposure line index; and adjusting the beam-shaping state of the onboard linear light source according to the difference between the illumination-band width and the slice height span, so that the illumination-band width converges within the slice height span; wherein the physical space slice is a flat conical spatial region extending along the optical-axis direction to the wall surface of the confined space, with the optical center of the unmanned aerial vehicle lens as its apex and the field angle corresponding to the current exposure line index as its opening angle.
  7. The image enhancement processing method for an unmanned aerial vehicle monitoring video stream according to claim 6, wherein keeping environmental near-field suspended particles outside the physical space slice unilluminated comprises: determining the spatial distribution of dust concentration formed by the rotor downwash of the unmanned aerial vehicle in the confined space according to the cross-sectional size of the confined space, the rotor speed of the unmanned aerial vehicle, and the current hover height; determining, based on the dust-concentration spatial distribution, the critical position at which the dust concentration has decayed to the point where particle-scattered light no longer interferes with imaging by the onboard image sensor, and taking the distance between the critical position and the lens of the unmanned aerial vehicle as a near-field boundary distance; and, during the progressive-scan exposure operation, constraining the effective illuminance distribution of the physical projection beam to the spatial region beyond the near-field boundary distance, so that suspended particles within the near-field boundary distance are not illuminated.
  8. The image enhancement processing method for an unmanned aerial vehicle monitoring video stream according to claim 1, further comprising a light-source-sensor synchronization calibration operation performed after responding to the trigger instruction of the unmanned aerial vehicle entering the hover-verification state and before extracting the current exposure line index: acquiring the rolling-shutter line scan period and the inter-line delay duration of the onboard image sensor; determining the angular stepping rate of the onboard linear light source from the rolling-shutter line scan period, so that the angular stepping rate is time-domain locked to the rolling-shutter line scan period; determining the projection trigger delay of the onboard linear light source from the inter-line delay duration, so as to compensate the response-time difference between exposure-line switching and beam-angle switching; and, based on the angular stepping rate and the projection trigger delay, verifying whether the time-domain deviation between the switching instant of the actual beam projection angle and the switching instant of the current exposure line index meets the synchronization accuracy requirement, the light-source-sensor synchronization calibration operation being completed when the time-domain deviation meets the synchronization accuracy requirement.
  9. The image enhancement processing method for an unmanned aerial vehicle monitoring video stream according to claim 8, wherein verifying whether the time-domain deviation between the switching instant of the beam projection angle and the switching instant of the current exposure line index meets the synchronization accuracy requirement comprises: in a calibration test mode, controlling the onboard linear light source to project a scanning beam onto the wall surface of the confined space according to the angular stepping rate and the projection trigger delay, while synchronously controlling the onboard image sensor to complete progressive-scan exposure acquisition according to the rolling-shutter line scan period; extracting the measured light-spot position received by each pixel row from the test image frames acquired by the progressive-scan exposure, comparing the measured light-spot positions with the calibrated light-spot positions recorded in the row-angle correspondence between each pixel row of the onboard image sensor and the projection angle of the onboard linear light source, and determining the light-spot position offset of each pixel row; when the light-spot position offset of every pixel row lies within the allowable offset range, determining that the time-domain deviation meets the synchronization accuracy requirement; and when the light-spot position offset of any pixel row exceeds the allowable offset range, converting the excess offset into an angular offset according to the known distance between the wall surface of the confined space and the lens of the unmanned aerial vehicle, correcting the projection trigger delay according to the angular offset, and re-executing the step of verifying whether the time-domain deviation between the switching instant of the actual beam projection angle and the switching instant of the current exposure line index meets the synchronization accuracy requirement.
  10. An image enhancement processing system for a monitoring video stream of an unmanned aerial vehicle, which employs the image enhancement processing method for an unmanned aerial vehicle monitoring video stream according to any one of claims 1 to 9, characterized by comprising: a parameter resolving module, configured to respond to a trigger instruction indicating that the unmanned aerial vehicle has entered the hover-verification state, extract a current exposure line index, and resolve the target beam projection angle corresponding to the current exposure line index; a scanning control module, configured to generate a scanning drive signal based on the linkage mapping relation between the current exposure line index and the target beam projection angle, and execute a progressive-scan exposure operation according to the scanning drive signal; an illumination control module, configured to confine, during the progressive-scan exposure operation, the spatial coverage of the physical projection beam to the physical space slice corresponding to the pixel row currently in the exposure-on state, so that environmental near-field suspended particles outside the physical space slice remain unilluminated; and an image acquisition module, configured to acquire the optical signals reflected from the physical space slice and generate an enhanced video frame in which near-field scattering interference is suppressed.
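Claims 4 and 5 above can be illustrated with a short sketch: the target projection angle is converted into a galvanometer deflection via the angle conversion ratio, then packaged with the line-switching instant into a drive signal. All names are hypothetical, and the conversion ratio of 2.0 (a plane mirror steers the reflected beam by twice its mechanical angle) and the 30 µs line period are assumed illustrative values, not figures from the application.

```python
# Illustrative sketch of claims 4-5 (assumed names and values, not the
# patent's implementation): convert a target beam projection angle into
# a galvanometer deflection and package it into a scan drive signal.
from dataclasses import dataclass

@dataclass
class ScanDriveSignal:
    switch_time_us: float        # exposure-line switching instant (claim 4)
    galvo_deflection_deg: float  # mechanical mirror deflection (claim 5)

def galvo_deflection(target_angle_deg: float,
                     conversion_ratio: float = 2.0) -> float:
    """Claim 5: deflection = projection angle / angle conversion ratio.

    A plane mirror deflects the beam by twice its mechanical angle,
    hence the assumed default ratio of 2.0.
    """
    return target_angle_deg / conversion_ratio

def make_drive_signal(line_index: int,
                      target_angle_deg: float,
                      line_period_us: float = 30.0) -> ScanDriveSignal:
    """Claim 4: pair the deflection with the line-switch instant derived
    from the line-synchronization clock (assumed 30 us line period)."""
    return ScanDriveSignal(
        switch_time_us=line_index * line_period_us,
        galvo_deflection_deg=galvo_deflection(target_angle_deg))
```

Packaging the switching instant alongside the deflection is what lets the galvanometer complete its move exactly when the sensor's exposure window advances to the next row.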

Description

Image enhancement processing method and system for unmanned aerial vehicle monitoring video stream

Technical Field

The application relates to the technical field of data processing, and in particular to an image enhancement processing method and system for an unmanned aerial vehicle monitoring video stream.

Background

Currently, to ensure the clarity of the returned video stream, unmanned aerial vehicle monitoring systems generally integrate a series of conventional image enhancement modules into the video processing pipeline, mainly to cope with outdoor illumination changes or mild weather disturbances through digital image processing algorithms: for example, electronic image stabilization to eliminate blur caused by airframe shake, or dark-channel-prior dehazing and histogram equalization to defog and contrast-stretch dim video frames in hazy weather. Such software-based image enhancement can improve the visual quality of a monitoring video stream in ordinary open outdoor scenes. However, in an extremely confined space such as a mine tunnel or a closed culvert, when the unmanned aerial vehicle switches from flight to a hover-verification state, the rotor downwash cannot diffuse within the closed tubular space and may lift accumulated ground dust, forming a high-concentration dust cloud in the near-field region in front of the lens. Once this near-field dust is illuminated by the onboard fill light, it scatters strongly and forms a dense white light curtain, instantly saturating the onboard image sensor and producing an all-white, useless picture.
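For context, the conventional histogram-equalization contrast stretch the background refers to can be sketched as below. This is a generic textbook sketch, not code from the application; the background's point is that once the sensor is physically saturated to a flat all-white frame, no such intensity remapping can recover the lost wall texture.

```python
# Generic sketch of histogram equalization, the class of software
# enhancement the background says fails under physical saturation.
import numpy as np

def equalize_histogram(gray: np.ndarray) -> np.ndarray:
    """Equalize an 8-bit grayscale image via its cumulative histogram."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    denom = cdf[-1] - cdf_min
    if denom == 0:
        # Flat frame (e.g. a fully saturated all-white image): there is
        # no intensity spread left for any remapping to restore.
        return gray.copy()
    lut = np.clip(np.round((cdf - cdf_min) * 255.0 / denom),
                  0, 255).astype(np.uint8)
    return lut[gray]
```

The saturated-frame branch makes the limitation concrete: when every pixel reads 255, the histogram collapses to a single bin and equalization returns the frame unchanged.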
Therefore, how to effectively restore the texture information of the occluded wall surface when hover-raised dust physically saturates the onboard image sensor of the unmanned aerial vehicle, a condition under which conventional software algorithms cannot help, and thereby achieve image enhancement of the unmanned aerial vehicle monitoring video stream, is an urgent technical problem to be solved.

Disclosure of Invention

The present application has been made in view of the above problems. Accordingly, the application provides an image enhancement processing method and an image enhancement processing system for an unmanned aerial vehicle monitoring video stream, which can solve the problems identified in the background. To solve these technical problems, the application provides the following technical solution. In a first aspect, the application provides an image enhancement processing method for a monitoring video stream of an unmanned aerial vehicle, comprising: in response to a trigger instruction indicating that the unmanned aerial vehicle has entered a hover-verification state, extracting a current exposure line index and resolving a target beam projection angle corresponding to the current exposure line index; generating a scanning drive signal based on a linkage mapping relation between the current exposure line index and the target beam projection angle, and executing a progressive-scan exposure operation according to the scanning drive signal; during the progressive-scan exposure operation, confining the spatial coverage of a physical projection beam to a physical space slice corresponding to the pixel row currently in an exposure-on state, so that environmental near-field suspended particles outside the physical space slice remain unilluminated; and acquiring optical signals reflected from the physical space slice, and generating an enhanced video frame in which near-field scattering interference is suppressed. Preferably, extracting the current exposure line index and resolving the target beam projection angle corresponding to the current exposure line index includes: determining the row address of the effective pixel row currently in a charge-integration state based on the rolling-shutter timing of an onboard image sensor, and taking the row address as the current exposure line index; establishing a row-angle correspondence between each pixel row of the onboard image sensor and the projection angle of an onboard linear light source according to the imaging geometric constraints of the hovering position of the unmanned aerial vehicle in the confined space; and mapping the current exposure line index to the target beam projection angle based on the row-angle correspondence. Preferably, establishing the row-angle correspondence between each pixel row of the onboard image sensor and the projection angle of the onboard linear light source includes: at the calibrated hover height in the limited