
CN-121981919-A - Intelligent AI aerial video rendering method based on scenic spot unmanned aerial vehicle

CN 121981919 A

Abstract

The invention relates to the technical field of video rendering, and in particular to an intelligent AI aerial-video rendering method based on a scenic-spot unmanned aerial vehicle. The method comprises: obtaining aerial video stream data, gimbal stabilization log data, and shooting-scene metadata from the unmanned aerial vehicle; evaluating the disturbance degree of each frame based on the video stream data and the stabilization log data; evaluating the scene repair intensity based on the shooting-scene metadata; and jointly analyzing the frame-disturbance evaluation result and the scene-repair-intensity evaluation result to obtain an AI intelligent rendering-repair strategy for the current video segment. The invention addresses the problems in the prior art of a single evaluation dimension, lack of scene perception, and rigid repair strategies when handling the coupled visual distortion produced jointly by gimbal stabilization, natural-medium motion, and complex depth-of-field structure.

Inventors

  • WANG XIAOCHUAN

Assignees

  • 太仓市数字经济科技发展有限公司

Dates

Publication Date
2026-05-05
Application Date
2026-01-23

Claims (9)

  1. An intelligent AI aerial-video rendering method based on a scenic-spot unmanned aerial vehicle, characterized by comprising the following steps: S1, acquiring aerial video stream data, gimbal stabilization log data, and shooting-scene metadata of an unmanned aerial vehicle; S2, evaluating the disturbance degree of each frame based on the video stream data and the stabilization log data; S3, performing scene-repair-intensity evaluation based on the shooting-scene metadata; and S4, jointly analyzing the frame-disturbance evaluation result and the scene-repair-intensity evaluation result to obtain an AI intelligent rendering-repair strategy for the current video segment.
  2. The intelligent AI aerial-video rendering method based on the scenic-spot unmanned aerial vehicle of claim 1, wherein S1 comprises: S11, acquiring unmanned-aerial-vehicle aerial video stream image data through an onboard multispectral vision sensor, and then executing two real-time processes in parallel on each frame of the video stream through an onboard edge-computing unit to obtain a natural-medium region segmentation mask and a pixel-level depth map; S12, acquiring gimbal stabilization log data through the internal bus of the flight-control system, wherein the gimbal stabilization log data comprise a timestamp, a micro-radian-level compensation-angle sequence for the three gimbal axes, and the compensation-action frequency; S13, acquiring shooting-scene metadata through an onboard environment-sensing unit and a preset geographic-information database, wherein the shooting-scene metadata comprise environmental wind-speed data, a depth-of-field span identifier, and a scene-type semantic tag, the environmental wind-speed data including real-time ambient wind speed and wind-direction data.
  3. The intelligent AI aerial-video rendering method based on the scenic-spot unmanned aerial vehicle of claim 2, wherein S2 comprises: S21, obtaining a gimbal-injected disturbance evaluation result from the compensation-angle sequence and the compensation-action frequency in the gimbal stabilization log data; S22, obtaining a medium-motion distortion evaluation result from the natural-medium region segmentation mask and the optical-flow field data in the video stream data; S23, obtaining a depth-of-field layer artifact evaluation result from the pixel-level depth map and the multi-frame displacement data in the video stream data; and S24, weighting and summing the gimbal-injected disturbance evaluation result, the medium-motion distortion evaluation result, and the depth-of-field layer artifact evaluation result to obtain the frame-disturbance evaluation result.
  4. The intelligent AI aerial-video rendering method based on the scenic-spot unmanned aerial vehicle of claim 3, wherein S21 specifically comprises performing gimbal-injected disturbance evaluation according to the compensation-angle sequence in the gimbal stabilization log data: a time-frequency transform is applied to the compensation-angle sequence, the spectral energy ratio in the 5 Hz to 50 Hz band is calculated, and the injection intensity of high-frequency unnatural mechanical jitter is thereby quantified.
  5. The intelligent AI aerial-video rendering method based on the scenic-spot unmanned aerial vehicle of claim 4, wherein S22 specifically comprises performing medium-motion distortion evaluation according to the natural-medium region segmentation mask and the optical-flow field data in the video stream data: the optical-flow vectors covered by the natural-medium region segmentation mask are extracted, the information entropy of the optical-flow direction distribution is calculated, and the degree of disorder of the natural-medium motion pattern is thereby quantified.
  6. The intelligent AI aerial-video rendering method based on the scenic-spot unmanned aerial vehicle of claim 5, wherein S23 further comprises: S231, performing depth-of-field layer division according to the pixel-level depth map in the video stream data, dividing the frame by depth value into layers of different levels, namely a near-view layer, a middle-view layer, and a distant-view layer; and S232, calculating the modulus variance of the displacement vectors of the pixels in each depth-of-field layer across consecutive frames, taking the maximum of the modulus variances of the three layers, and normalizing it to obtain the depth-of-field layer artifact evaluation result.
  7. The intelligent AI aerial-video rendering method based on the scenic-spot unmanned aerial vehicle of claim 6, wherein S3 comprises performing scene-repair-intensity evaluation according to the environmental wind-speed data, the depth-of-field span identifier, and the scene-type semantic tag in the shooting-scene metadata, the scene-repair-intensity evaluation result being obtained as follows: the real-time ambient wind speed v is mapped to an environmental disturbance factor f_env via a mapping parameterized by a slope parameter k and a reference wind speed v_0; according to the scene's typical depth-of-field span identifier, a predefined depth-span-factor mapping table is queried to map the identifier to a depth-span factor f_depth; according to the scene-type semantic tag, a predefined medium-mobility-factor mapping table is queried to map the tag to a medium-mobility factor f_med; the environmental disturbance factor f_env, the depth-span factor f_depth, and the medium-mobility factor f_med are then weighted and summed to obtain the scene-repair-intensity evaluation result S = w1·f_env + w2·f_depth + w3·f_med, where w1, w2, and w3 are preset weight coefficients and w1 + w2 + w3 = 1.
  8. The intelligent AI aerial-video rendering method based on the scenic-spot unmanned aerial vehicle of claim 7, wherein S4 comprises: S41, substituting the frame-disturbance evaluation result D and the scene-repair-intensity evaluation result S into a linear decision function and obtaining the repair-intensity demand value R by weighted summation, R = a·D + b·S, where a and b are preset weight coefficients.
  9. The intelligent AI aerial-video rendering method based on the scenic-spot unmanned aerial vehicle of claim 8, wherein S4 further comprises: S42, comparing the repair-intensity demand value R with preset rendering-repair-intensity thresholds T1 and T2 (T1 < T2) and executing the corresponding repair strategy, specifically: when R ≤ T1, a light repair strategy dominated by temporal denoising is adopted; when T1 < R ≤ T2, a moderate repair strategy combined with motion compensation is adopted; and when R > T2, a deep repair strategy based on multi-frame temporal reconstruction and scene-flow estimation is activated.
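The per-frame scoring pipeline of claims 3 through 6 can be sketched compactly. Only the 5–50 Hz band, the entropy of the optical-flow direction distribution, and the per-layer displacement variance come from the claims; every function name, the histogram bin count, the v/(1+v) normalization, and the weights are illustrative assumptions, not taken from the patent. A minimal pure-Python sketch:

```python
import cmath
import math
from collections import Counter

def gimbal_injection_score(angles, fs):
    # Claim 4: spectral energy ratio of the compensation-angle sequence in
    # the 5-50 Hz band. Direct DFT summation (fine for short windows).
    n = len(angles)
    mean = sum(angles) / n
    x = [a - mean for a in angles]
    band_e = total_e = 0.0
    for k in range(1, n // 2 + 1):
        coef = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        e = abs(coef) ** 2
        total_e += e
        if 5.0 <= k * fs / n <= 50.0:
            band_e += e
    return band_e / total_e if total_e > 0 else 0.0

def medium_distortion_score(flow_vectors, bins=16):
    # Claim 5: normalised entropy of the optical-flow direction histogram
    # inside the natural-medium mask; 0 = orderly motion, 1 = fully chaotic.
    counts = Counter(
        min(int((math.atan2(v, u) + math.pi) / (2 * math.pi) * bins), bins - 1)
        for u, v in flow_vectors
    )
    n = sum(counts.values())
    ent = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return ent / math.log2(bins)

def depth_artifact_score(samples, near_t, far_t):
    # Claim 6: samples are (depth, displacement_modulus) pairs. Variance of
    # the modulus per depth-of-field layer, maximum over the three layers,
    # squashed into [0, 1) -- the squashing v / (1 + v) is an illustrative
    # normalisation choice.
    layers = {0: [], 1: [], 2: []}
    for depth, mag in samples:
        layers[0 if depth < near_t else (1 if depth < far_t else 2)].append(mag)
    def var(xs):
        mu = sum(xs) / len(xs)
        return sum((x - mu) ** 2 for x in xs) / len(xs)
    v = max(var(xs) for xs in layers.values() if xs)
    return v / (1.0 + v)

def disturbance_degree(g, m, d, w=(0.4, 0.3, 0.3)):
    # Claim 3: weighted sum of the three sub-scores; weights are placeholders.
    return w[0] * g + w[1] * m + w[2] * d
```

Since each sub-score is normalized to [0, 1] and the weights sum to 1, the combined disturbance degree also stays in [0, 1], which simplifies the later threshold comparison.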

Description

Intelligent AI aerial video rendering method based on scenic spot unmanned aerial vehicle

Technical Field

The invention relates to the technical field of video rendering, and in particular to an intelligent AI aerial-video rendering method based on a scenic-spot unmanned aerial vehicle.

Background

Traditional intelligent rendering methods for unmanned-aerial-vehicle aerial video rely mainly on general-purpose image-processing algorithms and show obvious shortcomings when dealing with the complex natural scenes of scenic spots. The prior art generally applies global de-shake or enhancement directly to the original video stream, ignoring the high-frequency unnatural jitter injected by the gimbal's active stabilization system when it compensates for flight-attitude disturbances. This mechanical compensation couples with the inherent motion of natural media in the frame (such as clouds, forests, and water), so that the de-shaken footage exhibits motion-law distortion, blurring of dynamic regions, or loss of texture detail. In addition, the prior art fails to jointly analyze environmental disturbance intensity, medium-flow characteristics, and depth-of-field structure, so the repair strategy is uniform and lacks adaptation: for example, a flowing cloud-sea scene in strong wind may lose the natural drift of the cloud layer, while high-frequency vibration may remain on a calm lake surface because the repair intensity is insufficient. Moreover, purely vision-based analysis cannot quantify the actual disturbance that the high-frequency micro-compensation actions recorded in the gimbal stabilization log inject into the frame, and therefore cannot accurately assess the degree to which the stabilization action itself acts as a new vibration source. In short, when handling the coupled visual distortion caused jointly by gimbal stabilization, natural-medium motion, and complex depth-of-field structure, the prior art suffers from a single evaluation dimension, lack of scene perception, and rigid repair strategies, and struggles to achieve high-quality intelligent repair of multi-level, multi-type disturbances in scenic-spot aerial footage while preserving the visual realism of natural motion.

Disclosure of Invention

This section is intended to outline some aspects of embodiments of the application and to briefly introduce some preferred embodiments. Some simplifications or omissions may be made in this section, as well as in the description and the title of the application, and shall not be used to limit the scope of the application. Aiming at the problems of a single evaluation dimension, lack of scene perception, and rigid repair strategies in the prior art when handling the coupled visual distortion produced jointly by gimbal stabilization, natural-medium motion, and complex depth-of-field structure, the invention provides an intelligent AI aerial-video rendering method based on a scenic-spot unmanned aerial vehicle. To achieve this purpose, the technical scheme of the method comprises the following steps: S1, acquiring aerial video stream data, gimbal stabilization log data, and shooting-scene metadata of an unmanned aerial vehicle; S2, evaluating the disturbance degree of each frame based on the video stream data and the stabilization log data; S3, performing scene-repair-intensity evaluation based on the shooting-scene metadata; and S4, jointly analyzing the frame-disturbance evaluation result and the scene-repair-intensity evaluation result to obtain an AI intelligent rendering-repair strategy for the current video segment.
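The decision step S4 described above reduces to a linear score followed by a threshold ladder. A minimal sketch, in which the two weights, the two thresholds, and the strategy labels are all illustrative placeholders (the patent only states that they are preset):

```python
def repair_strategy(d_score, s_score, alpha=0.6, beta=0.4, t1=0.35, t2=0.7):
    # Linear decision function of S4: combine the frame-disturbance score
    # and the scene-repair-intensity score into a repair-intensity demand
    # value, then pick a strategy by threshold comparison. alpha, beta,
    # t1, and t2 are hypothetical values, not taken from the patent.
    r = alpha * d_score + beta * s_score
    if r <= t1:
        return r, "light: temporal denoising"
    if r <= t2:
        return r, "moderate: temporal denoising + motion compensation"
    return r, "deep: multi-frame temporal reconstruction + scene-flow estimation"
```

With both input scores in [0, 1] and alpha + beta = 1, the demand value also lies in [0, 1], so the two thresholds partition the score range into the three strategy tiers.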
Preferably, S1 comprises: S11, acquiring the unmanned-aerial-vehicle aerial video stream image data through an onboard multispectral vision sensor, and then executing two real-time processes in parallel on each frame of the video stream through an onboard edge-computing unit to obtain a natural-medium region segmentation mask and a pixel-level depth map; S12, acquiring the gimbal stabilization log data through the internal bus of the flight-control system, wherein the gimbal stabilization log data comprise a timestamp, a micro-radian-level compensation-angle sequence for the three gimbal axes, and the compensation-action frequency; S13, acquiring the shooting-scene metadata through an onboard environment-sensing unit and a preset geographic-information database, wherein the shooting-scene metadata comprise environmental wind-speed data, a depth-of-field span identifier, and a scene-type semantic tag; the environmental wind-speed data include real-time ambient wind speed and wind-direction data.
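The metadata gathered in S13 feeds the scene-repair-intensity evaluation of S3. A minimal sketch under stated assumptions: the patent names a slope parameter and a reference wind speed for the wind-speed mapping without reproducing the formula here, so a logistic curve is used as one consistent choice; the two lookup tables and all numeric values are hypothetical placeholders:

```python
import math

# Hypothetical mapping tables -- the patent only says they are predefined.
DEPTH_SPAN_FACTORS = {"shallow": 0.2, "medium": 0.5, "wide": 0.9}
MEDIUM_MOBILITY_FACTORS = {"calm_lake": 0.2, "forest": 0.5, "cloud_sea": 0.9}

def environmental_disturbance_factor(wind_speed, slope=0.5, v_ref=4.0):
    # Map real-time ambient wind speed to a factor in (0, 1); slope and
    # v_ref stand in for the patent's slope parameter and reference wind
    # speed. The logistic form is an assumption, not the patent's formula.
    return 1.0 / (1.0 + math.exp(-slope * (wind_speed - v_ref)))

def scene_repair_intensity(wind_speed, span_id, scene_tag, w=(0.4, 0.3, 0.3)):
    # Weighted sum of the three factors; the weights sum to 1, so the
    # result stays in (0, 1).
    f_env = environmental_disturbance_factor(wind_speed)
    return (w[0] * f_env
            + w[1] * DEPTH_SPAN_FACTORS[span_id]
            + w[2] * MEDIUM_MOBILITY_FACTORS[scene_tag])
```

Because each factor is monotone in its input, a windy, wide-span, highly mobile scene (e.g. a cloud sea) scores strictly higher than a calm, shallow one, which is exactly the adaptation the method aims for.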