CN-121985230-A - Unmanned aerial vehicle image real-time stitching and transmission method and system

CN121985230A

Abstract

The application provides a method and system for real-time stitching and transmission of unmanned aerial vehicle (UAV) images. The method comprises: acquiring flight attitude data and terrain elevation data from an airborne multi-mode sensor and calculating projection-difference compensation parameters from them; geometrically correcting the initial sequence images using those parameters; fusing the corrected sequence images with a preset stitching template to generate a panoramic image; synchronously monitoring the UAV's wireless link quality and dynamically adjusting the encoding strategy and transmission priority of the panoramic image accordingly; preferentially packaging image data blocks in the resulting target encoded data stream that are above a preset threshold into transmission data packets; and streaming those packets to a ground station over a wireless network. The technical scheme not only improves the reliability and accuracy of inspection over undulating terrain, but also optimizes the whole pipeline from image acquisition to data transmission.

Inventors

  • WANG KAI

Assignees

  • 中密通(北京)管理咨询有限公司 (Zhongmitong (Beijing) Management Consulting Co., Ltd.)

Dates

Publication Date
2026-05-05
Application Date
2026-01-30

Claims (10)

  1. An unmanned aerial vehicle image real-time stitching and transmission method, characterized by comprising the following steps: acquiring flight attitude data and terrain elevation data provided by an airborne multi-mode sensor while the unmanned aerial vehicle flies along a linear infrastructure and acquires initial sequence images; calculating a projection-difference compensation parameter from the flight attitude data and the terrain elevation data; geometrically correcting the initial sequence images using the projection-difference compensation parameter to obtain corrected sequence images; fusing the corrected sequence images with a preset stitching template according to the flight attitude data to generate a panoramic image; synchronously monitoring the wireless link quality of the unmanned aerial vehicle while generating the panoramic image, and dynamically adjusting the encoding strategy and transmission priority of the panoramic image based on the wireless link quality to generate a target encoded data stream; and preferentially packaging image data blocks in the target encoded data stream that are above a preset threshold into transmission data packets, and streaming the transmission data packets to a ground station in real time over a wireless network.
  2. The method of claim 1, wherein acquiring the flight attitude data and terrain elevation data provided by the airborne multi-mode sensor while the unmanned aerial vehicle flies along the linear infrastructure and acquires the initial sequence images comprises: starting an attitude sensing module and a terrain sensing module simultaneously while the unmanned aerial vehicle flies along the linear infrastructure and acquires the initial sequence images; recording, by the attitude sensing module, angle change information and linear motion information of the unmanned aerial vehicle in three-dimensional space to generate the flight attitude data; actively transmitting detection signals toward the ground surface directly below the unmanned aerial vehicle through the terrain sensing module and receiving feedback signals returned from the ground surface; and determining the instantaneous distance between the unmanned aerial vehicle and ground surface feature points based on the detection signals and the feedback signals to generate the terrain elevation data.
  3. The method of claim 1, wherein calculating a projection-difference compensation parameter from the flight attitude data and the terrain elevation data comprises: determining a viewing-angle variation based on the angle change information and the linear motion information in the flight attitude data; constructing a virtual projection plane from the surface relief reflected by the terrain elevation data; applying the viewing-angle variation to the virtual projection plane and deriving the deformation trend of the overlapping areas of the initial sequence images on the virtual projection plane; establishing, for the deformation trend, a compensation relation from the viewing-angle variation to the image pixel-position offset; and calculating geometric transformation parameters from the compensation relation and defining them as the projection-difference compensation parameter.
  4. The method of claim 1, wherein geometrically correcting the initial sequence images using the projection-difference compensation parameter to obtain corrected sequence images comprises: determining the target positions of pixels of the initial sequence images on a virtual projection plane based on the geometric transformation rule defined by the projection-difference compensation parameter; generating a position map from the correspondence between the target positions and the original positions of the pixels; and rearranging the pixels of the initial sequence images according to the positions indicated by the position map to generate the corrected sequence images.
  5. The method of claim 1, wherein fusing the corrected sequence images with a preset stitching template according to the flight attitude data to generate a panoramic image comprises: creating, according to the linear motion information in the flight attitude data, a strip-shaped plane following the course of the linear infrastructure as the preset stitching template; placing each corrected sequence image at its corresponding position on the preset stitching template and determining the part where it overlaps the existing image area on the preset stitching template; searching for an optimal fusion path based on the distribution characteristics of the pixel attribute values of the corrected sequence images in the overlapping part; mixing, along the optimal fusion path, the pixel attribute values of the corrected sequence images with those of the existing image area on the preset stitching template; and outputting the panoramic image after all corrected sequence images have been placed on the preset stitching template and their pixel attribute values mixed.
  6. The method of claim 1, wherein synchronously monitoring the wireless link quality of the unmanned aerial vehicle while generating the panoramic image, and dynamically adjusting the encoding strategy and transmission priority of the panoramic image based on the wireless link quality to generate a target encoded data stream, comprises: continuously measuring the signal strength and data transmission error rate of the wireless communication link from the unmanned aerial vehicle to the ground station while the panoramic image is generated, and generating a wireless link quality evaluation parameter; dividing the panoramic image into a plurality of image data blocks, each corresponding to a different section of the linear infrastructure; selecting a compression ratio for the panoramic image according to the wireless link quality evaluation parameter while assigning different transmission priorities to the image data blocks; and compressing the panoramic image as a whole at the selected compression ratio, then ordering and marking the compressed image data blocks according to their transmission priorities to generate the target encoded data stream.
  7. The method of claim 1, wherein preferentially packaging image data blocks in the target encoded data stream that are above a preset threshold into transmission data packets, and streaming the transmission data packets to a ground station in real time over a wireless network, comprises: reading the transmission priority marks in the target encoded data stream; comparing the value of each transmission priority mark with the preset threshold and identifying the image data blocks above the preset threshold, which are marked as first-priority data blocks; allocating priority transmission queue positions to the first-priority data blocks, while marking the remaining image data blocks in the target encoded data stream as second-priority data blocks and allocating them ordinary transmission queue positions; packaging the first-priority and second-priority data blocks, in the order of the priority and ordinary transmission queue positions, into transmission data packets carrying different transmission identifiers; and sending the transmission data packets sequentially to the ground station in a streaming mode over the wireless network according to the sending order of the transmission identifiers.
  8. An unmanned aerial vehicle image real-time stitching and transmission system, characterized by comprising: an acquisition module for acquiring flight attitude data and terrain elevation data provided by an airborne multi-mode sensor while the unmanned aerial vehicle flies along a linear infrastructure and acquires initial sequence images; a calculation module for calculating a projection-difference compensation parameter from the flight attitude data and the terrain elevation data; a correction module for geometrically correcting the initial sequence images using the projection-difference compensation parameter to obtain corrected sequence images; a fusion module for fusing the corrected sequence images with a preset stitching template according to the flight attitude data to generate a panoramic image; a generation module for synchronously monitoring the wireless link quality of the unmanned aerial vehicle while the panoramic image is generated, and dynamically adjusting the encoding strategy and transmission priority of the panoramic image based on the wireless link quality to generate a target encoded data stream; and a transmission module for preferentially packaging image data blocks in the target encoded data stream that are above a preset threshold into transmission data packets, and streaming the transmission data packets to a ground station in real time over a wireless network.
  9. A computing device, characterized by comprising a processing component and a storage component, wherein the storage component stores one or more computer instructions, and the one or more computer instructions are invoked and executed by the processing component to implement the unmanned aerial vehicle image real-time stitching and transmission method according to any one of claims 1-7.
  10. A computer storage medium storing a computer program, wherein when the computer program is executed by a computer, the unmanned aerial vehicle image real-time stitching and transmission method according to any one of claims 1-7 is implemented.
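Claims 3 and 4 together describe deriving a pixel-position offset from the attitude and elevation data and then rearranging pixels according to the resulting position map. The rearrangement step can be sketched as follows, assuming the compensation parameters have already been discretized into a per-pixel (dy, dx) offset map; that representation, and the function and parameter names, are illustrative assumptions, not details fixed by the claims:

```python
def correct_frame(frame, offset_map):
    """Remap pixels of one frame to their target positions.

    frame: 2-D list of pixel values.
    offset_map: 2-D list of (dy, dx) shifts, a hypothetical discrete
    form of the position map in claim 4. Pixels shifted outside the
    frame are dropped; unfilled target cells remain None.
    """
    h, w = len(frame), len(frame[0])
    corrected = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dy, dx = offset_map[y][x]
            ty, tx = y + dy, x + dx
            if 0 <= ty < h and 0 <= tx < w:
                # Place the source pixel at its corrected position.
                corrected[ty][tx] = frame[y][x]
    return corrected
```

With an all-zero offset map the frame passes through unchanged, which gives a quick sanity check; a production implementation would use inverse mapping with interpolation rather than forward pixel scattering.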
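Claim 6 maps a link-quality estimate (signal strength plus error rate) to a compression ratio for the whole panorama. A minimal sketch of such a mapping is below; the dBm thresholds and the ratio values are illustrative assumptions chosen for the example, not values from the patent:

```python
def select_compression_ratio(signal_strength_dbm, error_rate):
    """Pick a compression ratio from a wireless-link quality estimate.

    Stronger links tolerate lighter compression; a degraded link
    triggers aggressive compression so the stream keeps flowing.
    All thresholds here are hypothetical.
    """
    if signal_strength_dbm > -70 and error_rate < 0.01:
        return 2    # strong link: light compression, best image quality
    if signal_strength_dbm > -85 and error_rate < 0.05:
        return 5    # moderate link
    return 10       # weak link: aggressive compression
```

A real system would likely smooth the measurements over a window before switching ratios, to avoid oscillating between encoder settings on a noisy link.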

Description

Unmanned aerial vehicle image real-time stitching and transmission method and system

Technical Field

The application relates to the technical field of image communication, and in particular to a method and system for real-time stitching and transmission of unmanned aerial vehicle images.

Background

In automated inspection of linear infrastructure such as power lines and petroleum pipelines, operators urgently need the unmanned aerial vehicle to stitch the acquired sequence images into a complete panorama in real time during flight and return it to the ground station immediately, so that ground personnel can grasp the overall state of the facility synchronously and respond quickly to abnormal conditions. In the prior art, the acquired sequence images undergo rapid geometric correction and preliminary stitching based on pose data provided by a high-precision global positioning system and an inertial measurement unit; meanwhile, the system monitors the signal quality of the wireless link, uniformly adjusts the compression rate of the entire stitched image based on a simple signal-strength threshold, and then transmits the data stream.
However, this scheme has inherent defects. The image correction process relies mainly on the pose data of the unmanned aerial vehicle and does not fully consider the influence of elevation changes of the complex terrain below the patrol line on the image projection relationship, so panoramic images stitched over undulating areas are prone to misalignment and distortion and lack accuracy. The scheme is also coarse in its data transmission optimization: the entire image stream receives uniform compression, and when bandwidth is limited the system cannot intelligently identify and preferentially preserve the image quality of key facility parts such as power towers and valves, so important information may be lost, making it difficult to meet the stringent detail requirements of accurate inspection.

Disclosure of the Invention

The application provides a method and system for real-time stitching and transmission of unmanned aerial vehicle images, which address two problems of the prior art: panoramic images stitched over undulating regions are misaligned because terrain elevation changes are not considered, and the image quality of key facility parts cannot be intelligently guaranteed during transmission when bandwidth is limited.
In a first aspect, the present application provides a method for real-time stitching and transmission of unmanned aerial vehicle images, including: acquiring flight attitude data and terrain elevation data provided by an airborne multi-mode sensor while the unmanned aerial vehicle flies along a linear infrastructure and acquires initial sequence images; calculating a projection-difference compensation parameter from the flight attitude data and the terrain elevation data; geometrically correcting the initial sequence images using the projection-difference compensation parameter to obtain corrected sequence images; fusing the corrected sequence images with a preset stitching template according to the flight attitude data to generate a panoramic image; synchronously monitoring the wireless link quality of the unmanned aerial vehicle while generating the panoramic image, and dynamically adjusting the encoding strategy and transmission priority of the panoramic image based on the wireless link quality to generate a target encoded data stream; and preferentially packaging image data blocks in the target encoded data stream that are above a preset threshold into transmission data packets, and streaming the transmission data packets to a ground station in real time over a wireless network.
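The final step of the method, elaborated in claim 7, reads the priority mark of each coded block, sends blocks above the threshold first, and tags every packet with a transmission identifier that fixes the send order. A minimal sketch of that packaging logic is below; the tuple layout and packet fields are illustrative assumptions, not structures defined by the patent:

```python
def package_blocks(blocks, threshold):
    """Split coded image blocks into ordered transmission packets.

    blocks: list of (priority_mark, payload) tuples. Blocks whose mark
    exceeds `threshold` go into the priority queue (the first-priority
    data blocks of claim 7); the rest follow in the ordinary queue.
    """
    first = [b for b in blocks if b[0] > threshold]
    second = [b for b in blocks if b[0] <= threshold]
    packets = []
    for seq, (mark, payload) in enumerate(first + second):
        packets.append({
            "id": seq,  # transmission identifier: position in send order
            "priority": "first" if mark > threshold else "second",
            "payload": payload,
        })
    return packets
```

For example, with blocks marked 3, 9, 1, 8 and a threshold of 5, the payloads of the blocks marked 9 and 8 are packaged and sent before the others, each packet carrying its sequential transmission identifier.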
Optionally, acquiring the flight attitude data and terrain elevation data provided by the airborne multi-mode sensor while the unmanned aerial vehicle flies along the linear infrastructure and acquires the initial sequence images includes: starting an attitude sensing module and a terrain sensing module simultaneously while the unmanned aerial vehicle flies along the linear infrastructure and acquires the initial sequence images; recording, by the attitude sensing module, angle change information and linear motion information of the unmanned aerial vehicle in three-dimensional space to generate the flight attitude data; actively transmitting detection signals toward the ground surface directly below the unmanned aerial vehicle through the terrain sensing module and receiving feedback signals returned from the ground surface; and determining the instantaneous distance between the unmanned aerial vehicle and ground surface feature points based on the detection signals and the feedback signals to generate the terrain elevation data.
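The terrain sensing step above is a round-trip ranging measurement: the instantaneous distance follows from the time between emitting the detection signal and receiving its echo. A one-line sketch, assuming an electromagnetic (lidar/radar-style) sensor so the propagation speed is the speed of light; for an ultrasonic sensor the speed of sound would replace it:

```python
def instantaneous_distance(t_emit, t_receive, c=299_792_458.0):
    """Distance from round-trip time of flight: d = c * (t_rx - t_tx) / 2.

    The factor of 2 accounts for the signal travelling down to the
    ground surface and back. `c` defaults to the speed of light (m/s).
    """
    return c * (t_receive - t_emit) / 2.0
```

For instance, a 2 microsecond round trip corresponds to roughly 300 m of clearance between the vehicle and the surface feature point.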