CN-121980516-A - Illegal unmanned aerial vehicle tracing method and system based on fusion of visible light, thermal imaging and radio frequency fingerprint

CN121980516A

Abstract

The invention discloses an illegal unmanned aerial vehicle tracing method and system based on fusion of visible light, thermal imaging and radio frequency fingerprints. The method comprises the steps of synchronously collecting visible light, thermal infrared images and radio frequency signals of a target, extracting appearance, thermal radiation and radio frequency fingerprint characteristics in parallel, dynamically weighting and fusing multi-mode characteristics through a self-adaptive cross-mode attention fusion network (ACMAF-Net), completing the identification of a model number of the unmanned aerial vehicle and the association of the model number with an individual, reconstructing three-dimensional tracks of the unmanned aerial vehicle by utilizing multi-sensor observation data, and deducing flying points and possible positions of operators by combining track characteristics and geographic information. The system comprises subsystems of multi-mode sensing, data processing and fusion, traceability analysis, display control storage and the like. The invention overcomes the limitation of a single technology through the complementary fusion of three modes, realizes all-weather and high-precision detection and identification and effective traceability of an illegal unmanned aerial vehicle, and remarkably improves the low-altitude security and protection capability.

Inventors

  • Lu Chao
  • Jiang Yuan
  • Lu Ruogu
  • Han Tuanjun
  • Yang Chuanghua
  • Tian Lixuan
  • Han Bing
  • Chen Gengwu

Assignees

  • Shaanxi University of Technology (陕西理工大学)

Dates

Publication Date
2026-05-05
Application Date
2026-01-31

Claims (9)

  1. An illegal unmanned aerial vehicle tracing method based on fusion of visible light, thermal imaging and radio frequency fingerprints, characterized by comprising the following steps: S1, synchronously collecting a visible light image, a thermal infrared image and a radio frequency signal of a target area; S2, extracting, from the visible light image, the thermal infrared image and the radio frequency signal respectively, a visual appearance feature vector, a thermal radiation distribution feature vector and a radio frequency fingerprint feature vector; S3, inputting the three feature vectors into an adaptive cross-modal attention fusion network to obtain a fused feature, and completing detection and identity recognition of the unmanned aerial vehicle based on the fused feature; S4, reconstructing a three-dimensional flight track of the unmanned aerial vehicle based on multi-sensor observation data; S5, inferring the launch point of the unmanned aerial vehicle and the probable position area of the operator based on the three-dimensional flight track combined with geographic information system data; and S6, generating a multi-modal comprehensive traceability report containing unmanned aerial vehicle body information, the flight track, the launch point and the operator area.
  2. The method according to claim 1, wherein in step S2, extracting the radio frequency fingerprint feature vector comprises: after preprocessing the radio frequency signal, extracting transient features, including a time constant obtained by exponential fitting of the envelope rising edge; extracting steady-state features, including calculating the error vector magnitude (EVM); extracting deep features with a one-dimensional convolutional neural network; and fusing and dimensionality-reducing these feature types to form the radio frequency fingerprint feature vector.
  3. The method according to claim 1, wherein in step S3, the adaptive cross-modal attention fusion network computes the correlation among the features of the different modalities through a cross-attention mechanism, dynamically generates fusion weights from a weight formula according to real-time environment information and the recognition confidence of each modality, and performs weighted fusion.
  4. The method according to claim 1, wherein step S4 comprises: using the time-difference-of-arrival (TDOA) observations obtained by a plurality of monitoring nodes to establish a geometric positioning model from the range-difference equations and solve for the position of the unmanned aerial vehicle; and applying an extended Kalman filter (EKF) whose state-prediction and measurement-update recursion continuously estimates and smooths the position and velocity of the unmanned aerial vehicle, thereby reconstructing the three-dimensional flight track.
  5. The method according to claim 1, wherein in step S5, inferring the launch point specifically comprises: starting from the initial point of the reconstructed track, searching, within flat-ground areas constrained by the digital elevation model (DEM), for the most probable point consistent with the initial motion characteristics of the track; and inferring the operator position area specifically comprises: establishing a maximum-likelihood model with the operator position as a hidden variable, the model fusing the flight-track pattern with terrain line-of-sight occlusion constraints, and searching the potential line-of-sight regions for the area that maximizes the conditional probability of the observed track.
  6. An illegal unmanned aerial vehicle traceability system for implementing the method of any one of claims 1-5, comprising: a multi-modal sensing unit, comprising a visible light camera, a thermal imaging camera and software-defined radio (SDR) equipment, for synchronously acquiring data; an edge computing unit, connected to the multi-modal sensing unit, for running the image processing, radio frequency signal processing and feature extraction algorithms; a central processing and fusion unit, for running the adaptive cross-modal attention fusion network, the track reconstruction algorithm and the traceability inference model; and a data storage and display-control unit, for storing the radio frequency fingerprint database and historical data and providing a human-machine interface and early-warning output.
  7. The system of claim 6, wherein the multi-modal sensing unit and the edge computing unit are integrated into a single sensor tower comprising a protective housing, a top-mounted radio frequency antenna array and weather sensor, a tri-axial stabilized gimbal carrying the visible light and thermal imaging cameras at mid-height, and a computing and communication module pod built into the base.
  8. The system of claim 6, wherein the system software architecture is built on the Robot Operating System 2 (ROS 2) with a distributed microservice design, comprising device driver nodes, perception algorithm nodes, fusion decision nodes and human-machine interaction nodes, the nodes communicating through the Data Distribution Service (DDS).
  9. The system of claim 6, further comprising a collaborative monitoring network formed by a plurality of traceability system nodes, the nodes performing time synchronization and data communication over wired or wireless networks and jointly accomplishing wide-area positioning and track tracking of the unmanned aerial vehicle.
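The radio frequency fingerprint features of claim 2 can be illustrated with a minimal numpy sketch. This is not the patent's implementation; the function names, the log-linear fitting approach, and the RMS EVM definition are assumptions chosen for simplicity (the claim only specifies exponential fitting of the envelope rising edge and an error vector magnitude calculation).

```python
import numpy as np

def rise_time_constant(envelope, fs):
    """Estimate the time constant tau of an exponential rising edge
    A*(1 - exp(-t/tau)) by linear regression on log(A - envelope)."""
    a = envelope.max()
    t = np.arange(len(envelope)) / fs
    # keep samples strictly below the asymptote so the log is defined
    mask = envelope < 0.99 * a
    slope, _ = np.polyfit(t[mask], np.log(a - envelope[mask]), 1)
    return -1.0 / slope  # log(a - env) ~ -t/tau + const

def error_vector_magnitude(rx, ref):
    """RMS EVM in percent between received and ideal constellation symbols."""
    err = rx - ref
    return 100.0 * np.sqrt(np.mean(np.abs(err) ** 2) / np.mean(np.abs(ref) ** 2))
```

In practice both quantities would be computed on preprocessed SDR captures and concatenated with the 1-D CNN deep features before dimensionality reduction, as the claim describes.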
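The dynamic weighting of claim 3 can be sketched as confidence- and environment-driven softmax weights over per-modality feature vectors. The patent's actual weight formula is not reproduced in the extracted text, so the score definition (confidence times environment quality) below is an illustrative assumption, not the claimed formula.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_modalities(feats, confidences, env_quality):
    """Weighted fusion of per-modality feature vectors.

    feats:       dict name -> feature vector (equal length for simplicity)
    confidences: dict name -> recognition confidence in [0, 1]
    env_quality: dict name -> environment quality in [0, 1]
                 (e.g. visible light quality drops at night)
    """
    names = sorted(feats)
    scores = np.array([confidences[n] * env_quality[n] for n in names])
    w = softmax(scores)
    fused = sum(wi * feats[n] for wi, n in zip(w, names))
    return fused, dict(zip(names, w))
```

A night-time scene would drive the visible-light quality score down, shifting weight toward the thermal and radio frequency branches, which is the complementary behavior the abstract claims.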
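The geometry behind claim 4 can be sketched in two parts: the TDOA range-difference residuals |p - s_i| - |p - s_0| = Δd_i that the positioning model must satisfy, and a constant-velocity Kalman recursion for smoothing. This is a simplified stand-in, assuming a linear measurement of position fixes (with that assumption the EKF reduces to an ordinary Kalman filter); noise parameters are illustrative.

```python
import numpy as np

def tdoa_residuals(p, sensors, dd):
    """Residuals of |p - s_i| - |p - s_0| = dd[i-1] for sensors i >= 1."""
    d = np.linalg.norm(sensors - p, axis=1)
    return (d[1:] - d[0]) - dd

def ekf_track(zs, dt, q=0.01, r=1.0):
    """Constant-velocity filter over 3-D position fixes zs (N x 3).
    State x = [px, py, pz, vx, vy, vz]."""
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)            # position integrates velocity
    H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we observe position only
    Q, R = q * np.eye(6), r * np.eye(3)
    x = np.zeros(6)
    x[:3] = zs[0]
    P = 100.0 * np.eye(6)
    track = []
    for z in zs:
        x, P = F @ x, F @ P @ F.T + Q                      # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # gain
        x = x + K @ (z - H @ x)                            # update
        P = (np.eye(6) - K @ H) @ P
        track.append(x.copy())
    return np.array(track)
```

A full system would first solve the TDOA equations (e.g. by nonlinear least squares on `tdoa_residuals`) for each epoch and then feed those fixes to the filter.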

Description

Illegal unmanned aerial vehicle tracing method and system based on fusion of visible light, thermal imaging and radio frequency fingerprint

Technical Field

The invention belongs to the technical field of low-altitude security and target monitoring, and particularly relates to a system and method for detecting, identifying and tracing illegal unmanned aerial vehicles by fusing multi-modal sensing information.

Background

With the rapid popularization of unmanned aerial vehicle (UAV) technology, UAVs are increasingly widely used in logistics, surveying and mapping, entertainment and other fields, but they also bring serious problems such as illegal intrusion, privacy violation and threats to public safety. Sensitive areas such as airports, military bases and nuclear power stations are frequently affected by illegal UAVs and require efficient, accurate monitoring and countermeasures. Current UAV monitoring technologies mainly include:

1. Radar detection: detection capability for low-altitude, slow, small ("low-slow-small") targets is limited, the false alarm rate is high, and the model and identity of the target cannot be identified.
2. Radio spectrum detection: detects and identifies the communication signals between the UAV and its remote controller. Its disadvantage is that it is ineffective against UAVs flying a preset route or under radio silence, and its recognition accuracy is easily disturbed by the environment.
3. Acoustic detection: identifies UAVs by the characteristic acoustic signature of their rotors. Its range is short and its performance drops sharply in complex urban noise environments.
4. Photoelectric detection: visible light imaging depends on ambient illumination and fails at night and under haze or strong backlight.
Infrared thermal imaging can work at night, but its detection range for low-temperature targets is limited, its image resolution is low, and fine-grained identification is difficult.

Each single monitoring technology is constrained by its physical principle and has shortcomings that are difficult to overcome; none can meet the all-weather, high-precision, traceable comprehensive monitoring requirements for illegal UAVs in complex environments. Therefore, achieving effective fusion of multi-source heterogeneous information, so that the technologies complement one another, is the key to improving overall system performance.

Disclosure of Invention

(I) Object of the invention

The invention aims to overcome the defects of the prior art and provides an illegal unmanned aerial vehicle tracing method and system based on multi-modal fusion of visible light, thermal imaging and radio frequency fingerprints. Through deep fusion and intelligent processing of heterogeneous sensor information, the method achieves reliable detection, accurate identification and track tracking of an illegal UAV, and finally traces the launch point and the probable position of the operator, forming a complete chain of electronic evidence. The core algorithmic flow of the invention can be abstracted as the functional relationship

    Result = Ψ(Φ(f_v(I_v), f_t(I_t), f_r(S_r)))

wherein I_v, I_t and S_r respectively denote the synchronously acquired visible light image, thermal infrared image and radio frequency signal; f_v, f_t and f_r are the feature extraction functions of the respective modalities; Φ is the adaptive cross-modal feature fusion function; and Ψ is the track reconstruction and traceability inference function. This framework constitutes a complete technical chain from multi-source perception to high-level decision.
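The composed functional chain described in the summary can be read as straightforward function composition. The sketch below is only an illustration of that structure (the argument and function names mirror the symbols described in the text and are otherwise arbitrary):

```python
def trace_pipeline(I_v, I_t, S_r, f_v, f_t, f_r, fuse, trace):
    """Result = trace(fuse(f_v(I_v), f_t(I_t), f_r(S_r))): per-modality
    feature extraction, then cross-modal fusion, then trajectory and
    traceability inference."""
    F_v, F_t, F_r = f_v(I_v), f_t(I_t), f_r(S_r)
    return trace(fuse(F_v, F_t, F_r))
```

Each stage is swappable, which is what lets the architecture pair any detector, fusion network and tracer behind the same interface.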
(II) Technical scheme

To achieve the above purpose, the technical scheme adopted by the invention is as follows.

An illegal unmanned aerial vehicle tracing method based on fusion of visible light, thermal imaging and radio frequency fingerprints comprises the following steps.

S1, synchronous acquisition of multi-modal data: synchronously acquire a visible light image, a thermal infrared image and a radio frequency signal of the target using the sensing equipment deployed at the monitoring point.

S2, parallel multi-modal feature extraction:

S2.1, preprocess and enhance the visible light image, locate the unmanned aerial vehicle with a deep-learning target detection network, and extract the appearance-structure feature vector. The target detection network adopts an improved YOLOv-tiny architecture, whose loss function L is defined as

    L = λ_coord Σ_i 1_i^obj [(x_i − x̂_i)² + (y_i − ŷ_i)² + (√w_i − √ŵ_i)² + (√h_i − √ĥ_i)²] + …

wherein 1_i^obj indicates whether the i-th grid cell contains a target, and (x̂_i, ŷ_i, ŵ_i, ĥ_i) are the predicted center coordinates and the width and height of the bounding box
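The coordinate terms of this YOLO-style loss can be sketched in numpy. The extracted text truncates before the full formula, so the confidence terms and weighting constants below follow the standard YOLO loss rather than the patent's exact definition, and the single-box-per-cell layout is a simplifying assumption.

```python
import numpy as np

def yolo_style_loss(pred, target, obj_mask, lam_coord=5.0, lam_noobj=0.5):
    """YOLO-style detection loss over grid cells, one box per cell.

    pred, target: (N, 5) arrays of [x, y, w, h, conf] per cell
    obj_mask:     (N,) 1 where the cell contains an object, else 0
    Width and height enter through square roots, as in the original
    YOLO formulation, so large and small boxes are penalized comparably.
    """
    m = obj_mask.astype(bool)
    xy = np.sum((pred[m, :2] - target[m, :2]) ** 2)
    wh = np.sum((np.sqrt(pred[m, 2:4]) - np.sqrt(target[m, 2:4])) ** 2)
    conf_obj = np.sum((pred[m, 4] - target[m, 4]) ** 2)
    conf_noobj = np.sum((pred[~m, 4] - target[~m, 4]) ** 2)
    return lam_coord * (xy + wh) + conf_obj + lam_noobj * conf_noobj
```

A perfect prediction yields zero loss; any coordinate error in an object cell is amplified by `lam_coord`, while confidence errors in empty cells are damped by `lam_noobj`.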