CN-121982595-A - Unmanned aerial vehicle sensing method and system
Abstract
The application discloses an unmanned aerial vehicle sensing method and system. The method comprises the steps of: obtaining a radio frequency in-phase quadrature signal and a video frame buffer queue of a target area in a preset time period; determining an intermediate acquisition time value of each radio frequency fragment under the condition that the radio frequency in-phase quadrature signal is segmented into a plurality of radio frequency fragments according to a preset time window length; determining a pairing data set based on the intermediate acquisition time values; determining a visual detection confidence value of each video frame in the pairing data set based on a trained visual detection model; determining a background noise probability value of each radio frequency fragment in the pairing data set based on a trained CNN-Transformer neural network model; and determining an unmanned aerial vehicle sensing result for each pairing data based on the visual detection confidence value and the background noise probability value.
Inventors
- WU FAN
- ZHANG ZHIJIAN
- LI HONGXING
- JIANG JUNJIE
- LV FENG
Assignees
- Central South University (中南大学)
Dates
- Publication Date: 2026-05-05
- Application Date: 2026-04-07
Claims (10)
- 1. An unmanned aerial vehicle sensing method, characterized by comprising the following steps: acquiring a radio frequency in-phase quadrature signal and a video frame buffer queue of a target area in a preset time period; under the condition that the radio frequency in-phase quadrature signal is segmented into a plurality of radio frequency fragments according to a preset time window length, determining an intermediate acquisition time value of each radio frequency fragment; determining a pairing data set based on the intermediate acquisition time values, wherein each pairing data in the pairing data set comprises one radio frequency fragment and one video frame in the video frame buffer queue, and the radio frequency fragments and the video frames in the pairing data set are in one-to-one correspondence; determining a visual detection confidence value of each video frame in the pairing data set based on a trained visual detection model; determining a background noise probability value of each radio frequency fragment in the pairing data set based on a trained CNN-Transformer neural network model; and determining an unmanned aerial vehicle sensing result for each pairing data based on the visual detection confidence value and the background noise probability value.
- 2. The unmanned aerial vehicle sensing method of claim 1, wherein the determining the intermediate acquisition time value of each of the radio frequency fragments comprises: acquiring a starting time value of each radio frequency fragment; dividing the preset time window length by two to obtain an intermediate time value; and adding the starting time value and the intermediate time value to obtain the intermediate acquisition time value of each radio frequency fragment.
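The arithmetic of claim 2 can be sketched directly; the function below is an illustrative reading (the function and variable names are not from the patent), computing each fragment's intermediate acquisition time as its start time plus half the preset window length:

```python
def intermediate_acquisition_times(start_times, window_length):
    """Intermediate acquisition time of each RF fragment, per claim 2:
    start time + (preset time window length / 2)."""
    half_window = window_length / 2.0
    return [t_start + half_window for t_start in start_times]
```

For example, fragments starting at 0.0 s and 0.5 s under a 0.5 s window yield intermediate acquisition times of 0.25 s and 0.75 s.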
- 3. The unmanned aerial vehicle sensing method of claim 1, wherein the determining a pairing data set based on the intermediate acquisition time value comprises: acquiring a generation time value of each video frame in the video frame buffer queue; determining an absolute time-deviation value between each radio frequency fragment and each video frame in the video frame buffer queue based on the generation time value and the intermediate acquisition time value; and determining the pairing data set based on the absolute time-deviation values.
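A minimal sketch of claim 3's pairing rule, assuming (as the claim implies but does not state) that each fragment is paired with the buffered frame minimizing the absolute time deviation; enforcing the strict one-to-one correspondence of claim 1 would additionally require removing each matched frame from further consideration:

```python
def pair_fragments_with_frames(fragment_mid_times, frame_times):
    # For each RF fragment, pick the index of the video frame whose
    # generation time is closest to the fragment's intermediate
    # acquisition time (minimum absolute time deviation).
    pairs = []
    for t_mid in fragment_mid_times:
        best = min(range(len(frame_times)),
                   key=lambda j: abs(frame_times[j] - t_mid))
        pairs.append(best)
    return pairs
```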
- 4. The unmanned aerial vehicle sensing method of claim 1, wherein the determining the unmanned aerial vehicle sensing result for each of the pairing data based on the visual detection confidence value and the background noise probability value comprises: subtracting the background noise probability value from one to obtain a target existence probability value of each pairing data; determining a cross-modal confidence value of each pairing data based on a preset noise suppression index, the background noise probability value and the visual detection confidence value; determining a collaborative confidence value of each pairing data based on a preset blind zone compensation factor, the cross-modal confidence value and the target existence probability value; and determining the unmanned aerial vehicle sensing result of each pairing data based on the visual detection confidence value and the collaborative confidence value.
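Claim 4 names the inputs of the cross-modal confidence but not its closed form; the second function below is therefore an assumption, treating the preset noise suppression index `gamma` as an exponent attenuating the visual confidence by the noise term. Only the target existence probability (one minus the background noise probability) is stated by the claim itself:

```python
def target_existence_probability(noise_prob):
    # Claim 4: target existence probability = 1 - background noise probability.
    return 1.0 - noise_prob

def cross_modal_confidence(visual_conf, noise_prob, gamma):
    # HYPOTHETICAL form (not given in the claim): visual confidence
    # attenuated by the noise term raised to the suppression index gamma.
    return visual_conf * (1.0 - noise_prob) ** gamma
```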
- 5. The unmanned aerial vehicle sensing method of claim 4, wherein the determining the collaborative confidence value for each of the pairing data based on the preset blind zone compensation factor, the cross-modal confidence value and the target existence probability value comprises: multiplying the preset blind zone compensation factor by the target existence probability value to obtain a product value of each pairing data; adding the cross-modal confidence value and the product value to obtain a first sum value of each pairing data; adding one to the preset blind zone compensation factor to obtain a second sum value of each pairing data; and dividing the first sum value by the second sum value to obtain the collaborative confidence value of each pairing data.
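Unlike claim 4, claim 5 fully specifies the collaborative confidence arithmetic; a direct transcription (variable names illustrative, with `k` as the preset blind zone compensation factor):

```python
def collaborative_confidence(cross_modal_conf, target_prob, k):
    # Claim 5: (cross-modal confidence + k * target existence probability)
    # divided by (k + 1).
    first_sum = cross_modal_conf + k * target_prob
    second_sum = k + 1.0
    return first_sum / second_sum
```

Note that this is a weighted average of the two inputs, so the result always lies between the cross-modal confidence and the target existence probability.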
- 6. The unmanned aerial vehicle sensing method of claim 4, wherein the unmanned aerial vehicle sensing result comprises a precision confirmation mode, a false alarm suppression mode and a blind zone early warning mode, and wherein the determining the unmanned aerial vehicle sensing result for each of the pairing data based on the visual detection confidence value and the collaborative confidence value comprises: screening, from the pairing data set, all pairing data whose visual detection confidence value is greater than or equal to a first preset early warning threshold and whose collaborative confidence value is greater than or equal to a second preset early warning threshold, as first pairing data, and taking the precision confirmation mode as the unmanned aerial vehicle sensing result of the first pairing data; screening, from the pairing data set, all pairing data whose visual detection confidence value is greater than or equal to the first preset early warning threshold and whose collaborative confidence value is smaller than the second preset early warning threshold, as second pairing data, and taking the false alarm suppression mode as the unmanned aerial vehicle sensing result of the second pairing data; and screening, from the pairing data set, all pairing data whose visual detection confidence value is smaller than the first preset early warning threshold and whose collaborative confidence value is greater than or equal to the second preset early warning threshold, as third pairing data, and taking the blind zone early warning mode as the unmanned aerial vehicle sensing result of the third pairing data.
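The threshold logic of claim 6 amounts to a small decision table; the sketch below mirrors the three enumerated cases (the claim is silent on the fourth case, where both values fall below their thresholds):

```python
def perception_mode(visual_conf, collab_conf, thr_visual, thr_collab):
    # Claim 6 decision table over the two preset early warning thresholds.
    if visual_conf >= thr_visual and collab_conf >= thr_collab:
        return "precision confirmation"    # first pairing data
    if visual_conf >= thr_visual and collab_conf < thr_collab:
        return "false alarm suppression"   # second pairing data
    if visual_conf < thr_visual and collab_conf >= thr_collab:
        return "blind zone early warning"  # third pairing data
    return None  # case not assigned a mode by claim 6
```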
- 7. The unmanned aerial vehicle sensing method of claim 1, wherein the training process of the trained visual detection model comprises: acquiring a training data set, wherein the training data set comprises a plurality of training video frames and real-frame center space coordinates of each training video frame; constructing an initial visual detection model, and inputting the training data set into the initial visual detection model to obtain a plurality of prediction-frame center space coordinates of each training video frame predicted by the initial visual detection model and visual detection confidence values corresponding to the prediction-frame center space coordinates; determining a first loss value based on all the prediction-frame center space coordinates, the visual detection confidence values corresponding to the prediction-frame center space coordinates, and the real-frame center space coordinates; and under the condition that the first loss value is smaller than a preset loss threshold, taking the initial visual detection model as the trained visual detection model.
- 8. The unmanned aerial vehicle sensing method of claim 1, wherein the determining the background noise probability value of each radio frequency fragment in the pairing data set based on the trained CNN-Transformer neural network model comprises: performing a short-time Fourier transform on each radio frequency fragment in the pairing data set to obtain a two-dimensional time-frequency spectrogram corresponding to each radio frequency fragment; after acquiring the time width of each two-dimensional time-frequency spectrogram, determining a slicing step length of each two-dimensional time-frequency spectrogram based on the time width and a preset sequence length value; and under the condition that each two-dimensional time-frequency spectrogram is sliced according to the slicing step length to obtain a plurality of sliced time-frequency spectrograms, inputting all the sliced time-frequency spectrograms of each two-dimensional time-frequency spectrogram into the trained CNN-Transformer neural network model to obtain the background noise probability value of each radio frequency fragment in the pairing data set output by the trained CNN-Transformer neural network model.
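The preprocessing of claim 8 can be sketched with a naive short-time DFT (a real implementation would use an FFT-based STFT); the slicing step is assumed here to be the spectrogram's time width in bins divided by the preset sequence length value, which is one plausible reading of the claim:

```python
import cmath

def stft_magnitude(iq_samples, win_len, hop):
    # Naive short-time DFT of a complex I/Q fragment:
    # rows = time windows, cols = frequency bins (magnitudes).
    spectrogram = []
    for start in range(0, len(iq_samples) - win_len + 1, hop):
        segment = iq_samples[start:start + win_len]
        row = [abs(sum(x * cmath.exp(-2j * cmath.pi * k * n / win_len)
                       for n, x in enumerate(segment)))
               for k in range(win_len)]
        spectrogram.append(row)
    return spectrogram

def slice_spectrogram(spectrogram, seq_len):
    # ASSUMED slicing rule: step = time width (in bins) // preset
    # sequence length value; each slice would feed the CNN-Transformer.
    step = max(1, len(spectrogram) // seq_len)
    return [spectrogram[i:i + step]
            for i in range(0, len(spectrogram) - step + 1, step)]
```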
- 9. The unmanned aerial vehicle sensing method of claim 1, wherein the acquiring the radio frequency in-phase quadrature signal and the video frame buffer queue of the target area in the preset time period comprises: acquiring the radio frequency in-phase quadrature signal of the target area in the preset time period through a radio frequency antenna array; and acquiring the video frame buffer queue of the target area in the preset time period through an electro-optical pan-tilt platform.
- 10. An unmanned aerial vehicle sensing system, characterized in that the unmanned aerial vehicle sensing system comprises: a data acquisition module, used for acquiring a radio frequency in-phase quadrature signal and a video frame buffer queue of a target area in a preset time period; an intermediate acquisition time value determining module, used for determining an intermediate acquisition time value of each radio frequency fragment under the condition that the radio frequency in-phase quadrature signal is segmented into a plurality of radio frequency fragments according to a preset time window length; a pairing data set determining module, used for determining a pairing data set based on the intermediate acquisition time values, wherein each pairing data in the pairing data set comprises one radio frequency fragment and one video frame in the video frame buffer queue, and the radio frequency fragments and the video frames in the pairing data set are in one-to-one correspondence; a model prediction module, used for determining a visual detection confidence value of each video frame in the pairing data set based on a trained visual detection model, and for determining a background noise probability value of each radio frequency fragment in the pairing data set based on a trained CNN-Transformer neural network model; and an unmanned aerial vehicle sensing result determining module, used for determining an unmanned aerial vehicle sensing result for each pairing data based on the visual detection confidence value and the background noise probability value.
Description
Unmanned aerial vehicle sensing method and system. Technical Field: The application relates to the technical field of unmanned aerial vehicle sensing, and in particular to an unmanned aerial vehicle sensing method and system. Background: With the popularization of unmanned aerial vehicle technology, the detection and identification of non-cooperative micro unmanned aerial vehicle targets has become a core problem in maintaining low-altitude security. However, in existing detection and identification methods, against complex urban backgrounds (eaves, leaves, birds and other high-contrast clutter), the visual sensor only collects the reflection spectrum at the physical level and cannot sense electromagnetic radiation characteristics, so the false alarm rate is high. Disclosure of Invention: The present application aims to at least solve the technical problems existing in the prior art. Therefore, the application provides an unmanned aerial vehicle sensing method and system, which can improve the accuracy of unmanned aerial vehicle sensing and reduce its false alarm rate.
In a first aspect of the present application, an unmanned aerial vehicle sensing method is provided, including the steps of: acquiring a radio frequency in-phase quadrature signal and a video frame buffer queue of a target area in a preset time period; under the condition that the radio frequency in-phase quadrature signal is segmented into a plurality of radio frequency fragments according to a preset time window length, determining an intermediate acquisition time value of each radio frequency fragment; determining a pairing data set based on the intermediate acquisition time values, wherein each pairing data in the pairing data set comprises one radio frequency fragment and one video frame in the video frame buffer queue, and the radio frequency fragments and the video frames in the pairing data set are in one-to-one correspondence; determining a visual detection confidence value of each video frame in the pairing data set based on a trained visual detection model; determining a background noise probability value of each radio frequency fragment in the pairing data set based on a trained CNN-Transformer neural network model; and determining an unmanned aerial vehicle sensing result for each pairing data based on the visual detection confidence value and the background noise probability value.
The unmanned aerial vehicle sensing method provided by the embodiments of the application has at least the following beneficial effects: the radio frequency in-phase quadrature signal and the video frame buffer queue of the target area are obtained and time-aligned to form the pairing data set; the visual detection confidence value of each video frame is then determined based on the trained visual detection model, while the background noise probability value of each radio frequency fragment is determined based on the trained CNN-Transformer neural network model; finally, the unmanned aerial vehicle sensing result of each pairing data is determined by jointly using the visual detection confidence value and the background noise probability value, so that sensing accuracy is improved and the false alarm rate is reduced. According to some embodiments of the application, the determining the intermediate acquisition time value of each of the radio frequency fragments includes: acquiring a starting time value of each radio frequency fragment; dividing the preset time window length by two to obtain an intermediate time value; and adding the starting time value and the intermediate time value to obtain the intermediate acquisition time value of each radio frequency fragment. According to some embodiments of the application, the determining a pairing data set based on the intermediate acquisition time value comprises: acquiring a generation time value of each video frame in the video frame buffer queue; determining an absolute time-deviation value between each radio frequency fragment and each video frame in the video frame buffer queue based on the generation time value and the intermediate acquisition time value; and determining the pairing data set based on the absolute time-deviation values.
According to some embodiments of the application, the determining the unmanned aerial vehicle sensing result of each pairing data based on the visual detection confidence value and the background noise probability value includes: subtracting the background noise probability value from one to obtain a target existence probability value of each pairing data; determining a cross-modal confidence value of each pairing data based on a preset noise suppression index, the background noise probability value and the visual detection confidence value; determining a collaborative confidence value of each pairing data based on a preset blind zone compensation factor, the cross-modal confidence value and the target