
CN-121353960-B - Horizontal panoramic unmanned aerial vehicle detection method and system based on rotation event camera

CN121353960B

Abstract

The invention discloses a horizontal panoramic unmanned aerial vehicle detection method and system based on a rotating event camera. The method comprises: acquiring an event stream containing unmanned aerial vehicle information, captured by an event camera mounted on a rotating platform; preprocessing the event stream into a plurality of event groups over consecutive fixed time intervals; generating an image-like representation for each event group; constructing input features from the image-like representations of all event groups; inputting the input features into a pre-trained spatio-temporal fusion detection network to obtain a detection result, the detection result comprising a detection box of the unmanned aerial vehicle target and a target confidence score; and performing azimuth estimation according to the detection box of the unmanned aerial vehicle target and the pose of the event camera on the rotating platform to obtain the relative azimuth of the unmanned aerial vehicle. The invention aims to provide an omnidirectional (360°) horizontal field of view for the event camera, realize real-time and reliable detection and azimuth estimation of unmanned aerial vehicles, and meet practical requirements in dynamic deployment scenarios.

Inventors

  • ZHOU YI
  • DAI KUAN
  • ZHANG HONGXIN

Assignees

  • Hunan University (湖南大学)

Dates

Publication Date
2026-05-05
Application Date
2025-12-16

Claims (9)

  1. A horizontal panoramic unmanned aerial vehicle detection method based on a rotating event camera, characterized by comprising the following steps: S101, acquiring an event stream containing unmanned aerial vehicle information, captured by an event camera mounted on a rotating platform; S102, preprocessing the event stream by dividing it into a plurality of event groups over consecutive fixed time intervals; S103, generating an image-like representation for each event group; S104, constructing input features from the image-like representations of all event groups; S105, inputting the input features into a pre-trained spatio-temporal fusion detection network to obtain a detection result, the detection result comprising a detection box of the unmanned aerial vehicle target and a target confidence score; S106, performing azimuth estimation according to the detection box of the unmanned aerial vehicle target and the pose of the event camera on the rotating platform to obtain the relative azimuth of the unmanned aerial vehicle, the relative azimuth comprising a horizontal angle φ and a pitch angle ψ, the azimuth estimation comprising: S201, projecting the center pixel coordinates (u, v) of the detection box onto the normalized camera coordinate system according to f = K⁻¹ · [u, v, 1]ᵀ, wherein f is the normalized vector, K is the intrinsic matrix of the event camera, K⁻¹ is the inverse of K, and [u, v, 1]ᵀ converts the center pixel coordinates of the detection box to homogeneous three-dimensional coordinates; S202, converting the normalized vector f into the rotating platform coordinate system by combining the pose of the event camera on the rotating platform: f_p = (f_x, f_y, f_z)ᵀ = R(θ) · R_α · f, wherein f_p is the normalized vector converted into the rotating platform coordinate system, f_x, f_y and f_z are respectively the components of f_p along the x, y and z axis directions, R(θ) is the instantaneous rotation matrix of the rotating platform relative to the zero direction, R_α is the rotation matrix of the fixed tilt angle of the event camera, θ = ω · (t − t₀) is the angle of the event camera on the platform relative to the zero direction, t is the moment the drone is detected, t₀ is the moment the rotating platform rotates to the zero position, ω is the angular velocity of rotation of the rotating platform, and α is the fixed tilt angle of the event camera; S203, calculating the relative azimuth of the unmanned aerial vehicle obtained by azimuth estimation according to φ = atan2(f_y, f_x) and ψ = atan2(f_z, √(f_x² + f_y²)), wherein φ is the horizontal angle, ψ is the pitch angle, and f_x, f_y and f_z are respectively the components of the normalized vector f_p along the x, y and z axis directions.
  2. The horizontal panoramic unmanned aerial vehicle detection method according to claim 1, wherein, when the image-like representations of all event groups are used to construct the input features in step S104, the obtained input features comprise a spatial input and a time-series input, the spatial input being the image-like representation of the first event group and the time-series input being the time series formed by the image-like representations of all event groups.
  3. The horizontal panoramic unmanned aerial vehicle detection method based on a rotating event camera according to claim 2, wherein the spatio-temporal fusion detection network in step S105 comprises a detection backbone network for performing feature extraction on the spatial input, a spatio-temporal fusion module STFM for performing feature extraction on the time-series input, a connection module for concatenating the output features of the detection backbone network and the spatio-temporal fusion module STFM, and a detection head for detecting the concatenated features to obtain the detection result.
  4. The horizontal panoramic unmanned aerial vehicle detection method based on a rotating event camera according to claim 3, wherein the spatio-temporal fusion module STFM comprises a frame feature extractor, a gating network, a multiplication module and a motion context encoder; the frame feature extractor is used for extracting frame features from each image-like representation in the time-series input; the gating network is used for computing the weight of each image-like representation from the features extracted by the frame feature extractor: it first performs global average pooling on the temporal feature sequence extracted by the frame feature extractor to generate a global channel state descriptor, the channel state descriptor summarizing the overall channel information of the temporal feature sequence, and then sends the channel state descriptor into an excitation network composed of two fully connected layers and a nonlinear activation layer, which processes the global channel descriptor to generate a set of dynamic channel attention weights representing the importance of each feature channel; the multiplication module is used for multiplying the temporal feature sequence extracted by the frame feature extractor by the attention weights output by the gating network, so as to recalibrate the feature channels and obtain a weighted temporal feature sequence; and the motion context encoder is used for aggregating the weighted temporal feature sequence into the output features of the spatio-temporal fusion module STFM.
  5. The horizontal panoramic unmanned aerial vehicle detection method based on a rotating event camera according to claim 1, wherein the loss function adopted by the spatio-temporal fusion detection network during training is expressed as: L = 1 − IoU + ρ²(b, b_gt)/c² + α · v, with v = (4/π²) · (arctan(w_gt/h_gt) − arctan(w/h))² and c² = c_w² + c_h², wherein L is the loss function, 1 − IoU is the intersection-over-union loss term, ρ²(b, b_gt)/c² is the center point distance loss term, α · v is the aspect ratio penalty term with trade-off weight α, IoU is the intersection-over-union ratio of the detection box and the ground-truth box, ρ is the Euclidean distance, b and b_gt are respectively the center point coordinates of the detection box and the ground-truth box, w and h are respectively the width and height of the detection box, w_gt and h_gt are respectively the width and height of the ground-truth box, and c_w and c_h are respectively the width and height of the smallest circumscribed rectangle that can simultaneously enclose the detection box and the ground-truth box.
  6. A horizontal panoramic unmanned aerial vehicle detection system based on a rotating event camera, comprising an event camera, an edge computing unit, a photoelectric switch, a stepping motor, and a rotating platform driven by the stepping motor, wherein the event camera, the edge computing unit and the photoelectric switch are mounted on the rotating platform; a light-blocking baffle, used in cooperation with the photoelectric switch to realize position detection, is fixed on the stepping motor side; when the rotating platform rotates to the position where the photoelectric switch is aligned with the light-blocking baffle, the photoelectric switch sends the event camera a level signal different from that sent when the photoelectric switch is not aligned with the baffle, so that the event camera records the current moment as the moment the rotating platform rotates to the zero position; the event camera and the edge computing unit are interconnected, and the edge computing unit is programmed or configured to execute the horizontal panoramic unmanned aerial vehicle detection method based on a rotating event camera according to any one of claims 1-5.
  7. A horizontal panoramic unmanned aerial vehicle detection system based on a rotating event camera, comprising a microprocessor and a memory interconnected, wherein the microprocessor is programmed or configured to perform the horizontal panoramic unmanned aerial vehicle detection method based on a rotating event camera according to any one of claims 1-5.
  8. A computer-readable storage medium having a computer program or instructions stored therein, characterized in that the computer program or instructions, when executed by a processor, implement the horizontal panoramic unmanned aerial vehicle detection method based on a rotating event camera according to any one of claims 1-5.
  9. A computer program product comprising a computer program or instructions, characterized in that the computer program or instructions, when executed by a processor, implement the horizontal panoramic unmanned aerial vehicle detection method based on a rotating event camera according to any one of claims 1-5.
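The azimuth-estimation steps S201-S203 of claim 1 (back-projection through the camera intrinsics, rotation into the platform frame, then angle extraction) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the axis conventions, the choice of a z-axis platform rotation, and an x-axis tilt rotation are all assumptions.

```python
import numpy as np

def estimate_azimuth(u, v, K, omega, t, t0, alpha):
    """Sketch of S201-S203: map a detection-box center (u, v) to a
    horizontal angle and a pitch angle in the rotating-platform frame.
    Axis conventions and the form of the tilt rotation are assumptions."""
    # S201: back-project the pixel onto the normalized camera plane.
    f = np.linalg.inv(K) @ np.array([u, v, 1.0])

    # S202: instantaneous platform rotation about the vertical (z) axis,
    # composed with the fixed tilt of the camera (here about the x axis).
    theta = omega * (t - t0)
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
    Ra = np.array([[1.0, 0.0,            0.0],
                   [0.0, np.cos(alpha), -np.sin(alpha)],
                   [0.0, np.sin(alpha),  np.cos(alpha)]])
    fp = Rz @ Ra @ f

    # S203: horizontal angle and pitch angle from the platform-frame vector.
    horiz = np.arctan2(fp[1], fp[0])
    pitch = np.arctan2(fp[2], np.hypot(fp[0], fp[1]))
    return horiz, pitch
```

With an identity intrinsic matrix and zero rotation and tilt, the pixel (0, 0) back-projects to the optical axis, so the horizontal angle is 0 under this convention.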
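The gating mechanism described in claim 4 (global average pooling into a channel descriptor, a two-layer excitation network, and multiplicative recalibration) follows the familiar squeeze-and-excitation pattern. The following numpy sketch shows that pattern only; the tensor shapes, weight names and the pooling over both time and space are illustrative assumptions, not the patent's architecture.

```python
import numpy as np

def channel_gate(seq_feats, W1, b1, W2, b2):
    """Minimal sketch of the STFM gating: squeeze, excite, recalibrate.
    seq_feats: (T, C, H, W) temporal feature sequence; W1, b1, W2, b2 are
    the two fully connected layers of the assumed excitation network."""
    # Squeeze: global average pooling -> one descriptor value per channel.
    z = seq_feats.mean(axis=(0, 2, 3))                 # shape (C,)
    # Excite: two fully connected layers with a ReLU in between, squashed
    # to (0, 1) by a sigmoid to act as per-channel attention weights.
    h = np.maximum(0.0, W1 @ z + b1)
    w = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))           # shape (C,)
    # Recalibrate: scale every channel of every frame by its weight.
    return seq_feats * w[None, :, None, None]
```

With zero-initialized weights every gate evaluates to sigmoid(0) = 0.5, so all channels are scaled uniformly; training would differentiate the weights across channels.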
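The loss of claim 5 (an IoU term, a center-distance term normalized by the enclosing-box diagonal, and an aspect-ratio penalty) matches the standard complete-IoU (CIoU) formulation. A compact sketch under that assumption, with boxes given as (cx, cy, w, h) and the usual CIoU choice for the trade-off weight α:

```python
import numpy as np

def ciou_loss(box, gt):
    """Sketch of the claim-5 loss under the standard CIoU formulation.
    box, gt: (cx, cy, w, h) tuples for the detection and ground-truth boxes."""
    (x1, y1, w1, h1), (x2, y2, w2, h2) = box, gt
    # Intersection over union of the two axis-aligned boxes.
    ix = max(0.0, min(x1 + w1/2, x2 + w2/2) - max(x1 - w1/2, x2 - w2/2))
    iy = max(0.0, min(y1 + h1/2, y2 + h2/2) - max(y1 - h1/2, y2 - h2/2))
    inter = ix * iy
    iou = inter / (w1 * h1 + w2 * h2 - inter)
    # Squared center distance over the squared diagonal of the smallest
    # rectangle enclosing both boxes.
    cw = max(x1 + w1/2, x2 + w2/2) - min(x1 - w1/2, x2 - w2/2)
    ch = max(y1 + h1/2, y2 + h2/2) - min(y1 - h1/2, y2 - h2/2)
    rho2 = (x1 - x2) ** 2 + (y1 - y2) ** 2
    # Aspect-ratio consistency term and its trade-off weight.
    v = (4 / np.pi ** 2) * (np.arctan(w2 / h2) - np.arctan(w1 / h1)) ** 2
    alpha = v / ((1 - iou) + v + 1e-9)
    return 1 - iou + rho2 / (cw ** 2 + ch ** 2) + alpha * v
```

For a perfect prediction all three terms vanish and the loss is 0; any center offset or shape mismatch increases it.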

Description

Horizontal panoramic unmanned aerial vehicle detection method and system based on rotation event camera

Technical Field

The invention relates to the technical field of unmanned aerial vehicle detection, and in particular to a horizontal panoramic unmanned aerial vehicle detection method and system based on a rotating event camera.

Background

With the wide application of unmanned aerial vehicles (drones) in various fields, the public safety and privacy risks they bring have increased, so unmanned aerial vehicle detection technology has become an important research direction. Conventional passive detection schemes based on frame cameras are fundamentally limited in performance when facing fast-moving targets or unfavorable illumination conditions. To address these challenges, the present invention introduces an event camera. The event camera is a novel bio-inspired sensor whose working principle differs from that of a traditional camera: instead of capturing full frames, it asynchronously detects the brightness change of every pixel and outputs an event whenever the accumulated brightness change of a pixel reaches a threshold. Each event comprises three elements, a timestamp, a pixel coordinate and a polarity, and events are output as an event stream (when a large number of pixels change due to object motion or illumination change in the scene, a series of events is generated, called an event stream). This mechanism gives the event camera microsecond response speed, high dynamic range (HDR), extremely low power consumption and other characteristics, making it particularly suitable for robust detection of fast-moving targets under conditions where a traditional camera struggles.
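The event-stream representation described above (and the fixed-interval grouping of steps S102-S103 of the method) can be sketched as follows. This is a minimal illustration, assuming a signed per-pixel event-count image as the image-like representation; the patent does not specify this exact rendering, and per-polarity channels would be an equally plausible choice.

```python
import numpy as np

def events_to_frames(events, dt, n_bins, height, width):
    """Slice an event stream into consecutive fixed-length time windows
    and render each group as a signed event-count image.
    events: array of (t, x, y, p) rows, with polarity p in {-1, +1}."""
    t_start = events[0, 0]
    frames = np.zeros((n_bins, height, width))
    for t, x, y, p in events:
        b = int((t - t_start) // dt)        # which fixed time interval
        if 0 <= b < n_bins:
            frames[b, int(y), int(x)] += p  # accumulate signed counts
    return frames
```

Opposite-polarity events at the same pixel within one window cancel in this signed rendering, which is one reason per-polarity channels are sometimes preferred.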
However, existing event camera based solutions have two common drawbacks. First, most of them assume that the camera is statically deployed, which greatly limits their application on moving carriers (e.g. robots, vehicles), since the carrier's own motion (ego-motion) generates a large amount of background noise that severely interferes with target recognition. Second, the limited field of view (FoV) inherent to event cameras severely constrains the detection range, so that an unmanned aerial vehicle with an unpredictable trajectory can easily fly out of the camera's field of view, leading to detection failure and tracking disruption.

Disclosure of Invention

The invention aims to solve the above technical problems in the prior art and provides a horizontal panoramic unmanned aerial vehicle detection method and system based on a rotating event camera, which aims to provide an omnidirectional (360°) horizontal field of view for the event camera, realize real-time and reliable detection and azimuth estimation of unmanned aerial vehicles, and meet practical requirements in dynamic deployment scenarios.
In order to solve the above technical problems, the invention adopts the following technical scheme. A horizontal panoramic unmanned aerial vehicle detection method based on a rotating event camera comprises the following steps: S101, acquiring an event stream containing unmanned aerial vehicle information, captured by an event camera mounted on a rotating platform; S102, preprocessing the event stream by dividing it into a plurality of event groups over consecutive fixed time intervals; S103, generating an image-like representation for each event group; S104, constructing input features from the image-like representations of all event groups; S105, inputting the input features into a pre-trained spatio-temporal fusion detection network to obtain a detection result, the detection result comprising a detection box of the unmanned aerial vehicle target and a target confidence score; S106, performing azimuth estimation according to the detection box of the unmanned aerial vehicle target and the pose of the event camera on the rotating platform to obtain the relative azimuth of the unmanned aerial vehicle, the relative azimuth comprising a horizontal angle and a pitch angle. Optionally, when the image-like representations of all event groups are used to construct the input features in step S104, the obtained input features comprise a spatial input and a time-series input, where the spatial input is the image-like representation of the first event group and the time-series input is a time series formed by the image-like representations of all event groups. Optionally, the spatio-temporal fusion detection network in step S105 includes a detection backbone network, a spatio-temporal fusion module STFM, a connection module, and a detection head, where the detection backbone network is used to perform feature extraction on the spatial input, the s