EP-4736130-A1 - METHOD AND SYSTEM FOR TARGET DETECTION AND CLASSIFICATION TO AID DRONE DEFENCE SYSTEMS

EP 4736130 A1

Abstract

A model-based artificial intelligence (AI) system and method that applies trained decision-making on extracted features for target classification, including at least one sensor for detecting a point target, an interactive multiple model (IMM) filter for extracting features related to motion kinematics of the point target, and a classifier for applying trained decision-making on the extracted features to determine classification of the point target.

Inventors

  • MANTEGH, Iraj
  • BOLIC, Miodrag
  • MEHTA, Varunkumar
  • VIDAL, Charles

Assignees

  • National Research Council of Canada

Dates

Publication Date
2026-05-06
Application Date
2024-06-27

Claims (20)

  1. A system for target classification, comprising: at least one sensor for detecting a point target; an interactive multiple model (IMM) filter for extracting features related to motion kinematics of the point target; and a classifier for applying trained decision-making on the extracted features to determine classification of the point target.
  2. The system of claim 1, further including a camera configured to be focused on the point target by the classifier for visual verification of the classification and, in response, updating classification accuracy of the classifier.
  3. The system of claim 2, wherein the camera is a pan-tilt-zoom camera.
  4. The system of claim 1, wherein the extracted features include velocity, acceleration and turn angle.
  5. The system of claim 1, wherein the sensor is one of a radar sensor, electro-optical sensor, LIDAR sensor or camera.
  6. The system of claim 1, wherein the interactive multiple model (IMM) filter switches between multiple motion models representing the features related to motion kinematics of the point target based upon probability.
  7. The system of claim 6, wherein the multiple motion models include Constant Velocity (CV), Constant Acceleration (CA), Horizontal Coordinated Turn (HCT) and 3D Coordinated Turn (3DCT).
  8. The system of claim 7, wherein the interactive multiple model (IMM) filter includes a plurality of filters (FILTER 1, FILTER 2, ... FILTER N) that are matched with respective motion models CV, CA, HCT, and 3DCT for estimating the maneuvering state of the point target at successive sampling instances, a mixer for mixing state estimates provided by the plurality of filters (FILTER 1, FILTER 2, ... FILTER N) from a previous sampling instance to set the initial conditions for each of the plurality of filters that is a most suitable model at a current sampling instance, and a combiner that combines state mean m(k) and covariance P(k) and outputs the extracted features.
  9. The system of claim 4, wherein the classifier trains a decision tree for discriminating between UAVs, birds and ground targets using the features extracted by the interactive multiple model (IMM) filter to determine classification of the point target.
  10. The system of claim 8, wherein the decision tree comprises a first stage tree that uses the motion kinematics of the point target to distinguish between airborne targets and ground targets, and a second stage tree that splits each node/leaf such that each sub-branch models specific maneuvers of the point target based on the extracted features.
  11. The system of claim 10, wherein the maneuvers include curvilinear flight, hovering and straight flight.
  12. The system of claim 11, wherein curvilinear flight is further modelled by loops, 3D turns and 2D turns, and straight flight is modelled by elevation, speed, and curvature.
  13. The system of claim 9, wherein the output of the classifier is a matrix with each row corresponding to a node and columns corresponding to node number, positive child node number, negative child node number, function used, split value, data size, and majority class, such that for a given trajectory of the target object, classification is performed by checking the condition at each node of the second stage tree and following the resulting branches until a leaf node is reached, wherein the leaf node provides a label for classifying the target object as one of either a bird or UAV.
  14. A method for target classification, comprising: detecting a point target; extracting features related to motion kinematics of the point target using an interactive multiple model (IMM) filter; and applying trained decision-making on the extracted features to determine classification of the point target.
  15. The method of claim 14, further including focusing on the point target for visual verification of the classification and, in response, updating classification accuracy.
  16. The method of claim 14, wherein the extracted features include velocity, acceleration and turn angle.
  17. The method of claim 14, wherein the interactive multiple model (IMM) filter switches between multiple motion models representing the features related to motion kinematics of the point target based upon probability.
  18. The method of claim 16, wherein the multiple motion models include Constant Velocity (CV), Constant Acceleration (CA), Horizontal Coordinated Turn (HCT) and 3D Coordinated Turn (3DCT).
  19. The method of claim 17, wherein the interactive multiple model (IMM) filter includes a plurality of filters (FILTER 1, FILTER 2, ... FILTER N) that are matched with respective motion models CV, CA, HCT, and 3DCT for estimating the maneuvering state of the point target at successive sampling instances, a mixer for mixing state estimates provided by the plurality of filters (FILTER 1, FILTER 2, ... FILTER N) from a previous sampling instance to set the initial conditions for each of the plurality of filters that is a most suitable model at a current sampling instance, and a combiner that combines state mean m(k) and covariance P(k) and outputs the extracted features.
  20. The method of claim 14, wherein the trained decision-making includes training a decision tree for discriminating between UAVs, birds and ground targets using the features extracted by the interactive multiple model (IMM) filter to determine classification of the point target.
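As an illustration of the IMM structure recited in claims 6-8 (mixer, bank of model-matched filters, mode-probability update, combiner), the following is a minimal one-dimensional sketch using only two of the four claimed motion models (Constant Velocity and Constant Acceleration; HCT and 3DCT are omitted for brevity). All matrices, noise levels, and the mode-transition matrix are illustrative assumptions, not values from the patent.

```python
# Minimal 1-D IMM sketch: CV and CA model-matched Kalman filters,
# with the mixing / filtering / combination cycle of a standard IMM.
import numpy as np

dt = 0.1
# CV model: state [pos, vel, acc] with acceleration forced to zero
F_cv = np.array([[1, dt, 0], [0, 1, 0], [0, 0, 0]], float)
# CA model: state [pos, vel, acc]
F_ca = np.array([[1, dt, 0.5 * dt**2], [0, 1, dt], [0, 0, 1]], float)
H = np.array([[1.0, 0.0, 0.0]])            # position-only measurement
Q = [np.eye(3) * 1e-3, np.eye(3) * 1e-2]   # process noise per model (assumed)
R = np.array([[0.05]])                     # measurement noise (assumed)
P_trans = np.array([[0.95, 0.05],          # mode-transition probabilities
                    [0.05, 0.95]])

def imm_step(means, covs, mu, z, Fs=(F_cv, F_ca)):
    """One IMM cycle: mix, filter per model, update mode probs, combine."""
    n = len(Fs)
    # --- mixing: predicted mode probabilities and mixing weights
    c = P_trans.T @ mu                      # c[j] = sum_i p_ij * mu_i
    w = (P_trans * mu[:, None]) / c         # w[i, j]
    mixed_m = [sum(w[i, j] * means[i] for i in range(n)) for j in range(n)]
    mixed_P = [sum(w[i, j] * (covs[i] + np.outer(means[i] - mixed_m[j],
                                                 means[i] - mixed_m[j]))
                   for i in range(n)) for j in range(n)]
    # --- model-matched Kalman filters plus per-model likelihoods
    new_m, new_P, like = [], [], np.zeros(n)
    for j, F in enumerate(Fs):
        m_pred = F @ mixed_m[j]
        P_pred = F @ mixed_P[j] @ F.T + Q[j]
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        resid = z - H @ m_pred
        new_m.append(m_pred + (K @ resid).ravel())
        new_P.append((np.eye(3) - K @ H) @ P_pred)
        like[j] = np.exp(-0.5 * resid @ np.linalg.inv(S) @ resid) / \
                  np.sqrt(np.linalg.det(2 * np.pi * S))
    # --- mode-probability update and output combination
    mu_new = c * like
    mu_new /= mu_new.sum()
    m_out = sum(mu_new[j] * new_m[j] for j in range(n))
    return new_m, new_P, mu_new, m_out
```

The combined output `m_out` corresponds to the combiner of claim 8; velocity, acceleration, and (in the full 3-D version) turn angle would be read out of this combined state as the extracted features fed to the classifier.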

Description

METHOD AND SYSTEM FOR TARGET DETECTION AND CLASSIFICATION TO AID DRONE DEFENCE SYSTEMS

BACKGROUND OF THE INVENTION

1. Field of the Invention

[0001] The present invention is directed to drone detection and classification, and in particular to a model-based artificial intelligence (AI) system and method that applies trained decision-making on extracted features for target classification.

2. Description of the Related Art

[0002] The recent popularization and misuse of Uncrewed Aircraft Systems (UASs), also known as RPAS (Remotely Piloted Aircraft Systems), and in particular small UAS (sUAS) aircraft (commonly referred to as "drones"), has highlighted certain risks in low altitude drone flight, especially in urban areas, involving privacy constraints and collision hazards with ground-based structures and other low-flying aircraft and birds. This has given rise to a growing and critical need for anti-drone systems that use effective detection and surveillance algorithms and technologies.

[0003] Existing technologies rely on sensor data from radio sources, radar, acoustics, and/or visual sensors for detection and identification of drone targets. In particular, existing sensor technologies for drone target detection include 1) radio frequency (RF) sensors that scan for the RF broadcast from drones to their ground control stations, 2) sonic sensors that listen for the sonic signature of the drones and propellers, 3) radar sensors that transmit radio waves to an object and use the reflection signal to determine the range and other information about a target, and 4) visual sensors, including electro-optical, infra-red, and laser-based cameras (LiDAR), to provide imaging data in the form of still images and/or videos for further human assessment or post-processing by computer algorithms designed to carry out detection and identification.

[0004] Taha and Shoufan provide a review of existing literature on drone detection and classification using machine learning methods [B. Taha and A. Shoufan, "Machine Learning-Based Drone Detection and Classification: State-of-the-Art in Research," IEEE Access, vol. 7, pp. 138669-138682, 2019, doi: 10.1109/ACCESS.2019.2942944].

[0005] At low elevation, flying birds and drones are the principal low speed targets that require disambiguation. Flying birds have a similar Radar Cross Section (RCS), the same velocity range, similar signal fluctuation, and approximately the same signal amplitude as drones [J. Gong, J. Yan, D. Li, D. Kong, and H. Hu, "Interference of radar detection of drones by birds," Progress In Electromagnetics Research, vol. 81, pp. 1-11, 2019]. The similarity between small drones and birds therefore presents major challenges associated with class separation. The problem of identifying birds vs. drones has been discussed in the literature [R. Kretzschmar, N. Karayiannis, and H. Richner, "A comparison of feature sets and neural network classifiers on a bird removal approach for wind profiler data," in Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN 2000): Neural Computing: New Challenges and Perspectives for the New Millennium, IEEE, 2000, vol. 2, pp. 279-284, and S. Haykin and C. Deng, "Classification of radar clutter using neural networks," IEEE Transactions on Neural Networks, vol. 2, no. 6, pp. 589-600, 1991].

[0006] Visual-based methods of detection suffer from certain drawbacks due to different weather conditions, while acoustic-based methods are very sensitive to ambient noise and therefore tend to fail in loud areas, and radio-frequency (RF) based techniques are not suitable for autonomous flying drones [P. Molchanov, R.I.A. Harmanny, J.J.M. de Wit, K. Egiazarian, and J. Astola, "Classification of small uavs and birds by micro-doppler signatures," International Journal of Microwave and Wireless Technologies, vol. 6, no. 3-4, pp. 435-444, 2014].

[0007] It is also known in the art to discriminate drones and flying birds using Micro-Doppler (M-D) characteristics of a target [J.J.M. de Wit, R.I.A. Harmanny, and G. Premel-Cabic, "Micro-doppler analysis of small uavs," in 2012 9th European Radar Conference, IEEE, 2012, pp. 210-213, and J. L. Garry and G. E. Smith, "Experimental observations of micro-doppler signatures with passive radar," IEEE Transactions on Aerospace and Electronic Systems, vol. 55, no. 2, pp. 1045-1052, 2019]. Radar operates by detecting changes in the characteristics of a transmitted electromagnetic signal reflected from a target. For example, the carrier frequency of the returned signal is shifted if the target moves. In addition to the bulk rigid-body movement of the target, micro-motions such as vibration or rotation of any structure of the target (such as a propeller) also cause frequency modulation on the returned signal, a phenomenon referred to as the Micro-Doppler effect [V. C. Chen, F. Li, S.-S. Ho and H. Wechsler, "Micro-Doppler effect in radar: phenomenon, model, and simulation study," in IEEE Transactions
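The micro-Doppler phenomenon described in [0007] can be illustrated with a short simulation: a rigid body at constant radial velocity returns a single Doppler line, while a rotating structure superimposes a sinusoidal phase term that spreads the return into sidebands spaced at the rotation rate. All parameters below (sample rate, body Doppler, rotation rate, modulation index) are illustrative assumptions, not values from the patent or the cited references.

```python
# Micro-Doppler sketch: a rotating blade adds sinusoidal phase
# modulation on top of the bulk Doppler shift of the airframe.
import numpy as np

fs = 10_000.0                       # sample rate [Hz] (assumed)
t = np.arange(0, 0.5, 1 / fs)
f_body = 200.0                      # bulk Doppler shift of the body [Hz]
f_rot = 80.0                        # blade rotation rate [Hz]
beta = 30.0                         # modulation index (blade geometry)

# Complex baseband return: body Doppler plus sinusoidal micro-motion phase
sig = np.exp(1j * (2 * np.pi * f_body * t
                   + beta * np.sin(2 * np.pi * f_rot * t)))

spec = np.abs(np.fft.fftshift(np.fft.fft(sig)))
freqs = np.fft.fftshift(np.fft.fftfreq(len(t), 1 / fs))
# The spectrum shows sidebands at f_body + n * f_rot, the signature
# used to separate propeller-driven drones from flapping birds.
peak = freqs[np.argmax(spec)]
```

Because a bird lacks fast rotating structures, its return stays concentrated near the body Doppler line, whereas the drone's sidebands spread over roughly `beta * f_rot` hertz on either side; this spectral spread is what M-D classifiers exploit.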