
CN-122024114-A - Unmanned aerial vehicle target classification matching striking control method and system based on deep learning

CN 122024114 A

Abstract

The invention relates to the technical field of unmanned aerial vehicle (UAV) control and discloses a deep-learning-based UAV target classification, matching, and strike control method and system. The method comprises the following steps: S1, collecting raw environment data of the UAV flight area and preprocessing the raw environment data to generate multi-modal sensing data; S2, inputting the multi-modal sensing data into a pre-constructed deep learning classification network, extracting multi-level deep features of a target through the network, classifying and identifying the target based on those features, and outputting the target's category information and position information. Because the multi-modal sensing data is fed into the deep learning classification network for target recognition, the multi-modal fusion sensing mode fully exploits the complementary strengths of different sensors: high target recognition accuracy is maintained in complex environments such as night, low illumination, and severe weather, and the system's adaptability to environmental change is markedly improved.

Inventors

  • CUI LIANGFEI
  • GUO YANWEN
  • GE LUYONG
  • LIANG JING
  • FAN SHULIN
  • LIU HAIFENG
  • SI BAOFENG
  • HUANG XINYUN
  • WANG WENXI

Assignees

  • 山西中北新缘智造科技有限公司

Dates

Publication Date
2026-05-12
Application Date
2026-04-13

Claims (10)

  1. A deep-learning-based unmanned aerial vehicle target classification, matching, and strike control method, characterized by comprising the following steps: S1, acquiring raw environment data of the unmanned aerial vehicle's flight area, and preprocessing the raw environment data to generate multi-modal sensing data; S2, inputting the multi-modal sensing data into a pre-constructed deep learning classification network, extracting multi-level deep features of a target through the deep learning classification network, classifying and identifying the target based on the multi-level deep features, and outputting target category information and target position information of the target; S3, receiving the target category information and the target position information, carrying out a matching decision on the target according to a preset matching rule base, and determining a treatment strategy matched with the target and a corresponding executing unmanned aerial vehicle; S4, generating a flight guidance instruction and a target locking instruction for the executing unmanned aerial vehicle according to the treatment strategy and the target position information; S5, sending the flight guidance instruction and the target locking instruction to the executing unmanned aerial vehicle, controlling it to fly toward the target area, and performing locking and precise action operations on the target.
  2. The method according to claim 1, wherein preprocessing the raw environment data in step S1 to generate multi-modal sensing data further comprises: S11, denoising and enhancing image data in the raw environment data, filtering out image noise and improving image contrast to generate enhanced image data; S12, performing filtering and point cloud registration on radar echo data in the raw environment data, removing abnormal echo signals, and unifying multi-frame point cloud data into the same coordinate system to generate structured point cloud data; S13, performing non-uniformity correction and pseudo-color mapping on infrared thermal imaging data in the raw environment data, eliminating stripe noise caused by inconsistent detector response, and mapping single-channel grayscale data into three-channel color data to generate enhanced infrared data; and S14, performing spatiotemporal synchronous registration on the enhanced image data, the structured point cloud data, and the enhanced infrared data, aligning the three modalities to the same spatiotemporal reference to generate the multi-modal sensing data.
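The preprocessing steps S11–S14 of claim 2 can be sketched in code. This is a minimal illustration, not the patent's actual implementation: the mean-filter denoiser, the per-pixel gain/offset correction model, and all function names are assumptions chosen for brevity.

```python
import numpy as np

def enhance_image(img: np.ndarray) -> np.ndarray:
    """S11 sketch: 3x3 mean-filter denoise, then contrast stretch to [0, 1]."""
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    den = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    lo, hi = den.min(), den.max()
    return (den - lo) / (hi - lo + 1e-9)

def register_clouds(clouds, transforms):
    """S12 sketch: apply each frame's 4x4 pose to unify multi-frame point
    clouds into the same coordinate system (outlier removal omitted)."""
    out = []
    for pts, T in zip(clouds, transforms):
        homo = np.hstack([pts, np.ones((len(pts), 1))])
        out.append((homo @ T.T)[:, :3])
    return np.vstack(out)

def correct_infrared(ir, gain, offset):
    """S13 sketch: per-pixel non-uniformity correction (assumed linear
    gain/offset model), then a crude 3-channel pseudo-colour mapping."""
    corrected = ir * gain + offset
    span = corrected.max() - corrected.min()
    norm = (corrected - corrected.min()) / (span + 1e-9)
    return np.stack([np.clip(3 * norm, 0, 1),
                     np.clip(3 * norm - 1, 0, 1),
                     np.clip(3 * norm - 2, 0, 1)], axis=-1)
```

Step S14 (spatiotemporal registration) would then resample all three outputs onto one shared timestamp grid and spatial frame before fusion.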
  3. The method according to claim 1, wherein the deep learning classification network in step S2 is configured with a backbone feature extraction sub-network, a multi-scale feature fusion sub-network, and a classification recognition sub-network; S21, performing convolution and downsampling operations layer by layer on the multi-modal sensing data through the backbone feature extraction sub-network to extract initial deep feature maps of different levels; S22, performing an up-sampling operation on the initial deep feature maps of different levels through the multi-scale feature fusion sub-network to restore feature-map resolution, performing a splicing operation to combine the feature maps of different levels along the channel dimension, and performing a convolution fusion operation to integrate cross-channel information of the combined features, generating an enhanced multi-scale fusion feature map; S23, performing sliding-window convolution detection on the multi-scale fusion feature map through the classification recognition sub-network, performing target category confidence prediction and bounding-box position offset regression on each preset anchor box, and outputting the target category information and the target position information.
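The fusion step S22 can be illustrated with plain NumPy. This is a shape-level sketch only, with nearest-neighbour up-sampling and a 1x1 convolution expressed as a channel-mixing matrix multiply; the patent does not specify these operators, and the random weights stand in for trained parameters.

```python
import numpy as np

def upsample2x(f: np.ndarray) -> np.ndarray:
    """Nearest-neighbour up-sampling to restore feature-map resolution (S22)."""
    return f.repeat(2, axis=1).repeat(2, axis=2)

def fuse_levels(levels, w):
    """Up-sample every level to the finest resolution, splice along the
    channel dimension, then apply a 1x1 convolution (a matrix multiply over
    channels) to integrate cross-channel information."""
    target_h = levels[0].shape[1]
    ups = []
    for f in levels:
        while f.shape[1] < target_h:
            f = upsample2x(f)
        ups.append(f)
    spliced = np.concatenate(ups, axis=0)        # (C_total, H, W)
    return np.einsum("oc,chw->ohw", w, spliced)  # 1x1 conv = channel mixing
```

The detection head of S23 would then slide over the fused map, predicting a class-confidence vector and four box offsets per anchor at every spatial position.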
  4. The method according to claim 1, wherein carrying out the matching decision on the target according to the preset matching rule base in step S3 and determining the treatment strategy matched with the target and the corresponding executing unmanned aerial vehicle further comprises: S31, acquiring the target category information, and looking up, in the matching rule base, a preset treatment mode stored in association with the target category information, wherein the preset treatment mode comprises a task executor type requirement and a treatment priority level; S32, acquiring the target position information and current state information of each standby unmanned aerial vehicle, wherein the current state information comprises position coordinates, remaining battery level, and the mounted task executor type; S33, according to the task executor type requirement and the treatment priority level in the preset treatment mode, the target position information, and the current state information of each standby unmanned aerial vehicle, executing a task allocation calculation with the following constraint conditions: the Euclidean distance between each standby unmanned aerial vehicle's position coordinates and the target position coordinates, whether each standby unmanned aerial vehicle's remaining battery level meets the mission endurance requirement, and whether each standby unmanned aerial vehicle's mounted task executor type matches the task executor type requirement; screening out, from the standby unmanned aerial vehicles, the executing unmanned aerial vehicle that will perform the task, and synchronously generating a treatment strategy for the executing unmanned aerial vehicle.
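The allocation step S33 can be sketched as a constraint filter followed by a nearest-distance selection. The field names and the linear battery-consumption model are assumptions; the patent only specifies the three constraint conditions, not how they are combined.

```python
import math

def assign_drone(target_pos, executor_req, drones, wh_per_km=8.0):
    """S33 sketch: keep only standby drones whose mounted executor type
    matches the requirement and whose remaining battery covers the round
    trip (assumed linear consumption), then pick the nearest one."""
    def km(a, b):
        return math.dist(a, b)  # Euclidean distance constraint
    feasible = [
        d for d in drones
        if d["executor"] == executor_req
        and d["battery_wh"] >= 2 * wh_per_km * km(d["pos"], target_pos)
    ]
    return min(feasible, key=lambda d: km(d["pos"], target_pos), default=None)
```

A `None` result means no standby drone satisfies all constraints, which a full system would escalate rather than silently drop.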
  5. The method according to claim 4, wherein the treatment strategy determined in step S33 specifically comprises action timing parameters, action route parameters, task executor emission parameters, and a task achievement effect evaluation index; the action timing parameters comprise the time of entering the action window and the time of exiting the action window; the action route parameters comprise waypoint coordinates of an entering route, waypoint coordinates of an action route, and waypoint coordinates of a departure route; the task executor emission parameters comprise an emission distance, an emission angle, and a fuze working mode; and the task achievement effect evaluation index comprises an image feature change threshold and a radar echo feature change threshold after the target is acted upon.
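The four parameter groups of claim 5 map naturally onto a record type. The field names and units below are illustrative assumptions; only the grouping comes from the claim.

```python
from dataclasses import dataclass

@dataclass
class TreatmentStrategy:
    """Parameter groups from claim 5 (field names/units are assumptions)."""
    window_entry_time: float        # action timing: enter action window (s)
    window_exit_time: float         # action timing: exit action window (s)
    approach_route: list            # waypoints of the entering route
    action_route: list              # waypoints of the action route
    departure_route: list           # waypoints of the departure route
    launch_distance_m: float        # executor emission: distance
    launch_angle_deg: float         # executor emission: angle
    fuze_mode: str                  # executor emission: fuze working mode
    image_change_threshold: float   # effect index: image feature delta
    radar_change_threshold: float   # effect index: radar echo delta
```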
  6. The method according to claim 1, wherein generating the flight guidance instruction and the target locking instruction for the executing unmanned aerial vehicle according to the treatment strategy and the target position information in step S4 further comprises: S41, performing trajectory planning according to the action route parameters in the treatment strategy and the target position information, and generating a flight guidance instruction containing a waypoint sequence and flight attitude requirements, wherein the flight guidance instruction is used for guiding the unmanned aerial vehicle to fly along the planned waypoints in sequence and to keep a designated pitch angle and a designated roll angle on each route segment; S42, generating a target locking instruction comprising fire control solution parameters and a locking time according to the task executor emission parameters in the treatment strategy, wherein the target locking instruction is used for controlling the unmanned aerial vehicle's task guidance radar to perform locking irradiation on the spatial range corresponding to the target position information when the locking time arrives, and to finish final binding before the task executor is launched, according to the fire control solution parameters.
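Steps S41 and S42 amount to packing the strategy into two instruction payloads. The dictionary keys below are assumptions standing in for whatever uplink message format the real system uses; only the instruction contents come from the claim.

```python
def build_instructions(strategy, target_pos):
    """S41/S42 sketch (keys are assumptions): a flight guidance instruction
    carrying the planned waypoint sequence plus attitude requirements, and a
    target locking instruction carrying fire control parameters and the
    locking time."""
    waypoints = (strategy["approach_route"] + strategy["action_route"]
                 + strategy["departure_route"])
    guidance = {
        "waypoints": waypoints,
        "pitch_deg": strategy["pitch_deg"],  # hold this pitch on each leg
        "roll_deg": strategy["roll_deg"],    # hold this roll on each leg
    }
    lock = {
        "lock_time": strategy["window_entry_time"],
        "irradiate_at": target_pos,          # radar locks this spatial range
        "fire_control": {
            "launch_distance_m": strategy["launch_distance_m"],
            "launch_angle_deg": strategy["launch_angle_deg"],
            "fuze_mode": strategy["fuze_mode"],
        },
    }
    return guidance, lock
```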
  7. The method according to claim 1, further comprising, after step S5: S61, receiving target state evaluation data returned by the executing unmanned aerial vehicle after it performs the preset action operation, wherein the target state evaluation data comprises post-action target area image data and post-action target area radar echo data; S62, comparing the target state evaluation data with the preset task achievement effect evaluation index: comparing image features extracted from the target area image data with the image feature change threshold in the task achievement effect evaluation index, and comparing radar features extracted from the target area radar echo data with the radar echo feature change threshold in the task achievement effect evaluation index, to generate a task effect evaluation report; and S63, when the task effect evaluation report indicates that the target has not reached the preset state threshold, triggering step S3 again and executing a secondary treatment decision on the target.
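The evaluation loop S61–S63 reduces to threshold comparisons on feature deltas. The scalar "feature" values and the absolute-difference comparison are simplifying assumptions; the claim only requires comparing extracted features with the two change thresholds.

```python
def evaluate_effect(before, after, image_threshold, radar_threshold):
    """S61-S63 sketch: compare post-action image/radar feature deltas with
    the achievement-effect thresholds; an unachieved verdict re-triggers S3
    for a secondary treatment decision."""
    image_delta = abs(after["image_feature"] - before["image_feature"])
    radar_delta = abs(after["radar_feature"] - before["radar_feature"])
    achieved = image_delta >= image_threshold and radar_delta >= radar_threshold
    return {"image_delta": image_delta, "radar_delta": radar_delta,
            "achieved": achieved, "retrigger_S3": not achieved}
```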
  8. A deep-learning-based unmanned aerial vehicle target classification, matching, and strike control system, characterized in that the system is used for realizing the deep-learning-based unmanned aerial vehicle target classification, matching, and strike control method according to any one of claims 1 to 7, the system comprising: a sensing data processing module, used for collecting raw environment data of the unmanned aerial vehicle flight area and preprocessing the raw environment data to generate multi-modal sensing data; a deep learning recognition module, connected with the sensing data processing module, which receives the multi-modal sensing data, extracts multi-level deep features of the target through a pre-constructed deep learning classification network, classifies and recognizes the target based on the multi-level deep features, and outputs target category information and target position information of the target; a matching decision module, connected with the deep learning recognition module, which receives the target category information and the target position information, performs a matching decision on the target according to a preset matching rule base, and determines a treatment strategy matched with the target and a corresponding executing unmanned aerial vehicle; an instruction generation module, connected with the matching decision module, which generates a flight guidance instruction and a target locking instruction for the executing unmanned aerial vehicle according to the treatment strategy and the target position information; and a communication control module, connected with the instruction generation module, which sends the flight guidance instruction and the target locking instruction to the executing unmanned aerial vehicle, controlling it to fly toward the target area and to perform locking and precise action operations on the target.
  9. The system of claim 8, wherein the deep learning recognition module is further configured with: a backbone feature extraction unit, used for performing convolution and downsampling operations layer by layer on the multi-modal sensing data and extracting initial deep feature maps of different levels; a multi-scale feature fusion unit, connected with the backbone feature extraction unit, which performs an up-sampling operation on the initial deep feature maps of different levels to restore feature-map resolution, performs a splicing operation to combine the feature maps of different levels along the channel dimension, and performs a convolution fusion operation to integrate cross-channel information of the combined features, generating an enhanced multi-scale fusion feature map; and a classification recognition unit, connected with the multi-scale feature fusion unit, which performs sliding-window convolution detection based on the multi-scale fusion feature map, performs target category confidence prediction and bounding-box position offset regression on each preset anchor box, and outputs the target category information and the target position information.
  10. The system of claim 8, wherein the matching decision module is further configured with: a category query unit, which acquires the target category information and looks up, in the matching rule base, a preset treatment mode stored in association with the target category information, wherein the preset treatment mode comprises a task executor type requirement and a treatment priority level; a state acquisition unit, which acquires the target position information and current state information of each standby unmanned aerial vehicle, wherein the current state information comprises position coordinates, remaining battery level, and the mounted task executor type; and a task allocation unit, connected with the category query unit and the state acquisition unit, which executes a task allocation calculation according to the task executor type requirement and the treatment priority level in the preset treatment mode, the target position information, and the current state information of each standby unmanned aerial vehicle, taking as constraint conditions the Euclidean distance between each standby unmanned aerial vehicle's position coordinates and the target position coordinates, whether each standby unmanned aerial vehicle's remaining battery level meets the mission endurance requirement, and whether each standby unmanned aerial vehicle's mounted task executor type matches the task executor type requirement, screens out the executing unmanned aerial vehicle that will perform the task, and synchronously generates a treatment strategy for the executing unmanned aerial vehicle.

Description

Unmanned aerial vehicle target classification matching strike control method and system based on deep learning

Technical Field

The invention relates to the technical field of unmanned aerial vehicle control, and in particular to a deep-learning-based unmanned aerial vehicle target classification, matching, and strike control method and system.

Background

With the rapid development and wide application of unmanned aerial vehicle technology, the demand for supervising unmanned aerial vehicles in low-altitude flight areas has become increasingly prominent. In complex urban environments or important controlled areas, the various flight targets entering an area need to be effectively identified and classified, and corresponding treatment strategies must be matched to the targets' attributes, so as to achieve accurate and intelligent management and control of flight targets. Existing unmanned aerial vehicle target recognition and control methods generally adopt a single sensor to collect environmental data, for example using only an optical camera to collect image information for target recognition. This single-modality sensing has obvious limitations in practice: at night or in low-illumination environments, optical image quality degrades severely and target recognition accuracy drops sharply; under severe weather conditions such as haze, rain, and snow, the sensing capability of a single sensor is further limited, making it difficult to obtain complete target feature information. In addition, after a target is recognized, existing methods often assign an unmanned aerial vehicle by a simple nearest-first rule, without fully considering the match between the target category and the type of task executor mounted on the unmanned aerial vehicle; mismatched executor types easily occur, leading to low task execution efficiency or outright task failure.

Disclosure of Invention

The invention aims to provide a deep-learning-based unmanned aerial vehicle target classification, matching, and strike control method and system to solve the problems identified above. To achieve this purpose, the invention provides the following technical scheme. The deep-learning-based unmanned aerial vehicle target classification, matching, and strike control method comprises the following steps: S1, acquiring raw environment data of the unmanned aerial vehicle's flight area, and preprocessing the raw environment data to generate multi-modal sensing data; S2, inputting the multi-modal sensing data into a pre-constructed deep learning classification network, extracting multi-level deep features of a target through the deep learning classification network, classifying and identifying the target based on the multi-level deep features, and outputting target category information and target position information of the target; S3, receiving the target category information and the target position information, carrying out a matching decision on the target according to a preset matching rule base, and determining a treatment strategy matched with the target and a corresponding executing unmanned aerial vehicle; S4, generating a flight guidance instruction and a target locking instruction for the executing unmanned aerial vehicle according to the treatment strategy and the target position information; S5, sending the flight guidance instruction and the target locking instruction to the executing unmanned aerial vehicle, controlling it to fly toward the target area, and performing locking and precise action operations on the target.
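The end-to-end flow S1–S5 can be wired together as a simple pipeline. This is a structural sketch only; each stage is a caller-supplied callable standing in for the corresponding module, and none of the names come from the patent.

```python
def control_loop(sense, classify, match, build_cmds, dispatch):
    """S1-S5 sketch: wire the five steps of the disclosed method together,
    with each argument an assumed callable standing in for one stage."""
    data = sense()                                   # S1: collect + preprocess
    category, position = classify(data)              # S2: deep-learning network
    strategy, drone = match(category, position)      # S3: rule-base matching
    guidance, lock = build_cmds(strategy, position)  # S4: instruction generation
    return dispatch(drone, guidance, lock)           # S5: send and execute
```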
As a preferred technical solution of the present invention, the preprocessing of the raw environment data in step S1 to generate multi-modal sensing data further includes: S11, denoising and enhancing image data in the raw environment data, filtering out image noise and improving image contrast to generate enhanced image data; S12, performing filtering and point cloud registration on radar echo data in the raw environment data, removing abnormal echo signals, and unifying multi-frame point cloud data into the same coordinate system to generate structured point cloud data; S13, performing non-uniformity correction and pseudo-color mapping on infrared thermal imaging data in the raw environment data, eliminating stripe noise caused by inconsistent detector response, and mapping single-channel grayscale data into three-channel color data to generate enhanced infrared data; and S14, performing spatiotemporal synchronous registration on the enhanced image data, the structured point cloud data, and the enhanced infrared data, aligning the three modalities to the same spatiotemporal reference to generate the multi-modal sensing data. As a preferred technical scheme of the invention, the de