
CN-121982492-A - Multi-mode detection method and system for diseases and insect pests

CN 121982492 A

Abstract

The invention belongs to the technical field of intelligent pest control and provides a multi-mode detection method and system for diseases and insect pests. An intelligent agricultural closed-loop system, based on the Internet of Things and with artificial intelligence at its core, is constructed through five stages: system initialization and device registration; data acquisition and sensing; pest and disease identification; multi-mode fusion and decision generation; and execution control with cloud-edge coordination. Through a lightweight identification model, multi-mode data fusion analysis, and edge-computing deployment, the method addresses the low disease-identification accuracy, delayed management decisions, and high cost of intelligent transformation found in traditional planting.

Inventors

  • Guan Jianfeng
  • Li Zibin
  • Hou Erwei
  • Hu Yue
  • Jin Shiyao
  • Zhang Haobo
  • Cao Shuhan
  • Qin Zihe
  • Wu Jiayu
  • Jiang Xiankun
  • Kang Chenyu

Assignees

  • Beijing University of Posts and Telecommunications (北京邮电大学)

Dates

Publication Date
2026-05-05
Application Date
2026-02-10

Claims (10)

  1. A multi-mode detection method for plant diseases and insect pests, characterized by comprising the following stages: a system initialization and device registration stage; a data acquisition and perception stage, comprising acquiring environmental data and image data, preprocessing the data, and encrypting and transmitting the data to an edge computing node; a disease and pest identification stage, in which image analysis and pest detection are completed on the edge computing node through a pest identification model and a detection result is generated; a multi-mode fusion and decision generation stage; and an execution control and cloud-edge coordination stage.
  2. The multi-mode detection method for plant diseases and insect pests according to claim 1, wherein the system initialization and device registration stage comprises device identity registration and node initialization. Each sensing terminal registers its identity with a registration cloud server: the registration server checks and verifies the legitimacy of the terminal; if verification passes, a unique digital identity is generated for the sensing terminal, and an anonymous identity is derived using a hash algorithm. During node initialization, the system allocates local task parameters, a communication key, and a data-caching path to each sensing terminal and establishes a secure communication channel with the edge computing node.
  3. The multi-mode detection method for plant diseases and insect pests according to claim 2, wherein the sensing terminal comprises a mobile inspection vehicle and an Internet-of-Things sensor network.
  4. The multi-mode detection method for diseases and insect pests according to claim 3, wherein, in the data acquisition and sensing stage, the Internet-of-Things sensor network acquires the environmental data and the mobile inspection vehicle acquires the image data, wherein: the environmental data comprise air temperature and humidity, soil humidity, illumination intensity, and CO2 concentration; and the image data comprise images of the crop canopy, fruit, and leaves.
  5. The multi-modal detection method for pests according to claim 4, wherein, in the data preprocessing and encrypted transmission, the processed data are encrypted and transmitted through a lightweight communication protocol, and the encrypted data are uploaded to an edge computing node.
  6. A multi-modal detection method for pests according to any one of claims 1 to 5, wherein, in the pest identification stage, the pest identification model is deployed on edge computing nodes. The pest identification model is based on a YOLOv backbone network and constructs a three-level enhanced architecture comprising a backbone network (Backbone), a neck network (Neck), and a detection head (Head), wherein: the Backbone extracts features based on YOLOv C2f residual blocks, with a bilinear-fusion cooperative attention module (BFSA) embedded after each of the four feature levels of different scales P2/4, P3/8, P4/16, and P5/32; the Neck performs feature-pyramid fusion using an FPN+PAN structure, with a residual feature-reconstruction convolution module (R-SCConv) inserted after each up-sampling operation; and the Head, based on the YOLOv decoupled detection-head design, optimizes the classification and localization tasks separately and replaces the localization loss with a gradient-weighted intersection-over-union loss (GWIoU). The bilinear-fusion cooperative attention (BFSA) module strengthens the fusion between small-object features and contextual features through a dual channel-and-spatial attention mechanism, and comprises: an SCSA sub-module, which realizes multi-scale spatial decomposition by setting different head_num parameters; a ParNetAttention sub-module, which captures complementary local and global features with a parallel 1×1 and 3×3 convolution structure; and a BilinearFusion sub-module, which performs second-order interaction fusion on the outputs of the SCSA and ParNetAttention sub-modules: given two feature maps x1 and x2, it first obtains u and v through 1×1 convolutional dimensionality reduction, then takes their Hadamard product to obtain the bilinear interaction feature b = u ⊙ v, and finally concatenates the original features with the interaction feature and outputs the fusion result through a 1×1 convolution: y = Conv1×1([x1; x2; b]). The residual feature-reconstruction convolution module R-SCConv comprises: a spatial reconstruction unit (SRU), which filters noise with an adaptive gating mechanism and, given an input feature X, computes an importance mask from a trainable threshold T and channel weights γ; a channel reconstruction unit (CRU), which performs frequency-aware channel recombination through a dual-path structure in which each path successively applies a 1×1 convolution, a group-wise convolution (GWC), and a point-wise convolution (PWC) to capture local channel patterns and achieve global fusion; and a residual connection, whereby the final output is enhanced by a residual path, the information flow being expressed as Y = X + CRU(SRU(X)).
  7. The multi-mode detection method for plant diseases and insect pests according to claim 1, wherein the multi-mode fusion and decision generation stage is based on a multi-mode decision engine deployed on a cloud or edge server, which completes the fusion analysis of the image recognition results and the environmental data and realizes etiology inference and control-scheme generation, and comprises the following steps: the system matches each pest identification result with the environmental sensor readings of the same period through a time-synchronization mechanism, constructing a visual feature vector F_v and an environmental feature vector F_e; the two types of features are mapped into a unified semantic space by a Transformer encoder layer, formulated as Z = TransformerEncoder([F_v; F_e]); in the decision process, the fused feature representation Z is input to a knowledge reasoning module, which adopts a hybrid architecture of a rule engine and a learning model; and the result is output with graded treatment, i.e., the decision result is output in structured form and comprises a disease risk grade and a list of recommended prevention and treatment schemes.
  8. The multi-modal detection method for pests and disease damage according to claim 7, wherein the rule engine performs deterministic logic judgments based on a predefined agricultural expert-experience library and pathology models, and the learning model performs adaptive optimization through knowledge-graph reasoning and a reinforcement learning algorithm.
  9. The method according to claim 1, wherein the execution control and cloud-edge coordination stage comprises: instruction execution, in which the mobile inspection vehicle moves to a target area according to a navigation instruction and performs fixed-point photographing, spraying prompts, or sampling actions; data feedback and model self-learning, in which all result data produced during execution are returned to the system; lightweight retraining at the edge nodes, in which each edge computing node performs lightweight retraining on newly collected local data; centralized retraining and collaborative updating in the cloud, in which the cloud server periodically collects the model-parameter updates of each edge node and aggregates them with the federated averaging algorithm, w_global = Σ_k (n_k / Σ_j n_j) · w_k, generating a globally optimized main model; and security and traceability, in which all data and operation instructions are digitally signed with an asymmetric encryption algorithm and time-stamped via blockchain technology or a trusted time-stamping center, forming a tamper-proof, traceable operation-log chain.
  10. A multi-mode detection system for diseases and insect pests, characterized by comprising a mobile inspection vehicle, an Internet-of-Things sensor network, edge computing nodes, a cloud server, and a user interaction terminal, the system operating according to the multi-mode detection method of any one of claims 1 to 9.
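The anonymous-identity step of claim 2, where the server derives a hash-based anonymous identity for a verified terminal, can be sketched as follows. This is a minimal illustration, not the patented scheme: the choice of SHA-256, the token length, and the field layout are all assumptions.

```python
import hashlib
import secrets

def register_terminal(terminal_id: str, server_secret: bytes) -> dict:
    """Toy sketch of the claim-2 registration step: issue a unique digital
    identity and a hash-derived anonymous identity for a sensing terminal."""
    # Unique digital identity: a random token bound to this terminal.
    digital_identity = secrets.token_hex(16)
    # Anonymous identity: a one-way hash, so the real terminal ID
    # never has to appear on the wire.
    anonymous_identity = hashlib.sha256(
        server_secret + terminal_id.encode() + digital_identity.encode()
    ).hexdigest()
    return {"terminal_id": terminal_id,
            "digital_identity": digital_identity,
            "anonymous_identity": anonymous_identity}

record = register_terminal("cam-01", b"server-key")
```

The registration server would keep the mapping from anonymous to real identity locally, while downstream nodes see only the anonymous form.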
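The BilinearFusion sub-module of claim 6 reduces two feature maps with 1×1 convolutions, takes their Hadamard (element-wise) product, then concatenates and fuses with another 1×1 convolution. Since a 1×1 convolution acts as a per-position linear map over channels, the idea can be sketched on plain channel vectors; the tiny dimensions and weight values below are illustrative only.

```python
def linear(weights, x):
    """A 1x1 convolution at one spatial position is a matrix-vector product."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def bilinear_fusion(x1, x2, w1, w2, w_out):
    """Second-order interaction fusion: u = W1·x1, v = W2·x2, b = u ⊙ v,
    output = W_out·[x1; x2; b] (channel concatenation, then 1x1 conv)."""
    u = linear(w1, x1)
    v = linear(w2, x2)
    b = [ui * vi for ui, vi in zip(u, v)]  # Hadamard product
    return linear(w_out, x1 + x2 + b)      # list '+' concatenates channels

# Toy channel vectors at a single spatial position.
x1, x2 = [1.0, 2.0], [3.0, 4.0]
w1 = [[1.0, 0.0], [0.0, 1.0]]   # identity stands in for dim. reduction
w2 = [[1.0, 0.0], [0.0, 1.0]]
w_out = [[1.0] * 6]             # collapse the 6 concatenated channels
y = bilinear_fusion(x1, x2, w1, w2, w_out)  # b = [3.0, 8.0]
```

In the real module these maps run over whole feature tensors and the weights are learned; the second-order term b is what lets the module model multiplicative interactions between the two attention branches.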
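The cloud-side aggregation named in claim 9, federated averaging, takes the global weights as the sample-count-weighted mean of each edge node's weights, w_global = Σ_k (n_k / Σ_j n_j) · w_k. A minimal sketch, with plain Python lists standing in for model parameter tensors:

```python
def federated_average(node_weights, node_samples):
    """FedAvg aggregation: w_global[i] = sum_k (n_k / n_total) * w_k[i].
    node_weights: one flat parameter list per edge node.
    node_samples: number of local training samples per node."""
    total = sum(node_samples)
    dim = len(node_weights[0])
    return [
        sum(n * w[i] for w, n in zip(node_weights, node_samples)) / total
        for i in range(dim)
    ]

# Two edge nodes; the second saw three times as much local data,
# so its parameters dominate the aggregate.
w_a, w_b = [0.0, 4.0], [4.0, 0.0]
w_global = federated_average([w_a, w_b], node_samples=[1, 3])
```

Only parameter updates, not raw field data, travel to the cloud, which is what makes the scheme compatible with the encrypted, privacy-preserving transmission of the earlier claims.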

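The time-synchronization step of claim 7, matching each identification result with the environmental readings of the same period, amounts to a nearest-timestamp lookup in the sensor log. A sketch using only the standard library; the 60-second tolerance is an assumption, not a value from the patent.

```python
import bisect

def match_readings(detection_ts, sensor_log, tolerance=60):
    """Return the sensor reading closest in time to a detection, or None
    if nothing falls within `tolerance` seconds.
    `sensor_log` is a list of (timestamp, reading) sorted by timestamp."""
    times = [t for t, _ in sensor_log]
    i = bisect.bisect_left(times, detection_ts)
    # The nearest entry is either just before or just at/after the insert point.
    candidates = [j for j in (i - 1, i) if 0 <= j < len(sensor_log)]
    best = min(candidates, key=lambda j: abs(times[j] - detection_ts),
               default=None)
    if best is None or abs(times[best] - detection_ts) > tolerance:
        return None
    return sensor_log[best][1]

log = [(100, {"temp": 21.5}), (160, {"temp": 22.0}), (220, {"temp": 22.4})]
```

The matched pair (detection result, environmental reading) is what gets encoded into the visual and environmental feature vectors that the fusion stage consumes.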
Description

Multi-mode detection method and system for diseases and insect pests

Technical Field

The invention belongs to the technical field of intelligent pest control, and in particular relates to a multi-mode pest and disease detection method and system.

Background

In pest and disease identification and control, traditional management relies mainly on a grower's experience and visual inspection. This approach is inefficient, is limited by individual expertise, and can barely detect the tiny lesions of early-stage disease, leading to high false-detection and miss rates. Over-reliance on pesticide spraying also raises the risk of agricultural products exceeding pesticide-residue limits. In recent years, smart-agriculture technologies typified by artificial intelligence and the Internet of Things have offered potential solutions to these problems. Deep-learning-based object detection models, particularly the YOLO series, show great potential for crop disease identification; in the prior art, researchers have improved recognition accuracy for various crop pests and diseases by refining the model structure, for example by introducing an attention mechanism into YOLOv. However, when these techniques are applied directly to specific crop-planting scenarios, a series of technical bottlenecks remains. First, at the level of the core recognition algorithm, general object detection models lack sufficient accuracy for crops whose diseases present small-scale and visually similar features.
Model training depends heavily on high-quality datasets, and existing public datasets lack large-scale, finely labeled data covering the various pests, diseases, and nutrient-deficiency symptoms of specific crops; as a result, model generalization is weak, and performance drops markedly under the complex illumination and occlusion of real fields. Second, at the system-architecture level, existing smart-agriculture solutions depend on cloud computing, making real-time response hard to guarantee in field environments with poor network conditions. Although some research has introduced edge computing devices (e.g., Raspberry Pi) for local processing, simple model deployment does not fundamentally solve the efficient fusion of multimodal data (e.g., images, temperature and humidity, soil data): different sensors differ in sampling frequency and data format, so data synchronization is poor and a unified, accurate decision basis is hard to form. In addition, the hardware deployment cost of existing systems is high, and the tension between energy consumption and reliability in harsh agricultural environments is pronounced, making such systems difficult to popularize among cost-sensitive small and medium-sized farms. The field therefore urgently needs a technical scheme for specialty-crop planting management that achieves high-precision identification, strong environmental adaptability, low-cost deployment, and intelligent decision-making, so as to break through the limitations of the prior art and genuinely advance the intelligent upgrading of the planting industry. The common technical schemes for pest and disease identification mainly comprise the following: 1.
For intelligent pest and disease identification, schemes based on general object detection models are currently the most common technical path. Such schemes typically adopt common model architectures such as YOLOv directly, training and deploying them on collected crop images. For example, technical verification shows that on a specific dataset the YOLOv n model achieves an overall precision of 0.831 for crop diseases and a mean average precision (mAP50) of 0.616, demonstrating basic object detection capability. However, this approach has an inherent drawback: its general-purpose design does not adequately account for the particularities of specific crop diseases. For disease categories with fine lesion features and low contrast against healthy tissue, identification accuracy drops markedly, with mAP50 of only 0.514 and 0.364 respectively, exposing the core problem of insufficient perception of small-target diseases. Moreover, the model adapts poorly to complex field imaging conditions (such as illumination changes and occlusion by branches and leaves), and its robustness falls short of production-level requirements. If