CN-122024071-A - Agricultural pest identification method and system based on deep learning

CN122024071A

Abstract

The invention provides an agricultural pest identification method and system based on deep learning. The method constructs multi-source image data by collecting visible light images and near infrared images of crops, and models different shooting conditions by combining environment domain labels. On this basis, the leaf body region is located, lesion saliency is calculated, and target lesion candidate regions are generated, so as to suppress interference from the background and non-lesion regions. Further, visible light features and near infrared features are extracted respectively, and cross-source attention fusion is performed under the constraint of the lesion candidate regions to obtain a lesion-decoupled fusion characterization. The training stage introduces a cross-domain invariant constraint based on the environment domains, so that the model learns pest and disease features that are invariant to changes in illumination, region and crop variety.
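As a rough illustration of the candidate-region step summarized above (locating the leaf body, computing lesion saliency from the visible and near infrared channels, then thresholding and filtering connected components), the following minimal Python/OpenCV sketch is offered. The HSV leaf range, the saliency heuristic and the area/shape limits are illustrative assumptions, not values taken from the patent.

    import cv2
    import numpy as np

    def lesion_candidate_mask(vis_bgr: np.ndarray, nir: np.ndarray,
                              min_area: int = 50, max_area: int = 5000) -> np.ndarray:
        """Return a binary mask of lesion candidate regions inside the leaf body.

        vis_bgr: aligned visible-light image (H x W x 3, uint8)
        nir:     aligned near-infrared image (H x W, uint8)
        """
        # Leaf body: green-dominant pixels in HSV, cleaned with morphology.
        hsv = cv2.cvtColor(vis_bgr, cv2.COLOR_BGR2HSV)
        leaf = cv2.inRange(hsv, (25, 40, 40), (95, 255, 255))
        leaf = cv2.morphologyEx(leaf, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))

        # Lesion saliency: healthy tissue is bright in NIR and green in VIS,
        # so low NIR reflectance combined with low green response is suspicious.
        green = vis_bgr[:, :, 1].astype(np.float32) / 255.0
        nir_f = nir.astype(np.float32) / 255.0
        saliency = (1.0 - nir_f) * (1.0 - green)
        saliency = cv2.GaussianBlur(saliency, (5, 5), 0)
        saliency_u8 = cv2.normalize(saliency, None, 0, 255,
                                    cv2.NORM_MINMAX).astype(np.uint8)
        saliency_u8[leaf == 0] = 0  # restrict saliency to the leaf body

        # Threshold segmentation (Otsu) and connected-domain analysis.
        _, binary = cv2.threshold(saliency_u8, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)

        # Keep components whose area and aspect ratio are plausible for a lesion;
        # drop thin vein-like regions and background-sized blobs.
        mask = np.zeros_like(binary)
        for i in range(1, n):
            area = stats[i, cv2.CC_STAT_AREA]
            w, h = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
            if min_area <= area <= max_area and max(w, h) < 4 * min(w, h):
                mask[labels == i] = 255
        return mask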

Inventors

  • LI WANZHEN
  • Lin Dingshan

Assignees

  • 永春县农业科学研究所(永春县农业检测中心、永春县作物良种场)

Dates

Publication Date
2026-05-12
Application Date
2026-04-07

Claims (9)

  1. An agricultural pest identification method based on deep learning, characterized by comprising the following steps: S1, acquiring multi-source image data of crops to be identified, and recording a corresponding environment domain label for each group of multi-source image data, wherein the environment domain label is used for representing shooting environment differences; S2, performing normalization preprocessing on the multi-source image data to obtain normalized multi-source image data, and performing spatial registration and scale alignment on the normalized multi-source image data to generate aligned visible light images and aligned near infrared images in one-to-one correspondence; S3, extracting lesion candidate regions based on the aligned visible light image and the aligned near infrared image to obtain a lesion candidate region set, wherein the lesion candidate regions are used for limiting the leaf lesion regions attended to by the model and suppressing interference caused by background, vein texture, dew reflection or shadow; S4, inputting the aligned visible light image, the aligned near infrared image and the lesion candidate region set into a multi-source fusion recognition network, extracting visible light features and near infrared features respectively, and performing cross-source attention fusion under the constraint of the lesion candidate region set to obtain lesion-decoupled fusion features; S5, performing multi-domain joint training on the multi-source fusion recognition network based on a multi-domain training data set, calculating the recognition loss of the multi-source fusion recognition network on the sub-data set corresponding to each environment domain label, introducing a cross-domain invariant constraint to reduce the differences between the recognition losses corresponding to the environment domain labels, and enabling the multi-source fusion recognition network to learn pest and disease invariant features that are insensitive to environmental changes; S6, in the inference stage, processing any group of multi-source image data in the data set to be recognized according to S2 to S4 to obtain the corresponding lesion-decoupled fusion features, inputting them into a target recognition model for recognition inference, and outputting the pest and disease recognition result corresponding to the lesion-decoupled fusion features.
  2. The agricultural pest identification method based on deep learning as set forth in claim 1, wherein S1 specifically comprises: collecting a visible light image and a near infrared image of the crop to be identified at the same shooting position and the same shooting moment, the visible light image and the near infrared image forming one group of multi-source image data; recording a corresponding environment domain label for each group of multi-source image data, wherein the environment domain label at least comprises one of illumination condition, region or variety and is used for representing shooting environment differences; aggregating a plurality of groups of multi-source image data and their environment domain labels to form a multi-domain data set, wherein the multi-domain data set at least comprises a multi-domain training data set for model training and a data set to be identified for model inference; and establishing a unique index for each group of multi-source image data in the multi-domain data set, and binding the unique index with the environment domain label to form a traceable data entry, so that domain-divided training and evaluation can be carried out according to the environment domain label.
  3. The agricultural pest identification method based on deep learning as set forth in claim 1, wherein S2 specifically comprises: performing brightness and contrast normalization on each group of multi-source image data in the multi-domain data set to obtain a corresponding normalized visible light image and normalized near infrared image; denoising the normalized visible light image and the normalized near infrared image to obtain a corresponding denoised visible light image and denoised near infrared image, so as to reduce the interference of sensor noise on subsequent lesion extraction; performing spatial registration on the denoised near infrared image with the denoised visible light image as the reference image to obtain a registered near infrared image in pixel-level correspondence with the denoised visible light image; and performing unified scale transformation and consistent cropping on the denoised visible light image and the registered near infrared image to generate an aligned visible light image and an aligned near infrared image in one-to-one correspondence, which serve as input for subsequent lesion candidate region extraction and feature fusion.
  4. The agricultural pest identification method based on deep learning as set forth in claim 1, wherein S3 specifically comprises: extracting the leaf body region based on the aligned visible light image, the leaf body region being used for eliminating the influence of the background region on lesion candidate region extraction; calculating lesion saliency within the leaf body region based on the aligned visible light image and the aligned near infrared image to obtain a lesion saliency map, the lesion saliency representing the degree of saliency of suspected lesion pixels; performing threshold segmentation and connected-domain analysis on the lesion saliency map to generate a plurality of suspected lesion regions, the suspected lesion regions forming the lesion candidate region set; and screening the lesion candidate region set according to area, shape or texture consistency, removing pseudo regions caused by vein texture, dew reflection, shadow or soil background, and obtaining a target lesion candidate region set for subsequent fusion constraint.
  5. The agricultural pest identification method based on deep learning as set forth in claim 1, wherein S4 specifically comprises: inputting the aligned visible light image into a visible light feature branch to obtain visible light features; inputting the aligned near infrared image into a near infrared feature branch to obtain near infrared features; performing region mask constraint on the visible light features and the near infrared features based on the target lesion candidate region set to obtain candidate-region visible light features and candidate-region near infrared features focused on the candidate lesion regions; performing cross-source attention fusion on the candidate-region visible light features and the candidate-region near infrared features to obtain a fusion attention map, and generating the lesion-decoupled fusion features according to the fusion attention map; and outputting the lesion-decoupled fusion features as a unified characterization vector, and storing the unified characterization vector in association with the environment domain label in the training stage to support the subsequent multi-domain causal invariant training constraint.
  6. The agricultural pest identification method based on deep learning as set forth in claim 1, wherein S5 specifically comprises: dividing the multi-domain training data set into a plurality of sub-domain data sets according to the environment domain labels, each sub-domain data set corresponding to one environment domain label value or value combination; in each training iteration, taking each sub-domain data set as input and calculating the sub-domain recognition loss of the target recognition model on the corresponding sub-domain data set to obtain a plurality of sub-domain recognition losses; and constructing a cross-domain invariant constraint term based on the plurality of sub-domain recognition losses, the cross-domain invariant constraint term being used for measuring the differences between the sub-domain recognition losses and penalizing the sensitivity of the target recognition model to the environment domain labels.
  7. The agricultural pest identification method based on deep learning according to claim 6, wherein S5 further comprises taking a weighted combination of the sub-domain recognition losses and the cross-domain invariant constraint term as the training objective, and updating the parameters of the multi-source fusion recognition network until a convergence condition is met, so as to obtain a target recognition model satisfying the cross-domain invariant constraint.
  8. The agricultural pest identification method based on deep learning as set forth in claim 1, wherein S6 specifically comprises: acquiring one group of multi-source image data to be inferred from the data set to be identified as the inference input; executing S2 and S3 sequentially on the inference input to obtain the aligned visible light image, the aligned near infrared image and the target lesion candidate region set; executing S4 on the inference input to obtain the lesion-decoupled fusion features corresponding to the inference input; and inputting the lesion-decoupled fusion features into the target recognition model for recognition inference, and outputting the pest and disease recognition result corresponding to the inference input, wherein the pest and disease recognition result at least comprises a pest and disease category.
  9. An agricultural pest identification system based on deep learning, based on the agricultural pest identification method based on deep learning as claimed in any one of claims 1 to 8, comprising: a data acquisition module, which acquires visible light images and near infrared images of the crops to be identified and generates a unique index for each group of acquired images to form traceable multi-source image data; an environment domain labeling module, which records corresponding environment domain labels for the multi-source image data, the environment domain labels being used for representing shooting environment differences; a preprocessing and alignment module, which performs normalization, noise suppression, spatial registration, and scale and cropping alignment on the visible light images and the near infrared images to generate aligned visible light images and aligned near infrared images in one-to-one correspondence; a lesion candidate region extraction module, which locates the leaf body region based on the aligned visible light images, calculates lesion saliency in combination with the aligned near infrared images, and generates a lesion candidate region set within the leaf body region; a multi-source feature extraction module, which performs feature extraction on the aligned visible light images and the aligned near infrared images respectively to generate visible light features and near infrared features, and aligns the visible light features and the near infrared features in the channel dimension; a candidate region constraint module, which generates a candidate region constraint mask according to the target lesion candidate region set and performs region constraint on the visible light features and the near infrared features based on the candidate region constraint mask; a cross-source attention fusion module, which performs cross-source attention fusion on the candidate-region visible light features and the candidate-region near infrared features to generate a fusion attention map and cross-source attention fusion features; a feature aggregation module, which aggregates the cross-source attention fusion features under the constraint of the candidate region constraint mask to generate a lesion-decoupled fusion characterization vector; a classification and identification module, which outputs a pest and disease category prediction result and the corresponding prediction confidence based on the lesion-decoupled fusion characterization vector; and a cross-domain invariant training module, which, in the training stage, divides the training data into a plurality of sub-domain data sets according to the environment domain labels, calculates the recognition loss under each environment domain respectively, and constructs a cross-domain invariant constraint term so that the recognition losses of the model under different environment domains remain consistent, thereby obtaining the target recognition model.
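To make the mask-constrained cross-source attention fusion of claims 4, 5 and 9 more concrete, the following PyTorch sketch fuses visible light and near infrared feature maps with cross-attention restricted to candidate-region positions. The stand-in convolutional branches, the channel width and the use of nn.MultiheadAttention are assumptions for illustration; the patent does not specify these details.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CrossSourceFusion(nn.Module):
        """Cross-source attention fusion constrained by a lesion candidate mask."""

        def __init__(self, channels: int = 64, heads: int = 4):
            super().__init__()
            # Lightweight stand-ins for the visible / NIR feature branches.
            self.vis_branch = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.ReLU())
            self.nir_branch = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.ReLU())
            self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
            self.proj = nn.Linear(2 * channels, channels)

        def forward(self, vis: torch.Tensor, nir: torch.Tensor,
                    mask: torch.Tensor) -> torch.Tensor:
            """vis: (B,3,H,W), nir: (B,1,H,W), mask: (B,1,H,W) with values in {0,1}.
            Returns a (B, channels) lesion-decoupled fusion vector."""
            f_vis = self.vis_branch(vis)                      # (B,C,H,W)
            f_nir = self.nir_branch(nir)
            m = F.interpolate(mask, size=f_vis.shape[-2:], mode="nearest")

            # Region constraint: zero out features outside candidate regions.
            f_vis, f_nir = f_vis * m, f_nir * m

            q = f_vis.flatten(2).transpose(1, 2)              # (B, H*W, C)
            kv = f_nir.flatten(2).transpose(1, 2)

            # Cross-source attention: visible queries attend to NIR keys/values.
            fused, _ = self.attn(q, kv, kv)

            # Aggregate only over candidate-region positions (mask-weighted mean).
            weights = m.flatten(2).transpose(1, 2)            # (B, H*W, 1)
            denom = weights.sum(1).clamp(min=1e-6)
            pooled_fused = (fused * weights).sum(1) / denom
            pooled_vis = (q * weights).sum(1) / denom
            return self.proj(torch.cat([pooled_vis, pooled_fused], dim=-1))

Calling CrossSourceFusion()(vis, nir, mask) with tensors of the stated shapes yields one lesion-decoupled fusion vector per image pair, which a classification head can then consume.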
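Claims 6 and 7 measure the differences between per-domain recognition losses and penalize them in a weighted training objective. One hedged reading of that constraint is to add the variance of the domain-wise losses to their mean, as in the sketch below; the weighting factor lam and the use of plain cross-entropy are assumptions, not values specified by the patent.

    import torch
    import torch.nn.functional as F

    def multi_domain_objective(logits: torch.Tensor, labels: torch.Tensor,
                               domains: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
        """Weighted combination of per-domain recognition loss and a
        cross-domain invariance penalty (variance of the domain losses).

        logits:  (N, num_classes) network outputs for one mini-batch
        labels:  (N,) pest/disease class indices
        domains: (N,) environment-domain label indices (illumination/region/variety)
        """
        domain_losses = []
        for d in torch.unique(domains):
            idx = domains == d
            # Recognition loss restricted to the sub-data set of one environment domain.
            domain_losses.append(F.cross_entropy(logits[idx], labels[idx]))
        domain_losses = torch.stack(domain_losses)

        recognition_loss = domain_losses.mean()
        # Cross-domain invariant constraint: penalize spread across domains so the
        # model cannot trade accuracy in one shooting environment for another.
        if domain_losses.numel() > 1:
            invariance_penalty = domain_losses.var(unbiased=False)
        else:
            invariance_penalty = torch.zeros_like(recognition_loss)
        return recognition_loss + lam * invariance_penalty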

Description

Agricultural pest identification method and system based on deep learning

Technical Field

The invention relates to the technical field of agricultural intelligent perception and computer vision, and in particular to an agricultural pest and disease identification method and system based on deep learning.

Background

Accurate identification of crop pests and diseases is the basis for precise control and intelligent management in agricultural production. Existing pest and disease identification mostly relies on manual inspection or image recognition methods based on a single visible light image. The manual approach is strongly influenced by subjective experience, is inefficient, and can hardly meet the needs of large-area continuous monitoring; deep learning methods based on a single image, although achieving a certain recognition effect in controlled environments, generally suffer from insufficient stability in real field environments. Agricultural pests and diseases exhibit obvious unstructured characteristics in natural environments: the appearance of the same disease differs markedly under different illumination conditions, regional soil backgrounds, crop varieties and growth periods, while factors such as vein texture, dew reflection, shadow occlusion and background weeds are easily confused with lesion features, so that existing models often rely on environment-related "apparent shortcuts" for recognition. When the shooting environment or crop conditions change, recognition performance drops sharply and it is difficult to generalize across environments. Therefore, an agricultural pest identification method and system based on deep learning is provided.

The above information disclosed in this background section is only for enhancement of understanding of the background of the disclosure, and therefore may contain information that does not form the prior art already known to a person of ordinary skill in the art.

Disclosure of Invention

The invention aims to overcome the defects of the prior art, and provides an agricultural pest identification method and system based on deep learning to solve the technical problems described in the background.
In order to achieve the above purpose, the present invention provides the following technical solution: an agricultural pest identification method based on deep learning, comprising the following steps: S1, acquiring multi-source image data of the crops to be identified, wherein the multi-source image data at least comprises a visible light image and a near infrared image at the same shooting position, and recording a corresponding environment domain label for each group of multi-source image data, wherein the environment domain label is used for representing shooting environment differences and at least comprises one of illumination condition, region or variety; S2, performing normalization preprocessing on the multi-source image data to obtain normalized multi-source image data, performing spatial registration and scale alignment on the normalized multi-source image data to generate aligned visible light images and aligned near infrared images in one-to-one correspondence, and taking the aligned visible light images and the aligned near infrared images as input for subsequent feature extraction and fusion; S3, extracting lesion candidate regions based on the aligned visible light image and the aligned near infrared image to obtain a lesion candidate region set, wherein the lesion candidate regions are used for limiting the leaf lesion regions attended to by the model and suppressing interference caused by background, vein texture, dew reflection or shadow; S4, inputting the aligned visible light image, the aligned near infrared image and the lesion candidate region set into a multi-source fusion recognition network, extracting visible light features and near infrared features respectively, and performing cross-source attention fusion under the constraint of the lesion candidate region set to obtain lesion-decoupled fusion features, wherein the lesion-decoupled fusion features are used for representing stable visual characteristics of pests and diseases and reducing the proportion of background-related features; S5, in the training stage, performing multi-domain joint training on the multi-source fusion recognition network based on the multi-domain training data set, calculating the recognition loss of the multi-source fusion recognition network on the sub-data set corresponding to each environment domain label, and introducing a cross-domain invariant constraint to reduce the differences between the recognition losses corresponding to the environment domain labels.
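As a hedged sketch of the S2 preprocessing and alignment step (brightness/contrast normalization, denoising, registration of the near infrared image to the visible light reference, and unified rescaling), the snippet below uses OpenCV's ECC registration. The CLAHE normalization, the affine motion model and the output size are illustrative assumptions rather than details fixed by the patent.

    import cv2
    import numpy as np

    def align_pair(vis_bgr: np.ndarray, nir: np.ndarray,
                   out_size: tuple = (512, 512)) -> tuple:
        """Normalize, register and rescale one visible/NIR pair (S2).

        Returns (aligned_vis, aligned_nir) with pixel-level correspondence."""
        # Brightness/contrast normalization of both modalities (CLAHE).
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        vis_gray = clahe.apply(cv2.cvtColor(vis_bgr, cv2.COLOR_BGR2GRAY))
        nir_norm = clahe.apply(nir)

        # Light denoising to suppress sensor noise before registration.
        vis_gray = cv2.GaussianBlur(vis_gray, (3, 3), 0)
        nir_norm = cv2.GaussianBlur(nir_norm, (3, 3), 0)

        # Register NIR to the visible-light reference with an affine ECC model.
        warp = np.eye(2, 3, dtype=np.float32)
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
        _, warp = cv2.findTransformECC(vis_gray, nir_norm, warp,
                                       cv2.MOTION_AFFINE, criteria)
        nir_reg = cv2.warpAffine(nir_norm, warp,
                                 (vis_gray.shape[1], vis_gray.shape[0]),
                                 flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)

        # Unified scale transformation / consistent cropping (here: a plain resize).
        aligned_vis = cv2.resize(vis_bgr, out_size, interpolation=cv2.INTER_AREA)
        aligned_nir = cv2.resize(nir_reg, out_size, interpolation=cv2.INTER_AREA)
        return aligned_vis, aligned_nir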