
CN-121074541-B - Grain pile surface pest detection model training method, detection method and equipment

CN121074541B

Abstract

The application provides a method for training a grain pile surface pest detection model, together with a detection method and a device, in the technical field of image recognition. In the training method, different types of data samples, generated in advance from grain pile surface image data annotated with pest bounding box labels across multiple data domains, are respectively input into two model branches; each branch in turn performs backbone network feature extraction and back-door adjustment on the data samples to obtain a target feature map and pest detection result data. A total causal feature alignment loss for the current iteration round is then determined from the target feature maps produced by the two branches and used to optimize both branches. The application can effectively improve the domain generalization capability of the grain pile surface pest detection model and its adaptability to complex and variable real detection scenes, improve the convenience and effectiveness of applying the model, and further improve the accuracy and reliability of grain pile surface pest detection results.

Inventors

  • Tian Jida
  • Zhou Huiling
  • Sun Muyi
  • Hu Yizhi
  • Tian Zezhao

Assignees

  • Beijing University of Posts and Telecommunications (北京邮电大学)

Dates

Publication Date
2026-05-12
Application Date
2025-07-21

Claims (9)

  1. A grain pile surface pest detection model training method, characterized by comprising the following steps: in the current iteration round, respectively inputting different types of data samples, generated in advance from grain pile surface image data annotated with pest bounding box labels in multiple data domains, into a first model branch and a second model branch, so that the first model branch and the second model branch each perform backbone network feature extraction and back-door adjustment on the different types of data samples to obtain target feature maps and corresponding pest detection result data; and, if the current iteration round is the last iteration round, generating, from the first model branch, a grain pile surface pest detection model that outputs corresponding pest detection result data from grain pile surface image data. The first model branch and the second model branch each comprise a convolutional neural network, a causal intervention characterization module, a contrastive causal feature alignment module and a detection head connected in sequence, wherein weights are shared between the convolutional neural network of the first model branch and that of the second model branch, and between the detection heads of the two branches. The convolutional neural network is used for extracting features from an input data sample to output a corresponding original feature map. The causal intervention characterization module is used for determining a confounding factor estimate for the current iteration round from the original feature map corresponding to the data sample, the pest bounding box label corresponding to the data sample, and the confounding factor estimate obtained in the previous iteration round, and, based on a cross-attention mechanism, determining from the current round's confounding factor estimate and the original feature map a causal-intervention-characterized target feature map corresponding to the original feature map. The contrastive causal feature alignment module is used for passing the causal-intervention-characterized target feature map to the detection head, obtaining positive samples and hard negative samples from that feature map, computing a contrastive loss for each positive sample from the positive and hard negative samples obtained by the two contrastive causal feature alignment modules, and determining the total causal feature alignment loss for the current iteration round from the per-positive-sample contrastive losses. The detection head is used for outputting pest detection result data for the data sample from the causal-intervention-characterized target feature map.
  2. The grain pile surface pest detection model training method according to claim 1, wherein the causal intervention characterization module comprises a confounding factor estimation unit and a causal-intervention cross-attention unit. The confounding factor estimation unit is used for: obtaining, from the original feature map corresponding to the data sample and according to the pest bounding box label corresponding to the data sample, a feature vector group formed by the feature vectors falling within the pest bounding box label; taking each such feature vector as a positive sample corresponding to the data sample, and obtaining a pest target prediction box corresponding to the data sample from the positive samples; computing the intersection-over-union (IoU) between the pest target prediction box corresponding to the data sample and the pest bounding box label; taking the IoU as a weight to determine the weighted mean of the positive samples in the feature vector group; and updating the weighted mean of the positive samples by an exponential moving average based on the confounding factor estimate obtained in the previous iteration round, to obtain the confounding factor estimate of the current iteration round. The causal-intervention cross-attention unit is used for: sequentially performing channel-number adjustment and dimension conversion on the original feature map to obtain a corresponding adjusted feature vector; performing cross-attention-based back-door adjustment on the adjusted feature vector and the confounding factor estimate of the current iteration round to obtain a back-door-adjusted feature vector; restoring the dimensions of the back-door-adjusted feature vector to those of the original feature map to obtain a corresponding dimension-restored feature map; concatenating the dimension-restored feature map with the original feature map to obtain a corresponding concatenated feature map; and reducing the dimensions of the concatenated feature map to those of the original feature map to obtain the corresponding causal-intervention-characterized target feature map.
  3. The method of claim 2, wherein the contrastive causal feature alignment module comprises a sample mining unit, and the contrastive causal feature alignment modules of the first and second model branches share a single causal feature alignment unit. The sample mining unit is used for taking the feature vectors in the causal-intervention-characterized target feature map other than the positive samples as negative samples, sorting the negative samples by confidence from high to low, and selecting from the sorted negative samples, front to back, a number of negative samples equal to the number of positive samples to serve as hard negative samples. The causal feature alignment unit is used for computing the contrastive loss corresponding to each positive sample from the positive samples and hard negative samples respectively obtained by the sample mining unit of the first model branch and the sample mining unit of the second model branch, with the objective of reducing the feature distance between positive samples and increasing the feature distance between positive samples and hard negative samples, and for taking a weighted average of the per-positive-sample contrastive losses to obtain the total causal feature alignment loss for the current iteration round.
  4. The grain pile surface pest detection model training method according to claim 1, further comprising, before generating the different types of data samples from the grain pile surface image data annotated with pest bounding box labels in the multiple data domains: acquiring the image data set corresponding to each preset data domain, wherein each image data set comprises a plurality of grain pile surface image data annotated with pest bounding box labels; applying frequency-domain and spatial-domain randomization to each grain pile surface image datum to obtain corresponding enhanced image data; and generating the different types of data samples from each grain pile surface image datum and its corresponding enhanced image data.
  5. The method of claim 4, wherein generating the different types of data samples from each grain pile surface image datum and its corresponding enhanced image data comprises: merging the image data sets into a first training data set, and taking each grain pile surface image datum in the first training data set as one type of data sample for training the first model branch; and taking the enhanced image data corresponding to each grain pile surface image datum as the other type of data sample for training the second model branch.
  6. The method of claim 4, wherein generating the different types of data samples from each grain pile surface image datum and its corresponding enhanced image data comprises: dividing the image data sets into a second training data set and a third training data set such that the second and third training data sets each contain grain pile surface image data from different data domains; adding the enhanced image data corresponding to each grain pile surface image datum in the second training data set into the second training data set, and taking each grain pile surface image datum and each enhanced image datum in the second training data set as one type of data sample for training the first model branch; and adding the enhanced image data corresponding to each grain pile surface image datum in the third training data set into the third training data set, and taking each grain pile surface image datum and each enhanced image datum in the third training data set as the other type of data sample for training the second model branch.
  7. The method of claim 1, wherein generating, from the first model branch, a grain pile surface pest detection model that outputs corresponding pest detection result data from grain pile surface image data comprises: taking the model formed by the convolutional neural network and the detection head of the first model branch, with the convolutional neural network connected to the detection head, as the grain pile surface pest detection model.
  8. A method for detecting pests on the surface of a grain pile, comprising the following steps: collecting target grain pile surface image data in a grain storage space; and inputting the target grain pile surface image data into a grain pile surface pest detection model so that the model outputs pest detection result data corresponding to the target grain pile surface image data, wherein the grain pile surface pest detection model is trained in advance by the grain pile surface pest detection model training method according to any one of claims 1 to 7.
  9. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the grain pile surface pest detection model training method of any one of claims 1 to 7 and/or the grain pile surface pest detection method of claim 8.
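The confounding factor estimate of claim 2 — an IoU-weighted mean of positive-sample feature vectors, folded into an exponential moving average across iteration rounds — can be sketched as follows. This is an illustrative reading of the claim, not the patented implementation; the function names, the per-box weighting scheme, and the momentum value are assumptions.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-8)

def update_confounder(pos_feats, pred_boxes, gt_boxes, prev_estimate, momentum=0.9):
    """IoU-weighted mean of positive feature vectors, merged into an
    exponential moving average of the previous round's estimate.
    `momentum` is an assumed hyperparameter, not from the patent."""
    weights = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    weights = weights / (weights.sum() + 1e-8)
    weighted_mean = (weights[:, None] * pos_feats).sum(axis=0)
    return momentum * prev_estimate + (1.0 - momentum) * weighted_mean
```

In each round the returned vector would replace `prev_estimate`, so the confounder estimate drifts slowly toward the current batch's IoU-weighted positive mean rather than jumping with every batch.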
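Claim 3's sample mining and contrastive alignment objective (pull cross-branch positives together, push positives away from high-confidence negatives) can be sketched as below. The claim does not name a specific loss; the InfoNCE-style formulation, the cosine similarity, and the temperature `tau` are assumptions chosen to match the stated objective.

```python
import numpy as np

def mine_hard_negatives(neg_feats, neg_scores, k):
    """Sort negatives by confidence, descending, and keep the top-k
    as hard negatives (k = number of positive samples in the claim)."""
    order = np.argsort(-np.asarray(neg_scores))
    return neg_feats[order[:k]]

def contrastive_alignment_loss(pos_a, pos_b, hard_negs, tau=0.1):
    """InfoNCE-style sketch: for each positive pair (one feature per
    branch), maximize similarity to the paired positive relative to
    the hard negatives, then average the per-positive losses."""
    def sim(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)
    losses = []
    for p, q in zip(pos_a, pos_b):
        pos_term = np.exp(sim(p, q) / tau)
        neg_terms = sum(np.exp(sim(p, n) / tau) for n in hard_negs)
        losses.append(-np.log(pos_term / (pos_term + neg_terms)))
    return float(np.mean(losses))
```

A plain mean is used here where the claim specifies a weighted average; the weighting scheme is not given in the claim, so it is left out of the sketch.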
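Claim 4's frequency-domain randomization is not specified further in this excerpt; one common instantiation is Fourier amplitude swapping (replace the low-frequency amplitude spectrum with that of an image from another domain while keeping the original phase). The sketch below illustrates that technique as a plausible reading, with `beta` (the swapped low-frequency band) as an assumed parameter.

```python
import numpy as np

def frequency_randomize(img, ref, beta=0.1):
    """Swap the centered low-frequency amplitude of `img` with that of
    a reference image `ref` from another domain, keeping img's phase.
    `beta` controls the fraction of the spectrum that is swapped."""
    f_img = np.fft.fftshift(np.fft.fft2(img, axes=(0, 1)), axes=(0, 1))
    f_ref = np.fft.fftshift(np.fft.fft2(ref, axes=(0, 1)), axes=(0, 1))
    amp, pha = np.abs(f_img), np.angle(f_img)
    amp_ref = np.abs(f_ref)
    h, w = img.shape[:2]
    bh, bw = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2
    # Replace the central (low-frequency) amplitude block.
    amp[ch - bh:ch + bh, cw - bw:cw + bw] = amp_ref[ch - bh:ch + bh, cw - bw:cw + bw]
    f_new = amp * np.exp(1j * pha)
    out = np.fft.ifft2(np.fft.ifftshift(f_new, axes=(0, 1)), axes=(0, 1))
    return np.real(out)
```

Because only amplitude (roughly, style and illumination statistics) is exchanged and phase (structure) is kept, pest locations and bounding box labels remain valid for the enhanced image, which is what lets the enhanced data reuse the original annotations.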

Description

Grain pile surface pest detection model training method, detection method and equipment

Technical Field

The application relates to the technical field of image recognition, and in particular to a grain pile surface pest detection model training method, a grain pile surface pest detection method, and grain pile surface pest detection equipment.

Background

The propagation and invasion of grain-storage pests are among the key factors threatening the safe storage of grain: the resulting infestations not only damage the quantity and quality of grain but may also lead to food-safety problems, so real-time pest monitoring is of great significance. Because pests on the surface of a grain pile are easy to observe, detecting them has become an extremely important supervision task. At present, existing grain pile surface pest detection methods achieve automatic detection by adopting a target detection model. However, these methods do not consider the model's domain generalization problem, so the model is effective only in a specific scene; when it faces a new detection scene, new training data must be collected and the original model fine-tuned. Consequently, multiple specialized models must be maintained for different environments, and deployment costs are high.

Disclosure of Invention

In view of this, embodiments of the present application provide a grain pile surface pest detection model training method, detection method, and apparatus to obviate or ameliorate one or more of the disadvantages of the prior art.

One aspect of the application provides a grain pile surface pest detection model training method comprising the following steps. In the current iteration round, different types of data samples, generated in advance from grain pile surface image data annotated with pest bounding box labels in multiple data domains, are respectively input into a first model branch and a second model branch, so that the two branches each perform backbone network feature extraction and back-door adjustment on the different types of data samples to obtain target feature maps and corresponding pest detection result data. If the current iteration round is the last iteration round, a grain pile surface pest detection model that outputs corresponding pest detection result data from grain pile surface image data is generated from the first model branch.

In some embodiments of the application, the first model branch and the second model branch each comprise a convolutional neural network, a causal intervention characterization module, a contrastive causal feature alignment module and a detection head connected in sequence, wherein weights are shared between the convolutional neural networks of the two branches and between their detection heads. The convolutional neural network is used for extracting features from an input data sample to output a corresponding original feature map. The causal intervention characterization module is used for determining a confounding factor estimate for the current iteration round from the original feature map corresponding to the data sample, the pest bounding box label corresponding to the data sample, and the confounding factor estimate obtained in the previous iteration round, and, based on a cross-attention mechanism, determining from the current round's confounding factor estimate and the original feature map a causal-intervention-characterized target feature map corresponding to the original feature map. The contrastive causal feature alignment module is used for passing the causal-intervention-characterized target feature map to the detection head, obtaining positive samples and hard negative samples from that feature map, computing a contrastive loss for each positive sample from the positive and hard negative samples obtained by the two contrastive causal feature alignment modules, and determining the total causal feature alignment loss for the current iteration round from the per-positive-sample contrastive losses. The detection head is used for outputting pest detection result data for the data sample from the causal-intervention-characterized target feature map. In some embodiments of the application, the causal intervention characterization mod