CN-121999478-A - Down feather identification method and system based on deep learning

CN121999478A

Abstract

The invention discloses a deep learning-based down feather identification method and system. The method comprises the following steps: S1, acquiring an original down feather image and preprocessing it to obtain a processed down feather feature map; S2, inputting the obtained feature map into an improved residual network, the WT-ResNet model, which comprises at least one WT-ResNet layer that processes the input feature map and outputs a final feature map containing deep semantic information; S3, inputting the final feature map into a classification activation layer, obtaining a probability value for the down image classification through an activation function, and judging whether the down image shows fresh down or recycled down based on that probability value.
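A minimal sketch of the step S3 decision rule described in the abstract, combining the sigmoid activation of claims 7-8 with the 0.5 threshold of claim 9. The function name, signature and return format are illustrative, not taken from the patent:

```python
import math

def classify_down(z: float, threshold: float = 0.5):
    """Map a model logit z to a class probability via the sigmoid
    function, then apply the patent's threshold rule: a probability
    strictly greater than the threshold means fresh down, otherwise
    recycled down."""
    p = 1.0 / (1.0 + math.exp(-z))
    label = "fresh down" if p > threshold else "recycled down"
    return p, label
```

For example, a logit of z = 0 maps to p = 0.5, which under the strict "greater than" rule of claim 9 falls on the recycled-down side of the boundary.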

Inventors

  • Li Ziyin
  • Sun Guojun
  • Wang Jing
  • Su Rina
  • Lv Zebin

Assignees

  • 中国计量大学 (China Jiliang University)
  • 杭州海关丝类检测中心 (Silk Inspection Center of Hangzhou Customs)
  • 中国海关科学技术研究中心 (China Customs Science and Technology Research Center)

Dates

Publication Date
2026-05-08
Application Date
2025-12-31

Claims (10)

  1. A down feather identification method based on deep learning, characterized by comprising the following steps: S1, acquiring an original down feather image, and preprocessing the acquired original down feather image to obtain a processed down feather feature map; S2, inputting the obtained feature map into an improved residual network WT-ResNet model, wherein the WT-ResNet model comprises at least one WT-ResNet layer, and outputting a final feature map comprising deep semantic information after the WT-ResNet layer processes the input feature map; S3, inputting the output final feature map into a classification activation layer, processing through an activation function to obtain a probability value of the down image classification, and judging whether the down image is fresh down or recycled down based on the probability value.
  2. The down feather identification method based on deep learning according to claim 1, wherein step S2 specifically includes: S21, performing a two-dimensional discrete wavelet transform on the input feature map to generate four frequency components LL1, LH1, HL1 and HH1 of a second layer; S22, performing a two-dimensional discrete wavelet transform on LL1 again to obtain four frequency components LL2, LH2, HL2 and HH2 of a third layer; S23, performing depthwise convolution on LL2, LH2, HL2 and HH2 to obtain four processed frequency components LL2', LH2', HL2' and HH2'; S24, performing an inverse two-dimensional discrete wavelet transform on LL2', LH2', HL2' and HH2' to obtain a reconstructed third-layer feature map F3; S25, fusing the second-layer component LL1 with the third-layer feature map F3, and performing depthwise convolution and an inverse two-dimensional discrete wavelet transform on the fused four frequency components to obtain a reconstructed second-layer feature map F2; S26, performing depthwise convolution on the input feature map, and fusing the processed feature map with the second-layer feature map F2 to obtain a first-layer fused feature map F1'; S27, processing the first-layer fused feature map F1' through a normalization layer and an activation layer, and fusing it with the input feature map to obtain the output feature map of a residual block; and S28, cyclically stacking the WT-ResNet layers N times according to steps S21-S27 to obtain the final feature map containing deep semantic information.
  3. The down feather identification method based on deep learning according to claim 2, wherein the generation of the four frequency components LL1, LH1, HL1 and HH1 of the second layer in step S21 is expressed as: (LL1, LH1, HL1, HH1) = DWT(X); wherein X represents the original input feature map, LL represents the low-frequency component of the original input feature map, LL1 represents the low-frequency component of the second layer, and LH1, HL1 and HH1 represent the horizontal, vertical and diagonal high-frequency components of the second layer, respectively.
  4. The down feather identification method based on deep learning according to claim 3, wherein the four processed frequency components LL2', LH2', HL2' and HH2' obtained in step S23 are expressed as: LL2' = f_LL2 * LL2, LH2' = f_LH2 * LH2, HL2' = f_HL2 * HL2, HH2' = f_HH2 * HH2; wherein * denotes depthwise convolution, f_LL2 denotes a low-pass filter, and f_LH2, f_HL2 and f_HH2 denote horizontal, vertical and diagonal high-pass filters, respectively.
  5. The down feather identification method based on deep learning according to claim 4, wherein the reconstructed third-layer feature map F3 obtained in step S24 is expressed as: F3 = IDWT(f_LL2' * LL2', f_LH2' * LH2', f_HL2' * HL2', f_HH2' * HH2'); wherein f_LL2' denotes a low-pass reconstruction filter, and f_LH2', f_HL2' and f_HH2' denote horizontal, vertical and diagonal high-pass reconstruction filters, respectively.
  6. The down feather identification method based on deep learning according to claim 5, wherein the output feature map of the residual block obtained in step S27 is expressed as: y = F(x, {W_i}) + x; wherein y represents the output feature map of the residual block, W_i represents a convolution weight, BN(·) represents batch normalization, and F(x, {W_i}) represents a residual mapping consisting of a series of convolution, normalization and activation functions.
  7. The down feather identification method based on deep learning according to claim 6, wherein the activation function used by the activation layer in step S3 is the sigmoid activation function.
  8. The down feather identification method based on deep learning according to claim 7, wherein the probability value of the down image classification is expressed as: σ(z) = 1 / (1 + e^(−z)); wherein z represents the value passed into the activation function after the input feature map has been processed by the previous layer, and σ(z) represents the probability value of the down image classification.
  9. The down feather identification method based on deep learning according to claim 8, wherein judging whether the down image is fresh down or recycled down based on the probability value in step S3 specifically comprises determining whether the probability value is greater than 0.5: if so, the original down image is fresh down; if not, the original down image is recycled down.
  10. A recognition system based on the deep learning-based down feather identification method according to any one of claims 1-9, comprising: an acquisition module, used for acquiring an original down feather image, and preprocessing the acquired original down feather image to obtain a processed down feather feature map; a processing module, used for inputting the obtained feature map into an improved residual network WT-ResNet model, wherein the WT-ResNet model comprises at least one WT-ResNet layer, and the WT-ResNet layer processes the input feature map and then outputs a final feature map containing deep semantic information; and an identification module, used for inputting the output final feature map into the classification activation layer, obtaining a probability value of the down image classification through activation function processing, and judging whether the down image is fresh down or recycled down based on the probability value.
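The wavelet decomposition and reconstruction steps of claims 2-5 (S21-S27) can be sketched with a single-level 2-D Haar transform. This is an illustrative NumPy reconstruction under stated assumptions, not the patent's implementation: the wavelet basis is assumed to be Haar, and the depthwise-convolution step S23 is replaced by an identity placeholder so the round trip is exact:

```python
import numpy as np

def dwt2_haar(x):
    """Single-level 2-D Haar DWT: split an even-sized (H, W) map into
    LL (low-frequency), LH, HL, HH (horizontal/vertical/diagonal
    high-frequency) sub-bands of shape (H/2, W/2)."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]   # top-left / top-right of each 2x2 block
    c, d = x[1::2, 0::2], x[1::2, 1::2]   # bottom-left / bottom-right
    ll = (a + b + c + d) / 2.0
    lh = (a + b - c - d) / 2.0
    hl = (a - b + c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return ll, lh, hl, hh

def idwt2_haar(ll, lh, hl, hh):
    """Inverse 2-D Haar DWT (perfect reconstruction of dwt2_haar)."""
    a = (ll + lh + hl + hh) / 2.0
    b = (ll + lh - hl - hh) / 2.0
    c = (ll - lh + hl - hh) / 2.0
    d = (ll - lh - hl + hh) / 2.0
    h, w = ll.shape
    x = np.empty((2 * h, 2 * w))
    x[0::2, 0::2], x[0::2, 1::2] = a, b
    x[1::2, 0::2], x[1::2, 1::2] = c, d
    return x

x = np.random.default_rng(0).random((8, 8))   # stand-in input feature map
ll1, lh1, hl1, hh1 = dwt2_haar(x)             # S21: second-layer components
ll2, lh2, hl2, hh2 = dwt2_haar(ll1)           # S22: third-layer components
# S23 (placeholder): the patent applies depthwise convolution to the
# third-layer components here; an identity map keeps the round trip exact.
f3 = idwt2_haar(ll2, lh2, hl2, hh2)           # S24: reconstructed F3 (== LL1)
f2 = idwt2_haar(f3, lh1, hl1, hh1)            # S25 (simplified fusion): F2
y = f2 + x                                    # S27: residual skip connection
```

With the identity placeholder, F3 recovers LL1 and F2 recovers the input exactly, which makes the perfect-reconstruction property of the DWT/IDWT pair easy to verify; in the patent, the learned depthwise filters modify the sub-bands before reconstruction.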

Description

Down feather identification method and system based on deep learning

Technical Field

The invention relates to the technical field of image recognition, and in particular to a down feather identification method and system based on deep learning.

Background

China is the world's largest producer and exporter of down, and the quality of down products directly affects market competitiveness and consumer experience. The identification and classification of down is therefore particularly important in production and quality inspection. Traditional down identification and classification relies primarily on visual assessment by experienced human inspectors. Typically, an inspector seals down in a transparent bag and observes key characteristics such as appearance, fiber structure, bulk and impurity distribution with the naked eye, so as to judge the quality of the down and to distinguish fresh down from recycled down. Although intuitive and feasible, this approach is limited by subjective bias, ambient lighting and the inspector's level of experience, and struggles to achieve standardized, efficient quality control. Because of differences in inspectors' experience and the influence of subjective factors, manual visual inspection finds it difficult to maintain stable accuracy over long inspection periods and is easily disturbed by fatigue, inconsistent lighting conditions and inconsistent individual judgment standards. To improve the efficiency and consistency of down identification, fully automatic down identification methods based on deep learning have gradually become a research hotspot. Currently, most research focuses on distinguishing down species, such as duck down from goose down, and some progress has been made in such classification tasks.
However, for identifying fresh down and recycled down, the accuracy of existing algorithms remains low because the two differ only slightly in appearance, making it difficult to meet the high-precision detection requirements of industrial production. In view of the shortcomings of the prior art, the invention therefore provides a down feather identification method and system based on deep learning.

Disclosure of Invention

Aiming at the defects of the prior art, the invention provides a down feather identification method and system based on deep learning that can accurately distinguish fresh down from recycled down, replacing the traditional mode of manual identification by an experienced inspector with a fully automatic detection scheme that ordinary staff can easily operate. By introducing deep learning technology, intelligent analysis and efficient discrimination of down quality are realized, the accuracy and consistency of detection are improved, labor costs are greatly reduced, and a more scientific, efficient and widely applicable intelligent detection solution is provided for the industry.
In order to achieve the above purpose, the present invention adopts the following technical scheme: a down feather identification method based on deep learning comprises the following steps: S1, acquiring an original down feather image, and preprocessing the acquired original down feather image to obtain a processed down feather feature map; S2, inputting the obtained feature map into an improved residual network WT-ResNet model, wherein the WT-ResNet model comprises at least one WT-ResNet layer, and outputting a final feature map comprising deep semantic information after the WT-ResNet layer processes the input feature map; S3, inputting the output final feature map into a classification activation layer, processing through an activation function to obtain a probability value of the down image classification, and judging whether the down image is fresh down or recycled down based on the probability value. Further, step S2 specifically includes: S21, performing a two-dimensional discrete wavelet transform on the input feature map to generate four frequency components LL1, LH1, HL1 and HH1 of a second layer; S22, performing a two-dimensional discrete wavelet transform on LL1 again to obtain four frequency components LL2, LH2, HL2 and HH2 of a third layer; S23, performing depthwise convolution on LL2, LH2, HL2 and HH2 to obtain four processed frequency components LL2', LH2', HL2' and HH2'; S24, performing an inverse two-dimensional discrete wavelet transform on LL2', LH2', HL2' and HH2' to obtain a reconstructed third-layer feature map F3; S25, fusing the second-layer component LL1 with the third-layer feature map F3, and performing depthwise convolution and an inverse two-dimensional discrete wavelet transform on the fused four frequency components to obtain a reconstructed second-layer feature map F2; S26, performing de