CN-121982401-A - Automatic fritillary bulb identification system and method based on hierarchical deep learning
Abstract
The invention discloses an automatic fritillary bulb identification system and method based on hierarchical deep learning, and relates to the technical field of intelligent detection of traditional Chinese medicinal materials. The system comprises a hardware integration and image acquisition module, an image processing and target positioning module, and a hierarchical identification algorithm module. In the method, category pairs with lower similarity (such as Songbei and Pingbei) are classified and identified by a ConvNeXt‑Tiny network, while category pairs with higher similarity (such as Lubei and Yibei) are classified and identified by a dual-branch ConvNeXt model, in which texture and morphology branches constructed in parallel respectively extract local micro-texture and overall macro-morphology features, and training is optimized with a joint loss function that fuses a tuple margin penalty term so as to enhance inter-class discrimination. The invention realizes full-flow automation from sample feeding to output of the identification result, offers high identification accuracy and speed, and is suitable for rapid, nondestructive batch detection in traditional Chinese medicine circulation.
Inventors
- CHEN TING
- REN JUAN
- ZHANG MEI
- LIU XIAONA
- MA JUNJUN
Assignees
- 北京市科学技术研究院分析测试研究所(北京市理化分析测试中心) (Institute of Analysis and Testing, Beijing Academy of Science and Technology (Beijing Center for Physical and Chemical Analysis))
Dates
- Publication Date: 2026-05-05
- Application Date: 2026-01-26
Claims (10)
- 1. An automatic fritillary bulb identification system based on hierarchical deep learning, characterized by comprising: a hardware integration and image acquisition module for realizing automatic feeding and transmission of fritillary samples and acquiring sample images under standard illumination conditions; an image processing and target positioning module for processing the acquired sample image and extracting and cutting out a normalized fritillary bulb target region image; and a hierarchical identification algorithm module for inputting the fritillary target region image into a corresponding deep learning model for classification identification according to preset fritillary category pair similarity information.
- 2. The fritillary automatic identification system based on hierarchical deep learning according to claim 1, wherein the hierarchical identification algorithm module adopts a hierarchical identification strategy; the hierarchical identification algorithm module comprises: a basic identification unit for classifying and identifying category pairs with low similarity (Songbei and Pingbei) by adopting a ConvNeXt‑Tiny network; and an enhancement identification unit for classifying and identifying category pairs with high similarity (Lubei and Yibei) by adopting a dual-branch ConvNeXt model.
- 3. The fritillary automatic identification system based on hierarchical deep learning of claim 2, wherein the dual-branch ConvNeXt model comprises: a texture branch for extracting local micro-texture features of fritillary from the input features; a morphology branch for extracting overall macro-contour features of fritillary from the input features; and a feature fusion and classification unit for fusing the texture feature vector output by the texture branch with the morphology feature vector output by the morphology branch, classifying the fused feature through a fully connected network, and outputting class probabilities.
- 4. The fritillary automatic identification system based on hierarchical deep learning of claim 3, wherein the texture branch sequentially comprises a 7×7 convolution layer, a ReLU activation function, an adaptive average pooling layer and a layer normalization layer, the texture branch outputs 256-dimensional texture feature vectors, the morphology branch sequentially comprises a 3×3 maximum pooling layer, an adaptive average pooling layer and a layer normalization layer, and the morphology branch outputs 768-dimensional morphology feature vectors.
- 5. The fritillary automatic identification system based on hierarchical deep learning according to claim 4, wherein the feature fusion and classification unit first splices the 256-dimensional texture feature vector and the 768-dimensional morphology feature vector into 1024-dimensional fusion features along a channel dimension, and then outputs class probabilities through a classifier comprising two fully connected layers.
- 6. The fritillary automatic identification system based on hierarchical deep learning according to any one of claims 2 to 5, wherein the training of the dual-branch ConvNeXt model in the enhancement identification unit uses a joint loss function: L = L_CE + λ·L_margin. Here L_CE is the standard cross-entropy loss, calculated as L_CE = −Σ_i y_i log(p_i), where y_i is the one-hot encoding of the true label and p_i is the model's predicted probability for class i; L_margin is the tuple margin penalty term designed for the Lubei‑Yibei category pair, defined as L_margin = max(0, m − |p_Lubei − p_Yibei|), where p_Lubei and p_Yibei respectively denote the model's predicted probabilities for Lubei and Yibei, m is the margin threshold, and λ is the balance coefficient.
- 7. An automatic fritillary bulb identification method based on hierarchical deep learning, characterized in that the system as claimed in any one of claims 1 to 6 is adopted, the method comprising the following steps: S1, automatically feeding and transmitting a fritillary sample through the hardware integration and image acquisition module, and acquiring a sample image under standard illumination conditions; S2, processing the sample image through the image processing and target positioning module, and extracting and cutting out a normalized fritillary bulb target region image; and S3, selecting a corresponding deep learning model through the hierarchical identification algorithm module according to the preset fritillary category pair similarity information, classifying and identifying the fritillary target region image, and outputting the result.
- 8. The automatic fritillary bulb identification method based on hierarchical deep learning according to claim 7, wherein the selecting of the corresponding deep learning model in step S3 comprises: invoking the ConvNeXt‑Tiny network for classification identification if the fritillary bulbs to be identified belong to a category pair with low similarity, and invoking the dual-branch ConvNeXt model for classification identification if they belong to a category pair with high similarity.
- 9. The automatic fritillary bulb identification method based on hierarchical deep learning according to claim 8, characterized in that the process of invoking the dual-branch ConvNeXt model for classification identification comprises the steps of: extracting a deep feature map of the input image through a backbone network to obtain input features; extracting local micro-texture features of fritillary bulbs from the input features through a parallel texture branch; extracting overall macro-contour features of fritillary bulbs from the input features through a parallel morphology branch; and fusing the texture feature vector and the morphology feature vector, classifying through a fully connected network, and outputting class probabilities.
- 10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method according to any one of claims 7 to 9.
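The dual-branch head of claims 3 to 5 can be sketched as follows. This is a minimal PyTorch sketch, not the patented implementation: the 768-channel input width (the final stage width of ConvNeXt-Tiny), the conv/pool strides, and the 512-unit hidden width of the classifier are assumptions chosen only to match the stated 256-, 768-, and 1024-dimensional feature sizes.

```python
import torch
import torch.nn as nn


class DualBranchHead(nn.Module):
    """Sketch of the dual-branch head described in claims 3-5.

    Assumptions (not specified in the claims): the backbone is
    ConvNeXt-Tiny, whose final stage emits a 768-channel feature map;
    strides, padding, and the classifier's hidden width are
    illustrative choices.
    """

    def __init__(self, in_channels: int = 768, num_classes: int = 2):
        super().__init__()
        # Texture branch: 7x7 conv -> ReLU -> adaptive avg pool -> LayerNorm,
        # producing a 256-dimensional texture feature vector (claim 4).
        self.texture = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.LayerNorm(256),
        )
        # Morphology branch: 3x3 max pool -> adaptive avg pool -> LayerNorm,
        # producing a 768-dimensional morphology feature vector (claim 4).
        self.morphology = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.LayerNorm(in_channels),
        )
        # Fusion and classification: concatenate to a 1024-dimensional
        # fused feature, then two fully connected layers (claim 5).
        self.classifier = nn.Sequential(
            nn.Linear(256 + in_channels, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, num_classes),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.texture(feats), self.morphology(feats)], dim=1)
        return self.classifier(fused)
```

A feature map of shape (batch, 768, 7, 7), as ConvNeXt-Tiny produces for a 224×224 input, yields a (batch, 2) logit tensor.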
Description
Automatic fritillary bulb identification system and method based on hierarchical deep learning

Technical Field

The invention relates to the technical field of computer vision and quality detection of traditional Chinese medicinal materials, in particular to an automatic fritillary bulb identification system and method integrating automated hardware with a hierarchical deep learning algorithm, used to realize rapid, nondestructive, high-precision discrimination of genuine fritillary bulbs from counterfeits.

Background

Fritillary bulb is a precious traditional Chinese medicinal material, but the appearance and morphology of genuine products (such as Songbei and Lubei) are highly similar to those of counterfeits (such as Pingbei and Yibei). Traditional identification methods (such as empirical, microscopic, and physicochemical identification) suffer from strong subjectivity and complex operation, and either damage the samples or require expensive equipment, so they struggle to meet the demand for rapid, nondestructive, large-batch screening in market circulation. Image recognition based on deep learning offers a new route for Chinese medicinal material identification. Prior art such as improved YOLO-series models can detect fritillary bulbs automatically but has limited accuracy in fine-grained classification of highly similar categories, and single-branch convolutional neural networks (such as CNN and ResNet) perform well when inter-class differences are obvious but lack discrimination power for extremely similar categories such as the Lubei‑Yibei pair. Furthermore, existing research focuses on the algorithms themselves, lacks complete system solutions integrated with automation hardware, and is difficult to apply directly in actual production environments.
Therefore, a systematic solution integrating automatic sample handling, high-quality image acquisition, and high-precision intelligent identification is urgently needed to meet the challenge of rapid, nondestructive identification of traditional Chinese medicinal materials with similar appearance, such as fritillary.

Disclosure of Invention

In view of the above, the invention aims to provide an automatic fritillary bulb identification system and method based on hierarchical deep learning, so as to solve the technical problems in the prior art of lacking a full-flow automatic scheme and of insufficient identification accuracy for highly similar categories. To achieve the above purpose, the invention adopts the following technical scheme. Provided is an automatic fritillary bulb identification system based on hierarchical deep learning, comprising: a hardware integration and image acquisition module for realizing automatic feeding and transmission of fritillary samples and acquiring sample images under standard illumination conditions; an image processing and target positioning module for processing the acquired sample image and extracting and cutting out a normalized fritillary bulb target region image; and a hierarchical identification algorithm module for inputting the fritillary target region image into a corresponding deep learning model for classification identification according to preset fritillary category pair similarity information.
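The module division above implies a simple dispatch step: the identification algorithm module looks up the preset similarity of the category pair to be discriminated and routes the image to one of the two models. A minimal Python sketch, in which the pair labels and the dispatch table are illustrative assumptions rather than the disclosed data structures:

```python
# Minimal sketch of the hierarchical model-selection step.
# The pair labels and this dispatch table are illustrative assumptions;
# in the disclosed system the similarity of each fritillary category
# pair is preset, and each pair is routed to one of two models.

LOW_SIMILARITY_PAIRS = {frozenset({"Songbei", "Pingbei"})}   # assumed labels
HIGH_SIMILARITY_PAIRS = {frozenset({"Lubei", "Yibei"})}      # assumed labels


def select_model(category_pair: frozenset) -> str:
    """Route a fritillary category pair to the appropriate classifier."""
    if category_pair in LOW_SIMILARITY_PAIRS:
        return "ConvNeXt-Tiny"          # basic identification unit
    if category_pair in HIGH_SIMILARITY_PAIRS:
        return "dual-branch ConvNeXt"   # enhancement identification unit
    raise ValueError(f"No preset similarity info for pair: {sorted(category_pair)}")
```

Using `frozenset` makes the lookup order-independent, so `{"Yibei", "Lubei"}` resolves to the same model as `{"Lubei", "Yibei"}`.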
The hierarchical identification algorithm module adopts a hierarchical identification strategy and comprises: a basic identification unit for classifying and identifying category pairs with low similarity (Songbei and Pingbei) by adopting a ConvNeXt‑Tiny network; and an enhancement identification unit for classifying and identifying category pairs with high similarity (Lubei and Yibei) by adopting a dual-branch ConvNeXt model. The dual-branch ConvNeXt model includes: a texture branch for extracting local micro-texture features of fritillary from the input features; a morphology branch for extracting overall macro-contour features of fritillary from the input features; and a feature fusion and classification unit for fusing the texture feature vector output by the texture branch with the morphology feature vector output by the morphology branch, classifying the fused feature through a fully connected network, and outputting class probabilities. The texture branch sequentially comprises a 7×7 convolution layer, a ReLU activation function, an adaptive average pooling layer, and a layer normalization layer, and outputs 256-dimensional texture feature vectors; the morphology branch sequentially comprises a 3×3 maximum pooling layer, an adaptive average pooling layer, and a layer normalization layer, and outputs 768-dimensional morphology feature vectors.
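The joint loss of claim 6 can be sketched in plain Python. The |p_Lubei − p_Yibei| form of the margin penalty, and the example values of the margin threshold m and the balance coefficient λ, are assumptions reconstructed from the claim's verbal description, not the patented formula verbatim:

```python
import math


def joint_loss(probs, true_idx, margin=0.2, lam=0.5):
    """Cross entropy plus a tuple margin penalty for the Lubei-Yibei pair.

    probs:    predicted probabilities over (Lubei, Yibei)
    true_idx: index of the true class
    margin:   margin threshold m (illustrative value)
    lam:      balance coefficient lambda (illustrative value)
    """
    ce = -math.log(probs[true_idx])            # one-hot cross-entropy loss
    p_lu, p_yi = probs
    # Penalize predictions whose Lubei/Yibei probabilities are closer
    # than the margin, pushing the two confusable classes apart.
    penalty = max(0.0, margin - abs(p_lu - p_yi))
    return ce + lam * penalty
```

A confident prediction such as [0.9, 0.1] incurs no penalty, while an ambiguous [0.5, 0.5] pays both a larger cross-entropy term and the full margin penalty.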