CN-121999212-A - Intelligent clothing quality inspection method and system integrating image semantic segmentation
Abstract
The application relates to the technical field of artificial intelligence and computer vision, and discloses an intelligent clothing quality inspection method and system integrating image semantic segmentation, aiming to solve the problems that existing western-style trousers quality inspection relies on manual labor, is inefficient, applies non-uniform standards and is weak at recognizing fine defects. The method comprises: acquiring multi-view high-resolution images of western-style trousers; performing registration and posture normalization based on structural prior knowledge; generating a pixel-level semantic label map through a multi-scale context-aware semantic segmentation network; and on that basis performing defect localization, type discrimination, severity quantification and quality grading. The system comprises a multi-view image acquisition unit, a preprocessing unit, a semantic segmentation unit, a geometric and texture feature extraction unit, a fusion analysis unit and a quality judgment unit. Through the collaborative analysis of geometric constraints and texture anomalies, the application significantly improves defect identification precision and robustness and realizes fully automatic, efficient and standardized intelligent quality inspection.
Inventors
- Yan Meili
- Luo Xueyi
- Li Dexing
Assignees
- 酷特智能(山东)有限公司
Dates
- Publication Date
- 2026-05-08
- Application Date
- 2025-12-31
Claims (10)
- 1. An intelligent clothing quality inspection method integrating image semantic segmentation, characterized by comprising the following steps: acquiring multi-view high-resolution color image data of western-style trousers to be inspected under a standard illumination environment, wherein the multiple views comprise five fixed viewing angles: front, back, left side, right side and a local close-up; performing image registration and posture normalization on the multi-view high-resolution color image data based on a preset western-style trousers structural prior knowledge base to generate a standardized western-style trousers image set with a unified coordinate system and unified scale; inputting the standardized western-style trousers image set into a trained multi-scale context-aware semantic segmentation network and outputting a semantic category label map assigning each pixel to a semantic class; extracting the boundary contours of all key component regions based on the semantic category label map and calculating their geometric form parameters; in parallel, performing joint frequency-domain and spatial-domain feature extraction on the standardized western-style trousers image set to generate a texture anomaly response map characterizing regions that deviate from the normal fabric texture pattern; spatially aligning and fusing the geometric form parameters with the texture anomaly response map to construct a composite defect feature map combining geometric constraints and texture anomalies; performing defect localization, type discrimination and severity quantification based on the composite defect feature map; and comprehensively scoring the quantified defect information according to a preset quality judgment rule base to generate a final quality inspection conclusion, the conclusion comprising three grades: qualified, repair and rejection.
- 2. The intelligent clothing quality inspection method integrating image semantic segmentation according to claim 1, wherein inputting the standardized western-style trousers image set into a trained multi-scale context-aware semantic segmentation network and outputting a semantic category label map comprises: processing the standardized western-style trousers image set through a multi-scale context-aware semantic segmentation network with an encoder-decoder architecture, wherein the encoder is a backbone convolutional neural network that extracts low-level edge features, mid-level texture features and high-level semantic features layer by layer; the decoder comprises three parallel dilated (atrous) convolution branches whose dilation rates are set to two, four and eight respectively, capturing context information under different receptive fields; the outputs of the three branches are weighted by channel attention and then concatenated, upsampled to the original image resolution via transposed convolution, and finally passed through a Softmax function to output a pixel-level semantic probability distribution, generating the semantic category label map.
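The multi-branch decoder of claim 2 can be sketched in plain NumPy. This is an illustrative single-channel simplification, not the patented network: `dilated_conv2d` and `multiscale_branches` are hypothetical helper names, the channel attention is reduced to one softmax weight per branch, and a real implementation would use a deep-learning framework with learned kernels.

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """'Same'-padded cross-correlation of a 2-D map with a dilated 3x3 kernel."""
    k = kernel.shape[0]
    pad = rate * (k // 2)
    xp = np.pad(x, pad)                 # zero padding keeps the output size
    out = np.zeros_like(x, dtype=float)
    for i in range(k):
        for j in range(k):
            di, dj = i * rate, j * rate
            out += kernel[i, j] * xp[di:di + x.shape[0], dj:dj + x.shape[1]]
    return out

def multiscale_branches(feat, kernel, rates=(2, 4, 8)):
    """Three parallel dilated branches (dilation rates 2/4/8 per claim 2),
    weighted by a softmax channel attention over the branches."""
    branches = np.stack([dilated_conv2d(feat, kernel, r) for r in rates])
    gap = branches.mean(axis=(1, 2))    # global average pooling per branch
    attn = np.exp(gap - gap.max())
    attn /= attn.sum()                  # one attention weight per branch
    return branches * attn[:, None, None]   # stacked along the channel axis
```

With an identity kernel each branch reproduces the input, so the attention-weighted branches sum back to it; in the trained network each branch would instead respond to structure at its own receptive-field scale.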
- 3. The intelligent clothing quality inspection method integrating image semantic segmentation according to claim 2, wherein performing image registration and posture normalization on the multi-view high-resolution color image data based on the preset western-style trousers structural prior knowledge base comprises: detecting a vertical reference line in the image via the Hough transform to determine the main axis direction of the trousers; rotating the image about the waist center point until the main axis is parallel to the longitudinal axis of the image; and scaling the image isotropically according to the ratio of waistline to trouser length, so that the physical dimensions of all samples are mapped to a uniform pixel scale.
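As an illustration of the normalization in claim 3, the sketch below estimates the garment's main-axis angle and applies the rotate-then-scale mapping. It substitutes a PCA axis fit for the claim's Hough-transform line detection and operates on 2-D edge points rather than whole images; `normalize_pose` and its arguments are hypothetical names.

```python
import numpy as np

def main_axis_angle(points):
    """Angle (radians) of the dominant axis of edge points via PCA --
    a lightweight stand-in for the Hough vertical-line detection of claim 3."""
    centered = points - points.mean(axis=0)
    vals, vecs = np.linalg.eigh(centered.T @ centered)
    major = vecs[:, np.argmax(vals)]        # eigenvector of the largest eigenvalue
    return np.arctan2(major[1], major[0])

def normalize_pose(points, waist_center, target_len, current_len):
    """Rotate points about the waist center until the main axis is vertical,
    then scale isotropically so the trouser length maps to target_len."""
    theta = np.pi / 2 - main_axis_angle(points)   # rotate main axis onto +y
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    scale = target_len / current_len
    return (points - waist_center) @ R.T * scale + waist_center
```

After normalization the main axis is vertical regardless of the PCA eigenvector's sign, so all samples share one coordinate frame and pixel scale.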
- 4. The intelligent clothing quality inspection method integrating image semantic segmentation according to claim 3, wherein extracting the boundary contours of each key component region and calculating their geometric form parameters based on the semantic category label map comprises: performing polygonal fitting on the outer contours of the front-panel and back-panel regions and extracting side-seam segments, then computing the angular deviation between each side-seam segment and the ideal vertical line as the edge straightness; fitting a minimum bounding rectangle to each of the left and right pocket-flap regions and computing the angle between the line joining the two rectangle centers and the horizontal as the pocket-flap inclination angle; sampling one hundred points around the waistband region, fitting a cubic spline curve and computing the standard deviation of its curvature as the waistband arc curvature; and performing sub-pixel edge detection on the seam at the junction of the front and back fly, measuring the seam width on both sides and taking the maximum absolute difference as the seam width consistency.
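The geometric form parameters of claim 4 reduce to small coordinate computations. A minimal sketch, assuming the edge points and rectangle centers have already been extracted from the label map (the function names are illustrative):

```python
import numpy as np

def edge_straightness(seam_pts):
    """Deviation (degrees) of a fitted side-seam segment from ideal vertical.
    seam_pts is an (N, 2) array of (x, y) edge points."""
    slope, _ = np.polyfit(seam_pts[:, 1], seam_pts[:, 0], 1)  # x = slope*y + b
    return abs(np.degrees(np.arctan(slope)))    # 0 deg = perfectly vertical

def flap_tilt(left_center, right_center):
    """Angle (degrees) between the line joining the two pocket-flap
    rectangle centers and the horizontal (claim 4's inclination angle)."""
    dx, dy = np.subtract(right_center, left_center)
    return abs(np.degrees(np.arctan2(dy, dx)))

def seam_width_consistency(widths_left, widths_right):
    """Maximum absolute difference between paired seam-width samples."""
    return float(np.max(np.abs(np.asarray(widths_left) -
                               np.asarray(widths_right))))
```

Each returned scalar is later rendered into the geometric heat map of claim 6, where larger deviations from the standard value produce higher heat.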
- 5. The intelligent clothing quality inspection method integrating image semantic segmentation according to claim 4, wherein performing joint frequency-domain and spatial-domain feature extraction on the standardized western-style trousers image set to generate the texture anomaly response map comprises: converting the standardized trousers image into the YCbCr color space; applying a two-dimensional discrete cosine transform to the Y channel, retaining only the first sixteen rows and sixteen columns of the low-frequency coefficient matrix, and reconstructing a low-frequency component image; subtracting the low-frequency component image from the original Y-channel image to obtain a high-frequency residual image; applying a local binary pattern operator to the high-frequency residual image to generate a local texture response map; and performing Gaussian pyramid decomposition on the local texture response map, taking the third-level downsampled image as the texture anomaly response map.
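Claim 5's frequency-spatial pipeline can be sketched with an explicit orthonormal DCT-II matrix and a basic 8-neighbour LBP. The Gaussian-pyramid step is omitted for brevity, and a production system would use optimized library routines; the helper names here are illustrative.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] *= 1.0 / np.sqrt(n)
    m[1:] *= np.sqrt(2.0 / n)
    return m

def high_freq_residual(y, keep=16):
    """DCT the Y channel, keep the top-left keep x keep low-frequency
    coefficients, reconstruct, and subtract from the original (claim 5)."""
    Dh, Dw = dct_matrix(y.shape[0]), dct_matrix(y.shape[1])
    coeff = Dh @ y @ Dw.T
    mask = np.zeros_like(coeff)
    mask[:keep, :keep] = 1.0
    low = Dh.T @ (coeff * mask) @ Dw     # inverse DCT of low-pass coefficients
    return y - low

def lbp(img):
    """8-neighbour local binary pattern over the interior pixels."""
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy: img.shape[0] - 1 + dy,
                 1 + dx: img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code
```

A perfectly smooth patch is pure low frequency, so its residual vanishes and its LBP codes are uniform; textural anomalies such as snags or misweaves survive the subtraction and stand out in the response map.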
- 6. The intelligent clothing quality inspection method integrating image semantic segmentation according to claim 5, wherein spatially aligning and fusing the geometric form parameters with the texture anomaly response map to construct the composite defect feature map comprises: mapping the geometric form parameters back into the original image coordinate system as a heat map, where the further a parameter deviates from its standard value, the higher the heat value of the corresponding pixels; multiplying the geometric heat map and the texture anomaly response map pixel by pixel to obtain the composite defect feature map; and applying threshold segmentation to the composite defect feature map and extracting connected regions, each connected region corresponding to one candidate defect instance.
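The fusion and candidate extraction of claim 6 amount to a pixelwise product, a threshold, and connected-component labelling. The sketch below uses a simple iterative flood fill in place of a library labelling routine; all names are illustrative.

```python
import numpy as np

def composite_defect_map(geo_heat, tex_response):
    """Pixelwise product of the geometric-deviation heat map and the
    texture-anomaly response map (claim 6)."""
    return geo_heat * tex_response

def connected_regions(mask):
    """Label 4-connected regions of a boolean mask via iterative flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        count += 1
        stack = [(sy, sx)]
        while stack:
            y, x = stack.pop()
            if not (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]):
                continue
            if not mask[y, x] or labels[y, x]:
                continue
            labels[y, x] = count
            stack += [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    return labels, count

def candidate_defects(fused, threshold):
    """Threshold the composite map; each connected region is one candidate."""
    labels, n = connected_regions(fused > threshold)
    return [np.argwhere(labels == i + 1) for i in range(n)]
```

The multiplicative fusion means a region must be anomalous in both the geometric and the texture channel to survive thresholding, which is how the claim suppresses false positives from either cue alone.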
- 7. The intelligent clothing quality inspection method integrating image semantic segmentation according to claim 6, wherein performing defect localization, type discrimination and severity quantification based on the composite defect feature map comprises: extracting a feature vector from each candidate defect instance, the feature vector comprising area, aspect ratio, mean texture response intensity, maximum geometric deviation, color mean square error and edge gradient entropy; inputting the feature vector into a multilayer perceptron classifier having two hidden layers of sixty-four and thirty-two nodes and an output layer of nine nodes corresponding to nine defect types, thereby discriminating the defect type; and computing severity according to the quantification rule for that defect type, wherein for skipped stitches and broken threads the severity equals the defect length divided by the standard stitch pitch; for area-based defect types the severity equals the defect area as a percentage of the total component area; for color difference the severity equals the ΔE value between the defect region and the adjacent normal region in the CIELAB color space; and for creases and ironing marks the severity equals the mean response intensity of the texture anomaly response map within the region.
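The classifier in claim 7 is a small fixed-topology MLP (6 input features, hidden layers of 64 and 32 units, 9 output classes). A forward-pass sketch with untrained placeholder weights, assuming ReLU activations (the claim does not specify them), not the patented model:

```python
import numpy as np

FEATURES = 6       # area, aspect ratio, mean texture response, max geometric
                   # deviation, color mean square error, edge gradient entropy
DEFECT_TYPES = 9   # output classes per claim 7

def mlp_forward(x, params):
    """Forward pass of the 6-64-32-9 multilayer perceptron of claim 7.
    params = (W1, b1, W2, b2, W3, b3); weights are assumed pretrained."""
    W1, b1, W2, b2, W3, b3 = params
    h1 = np.maximum(0.0, x @ W1 + b1)       # hidden layer, 64 units
    h2 = np.maximum(0.0, h1 @ W2 + b2)      # hidden layer, 32 units
    logits = h2 @ W3 + b3                   # 9-way defect-type scores
    e = np.exp(logits - logits.max())
    return e / e.sum()                      # softmax class probabilities

def random_params(rng):
    """Untrained placeholder parameters with the claimed layer sizes."""
    return (rng.standard_normal((FEATURES, 64)) * 0.1, np.zeros(64),
            rng.standard_normal((64, 32)) * 0.1, np.zeros(32),
            rng.standard_normal((32, DEFECT_TYPES)) * 0.1,
            np.zeros(DEFECT_TYPES))
```

The argmax of the returned distribution selects the defect type, which in turn selects the per-type severity rule (stitch-pitch ratio, area percentage, ΔE, or mean texture response).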
- 8. The intelligent clothing quality inspection method integrating image semantic segmentation according to claim 7, wherein comprehensively scoring the quantified defect information according to the preset quality judgment rule base to generate the final quality inspection conclusion comprises: judging the garment qualified when the composite defect index is less than zero point three and no single-type defect severity exceeds threshold one; judging it repairable when the composite defect index lies between zero point three and zero point seven; and judging it rejected when the composite defect index is greater than zero point seven or any single-type defect severity exceeds threshold two, wherein threshold one is set to zero point five and threshold two is set to zero point eight.
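Claim 8's grading rules, under the reading that 0.3 and 0.7 bound the composite-index bands and 0.5/0.8 are the per-type severity thresholds (the machine translation of this claim is ambiguous, so this is one consistent interpretation), reduce to a short decision function:

```python
def grade(index, max_severity, t1=0.5, t2=0.8):
    """Three-way quality verdict sketched from claim 8: composite defect
    index bands at 0.3 / 0.7, per-type severity thresholds t1 and t2."""
    if index > 0.7 or max_severity > t2:
        return "rejection"               # severe overall or any critical defect
    if index < 0.3 and max_severity <= t1:
        return "qualified"               # low overall and no notable defect
    return "repair"                      # everything in between is repairable
```

Under this reading a garment with a low composite index but one defect of severity between t1 and t2 still drops to "repair" rather than "qualified".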
- 9. An intelligent clothing quality inspection system integrating image semantic segmentation, characterized by comprising: a multi-view image acquisition unit for capturing five fixed-view high-resolution images of the western-style trousers to be inspected inside a standard light-source box, wherein the light-source box uses a D65 standard daylight-simulation source with an illuminance uniformity of not less than ninety-five percent; an image preprocessing unit for receiving the multi-view images, performing white balance correction, denoising filtering and contrast enhancement, performing image registration and posture normalization based on the western-style trousers structural prior knowledge base, and outputting a standardized western-style trousers image set; a semantic segmentation unit for loading the trained multi-scale context-aware semantic segmentation network, performing pixel-level semantic annotation on the standardized image set and outputting a semantic category label map; a geometric feature extraction unit for extracting the boundary contours of all key components from the semantic category label map and calculating geometric form parameters such as edge straightness, symmetry-axis offset, seam width consistency, pocket-flap inclination angle and waistband arc curvature; a texture anomaly detection unit for performing joint frequency-domain and spatial-domain feature extraction on the standardized image set to generate a texture anomaly response map; a feature fusion unit for spatially aligning and fusing the geometric form parameters, in heat-map form, with the texture anomaly response map to generate a composite defect feature map; a defect analysis unit for performing connected-region analysis on the composite defect feature map, extracting candidate defect instances, and performing defect type discrimination and severity quantification via the multilayer perceptron classifier; and a quality judgment unit for comprehensively scoring the quantified defect information according to the preset quality judgment rule base, generating a qualified, repair or rejection conclusion and outputting a structured quality inspection report.
- 10. The intelligent clothing quality inspection system integrating image semantic segmentation according to claim 9, wherein the semantic segmentation unit is configured to: process the standardized western-style trousers image set through a multi-scale context-aware semantic segmentation network with an encoder-decoder architecture, wherein the encoder is a backbone convolutional neural network that extracts low-level edge features, mid-level texture features and high-level semantic features layer by layer; the decoder comprises three parallel dilated (atrous) convolution branches whose dilation rates are set to two, four and eight respectively, capturing context information under different receptive fields; the outputs of the three branches are weighted by channel attention and then concatenated, upsampled to the original image resolution via transposed convolution, and finally passed through a Softmax function to output a pixel-level semantic probability distribution, generating the semantic category label map.
Description
Intelligent clothing quality inspection method and system integrating image semantic segmentation
Technical Field
The invention relates to the technical field of artificial intelligence and computer vision, in particular to an intelligent clothing quality inspection method and system integrating image semantic segmentation.
Background
With the deep fusion of intelligent manufacturing and computer vision technology, quality inspection in the clothing industry is rapidly evolving toward automation and intelligence. Traditional clothing quality inspection relies mainly on manual visual inspection and suffers from inherent drawbacks such as low efficiency, strong subjectivity and a high miss rate. In recent years, automated inspection methods based on image processing have gradually been introduced on production lines, identifying defects by extracting fabric texture, seam trajectory or contour features. However, such methods focus on generic fabric surface flaw detection, lack the ability to understand garment structural semantics, and struggle to distinguish normal process features (such as pocket seams and waistband folds) from real defects (such as skipped stitches, holes and misalignment); for complex garments such as western-style trousers, which combine multi-component assembly, symmetric structures and fine tailoring features, the misjudgment rate rises markedly. In addition, existing systems generally treat image segmentation and defect judgment as disjoint stages: semantic information is not effectively integrated into the quality inspection decision flow, and sensitivity to key quality problems such as local deformation, slight folds or assembly deviation is insufficient. Intelligent quality inspection of western-style trousers in particular faces higher requirements for precision and structural perception.
Western-style trousers are a typical high-process-standard garment whose appearance quality depends heavily on structural indices such as front-back panel symmetry, side-seam straightness, pocket position consistency and waistband flatness. These indices relate not only to pixel-level defects but also to the spatial topological relationships between components and their semantic consistency. A technical path is therefore needed that can understand garment component semantics and thereby enable refined quality assessment. In the prior art, some schemes attempt to introduce deep learning models for end-to-end defect classification, but their black-box nature yields poor interpretability, training depends heavily on large numbers of labeled samples, and high-quality annotated data are hard to obtain for sub-category products such as western-style trousers. Other methods use a conventional image segmentation network (such as U-Net) to partition regions, but the segmentation result is not semantically aligned with quality inspection rules, so logical reasoning cannot be performed on specific inspection items such as whether the trouser legs are of equal length or whether the back pocket is askew. More importantly, current systems generally lack the ability to accurately parse the semantics of the critical structural components of western-style trousers (e.g., waistband, front pleats, back darts, side seams, leg openings), and therefore cannot stably extract structured features usable for quality judgment in the presence of slight deformation or illumination disturbance.
The problems above make it difficult for existing intelligent inspection schemes to balance accuracy, robustness and deployability in demanding scenarios such as western-style trousers, and a new intelligent clothing quality inspection method integrating image semantic segmentation with structured inspection logic is needed.
Disclosure of Invention
The invention provides an intelligent clothing quality inspection method and system integrating image semantic segmentation, aiming to solve the technical problems of reliance on manual visual judgment, low efficiency, non-uniform standards and insufficient recognition of fine defects in the existing western-style trousers appearance inspection process. By constructing a multi-scale semantic segmentation model oriented to the structural features of western-style trousers and combining it with high-precision visual imaging and a geometric-constraint guidance mechanism, the method achieves fully automatic, highly robust and high-precision identification and classification of surface flaws, sewing deviations and abnormal patterns on western-style trousers. According to one aspect of the present invention, there is provided an intelligent clothing quality inspection method of fused image semantic segmentation,