CN-121545139-B - Artificial intelligence bunchy yarn fabric visual detection method and system
Abstract
The application relates to the field of artificial intelligence and discloses a visual detection method and system for bunchy (slub) yarn fabric. It aims to solve problems of the prior art under complex texture backgrounds: low identification rates for micro defects, high false-detection and missed-detection rates, poor environmental adaptability, and delayed model updates. The method comprises: cooperatively collecting fabric images with a high-resolution linear array camera and a programmable multispectral light source; constructing a five-layer multi-scale texture feature pyramid network that fuses local gradients with global semantic features; deploying a cloud-side dynamic defect discrimination model that, guided by defect prior knowledge, performs pixel-level segmentation and classification; and establishing an adaptive parameter optimization engine and an incremental learning mechanism.
Inventors
- ZHAN LIANG
Assignees
- Hongwei Yancheng Textile Co., Ltd. (虹纬盐城纺织有限公司)
Dates
- Publication Date: 2026-05-08
- Application Date: 2025-10-24
Claims (10)
- 1. An artificial intelligence bunchy yarn fabric visual inspection method, characterized by comprising the following steps: performing line-by-line optical scanning of the surface of a continuously advancing slub yarn fabric, through a linear array scanning imaging device arranged directly above the fabric conveying path, at a sampling frequency of not lower than two thousand lines per second to obtain an original gray image sequence, the optical resolution of the linear array scanning imaging device being not lower than twelve pixels per millimeter; dynamically adjusting, through programmable multispectral lighting units arranged on both sides of the imaging area, the luminous intensity and incidence angle of each lighting channel according to the current advancing speed and surface reflection characteristics of the fabric, the programmable multispectral lighting units comprising a main lighting channel, a side-glancing lighting channel and a backlight compensation channel; performing stepwise downsampling and feature extraction on the original gray image sequence through a multi-scale texture feature pyramid network constructed in an embedded image processing unit, the network comprising a five-layer convolution structure in which the first layer uses three-by-three convolution kernels to extract local edges and gradient responses, the second layer uses five-by-five kernels to capture medium-scale texture periodicity, the third layer uses seven-by-seven kernels to model the macroscopic morphological contour of slub regions, the fourth layer adopts a dilated (atrous) convolution structure, and the fifth layer adopts a deformable convolution structure; receiving the fused feature map output by the multi-scale texture feature pyramid network through a dynamic defect discrimination model deployed on a cloud inference server, and executing the dual tasks of pixel-level defect segmentation and region-level defect classification, the dynamic defect discrimination model adopting an encoder-decoder architecture whose decoder introduces a defect-type prior knowledge guidance module; reversely adjusting, through an adaptive parameter optimization engine established in the central control unit, the exposure time of the linear array scanning imaging device, the light intensity ratio of the programmable multispectral lighting unit and the dilation rate of the dilated convolution in the multi-scale texture feature pyramid network according to the confidence score distribution and false-detection sample features output by the dynamic defect discrimination model; after a false-detection or missed-detection sample is confirmed by manual recheck, automatically adding the corresponding original image block and its annotation to a training sample library, and triggering a local weight update of the dynamic defect discrimination model through a defect sample incremental learning mechanism; obtaining encoder pulse signals of the fabric conveyor belt in real time through a fabric motion state synchronous compensation module, calculating the current travelling speed and acceleration, injecting the speed information as a time-stamped synchronization signal into the trigger control circuit of the linear array scanning imaging device, and feeding the acceleration information as a motion blur correction factor into the image preprocessing unit; and arranging three linear array scanning imaging devices in parallel across the fabric width through a multi-camera collaborative calibration and parallax correction subsystem, computing parallax fields in real time from feature-point matches in the overlap areas, and performing pixel-level alignment of adjacent camera images by bilinear interpolation.
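As a worked consequence of the numbers in claim 1 (an illustrative sketch, not part of the patent): the claimed minimum line rate and optical resolution jointly bound the fabric speed at which one scan line still maps to one pixel row.

```python
def max_fabric_speed_mm_s(line_rate_hz: float, resolution_px_per_mm: float) -> float:
    """Maximum conveying speed at which line sampling stays distortion-free:
    each scan line must cover no more than one pixel row of fabric."""
    return line_rate_hz / resolution_px_per_mm

# Claimed minimums: 2000 lines/s at 12 px/mm give roughly 166.7 mm/s (~10 m/min).
print(round(max_fabric_speed_mm_s(2000.0, 12.0), 1))
```

Higher line rates or coarser resolution raise this ceiling proportionally; the encoder-driven line triggering described in the motion compensation step keeps sampling locked to fabric displacement below it.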
- 2. The visual inspection method of an artificial intelligence bunchy yarn fabric according to claim 1, wherein dynamically adjusting the luminous intensity and incidence angle of each illumination channel according to the current travelling speed and surface reflection characteristics of the fabric, through the programmable multispectral illumination units arranged on both sides of the imaging area, comprises: the main illumination channel adopts a diffuse-reflection white light source to provide uniform base illumination; the side-glancing illumination channel adopts a low-angle oblique light source to enhance the shadow contrast of micro concave-convex structures on the fabric surface; and the backlight compensation channel adopts a transmission light source to suppress interference of the fabric's base structure with surface defect identification. The driving currents of the three-channel light source are closed-loop regulated by the central controller according to an image contrast feedback value acquired in real time: after every ten lines of images are processed, the gray-histogram standard deviation of the current image block is calculated; if the standard deviation is lower than thirty, the intensity of the side-glancing illumination channel is increased by twenty percent and the intensity of the main illumination channel is reduced by ten percent; if the standard deviation is higher than eighty, the intensity of the side-glancing illumination channel is reduced by thirty percent and the intensity of the backlight compensation channel is increased by fifteen percent.
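The closed-loop rule in claim 2 can be sketched as follows. This is illustrative Python; the channel names and the reading of "gray histogram standard deviation" as the gray-level standard deviation of the image block are my own interpretation.

```python
import numpy as np

def adjust_illumination(block: np.ndarray, intensities: dict) -> dict:
    """Claim 2 closed loop (sketch): every ten lines, compare the gray-level
    standard deviation of the current block against the 30/80 thresholds and
    rescale the three channel intensities accordingly."""
    std = float(block.std())
    out = dict(intensities)
    if std < 30:            # too flat: raise raking light +20%, lower main -10%
        out["side"] *= 1.20
        out["main"] *= 0.90
    elif std > 80:          # too harsh: lower raking light -30%, raise backlight +15%
        out["side"] *= 0.70
        out["back"] *= 1.15
    return out

# A uniform 10-line block (std = 0) takes the low-contrast branch.
flat = np.full((10, 2000), 128, dtype=np.uint8)
print(adjust_illumination(flat, {"main": 1.0, "side": 1.0, "back": 1.0}))
```

The in-between band (std of 30 to 80) leaves the drive currents unchanged, which is what keeps the loop from oscillating on well-exposed fabric.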
- 3. The method according to claim 2, wherein the stepwise downsampling and feature extraction of the original gray image sequence by the multi-scale texture feature pyramid network built in the embedded image processing unit comprises: the original gray image is cut into blocks two thousand pixels wide and one hundred twenty-eight rows high as network input; the first convolution layer uses sixty-four three-by-three kernels with stride one and padding one, outputting a sixty-four-channel feature map; the second layer uses one hundred twenty-eight five-by-five kernels with stride two and padding two, outputting a one-hundred-twenty-eight-channel feature map; the third layer uses two hundred fifty-six seven-by-seven kernels with stride two and padding three, outputting a two-hundred-fifty-six-channel feature map; the fourth layer adopts a dilated (atrous) convolution structure with a three-by-three kernel and a dilation rate of two, outputting a five-hundred-twelve-channel feature map; the fifth layer adopts a deformable convolution structure with a three-by-three kernel whose offsets are predicted by an additional convolution branch, outputting a one-thousand-twenty-four-channel feature map; and the feature maps output by each layer are skip-connected to the upsampled feature maps of the preceding layer for channel-wise concatenation, finally outputting a multi-scale fused feature map one eighth the size of the input image with one thousand twenty-four channels.
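The spatial geometry of claim 3 can be checked with the standard convolution output-size formula. Claim 3 fixes kernel, stride and padding only for layers one to three; the stride-two fourth layer and stride-one fifth layer below are assumptions needed to reach the claimed one-eighth output size.

```python
def conv_out(size: int, k: int, stride: int, pad: int, dilation: int = 1) -> int:
    """Standard convolution output-size formula with dilation."""
    eff_k = dilation * (k - 1) + 1
    return (size + 2 * pad - eff_k) // stride + 1

layers = [  # (kernel, stride, pad, dilation, out_channels); strides of the last
    (3, 1, 1, 1, 64),    # two layers are assumed, not stated in the claim
    (5, 2, 2, 1, 128),
    (7, 2, 3, 1, 256),
    (3, 2, 2, 2, 512),   # dilated conv, rate 2; padding chosen for "same" geometry
    (3, 1, 1, 1, 1024),  # deformable conv modeled as a plain 3x3 for shape purposes
]
h, w = 128, 2000         # the input tile size from claim 3 (rows x columns)
for k, s, p, d, c in layers:
    h, w = conv_out(h, k, s, p, d), conv_out(w, k, s, p, d)
print(h, w, c)           # one eighth of 128x2000, with 1024 channels
```

With these strides the three stride-two layers each halve the tile, giving the claimed one-eighth resolution at one thousand twenty-four channels.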
- 4. The method for visual inspection of an artificial intelligence bunchy yarn fabric according to claim 3, wherein receiving the fused feature map output by the multi-scale texture feature pyramid network through the dynamic defect discrimination model deployed on the cloud inference server and executing the dual tasks of pixel-level defect segmentation and region-level defect classification comprises: the encoder part directly receives the one-thousand-twenty-four-channel fused feature map; the decoder part comprises a four-layer transposed-convolution structure, each layer doubling the feature-map size and halving the channel count; a spatial attention mechanism is introduced after each transposed-convolution layer to compute an attention weight for each position of the feature map; the defect-type prior knowledge guidance module intervenes at the second decoder layer and dynamically adjusts the feature weights of each decoder layer according to the spatial distribution templates, in the historical defect sample library, of the four defect types of abnormal slub morphology, yarn breakage, stain contamination and uneven density; the final output layer adopts two parallel branches, the segmentation branch outputting a single-channel probability map and the classification branch outputting a four-channel probability map; binarization is performed with a probability threshold of zero point five, connected regions are extracted as defect candidates, the centroid coordinate is taken as the defect position, the largest probability in the classification branch is taken as the defect type label, and the average in-region probability of the segmentation branch is taken as the confidence score.
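The post-processing chain at the end of claim 4 can be sketched in plain numpy. This is an illustrative reading: connected regions are found by a simple 4-connected flood fill, and the class label is taken as the argmax of the region-averaged classification map (the claim does not say how per-pixel class probabilities are pooled per region).

```python
from collections import deque
import numpy as np

def extract_defects(seg_prob: np.ndarray, cls_prob: np.ndarray, thr: float = 0.5):
    """Claim 4 post-processing (sketch): binarize the segmentation map at `thr`,
    take each 4-connected region as a defect candidate, and report centroid,
    argmax class label, and mean in-region segmentation probability."""
    mask = seg_prob > thr
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    defects = []
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        queue, region = deque([(sy, sx)]), []
        seen[sy, sx] = True
        while queue:                      # BFS flood fill over the binary mask
            y, x = queue.popleft()
            region.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    queue.append((ny, nx))
        ys, xs = zip(*region)
        defects.append({
            "centroid": (float(np.mean(ys)), float(np.mean(xs))),   # defect position
            "label": int(cls_prob[:, ys, xs].mean(axis=1).argmax()),  # defect type
            "confidence": float(seg_prob[ys, xs].mean()),           # confidence score
        })
    return defects

# One 2x2 blob of probability 0.9, classified as type 1 everywhere.
seg = np.zeros((8, 8)); seg[2:4, 2:4] = 0.9
cls = np.zeros((4, 8, 8)); cls[1] = 1.0
print(extract_defects(seg, cls))
```

A production system would typically use a labeling routine such as `scipy.ndimage.label` instead of the hand-rolled flood fill; the output contract is the same.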
- 5. The method according to claim 4, wherein reversely adjusting, by the adaptive parameter optimization engine built in the central control unit, the exposure time of the linear array scanning imaging device, the light intensity ratio of the programmable multispectral lighting unit, and the dilation rate of the dilated convolution in the multi-scale texture feature pyramid network according to the confidence score distribution and false-detection sample features output by the dynamic defect discrimination model comprises: the central control unit continuously receives detection results returned by the cloud, builds a confidence-score histogram and counts the proportion of low-confidence samples; after every accumulated kilometer of fabric length, if the proportion of low-confidence samples exceeds five percent, a parameter optimization flow is triggered; all original image blocks and feature maps corresponding to samples with confidence lower than zero point three are extracted, and their distribution difference from high-confidence samples in feature space is calculated; if the difference is concentrated in the high-frequency gradient region, the side-glancing illumination intensity is increased by ten percent and the exposure time is prolonged by five milliseconds; if the difference is concentrated in the low-frequency texture region, the driving current balance of each LED unit of the main illumination channel is adjusted; if the difference manifests as insufficient scale sensitivity, the fourth-layer dilation rate is adjusted from two to three; the parameter adjustment amplitude is calculated by gradient descent, the loss function is defined as the negative log likelihood of the low-confidence sample proportion, the initial learning rate is zero point one, and each optimization round iterates ten times.
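The decision table in claim 5 reduces to a small dispatch function. This sketch covers only the rule table; the feature-space diagnosis that produces `difference_region`, and the gradient-descent amplitude tuning, are outside the snippet, and all parameter names are my own labels.

```python
def plan_adjustment(low_conf_ratio: float, difference_region: str, params: dict) -> dict:
    """Claim 5 rule table (sketch). Runs once per accumulated kilometer of fabric;
    only fires when more than 5% of samples are low-confidence."""
    out = dict(params)
    if low_conf_ratio <= 0.05:
        return out                                 # no optimization pass triggered
    if difference_region == "high_freq_gradient":
        out["side_intensity"] *= 1.10              # raking light +10%
        out["exposure_ms"] += 5.0                  # exposure +5 ms
    elif difference_region == "low_freq_texture":
        out["main_channel_balance"] = "rebalance"  # per-LED drive current rebalancing
    elif difference_region == "scale_sensitivity":
        out["dilation_rate"] = 3                   # layer-4 dilation rate: 2 -> 3
    return out

p = {"side_intensity": 1.0, "exposure_ms": 20.0, "dilation_rate": 2}
print(plan_adjustment(0.08, "scale_sensitivity", p)["dilation_rate"])
```

Keeping the trigger tied to a per-kilometer accumulation window, as the claim does, means one noisy stretch of fabric cannot thrash the imaging parameters line by line.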
- 6. The visual inspection method of an artificial intelligence bunchy yarn fabric according to claim 5, wherein constructing the defect sample incremental learning mechanism so that, after a false-detection or missed-detection sample is confirmed by manual review, the corresponding original image block and its annotation are automatically added to the training sample library and a local weight update of the dynamic defect discrimination model is triggered, comprises: the sample collection agent automatically crops the original image block corresponding to the sample, sized five hundred twelve by five hundred twelve pixels, and extracts the feature vector output at the fifth layer of the multi-scale texture feature pyramid network; the feature similarity calculation engine computes the cosine distance between this feature vector and each prototype vector in the historical sample library; if the cosine distance is smaller than zero point three, the sample is judged a high-similarity sample and a local weight update is triggered; the local gradient calculation unit constructs a weighted loss function whose weight is the square of the similarity score, backpropagation is performed only on neuron connections with similarity higher than zero point seven, the learning rate is set to one tenth of that of global training, and the number of iterations is fixed at five; the model parameter compressor applies channel pruning and weight quantization to compress the updated model to less than twenty-eight percent of its original size.
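The similarity gate in claim 6 can be sketched with numpy. Assumptions are labeled: cosine distance is taken as one minus cosine similarity, and the zero-point-three threshold is a reconstruction of the garbled "smaller than zero" in the translated claim (it is the complement of the zero-point-seven similarity threshold used for backpropagation).

```python
import numpy as np

def should_trigger_update(feature: np.ndarray, prototypes: np.ndarray,
                          distance_thr: float = 0.3) -> bool:
    """Claim 6 gate (sketch): cosine distance between the new sample's layer-5
    feature vector and each stored prototype; any distance below the threshold
    marks a high-similarity sample and triggers a local weight update."""
    f = feature / np.linalg.norm(feature)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    distances = 1.0 - p @ f            # cosine distance = 1 - cosine similarity
    return bool((distances < distance_thr).any())

protos = np.array([[1.0, 0.0], [0.0, 1.0]])       # two stored prototype vectors
print(should_trigger_update(np.array([0.9, 0.1]), protos))    # near first prototype
print(should_trigger_update(np.array([-1.0, 0.2]), protos))   # far from both
```

Gating updates on proximity to existing prototypes keeps the incremental mechanism from fine-tuning on genuinely novel defect types, which would instead warrant a full retraining pass.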
- 7. The method for visual inspection of an artificial intelligence slub yarn fabric according to claim 6, wherein obtaining encoder pulse signals of the fabric conveyor in real time by the fabric motion state synchronous compensation module and calculating the current travelling speed and acceleration comprises: a high-precision rotary encoder is mounted at the end of the fabric conveying roller, with a resolution of not lower than five thousand pulses per revolution; the time synchronization controller reads the encoder pulse count with a period of one microsecond and calculates the instantaneous speed by first-order difference; the acceleration is calculated by second-order difference; the speed value is used to generate the row trigger pulses so that the image acquisition interval of each row is strictly proportional to the fabric displacement; the acceleration value serves as the estimation parameter of a motion blur kernel, the blur kernel model being linear motion blur whose length direction coincides with the fabric travelling direction; and a Wiener-filter deconvolution algorithm sharpens the current line image in real time.
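The differencing in claim 7 is simple to sketch. The roller circumference and the one-millisecond tick in the example are assumed values for illustration; the claim itself fixes only the minimum encoder resolution of five thousand pulses per revolution and a fixed sampling period.

```python
def speed_and_accel(counts, dt_s: float, pulses_per_rev: int = 5000,
                    roller_circumference_mm: float = 300.0):
    """Claim 7 sketch: first-order difference of the encoder pulse count gives
    instantaneous speed (mm/s); second-order difference gives acceleration
    (mm/s^2). Circumference is an assumed roller dimension."""
    mm_per_pulse = roller_circumference_mm / pulses_per_rev
    speeds = [(b - a) * mm_per_pulse / dt_s for a, b in zip(counts, counts[1:])]
    accels = [(v2 - v1) / dt_s for v1, v2 in zip(speeds, speeds[1:])]
    return speeds, accels

# 10 pulses per 1 ms tick at 0.06 mm/pulse -> a steady 600 mm/s, zero acceleration.
speeds, accels = speed_and_accel([0, 10, 20, 30], dt_s=0.001)
print(speeds, accels)
```

The speed stream drives the row trigger so line spacing tracks displacement, while the acceleration stream parameterizes the linear motion-blur kernel that the Wiener deconvolution inverts.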
- 8. The method for visual inspection of an artificial intelligence bunchy yarn fabric according to claim 7, wherein arranging three linear array scanning imaging devices in parallel across the fabric width through the multi-camera collaborative calibration and parallax correction subsystem comprises: the transverse spacing between the three cameras is eight hundred millimeters, the field-of-view width of a single camera is nine hundred millimeters, and the overlap area is one hundred thirty-five millimeters; in the offline calibration stage, at least fifty calibration images of different poses are collected, and the intrinsic matrix and distortion coefficients of each camera are computed by Zhang's calibration method; in online detection, after every one thousand lines of images are acquired, scale-invariant feature points in the overlap areas of adjacent cameras are extracted, and mismatches are removed by the random sample consensus (RANSAC) algorithm; the parallax field is computed from the inlier matching pairs, the model being a third-order two-dimensional polynomial surface whose coefficients are fitted by least squares; the bilinear interpolation algorithm remaps the pixels of the right camera image according to the parallax field, the stitching seam is fused by weighted averaging, and the weight decays as a Gaussian of the distance to the seam centre line with a standard deviation of ten pixels.
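The seam fusion at the end of claim 8 can be sketched as a pair of complementary weight profiles. The exact falloff shape on each side of the centre line is my reading of the translated claim: the left camera keeps full weight up to the centre and then decays as a Gaussian with the claimed ten-pixel standard deviation, and the right camera takes the complement so every column sums to one.

```python
import numpy as np

def blend_weights(overlap_px: int, sigma: float = 10.0):
    """Claim 8 seam fusion (sketch): Gaussian decay of the left camera's weight
    past the seam centre line; the right camera gets the complement."""
    x = np.arange(overlap_px) - (overlap_px - 1) / 2.0   # signed distance to centre
    w_left = np.where(x <= 0, 1.0, np.exp(-x**2 / (2.0 * sigma**2)))
    return w_left, 1.0 - w_left

wl, wr = blend_weights(135)      # a 135-px-wide overlap strip, per the claim
print(float(wl[0]), float(wr[0]))  # left edge of the strip is pure left camera
```

With a ten-pixel sigma the transition is effectively complete within about thirty pixels of the centre line, well inside the claimed overlap strip, so each camera's lens-edge distortion never reaches the fused output.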
- 9. A detection system applying the artificial intelligence bunchy yarn fabric visual detection method of any one of claims 1 to 8, characterized by comprising: the linear array scanning imaging device, which cooperates with a high-speed electronic shutter and a global exposure control circuit to achieve undistorted progressive sampling of the continuously moving fabric surface; the programmable multispectral lighting unit, which provides an illumination field with independently adjustable spectral composition and spatial distribution and comprises a main illumination channel, a side-glancing illumination channel and a backlight compensation channel; the embedded image processing unit, which performs image preprocessing, feature extraction and preliminary defect screening, its hardware platform adopting a multi-core ARM processor and field-programmable gate array collaborative architecture; the cloud inference server, which hosts the complete inference process and parameter update mechanism of the dynamic defect discrimination model, its computing core adopting a graphics processor cluster architecture; the central control unit, which coordinates the working timing and parameter configuration of each subsystem, its core being an industrial programmable logic controller; the defect sample incremental learning module, which realizes online continuous optimization of the detection model, its software architecture comprising a sample collection agent, a feature similarity calculation engine, a local gradient calculation unit and a model parameter compressor; the fabric motion state synchronous compensation module, which eliminates motion blur and position drift errors, its hardware comprising a high-precision rotary encoder, a signal conditioning circuit and a time synchronization controller; and the multi-camera collaborative calibration and parallax correction subsystem, which generates full-width seamless detection images and comprises an offline calibration workstation and an online correction engine.
- 10. The artificial intelligence bunchy yarn fabric vision inspection system of claim 9, wherein the embedded image processing unit is configured to perform image preprocessing, feature extraction and preliminary defect screening tasks; the hardware platform adopts a multi-core ARM processor and field-programmable gate array collaborative architecture, the ARM processor running the operating system and communication protocol stack while the field-programmable gate array provides hardware acceleration for computation-intensive operations such as convolution, image scaling and histogram statistics; the memory is configured as eight gigabytes of DDR4 synchronous dynamic random access memory with a bandwidth of not lower than sixty-four gigabytes per second; an external two-terabyte solid state disk caches raw image data and intermediate feature maps; and the network interface supports gigabit Ethernet and industrial real-time Ethernet protocols, ensuring low-latency data interaction with the cloud server.
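A back-of-envelope data-rate check combining claims 1, 8 and 10 (illustrative; the patent states no data rate itself): three cameras with a nine-hundred-millimeter field at twelve pixels per millimeter, two thousand lines per second, eight-bit grayscale.

```python
# Raw acquisition rate across all three line-scan cameras, 8-bit gray assumed.
px_per_line = 900 * 12                       # 10800 px per camera line
bytes_per_s = 3 * px_per_line * 2000 * 1     # 3 cameras x lines/s x 1 byte/px
print(bytes_per_s / 1e6)                     # raw rate in MB/s
```

At roughly 65 MB/s raw, the claimed sixty-four-gigabyte-per-second memory bandwidth leaves ample headroom for the feature-pyramid intermediates, which are far larger than the raw lines once expanded to a thousand-plus channels.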
Description
Artificial intelligence bunchy yarn fabric visual detection method and system
Technical Field
The invention belongs to the field of artificial intelligence, and particularly relates to an artificial intelligence bunchy yarn fabric visual detection method and system.
Background
With the deep penetration of artificial intelligence technology into the intelligent transformation of the textile industry, automated visual detection of fabric surface defects has become a core link in improving production yield and quality-control efficiency. Traditional fabric detection relies on manual visual inspection or fixed-threshold image processing algorithms, whose core principle rests on a standard texture model and static feature extraction rules. However, slub yarn fabrics exhibit highly non-uniform texture features and localized morphological distortions, owing to their characteristic irregular slub structure and randomly distributed nubs, under dynamic conditions of lighting variation, loom vibration and yarn tension fluctuation. Lacking adaptive modeling capability for slub morphology, the fixed-threshold method readily misjudges genuine slub structures as defects, or misses hidden broken-yarn and hairiness-aggregation defects concealed by the fuzzy transition areas between slubs, so both the false alarm rate and the missed detection rate are high. In addition, modern textile production lines place higher demands on detection speed, defect classification precision and multi-variety flexible adaptability; the rigid algorithm architecture of traditional vision schemes can hardly achieve millisecond-level, high-recall detection under complex background interference, severely constraining the large-scale intelligent manufacturing of high-end slub yarn products.
Accordingly, an artificial-intelligence-driven slub yarn fabric vision inspection method and system is desired.
Disclosure of Invention
The present application has been made to solve the above technical problems. The embodiments of the application provide an artificial intelligence bunchy yarn fabric visual detection method and system, which achieve high-precision identification and localization of minor flaws on the surface of slub yarn fabric under complex texture backgrounds by constructing a cooperative working mechanism between a multi-scale texture feature extraction framework and a dynamic defect discrimination model. The system deploys a high-resolution linear array scanning camera and a programmable light source array at the physical layer, fuses local texture gradient responses with global fabric-structure semantic modeling at the algorithm layer, and establishes a real-time-feedback-based adaptive imaging parameter adjustment mechanism at the control layer, thereby completing fully automatic, highly robust visual detection of the typical defects of abnormal slub morphology, yarn breakage, stain contamination and uneven density in slub yarn fabric without manual intervention, remarkably improving detection efficiency and accuracy, reducing the false detection rate, and meeting the stringent online quality-control requirements of high-speed weaving production lines.
According to one aspect of the present application, there is provided an artificial intelligence bunchy yarn fabric visual inspection method, being the control method mentioned in the system, comprising: the linear array scanning imaging device is arranged directly above the fabric conveying path, the surface of the continuously advancing slub yarn fabric is subjected to progressive optical scanning at a sampling frequency of not lower than two thousand lines per second to obtain an original gray image sequence, and the optical resolution of the linear array scanning imaging device is not lower than twelve pixels per millimeter, ensuring sufficient pixel characterization capability even when the minimum detectable defect size in the slub yarn is not greater than zero point three millimeters. The programmable multispectral illumination unit comprises a main illumination channel, a side-glancing illumination channel and a backlight compensation channel; the main illumination channel adopts a diffuse-reflection white light source to provide uniform base illumination, the side-glancing illumination channel adopts a low-angle oblique light source to enhance the shadow contrast of microscopic concave-convex structures on the fabric surface, the backlight compensation channel adopts a transmission light source to suppress interference of the fabric's base structure with surface defect identification, and the driving currents of the three-channel light source are closed-loop regulated by the central controller according to an image contrast feedback value acquired in real time.