
CN-121659394-B - Clothing pattern automatic generation method and system based on user interaction

CN121659394B

Abstract

The invention relates to the technical field of clothing design graphic generation and discloses an automatic clothing pattern generation method and system based on user interaction. The method comprises: acquiring cross-platform global trend pattern data in real time; receiving user text keywords and generating a final conditional embedding vector; outputting a two-dimensional cut-piece set from a multi-modal pattern generation model; and performing affine mapping optimization and color-difference minimization on the two-dimensional cut-piece set to generate the final complete garment pattern. In contrast to the prior art, in which the aesthetic preferences of a target market and the adaptation requirements of the clothing profile cannot be satisfied simultaneously under cross-cultural trend conditions, causing pattern dislocation and interrupted seam textures, the method and system achieve automatic generation of highly consistent clothing patterns under different cultural aesthetics and various garment-block conditions through a cooperative pipeline of trend weight calibration, multi-modal conditional embedding and seam-continuity-constrained optimization, raising the automation level of clothing pattern design.

Inventors

  • HE ZHI
  • ZHENG ZEYU
  • YANG BOBO
  • WEN MIAOMIAO

Assignees

  • Hangzhou Zhiyi Technology Co., Ltd. (杭州知衣科技有限公司)

Dates

Publication Date
2026-05-08
Application Date
2026-02-05

Claims (7)

  1. An automatic clothing pattern generation method based on user interaction, characterized by comprising the following steps:
Step S10, acquiring a clothing pattern data set, executing a design-element identification task on the data set to extract a design category element i, acquiring a target cultural area c, processing the design category element i and the target cultural area c with a time-sequence-sensitive cross-platform trend weight calibration mechanism, and outputting a design category element weight set E, wherein the calibration mechanism specifically comprises: counting the occurrence frequency of the design category element i and outputting a platform frequency set; acquiring the active-user proportion and the clothing-topic proportion of each platform, determining a platform influence coefficient from these two proportions, and weighting the platform frequency set by the platform influence coefficient to obtain a fused cross-platform weighted frequency result; introducing a time attenuation factor, applying a time correction to the fused cross-platform weighted frequency result according to the time attenuation factor, and outputting a time-corrected weighted frequency result; calculating, from the time-corrected weighted frequency result, the frequency change rate of the design category element i within a preset time window; when the frequency change rate exceeds a set trend-increase threshold, multiplying the weighted frequency result of the design category element i by a preset trend amplification coefficient; and normalizing the frequency results of all elements to output the design category element weight set E;
Step S20, receiving user interaction input, sequentially executing text processing, image processing, design-category-element mapping processing and fusion processing based on the user interaction input and the design category element weight set E, and outputting a final conditional embedding vector F;
Step S30, inputting the final conditional embedding vector F into a pre-trained multi-modal clothing pattern generation model, which outputs a two-dimensional cut-piece set of the target clothing profile, wherein the multi-modal clothing pattern generation model comprises an input layer, a feature expansion layer, a cross-modal attention fusion layer, a residual convolution generation layer, a style modulation decoding layer and an output layer; the input layer receives the final conditional embedding vector F; the feature expansion layer performs vector expansion and multi-scale decomposition of the final conditional embedding vector F in a preset spatial-style mixed feature space; the cross-modal attention fusion layer performs cross-channel attention computation among text, image and style base-vector features to reinforce key information; the residual convolution generation layer generates a high-resolution pattern feature map while preserving global style consistency; the style modulation decoding layer performs spatial transformation and cut-piece adaptation of the high-resolution pattern feature map according to the cut-piece contour information of the target clothing profile; and the output layer outputs the two-dimensional cut-piece set of the target clothing profile;
Step S40, performing cross-cut-piece pattern splicing and color-difference minimization on the two-dimensional cut-piece set by a seam-continuity-constrained multi-scale affine mapping optimization method, and outputting an optimized cut-piece pattern mapping set, which specifically comprises: establishing, from the two-dimensional cut-piece set and the corresponding high-resolution pattern feature maps, an initial mapping relation set between cut pieces and patterns, and taking the initial mapping relation set as input for subsequent processing; extracting, from the initial mapping relation set, the color gradient features and texture direction features of adjacent cut pieces at each shared seam, constructing a seam continuity constraint set C, and taking the set C together with the initial mapping relation set as input for affine parameter optimization; performing affine parameter optimization on the initial mapping relation set at the global scale so as to minimize the objective value of the seam continuity constraint set C, and performing local non-rigid fine-tuning at the shared seams to obtain an updated mapping relation set; and performing color-difference minimization and color-consistency correction at the shared seams on the updated mapping relation set, outputting the optimized cut-piece pattern mapping set;
Step S50, inputting the output cut-piece pattern mapping set into a preset garment virtual assembly network, which outputs the final complete garment pattern.
  2. The method according to claim 1, wherein in step S10 the clothing pattern data set includes high-frequency tag patterns from the Pinterest platform, popular patterns from e-commerce sales data, and fashion picture data from social media, and the design category elements include floral symbols, geometric symbols, animal symbols and traditional symbols.
  3. The method for automatically generating a clothing pattern based on user interaction according to claim 1, wherein in step S20, receiving user interaction input, sequentially executing text processing, image processing, design-category-element mapping processing and fusion processing based on the user interaction input and the design category element weight set E, and outputting a final conditional embedding vector F specifically comprises: receiving user interaction input from a user in the target cultural area c, the user interaction input comprising text keywords, a reference image, and multi-round dialogue refinement instructions; text processing, namely encoding the text keywords and the multi-round dialogue refinement instructions with a semantic parsing method based on bidirectional encoder representations to obtain a text feature vector; image processing, namely performing color-texture analysis of the reference image with a global color-matching quantization method based on perceptual hashing, performing shape analysis of the reference image with a local structure extraction method based on morphological gradients and edge spectra, and fusing the results of the color-texture analysis and the shape analysis to obtain an image feature vector; design-category-element mapping processing, namely mapping the design category elements to corresponding style base vectors with a principal-component feature reconstruction method, and constructing a target cultural style vector from the design category element weight set E and the corresponding style base vectors; and fusion processing, namely constructing the final conditional embedding vector F from the text feature vector, the image feature vector and the target cultural style vector by adaptive attention-weighted fusion.
  4. The method for automatically generating a garment pattern based on user interaction according to claim 1, wherein in step S50 the preset garment virtual assembly network comprises: a cut-piece grid mapping unit, which adopts a UV mapping method based on per-vertex texture-coordinate reconstruction to map the output optimized cut-piece pattern mapping set onto the mesh surface of the target clothing profile, ensuring consistent texture proportion and avoiding stretch deformation during mapping; a seam continuity detection unit, which adopts a dual-channel detection method based on boundary-neighborhood feature matching to simultaneously detect the geometric position continuity and the color gradient consistency of the seam neighborhoods on the mesh surface and generate a seam continuity deviation index set; a local deformation adjustment unit, which, when a seam continuity deviation index in the set exceeds a preset deviation index threshold, adopts a constraint-optimization-based method for synchronized adjustment of mesh nodes and texture coordinates to cooperatively fine-tune the mesh node positions and the corresponding texture coordinates at the shared seams; and a surface rendering unit, which generates a visualized model of the complete clothing pattern after the deformation adjustment of the local deformation adjustment unit is completed.
  5. An automatic clothing pattern generation system based on user interaction, applied to the automatic clothing pattern generation method based on user interaction according to any one of claims 1 to 4, wherein the system comprises:
a trend pattern data processing module, configured to acquire a clothing pattern data set, execute a design-element identification task on the data set to extract a design category element i, acquire a target cultural area c, process the design category element i and the target cultural area c with a time-sequence-sensitive cross-platform trend weight calibration mechanism, and output a design category element weight set E, wherein the calibration mechanism specifically comprises: counting the occurrence frequency of the design category element i and outputting a platform frequency set; acquiring the active-user proportion and the clothing-topic proportion of each platform, determining a platform influence coefficient from these two proportions, and weighting the platform frequency set by the platform influence coefficient to obtain a fused cross-platform weighted frequency result; introducing a time attenuation factor, applying a time correction to the fused cross-platform weighted frequency result according to the time attenuation factor, and outputting a time-corrected weighted frequency result; calculating, from the time-corrected weighted frequency result, the frequency change rate of the design category element i within a preset time window; when the frequency change rate exceeds a set trend-increase threshold, multiplying the weighted frequency result of the design category element i by a preset trend amplification coefficient; and normalizing the frequency results of all elements to output the design category element weight set E;
a multi-modal feature construction module, configured to receive user interaction input from users in the target cultural area c and, based on the user interaction input and the design category element weight set E, sequentially execute text processing, image processing, design-category-element mapping processing and fusion processing, and output a final conditional embedding vector F;
a clothing pattern generation module, configured to input the final conditional embedding vector F into a pre-trained multi-modal clothing pattern generation model, which outputs a two-dimensional cut-piece set of the target clothing profile, wherein the multi-modal clothing pattern generation model comprises an input layer, a feature expansion layer, a cross-modal attention fusion layer, a residual convolution generation layer, a style modulation decoding layer and an output layer; the input layer receives the final conditional embedding vector F; the feature expansion layer performs vector expansion and multi-scale decomposition of the final conditional embedding vector F in a preset spatial-style mixed feature space; the cross-modal attention fusion layer performs cross-channel attention computation among text, image and style base-vector features to reinforce key information; the residual convolution generation layer generates a high-resolution pattern feature map while preserving global style consistency; the style modulation decoding layer performs spatial transformation and cut-piece adaptation of the high-resolution pattern feature map according to the cut-piece contour information of the target clothing profile; and the output layer outputs the two-dimensional cut-piece set of the target clothing profile;
a cut-piece pattern optimization module, configured to perform cross-cut-piece pattern splicing and color-difference minimization on the two-dimensional cut-piece set by a seam-continuity-constrained multi-scale affine mapping optimization method and output an optimized cut-piece pattern mapping set, which specifically comprises: establishing, from the two-dimensional cut-piece set and the corresponding high-resolution pattern feature maps, an initial mapping relation set between cut pieces and patterns, and taking the initial mapping relation set as input for subsequent processing; extracting, from the initial mapping relation set, the color gradient features and texture direction features of adjacent cut pieces at each shared seam, constructing a seam continuity constraint set C, and taking the set C together with the initial mapping relation set as input for affine parameter optimization; performing affine parameter optimization on the initial mapping relation set at the global scale so as to minimize the objective value of the seam continuity constraint set C, and performing local non-rigid fine-tuning at the shared seams to obtain an updated mapping relation set; and performing color-difference minimization and color-consistency correction at the shared seams on the updated mapping relation set, outputting the optimized cut-piece pattern mapping set;
and a garment assembly module, configured to input the output cut-piece pattern mapping set into a preset garment virtual assembly network, which outputs the final complete garment pattern.
  6. An automatic clothing pattern generation device based on user interaction, characterized in that it comprises a memory, a processor, and an automatic clothing pattern generation program based on user interaction that is stored in the memory and executable on the processor, wherein the program, when executed by the processor, implements the automatic clothing pattern generation method based on user interaction according to any one of claims 1 to 4.
  7. A computer program product, characterized in that it comprises an automatic clothing pattern generation program based on user interaction which, when executed by a processor, implements the automatic clothing pattern generation method based on user interaction according to any one of claims 1 to 4.
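The trend weight calibration recited in claim 1 (platform-influence weighting, time attenuation, trend amplification, normalization) can be sketched as follows. This is an illustrative reconstruction only, not the patented implementation: the function name, data layout, decay rate, growth threshold and amplification coefficient are all assumed for the example.

```python
import math

def calibrate_weights(observations, influence,
                      decay=0.01, growth_threshold=0.5, amplify=1.5,
                      prev_window=None):
    """Time-sequence-sensitive cross-platform weight calibration (sketch).

    observations: {element: [(platform, count, age_days), ...]}
    influence:    {platform: influence coefficient}, e.g. derived from the
                  platform's active-user and clothing-topic proportions
    prev_window:  {element: weighted frequency in the previous time window},
                  used to estimate the frequency change rate
    """
    corrected = {}
    for elem, obs in observations.items():
        # Cross-platform fusion with exponential time attenuation:
        # newer, higher-influence observations contribute more.
        corrected[elem] = sum(count * influence[p] * math.exp(-decay * age)
                              for p, count, age in obs)
    if prev_window:
        for elem in corrected:
            prev = prev_window.get(elem)
            if prev:
                rate = (corrected[elem] - prev) / prev  # frequency change rate
                if rate > growth_threshold:             # rising trend detected
                    corrected[elem] *= amplify          # trend amplification
    # Normalize all corrected frequencies into the weight set E.
    total = sum(corrected.values()) or 1.0
    return {elem: f / total for elem, f in corrected.items()}

# Small two-element example (all numbers invented for illustration).
obs = {
    "floral":    [("pinterest", 120, 5), ("ecommerce", 80, 30)],
    "geometric": [("pinterest", 40, 10)],
}
infl = {"pinterest": 1.2, "ecommerce": 0.8}
E = calibrate_weights(obs, infl, prev_window={"floral": 90.0, "geometric": 50.0})
```

In this toy run the rising "floral" element crosses the growth threshold, is amplified, and ends up dominating the normalized weight set E.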

Description

Clothing pattern automatic generation method and system based on user interaction

Technical Field

The invention relates to the technical field of clothing design graphic generation, in particular to an automatic clothing pattern generation method and system based on user interaction.

Background

At present, the design and production of clothing patterns mostly depend on a combination of manual creative work and static material libraries. Although deep learning and generative models (such as GANs and diffusion models) have seen wide application in image generation in recent years, clothing pattern design still has obvious deficiencies in dynamic adaptation to cross-cultural trends, multi-modal interactive understanding, intelligent layout on clothing profiles, and seam continuity control. For example, in the trend analysis link, existing schemes often rely on a single data source (such as the sales ranking of one e-commerce platform or the popular tags of one social platform) for popularity statistics; they lack a cross-platform trend fusion and time attenuation mechanism and cannot capture the time-varying weights of design elements under different cultural backgrounds. Particularly in global markets, regional preferences for colors, patterns, symbols and typesetting differ markedly, and traditional methods can hardly reflect the aesthetic weights of a target cultural area accurately at the generation stage.
In the user intention analysis link, most existing systems support only single-round text input; they lack multi-modal fusion capability, cannot simultaneously process multi-source information such as keywords, reference images and historical interaction records, and cannot perform semantic disambiguation and weight correction against a target-culture feature lexicon, so the generated results are prone to style deviation, cultural misuse and even symbolic-taboo conflicts. In the pattern generation link, existing deep generative models can output high-quality pattern samples, but their conditional control granularity is insufficient, and they lack a layer-by-layer feature generation strategy based on style modulation as well as a seamless tiling constraint, so the generated pattern cannot reliably keep a consistent texture direction and continuous seams after cutting and splicing. For finished garments that must be mapped onto different profiles (such as an A-line skirt, a hooded sweatshirt or an oversize shirt), most existing methods apply uniform scaling or simple cropping, ignoring local non-rigid deformation and seam-direction constraints, which causes pattern dislocation, uneven color difference and local stretching after garment assembly. Therefore, a method for automatically generating clothing patterns based on user interaction is needed, one that generates clothing patterns automatically under multi-modal conditions oriented to the target culture's aesthetics while keeping seams continuous and colors consistent, so as to improve the match between clothing patterns and market demand, the adaptation consistency of finished garment patterns, and the automation level of clothing pattern design.
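The seam color-consistency problem described above (addressed by the color-difference minimization of step S40) can be illustrated with a minimal per-channel offset correction at a shared seam. This sketch deliberately replaces the claimed multi-scale affine optimization with the simplest possible consistency step; the function name and the mean-offset scheme are assumptions for illustration.

```python
def equalize_seam_color(seam_a, seam_b):
    """Per-channel color-consistency correction at a shared seam (sketch).

    seam_a, seam_b: (r, g, b) pixel samples taken along the shared seam of
    two adjacent cut pieces. Returns the per-channel offset that shifts piece
    B's seam colors to match piece A's on average, plus piece B's corrected
    seam pixels.
    """
    n = len(seam_a)
    # Mean per-channel difference between the two sides of the seam.
    offsets = tuple(sum(a[c] for a in seam_a) / n - sum(b[c] for b in seam_b) / n
                    for c in range(3))
    # Shift piece B's seam pixels toward piece A, clamped to a valid range.
    corrected_b = [tuple(min(255.0, max(0.0, px[c] + offsets[c]))
                         for c in range(3))
                   for px in seam_b]
    return offsets, corrected_b

# Piece B's seam is slightly color-shifted; the correction levels both sides.
seam_a = [(100.0, 100.0, 100.0)] * 4
seam_b = [(110.0, 90.0, 100.0)] * 4
offsets, corrected = equalize_seam_color(seam_a, seam_b)
```

A production system would instead optimize affine mapping parameters under the seam continuity constraints, but the offset correction shows the kind of residual the color-consistency stage removes.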
Disclosure of Invention

In view of the above technical defects, the invention aims to provide an automatic clothing pattern generation method based on user interaction, so as to solve the technical problem in the prior art that the aesthetic preferences of the target market and the clothing-profile adaptation requirements cannot be satisfied simultaneously under cross-cultural trend conditions, causing pattern dislocation and interrupted seam textures. To solve this problem, the invention adopts the following technical scheme and provides an automatic clothing pattern generation method based on user interaction, comprising the following steps: Step S10, acquiring a clothing pattern data set, executing a design-element identification task on the data set, and extracting a design category element i; Step S20, receiving user interaction input, sequentially executing text processing, image processing, design-category-element mapping processing and fusion processing based on the user interaction input and the design category element weight set E, and outputting a final conditional embedding vector F; Step S30, inputting the final conditional embedding vector F into a pre-trained multi-modal clothing pattern generation model, which outputs a two-dimensional cut-piece set of the target clothing profile; Step S40, performing cross-cut-piece pattern splicing and color-difference minimization on the two-dimensional cut-piece set