CN-120976404-B - Clothing model image generation method and system


Abstract

The invention relates to the field of image processing, and in particular to a method and system for generating clothing model images. The method comprises: acquiring a clothing commodity image and extracting different types of features of the corresponding clothing from it; weighting and fusing these features to obtain fusion features; generating a set of models to be selected and from it obtaining candidate models; adjusting the body-type parameters of each candidate model with a 3D human posture model and adjusting its posture parameters; calculating the similarity of multiple feature types between each candidate model and the clothing commodity image; weighting and summing these similarities to obtain a comprehensive similarity for each candidate model; and taking the models whose comprehensive similarity exceeds a similarity threshold as the optimal models. The method improves the efficiency of clothing model image generation and reduces the difference between the clothing in the generated model image and the clothing in the original clothing commodity image.

Inventors

  • CEN DELIAN
  • ZHANG SUOXIN
  • GU XUPENG
  • NIU CAO
  • YANG BO

Assignees

  • 广州钛动科技股份有限公司

Dates

Publication Date
2026-05-08
Application Date
2025-06-17

Claims (7)

  1. A clothing model image generation method, characterized by comprising the steps of: acquiring a clothing commodity image and extracting different types of features of the corresponding clothing from the clothing commodity image, the extracted features comprising color features, material features, size features and style features; weighting and fusing the extracted features to obtain fusion features; inputting an initial model set and the fusion features into a generative adversarial network model to generate a set of models to be selected, calculating a loss function value for each model to be selected, and taking the models whose loss function value is smaller than a loss threshold as candidate models; adjusting the body-type parameters of each candidate model with a 3D human posture model to match the garment looseness corresponding to the clothing commodity image, and adjusting the posture parameters of each candidate model to simulate the dynamic effect of the garment; calculating the similarity of a plurality of feature types between each candidate model and the clothing commodity image, and weighting and summing these similarities to obtain the comprehensive similarity of each candidate model; if the clothing type corresponding to the clothing commodity image is loose, the value of the similarity threshold is 0.8, otherwise the value of the similarity threshold is 0.85; the size features include a looseness threshold, and the looseness threshold R is calculated as R = S_garment / S_body, where S_garment and S_body respectively denote the garment area and the body area of the model; the comprehensive similarity is calculated as Similarity = Σ_{k=1}^{N} w_k · (A_k · B_k) / (‖A_k‖ · ‖B_k‖), where Similarity denotes the comprehensive similarity, N the total number of feature types, w_k the weight of the k-th feature type, A_k the feature vector of the k-th feature type of the clothing commodity image, B_k the feature vector of the k-th feature type of the candidate model, and ‖·‖ the vector norm; the method for extracting the size features of the clothing comprises: inputting the clothing commodity image into a YOLO model to obtain a series of predicted bounding boxes, and calculating the intersection-over-union of each predicted bounding box with the true clothing boundary; selecting the bounding box with the largest intersection-over-union among all predicted bounding boxes as the high-confidence bounding box, obtaining the pixel-level size of the high-confidence bounding box and of the clothing commodity image, and normalizing the size of the high-confidence bounding box to obtain the standard clothing size, calculated as S_standard = (P_box / P_image) · S_reference, where S_standard denotes the standard clothing size, P_box the pixel-level size of the high-confidence bounding box, P_image the pixel-level size of the clothing commodity image, and S_reference the reference size.
  2. The clothing model image generation method of claim 1, wherein the plurality of feature types between the candidate model and the clothing commodity image comprise color features and size features.
  3. The method of claim 1, wherein extracting the color features of the clothing comprises extracting the color distribution, dominant color, secondary color, cold-color proportion and color semantic features of the clothing commodity image, as well as the color compatibility of the clothing commodity image with the model background, the color semantic features comprising "cold color scheme", "warm color scheme" and "contrast-color design".
  4. The clothing model image generation method according to claim 1, wherein the method for extracting the material features of the clothing comprises: obtaining texture features of the clothing commodity image using a gray-level co-occurrence matrix, obtaining the material type of the clothing commodity image using a VGG-16 model, and taking the texture features and the material type together as the material features of the clothing commodity image.
  5. The method of claim 1, wherein extracting the style features of the clothing comprises inputting the clothing commodity image into a preset Mask R-CNN model to obtain the attributes of each detected clothing part, the clothing parts comprising one or more of a collar, sleeves, buttons and zippers, wherein the attributes of the collar comprise the collar shape, the attributes of the sleeves comprise the sleeve-length type, and the attributes of the buttons and zippers comprise whether they are fastened.
  6. The clothing model image generation method according to any one of claims 1 to 5, wherein the types of extracted features further comprise outfit-matching information and a clothing style label corresponding to the clothing commodity image.
  7. A clothing model image generation system comprising a memory and a processor, the memory storing computer program instructions, wherein the program instructions, when executed by the processor, implement the clothing model image generation method of any one of claims 1 to 6.
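The weighted cosine-similarity scheme and the fit-dependent threshold of claim 1 can be illustrated with a short sketch. This is not the patented implementation: the function names (`comprehensive_similarity`, `similarity_threshold`) are hypothetical, and the example weights and feature vectors are invented for illustration.

```python
import numpy as np

def comprehensive_similarity(garment_feats, model_feats, weights):
    """Weighted sum of per-feature-type cosine similarities.

    garment_feats / model_feats: lists of feature vectors, one per type.
    weights: one weight per feature type (assumed to sum to 1).
    """
    total = 0.0
    for a, b, w in zip(garment_feats, model_feats, weights):
        a = np.asarray(a, dtype=float)
        b = np.asarray(b, dtype=float)
        total += w * float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return total

def similarity_threshold(is_loose):
    """Claim 1: 0.8 for loose garment types, 0.85 otherwise."""
    return 0.8 if is_loose else 0.85

# A candidate model is kept as "optimal" when its comprehensive
# similarity exceeds the threshold for the garment's fit type.
sim = comprehensive_similarity(
    [[1.0, 0.0], [0.6, 0.8]],   # garment color / size feature vectors
    [[1.0, 0.0], [0.6, 0.8]],   # identical candidate-model vectors
    [0.5, 0.5],
)
print(sim > similarity_threshold(is_loose=True))  # identical vectors give 1.0
```

Identical feature vectors give a cosine of 1 per type, so the comprehensive similarity equals the sum of the weights; dissimilar candidates score lower and fall below the threshold.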
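Claim 4's material-feature extraction relies on a gray-level co-occurrence matrix (GLCM). As a rough, self-contained illustration only: the patent does not specify offsets, quantization levels, or which GLCM statistics are used, so the single-offset matrix and the contrast statistic below are assumptions.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Normalized gray-level co-occurrence matrix for one pixel offset.

    img: 2-D array of integer gray levels in [0, levels).
    Counts how often gray level i co-occurs with gray level j
    at displacement (dx, dy), then normalizes to probabilities.
    """
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1.0
    return m / m.sum()

def glcm_contrast(m):
    """Contrast statistic: sum over (i - j)^2 * P(i, j)."""
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * m).sum())

# A flat patch has zero contrast; a striped patch does not.
flat = np.zeros((4, 4), dtype=int)
stripes = np.tile([0, 3], (4, 2))   # rows of 0,3,0,3
print(glcm_contrast(glcm(flat)), glcm_contrast(glcm(stripes)))
```

Statistics such as contrast, energy, and homogeneity computed from the GLCM would then be concatenated with the VGG-16 material-type prediction to form the material feature.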

Description

Clothing model image generation method and system

Technical Field

The invention relates to the technical field of image processing, and more particularly to a method and a system for generating clothing model images.

Background

A clothing commodity image is an image used specifically to display and sell apparel products. These images present the key elements of the product: through a high-quality image, the customer can clearly see the style, color, material and details of the garment, and thereby make a purchase decision. When a merchant lists clothing on an e-commerce platform such as Taobao or Pinduoduo, the merchant is usually required to upload a clothing commodity image. The traditional way of producing such an image is to have a real model try on the clothing and photograph it, but this method is costly. Alternatively, model templates in a model image library can be combined with image synthesis techniques to obtain a clothing model image. This method first acquires a dressed-figure image, recalls from the model image library the model templates whose posture is similar to that of the human body in the dressed-figure image, then determines the deformation parameters of the original clothing image in the dressed-figure image relative to the body image, transforms the original clothing image according to these deformation parameters to obtain a transformed clothing image, and finally composites the transformed clothing image onto the body image of the model template to obtain the model dressing image. This process involves many steps and is inefficient. Moreover, because it transforms the original clothing image using deformation parameters, it changes characteristics of the original clothing such as its shape and size, causing a large difference between the clothing in the generated clothing model image and the original clothing.

Disclosure of Invention

To solve the technical problems in the prior art that methods for generating clothing model images are inefficient and that the generated clothing model image differs greatly from the clothing in the original clothing commodity image, the invention provides the following aspects.

In a first aspect, the invention provides a method for generating a clothing model image, comprising: acquiring a clothing commodity image and extracting different types of features of the corresponding clothing from it, the extracted features comprising color features, material features, size features and style features; weighting and fusing the extracted features to obtain fusion features; inputting an initial model set and the fusion features into a generative adversarial network model to generate a set of models to be selected, calculating a loss function value for each model to be selected, and taking the models whose loss function value is smaller than a loss threshold as candidate models; adjusting the body-type parameters of each candidate model with a 3D human posture model to match the garment looseness corresponding to the clothing commodity image, and adjusting the posture parameters of each candidate model to simulate the dynamic effect of the garment; and calculating the similarity of a plurality of feature types between each candidate model and the clothing commodity image, weighting and summing these similarities to obtain the comprehensive similarity of each candidate model, and taking the models whose comprehensive similarity exceeds the similarity threshold as the optimal models.

Preferably, the comprehensive similarity is calculated as Similarity = Σ_{k=1}^{N} w_k · (A_k · B_k) / (‖A_k‖ · ‖B_k‖), where Similarity denotes the comprehensive similarity, N the total number of feature types, w_k the weight of the k-th feature type, A_k the feature vector of the k-th feature type of the clothing commodity image, B_k the feature vector of the k-th feature type of the candidate model, and ‖·‖ the vector norm.

Preferably, the plurality of feature types between the candidate model and the clothing commodity image comprise color features and size features.

Preferably, if the clothing type corresponding to the clothing commodity image is loose, the value of the similarity threshold is 0.8; otherwise it is 0.85.

Preferably, extracting the color features of the clothing comprises extracting the color distribution, dominant color, secondary color, cold-color proportion and color semantic features of the clothing commodity image, as well as the color compatibility of the clothing commodity image with the model background.
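The size-feature arithmetic described above reduces to two ratios. A minimal sketch, assuming sizes are scalar pixel counts and the two areas are measured in the same units (the patent does not state how areas are obtained, and the function names are hypothetical):

```python
def standard_garment_size(box_px, image_px, reference_size):
    """Claim 1 size normalization: S_standard = (P_box / P_image) * S_reference."""
    return (box_px / image_px) * reference_size

def slackness_ratio(garment_area, body_area):
    """Looseness threshold R = garment area / model body area."""
    return garment_area / body_area

# A garment bounding box spanning a quarter of the image width,
# measured against a 100 cm reference dimension:
print(standard_garment_size(200, 800, 100))  # -> 25.0
# R > 1 indicates a garment looser than the body silhouette:
print(slackness_ratio(1.5, 1.0))             # -> 1.5
```

Normalizing by the image's pixel size makes the standard garment size independent of photo resolution, which is what lets the looseness ratio drive the candidate model's body-type adjustment.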