
US-20260127781-A1 - LEVERAGING GENERATIVE MODELS FOR EFFICIENT DESIGN AND MANUFACTURING OF PRODUCTS


Abstract

A system and method are presented for generating a design based on a prompt. The prompt can be a text prompt or an image prompt. An image generating model is trained on a database of images. The image generating model includes at least one hyperparameter. The prompt is submitted to the model along with the hyperparameter. The model generates a set of images, one of which is selected for further processing. The processed image is converted into a production specification which can be used as a direct input to control a manufacturing process of the product.

Inventors

  • Julian Cole

Assignees

  • SHAW INDUSTRIES GROUP, INC.

Dates

Publication Date
2026-05-07
Application Date
2025-10-24

Claims (20)

  1. A computer-implemented method comprising: receiving a request to generate an image corresponding to a design of a to-be-manufactured product, wherein the request comprises a prompt indicating attributes of the image; generating, using a generative machine learning model and using the prompt as an input, a plurality of images; selecting an output image among the plurality of images as the design of the to-be-manufactured product; mapping aspects of the output image to a production specification corresponding to the to-be-manufactured product; and generating, by a manufacturing tool, the to-be-manufactured product using the production specification.
  2. The method of claim 1, wherein generating the plurality of images and selecting the output image comprises: submitting the prompt to a first generative machine learning model; generating, using the first generative machine learning model and using the prompt as an input, a first plurality of images; selecting a first image among the first plurality of images; generating, using a second generative machine learning model, a second plurality of images based on the first image; and selecting the output image from the second plurality of images.
  3. The method of claim 2, wherein the first and second generative machine learning models comprise diffusion models trained using a plurality of training images and wherein the plurality of training images comprises product images from a product catalog and public images corresponding to publicly accessible images.
  4. The method of claim 2, wherein the request comprises a first hyperparameter of the first or second generative machine learning model, and wherein the first hyperparameter is submitted to the first or second generative machine learning model along with the prompt to generate the first or second plurality of images.
  5. The method of claim 4, wherein the first hyperparameter comprises at least one of a number of steps for a sampling method or a creativity level, wherein the creativity level corresponds to a distance metric between an encoding of the prompt and an image encoding of each image of the first or second plurality of images.
  6. The method of claim 4, wherein generating the second plurality of images comprises: determining a second hyperparameter for the second generative machine learning model; submitting the first image and the second hyperparameter to the second generative machine learning model; and generating, by the second generative machine learning model and based on the first image and the second hyperparameter, the second plurality of images.
  7. The method of claim 6, wherein the second hyperparameter comprises a noise level, and wherein the noise level determines an amount of noise added to the first image by the second generative machine learning model when generating the second plurality of images.
  8. The method of claim 1, wherein the prompt comprises at least one of a text prompt or an image prompt.
  9. The method of claim 1, wherein the to-be-manufactured product is a carpet tile and the production specification comprises data for attributes of the carpet tile including, for each pixel of the output image, at least one of a yarn type, a yarn color, a pile type, and a pile height.
  10. The method of claim 9, wherein the production specification further comprises a tufting tool, a cutting tool, a cut list, a tile shape, a tile dimension, and an overall dimension.
  11. A system for producing a production specification and a to-be-manufactured product comprising: a production server; a manufacturing control server; and an image generator; wherein the image generator is configured to: receive a request comprising a prompt indicating attributes of an output image; generate, using a generative machine learning model with the prompt as an input, a plurality of images; select the output image from the plurality of images; wherein the production server is configured to: map aspects of the output image to the production specification corresponding to the to-be-manufactured product; and wherein the manufacturing control server is configured to: generate, by one or more manufacturing tools, the to-be-manufactured product using the production specification.
  12. The system of claim 11, wherein the image generator is further configured to: submit the prompt to a first generative machine learning model; generate, using the first generative machine learning model and using the prompt as an input, a first plurality of images; select a first image among the first plurality of images; generate, using a second generative machine learning model, a second plurality of images based on the first image; and select the output image from the second plurality of images.
  13. The system of claim 12, wherein the first and second generative machine learning models comprise diffusion models trained using a plurality of training images and wherein the plurality of training images comprises product images from a product catalog and public images corresponding to publicly accessible images.
  14. The system of claim 12, wherein the request comprises a first hyperparameter of the first or second generative machine learning model, and wherein the first hyperparameter is submitted to the first or second generative machine learning model along with the prompt to generate the first or second plurality of images.
  15. The system of claim 14, wherein the first hyperparameter comprises at least one of a number of steps for a sampling method or a creativity level, wherein the creativity level corresponds to a distance metric between an encoding of the prompt and an image encoding of each image of the first or second plurality of images.
  16. The system of claim 14, wherein the image generator is further configured to: determine a second hyperparameter for the second generative machine learning model; submit the first image and the second hyperparameter to the second generative machine learning model; and generate, by the second generative machine learning model and based on the first image and the second hyperparameter, the second plurality of images.
  17. The system of claim 16, wherein the second hyperparameter comprises a noise level, and wherein the noise level determines an amount of noise added to the first image by the second generative machine learning model when generating the second plurality of images.
  18. The system of claim 11, wherein the prompt comprises at least one of a text prompt or an image prompt.
  19. The system of claim 11, wherein the to-be-manufactured product is a carpet tile and the production specification comprises data for attributes of the carpet tile including, for each pixel of the output image, at least one of a yarn type, a yarn color, a pile type, and a pile height.
  20. The system of claim 19, wherein the production specification further comprises a tufting tool, a cutting tool, a cut list, a tile shape, a tile dimension, and an overall dimension.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of priority to U.S. Provisional Application No. 63/715,869, filed on November 4, 2024, the contents of which are hereby incorporated by reference.

BACKGROUND

Designing a manufactured product can be time and resource intensive because the design process generally calls for expertise and often multiple iterations between the designer and the customer. Typically, a designer generates multiple designs, which are then evaluated by the customer, who provides feedback that the designer uses to update the design. This cycle consumes many hours of work and may require special expertise or resources. Even after the appearance of the design has been agreed upon, the design may not easily correlate with what a manufacturing tool can generate. Thus, additional redesigns may be required to accommodate both the customer and the available manufacturing tools. This entire process consumes significant time and resources before a manufacturable design can be submitted to a manufacturing line for production.

SUMMARY

This disclosure is directed to systems and methods for enabling resource-efficient generation of product designs that can be readily manufactured using manufacturing machines. This disclosure uses flooring products as an example of the products that can be designed and manufactured using the systems and methods described herein. The present disclosure is not limited to flooring products, and one skilled in the art will appreciate that the same techniques can be leveraged in the design and manufacture of other types of products (e.g., furniture, wallpaper, vehicles, appliances, clothing, etc.).

A method is described herein for generating an image, a production specification, and a to-be-manufactured product. The method includes receiving a request to generate an image corresponding to a design of a to-be-manufactured product. The request includes a prompt indicating attributes of the image.
The method includes generating, using a generative machine learning model and using the prompt as an input, a plurality of images. The method includes selecting an output image among the plurality of images as the design of the to-be-manufactured product. The method includes mapping aspects of the output image to a production specification corresponding to the to-be-manufactured product. The method includes generating, by a manufacturing tool, the to-be-manufactured product using the production specification.

Generating the plurality of images and selecting the output image can include submitting the prompt to a first generative machine learning model and generating, using the first generative machine learning model and using the prompt as an input, a first plurality of images. The method can include selecting a first image among the first plurality of images. The method can include generating, using a second generative machine learning model, a second plurality of images based on the first image and selecting the output image from the second plurality of images. The first and second generative machine learning models can include diffusion models trained using a plurality of training images, where the plurality of training images comprises product images from a product catalog and public images corresponding to publicly accessible images.

The request can include a first hyperparameter of the first or second generative machine learning model, and the first hyperparameter can be submitted to the first or second generative machine learning model along with the prompt to generate the first or second plurality of images. The first hyperparameter can include at least one of a number of steps for a sampling method or a creativity level. The creativity level corresponds to a distance metric between an encoding of the prompt and an image encoding of each image of the first or second plurality of images.
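The creativity-level hyperparameter described above can be illustrated with a short sketch. The embedding values, the cosine-based distance metric, and the function names below are illustrative assumptions rather than the application's disclosed implementation; in practice, a joint text-image encoder would supply the prompt and image encodings.

```python
import math

def cosine_distance(a, b):
    """Distance metric between two embedding vectors: 1 - cosine similarity.

    A distance near 0 means the image encoding closely matches the prompt
    encoding; larger distances indicate more "creative" departures.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def filter_by_creativity(prompt_emb, image_embs, creativity_level, tolerance=0.1):
    """Keep indices of candidate images whose distance from the prompt
    embedding is within `tolerance` of the requested creativity level."""
    kept = []
    for idx, emb in enumerate(image_embs):
        if abs(cosine_distance(prompt_emb, emb) - creativity_level) <= tolerance:
            kept.append(idx)
    return kept
```

For example, with a prompt embedding of `[1.0, 0.0]` and a requested creativity level of 0.3, a candidate embedding of `[1.0, 1.0]` (distance ≈ 0.29) would be kept, while an exact match (distance 0) or an orthogonal embedding (distance 1) would be filtered out.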
The method can include determining a second hyperparameter for the second generative machine learning model, submitting the first image and the second hyperparameter to the second generative machine learning model, and generating, by the second generative machine learning model and based on the first image and the second hyperparameter, the second plurality of images. The second hyperparameter can include a noise level, where the noise level determines an amount of noise added to the first image by the second generative machine learning model when generating the second plurality of images.

The prompt can include at least one of a text prompt or an image prompt. The to-be-manufactured product can be a carpet tile, and the production specification can include data for attributes of the carpet tile including, for each pixel of the output image, at least one of a yarn type, a yarn color, a pile type, and a pile height. The production specification can further include a tufting tool, a cutting tool, a cut list, a tile shape, a tile dimension, and an overall dimension.
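The per-pixel mapping from the output image to a production specification can be sketched as follows. The yarn palette, the nearest-color rule, and the attribute names are illustrative assumptions only; the application does not disclose a specific mapping algorithm.

```python
# Illustrative sketch: snap each pixel's RGB value to the nearest entry in a
# small yarn palette, then emit that entry's production attributes. Palette
# entries and attribute names are hypothetical, not from the application.
YARN_PALETTE = [
    # (rgb, yarn_type, yarn_color, pile_type, pile_height_mm)
    ((30, 30, 30), "nylon-6", "charcoal", "loop", 4.0),
    ((200, 190, 170), "nylon-6.6", "sand", "cut", 6.0),
    ((90, 110, 140), "polyester", "slate-blue", "cut-loop", 5.0),
]

def nearest_yarn(rgb):
    """Return the palette entry with the smallest squared RGB distance."""
    def dist2(color):
        return sum((a - b) ** 2 for a, b in zip(rgb, color))
    return min(YARN_PALETTE, key=lambda entry: dist2(entry[0]))

def image_to_spec(pixels):
    """Convert a 2-D grid of RGB tuples into a 2-D grid of per-pixel
    production attributes (yarn type, yarn color, pile type, pile height)."""
    spec = []
    for row in pixels:
        spec_row = []
        for rgb in row:
            _, yarn_type, yarn_color, pile_type, pile_height = nearest_yarn(rgb)
            spec_row.append({
                "yarn_type": yarn_type,
                "yarn_color": yarn_color,
                "pile_type": pile_type,
                "pile_height_mm": pile_height,
            })
        spec.append(spec_row)
    return spec
```

A full production specification would additionally carry the tile-level fields named in the claims (tufting tool, cutting tool, cut list, tile shape and dimensions) alongside this per-pixel grid.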