US-20260127731-A1 - SYSTEMS AND METHODS FOR MPS-GAN: A MULTI-CONDITIONAL GENERATIVE ADVERSARIAL NETWORK FOR SIMULATING INPUT PARAMETERS' IMPACT ON MANUFACTURING PROCESSES
Abstract
A system for assessing and simulating the impact of processing parameters on the final quality of a manufactured product is disclosed. The system includes a multi-parameter simulation generative adversarial network comprising a generator module and a discriminator module. The generator module is modified to synthesize realistic images from multiple processing parameters, and the discriminator module is modified to evaluate both synthesized and training images for realness and to predict the multiple processing parameters used to produce them. To enable the production of high-resolution images, the system can also use a judge module that assesses the perceptual quality of the synthesized images.
Inventors
- Shenghan Guo
- Hasnaa Ouidadi
Assignees
- Shenghan Guo
- Hasnaa Ouidadi
Dates
- Publication Date
- 2026-05-07
- Application Date
- 2025-11-07
Claims (20)
- 1 . A method of assessing an impact of processing parameters using a multi-conditional generative adversarial network model, comprising: providing a generative adversarial network (GAN) comprising a generator and a discriminator, the GAN being configured to generate images representative of a manufacturing process or a manufactured product conditioned on a plurality of processing parameters; accessing, as inputs to the GAN, a latent vector and a plurality of class labels, each class label corresponding to a respective processing parameter of the plurality of processing parameters; converting each class label to a respective embedding representation to generate a plurality of embeddings corresponding to the plurality of class labels; synthesizing, by the generator, a set of images including: (i) generating an image by: reshaping the plurality of embeddings and the latent vector to obtain reshaped embeddings and a reshaped latent representation; concatenating the reshaped embeddings with the reshaped latent representation; and processing the concatenation through one or more neural network layers configured to progressively increase image dimensionality and generate the image; (ii) outputting, by providing the generated image to the discriminator, (a) a first output indicating whether the generated image is more likely to correspond to a real image obtained from a reference dataset or an image generated by the generator, and (b) a plurality of additional outputs, each corresponding to a different processing parameter of the plurality of processing parameters, each additional output representing an estimated likelihood for each possible value of the respective parameter, wherein a value associated with a highest likelihood is identified as the discriminator's predicted class for that parameter, such that the discriminator simultaneously assesses authenticity of the image and a combination of manufacturing conditions represented by the processing parameters in the image; (iii) repeating steps (i)-(ii) for a plurality of iterations to generate synthesized images corresponding to candidate combinations of the processing parameters; and identifying, based on the synthesized images, an optimal combination of the plurality of processing parameters by evaluating one or more quality measures associated with the generated images indicative of a desired manufacturing outcome.
- 2 . The method of claim 1 , wherein each class label is treated as a discrete variable with a finite set of possible values, and the discriminator is configured as a multi-class classifier that outputs, in addition to an assessment as to authenticity, separate likelihood distributions for the class values of the respective processing parameters.
- 3 . The method of claim 1 , wherein prior to image generation each class label is converted to an embedding representation and reshaped for concatenation with a reshaped latent vector for input to the generator.
- 4 . The method of claim 1 , wherein a reference real image is chosen by reading the parameter values used for the generated image, finding real images in a dataset that have the same parameter values, and selecting one or more of those real images for comparison.
- 5 . The method of claim 1 , wherein the synthesized images comprise at least one of thermal images representing resistance spot welding or X-ray computed tomography (XCT) images representing additively manufactured specimens, the images being constrained by predefined combinations of processing parameters.
- 6 . The method of claim 1 , wherein the discriminator's additional outputs for the processing parameters are used to determine the predicted class value for each parameter by selecting the value associated with the highest estimated likelihood.
- 7 . The method of claim 1 , wherein the GAN is a variation of the AC-GAN model that can generate images conditioned on multiple conditions and that integrates an auxiliary classifier to discriminate between the class labels of the real and generated images instead of providing the class labels directly to the discriminator.
- 8 . The method of claim 1 , wherein evaluating the generated images to identify the optimal combination of processing parameters includes using domain-specific quality measures.
- 9 . The method of claim 1 , wherein generating the plurality of embeddings converts each class label of the plurality of class labels to an array of trainable parameters.
- 10 . The method of claim 1 , wherein the discriminator of the GAN is modeled as a multi-class convolutional neural network.
- 11 . A system for assessing the impact of processing parameters on a manufacturing product, comprising: a memory; and a processor having access to a set of executable instructions located on the memory which, when executed, cause the processor to activate a multi-parameter simulation generative adversarial network, the multi-parameter simulation generative adversarial network comprising: a generator module including an array of trainable parameters, wherein the generator module is operable to: receive a plurality of input parameters and latent vectors, wherein each input parameter of the plurality of input parameters corresponds to a specific processing parameter for a manufacturing product; and synthesize images of the manufacturing product based on the plurality of input parameters and latent vectors; wherein the generator module synthesizes the images based on a discriminator feedback without direct access to real training image data; and a discriminator module operable to assess the images synthesized by the generator module and return a predicted value for each input parameter of the plurality of input parameters and a determination of realness of the synthesized images, wherein the discriminator module is trained using real images; wherein the array of trainable parameters of the generator module is updated based on the determination of realness and assessment of input parameters by the discriminator module.
- 12 . The system of claim 11 , wherein the plurality of input parameters are discrete values or labels.
- 13 . The system of claim 11 , wherein the multi-parameter simulation generative adversarial network further comprises: a judge module configured to assess the perceptual quality of the images synthesized by the generator module by: comparing the synthesized images to real images of a product manufactured under the same parameters as the plurality of input parameters; and determining a content loss of the synthesized images; wherein the array of trainable parameters of the generator module is updated according to a combination of the content loss obtained based on the judge module's assessment of the perceptual quality of the synthesized images and an adversarial and auxiliary loss obtained based on the determination of realness and assessment of input parameters by the discriminator module.
- 14 . The system of claim 13 , wherein a weighting factor is applied to the content loss.
- 15 . The system of claim 14 , wherein the content loss weighting factor is gradually increased as the multi-parameter simulation generative adversarial network is trained.
- 16 . A computer-implemented method for improved-resolution, multi-conditional image generation to identify optimal build parameters, comprising: providing a generative adversarial network conditioned on multiple processing parameters and configured to generate images of at least 256×256 pixels from a latent space; for each generated image, selecting a reference real image that has the same combination of processing-parameter values as the generated image; obtaining feature representations of the generated image and of the selected reference real image using a pre-trained external feature-extraction model that is separate from the discriminator; computing a content-based difference between the feature representations and combining that difference with adversarial and auxiliary-classification terms in a combined loss; and adjusting a weighting factor applied to the content-based difference during training, including using a smaller weight at early stages to learn the probabilistic distribution of the training images under the processing-parameter conditions and increasing the weight thereafter to improve perceptual quality and resolution of the generated images.
- 17 . The method of claim 16 , wherein the pre-trained external feature-extraction model is a VGG-19 network, and the content-based difference is computed as a mean-squared error between feature maps of the generated image and the selected reference real image.
- 18 . The method of claim 16 , wherein selecting the reference real image includes executing a search that retrieves images in the dataset whose labels exactly match the processing-parameter values used to generate the image, thereby ensuring that both images correspond to the same parameter combination for feature comparison.
- 19 . The method of claim 16 , wherein the weighting factor is held at a small value for an initial set of training epochs and then increased by at least an order of magnitude for later epochs to emphasize perceptual quality.
- 20 . The method of claim 16 , wherein the images comprise 256×256 X-ray computed tomography images that reflect defect morphology in additively manufactured specimens and are conditioned on scan speed and hatch spacing.
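The conditioning and multi-head classification steps recited in claims 1, 3, 6, and 9 can be sketched in plain Python. All dimensions, parameter names, and numeric values below are illustrative assumptions; the publication does not fix them, and a real implementation would use a deep-learning framework rather than hand-rolled lists.

```python
import random

random.seed(0)

# Illustrative dimensions and parameter names (not fixed by the claims).
LATENT_DIM = 100                 # length of the latent vector z
EMBED_DIM = 50                   # length of each label embedding
N_VALUES = {"scan_speed": 3, "hatch_spacing": 4}   # candidate values per parameter

# Claim 9: each class label maps to an array of trainable parameters,
# held here as one embedding table (rows = label values) per parameter.
embeddings = {name: [[random.gauss(0.0, 1.0) for _ in range(EMBED_DIM)]
                     for _ in range(n)]
              for name, n in N_VALUES.items()}

def condition_generator_input(z, labels):
    """Claims 1 and 3: look up each label's embedding and concatenate it
    with the latent vector to form the generator's conditioned input."""
    vec = list(z)                                   # reshaped latent representation
    for name in sorted(labels):
        vec.extend(embeddings[name][labels[name]])  # reshaped embedding
    return vec

def predict_classes(aux_outputs):
    """Claim 6: for each parameter head, the predicted class is the value
    associated with the highest estimated likelihood."""
    return {name: max(range(len(scores)), key=scores.__getitem__)
            for name, scores in aux_outputs.items()}

z = [random.gauss(0.0, 1.0) for _ in range(LATENT_DIM)]
g_in = condition_generator_input(z, {"scan_speed": 1, "hatch_spacing": 2})
print(len(g_in))    # LATENT_DIM + 2 * EMBED_DIM = 200

# Stand-in likelihoods from a hypothetical discriminator forward pass.
aux = {"scan_speed": [0.1, 0.7, 0.2], "hatch_spacing": [0.05, 0.1, 0.8, 0.05]}
print(predict_classes(aux))   # {'scan_speed': 1, 'hatch_spacing': 2}
```

The conditioned vector would then pass through the generator's upsampling layers (claim 1, step (i)) to produce an image, while the discriminator's real/fake head and per-parameter heads consume that image in step (ii).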
Description
CROSS REFERENCE TO RELATED APPLICATIONS
The present document is a Non-Provisional patent application that claims benefit to U.S. Provisional Patent Application Ser. No. 63/717,768, filed Nov. 7, 2024, which is herein incorporated by reference in its entirety.
FIELD
The present disclosure generally relates to generative adversarial networks, and more specifically to the use of a multi-parameter simulation generative adversarial network to simulate the effect of processing parameters on a manufactured product.
BACKGROUND
Quality is a crucial criterion in manufacturing that ensures the adherence of built products to the specifications set by the designer and their proper functioning when put into operation. An important factor influencing product quality is the combination of process (or build) parameters used during the manufacturing process. The choice of these build conditions is crucial to the success or failure of manufacturing a particular product and thus needs to be planned both thoughtfully and efficiently. However, identifying the optimal build parameters requires considerable trial-and-error experimentation, which is associated with high labor and material costs. One practical way to control and optimize experimental testing would be to adopt statistical analysis such as design of experiments (DOE). DOE enables the quantification of the input-output relationship and helps investigate the potential effect of multiple input factors (i.e., build parameters) on a particular outcome (i.e., the final product's quality). Nevertheless, this approach still requires a large experimental effort, especially since no generalizable protocol exists to guide practitioners in finding the correct and optimal design among all the possible ones. Another potential solution would be to “virtually” analyze the different possible scenarios (based on input parameters) through simulation.
Most simulation efforts to study the influence of build parameters are based on finite-element analysis (FEA). This technique provides an approximate solution to the physics-based equations governing the phenomena occurring during the studied process. Due to its outstanding performance, FEA has been adopted in many engineering disciplines and applications to analyze phenomena such as structural mechanics, electric and magnetic fields, heat transfer, and fluid dynamics. Nevertheless, the use of this approach can also come with some disadvantages. First, the solutions provided by FEA are only approximate. Second, this method may not always capture the intricate interconnection between the mechanical, thermal, and other physical behaviors taking place during the manufacturing process, which can sometimes lead to erroneous results. In addition, using FEA is challenging, as it requires deep knowledge and understanding of physics laws, coding, and the different commands involved in the FEA software. Finally, FEA simulations are expensive, as they necessitate substantial computational power and may take a long time to converge. It is with these observations in mind, among others, that various aspects of the present disclosure were conceived and developed.
BRIEF DESCRIPTION OF THE DRAWINGS
The present patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
FIG. 1 depicts images representing the influence of increasing current intensity when joining two boron steel sheets.
FIG. 2 depicts images representing the influence of the number of sheets and the coating state when using the “EXP” current intensity.
FIG. 3 depicts images representing the CoCr AM XCT dataset for different scan speeds (v_scan) and hatch spacings (h_spacing).
FIG. 4 is a schematic of the MPS-GAN for generating the RSW images.
FIG. 5 is a schematic of the MPS-GAN generator's architecture.
FIG. 6 is a schematic of the MPS-GAN discriminator's architecture.
FIG. 7 is a comparison between real images of the thermal weld nugget taken from the RSW dataset and images generated by the MPS-GAN for each combination of parameters.
FIG. 8 is a comparison between real XCT images taken from the training dataset, images generated using the MPS-GAN, and images generated using the MPS-GAN-IR for each combination of parameters.
FIG. 9 is a block diagram of a computer-implemented system suitable for implementing the multi-parameter simulation generative adversarial network according to embodiments disclosed herein.
Corresponding reference characters indicate corresponding elements among the views of the drawings. The headings used in the figures do not limit the scope of the claims.
DETAILED DESCRIPTION
The present disclosure relates to example systems and methods for assessing the impact of processing parameters on the final quality of a manufactured product through the use of a multi-parameter simulation generative adversarial network
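The judge module's combined loss described in claims 13, 16, 17, and 19 can be sketched as follows. The epoch cutoff and weight values are illustrative assumptions only (the claims require merely a small initial weight later increased by at least an order of magnitude), and the feature representations stand in for the feature maps a pre-trained extractor such as VGG-19 would produce.

```python
def content_loss(feat_generated, feat_reference):
    """Claim 17: mean-squared error between feature representations of the
    generated image and of a reference real image produced under the same
    parameter combination (features here are flat lists of floats)."""
    n = len(feat_generated)
    return sum((g - r) ** 2 for g, r in zip(feat_generated, feat_reference)) / n

def content_weight(epoch, warmup_epochs=50, low=0.01, high=0.1):
    """Claim 19: hold the content weight small for an initial set of epochs
    so the GAN first learns the conditional image distribution, then raise
    it by an order of magnitude to emphasize perceptual quality."""
    return low if epoch < warmup_epochs else high

def generator_loss(adversarial_loss, auxiliary_loss, feat_g, feat_r, epoch):
    """Claims 13 and 16: combine the adversarial and auxiliary-classification
    terms with the weighted content term supplied by the judge module."""
    return (adversarial_loss + auxiliary_loss
            + content_weight(epoch) * content_loss(feat_g, feat_r))

# Early epoch: the content term contributes lightly.
print(generator_loss(1.0, 0.5, [1.0, 1.0], [0.0, 0.0], epoch=5))    # 1.51
# Late epoch: the content term is weighted 10x more heavily.
print(generator_loss(1.0, 0.5, [1.0, 1.0], [0.0, 0.0], epoch=80))   # 1.6
```

Gradually shifting weight onto the content term lets the adversarial and auxiliary losses shape the coarse conditional distribution first, after which the feature-space comparison against a same-parameter reference image sharpens perceptual detail, as claim 16 describes for 256×256 output.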