US-12620156-B2 - Learning of garment deformations in a collision-free space
Abstract
Systems and methods are provided that learn garment deformations such that they are essentially collision free. A diffused, volumetric body model representation of the underlying body is combined with the construction of a subspace for the garment model that yields a differentiable, canonical-space configuration. This subspace is used for the regression of the garment model deformation and its dynamics. In this way, garment model deformations are predicted while avoiding collisions, and the complexity of inference is reduced, such that the learned representation yields higher quality than previously achievable. The generated garments exhibit a large amount of spatial and temporal detail, and can be produced extremely quickly via the pre-trained networks.
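The runtime pipeline summarized above (regress subspace coefficients from the body, decode a canonical-space garment deformation, then pose the result) can be illustrated with a minimal sketch. The sketch assumes a PyTorch-style implementation; all module names, tensor dimensions, and the omitted skinning step are illustrative assumptions and are not taken from the patent text.

```python
# Minimal sketch of the runtime pipeline (assumed PyTorch-style implementation;
# module names, dimensions, and the omitted skinning step are illustrative).
import torch
import torch.nn as nn

class GarmentRegressor(nn.Module):
    """Regresses subspace (latent) coefficients of the garment deformation from
    body shape and motion; a recurrent unit carries dynamics across frames."""
    def __init__(self, shape_dim=10, pose_dim=72, latent_dim=25, hidden_dim=256):
        super().__init__()
        self.gru = nn.GRUCell(shape_dim + pose_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, latent_dim)

    def forward(self, shape, pose, hidden):
        hidden = self.gru(torch.cat([shape, pose], dim=-1), hidden)
        return self.out(hidden), hidden

class GarmentDecoder(nn.Module):
    """Decodes latent coefficients into per-vertex garment displacements
    expressed in the canonical (unposed) space of the learned subspace."""
    def __init__(self, latent_dim=25, num_vertices=4000):
        super().__init__()
        self.num_vertices = num_vertices
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, num_vertices * 3),
        )

    def forward(self, z):
        return self.mlp(z).view(-1, self.num_vertices, 3)

# One animation frame: regress latent code, decode canonical-space deformation.
# The deformed garment would then be skinned onto the posed body (not shown).
regressor, decoder = GarmentRegressor(), GarmentDecoder()
shape = torch.zeros(1, 10)     # body shape parameters (illustrative size)
pose = torch.zeros(1, 72)      # body pose / motion parameters (illustrative size)
hidden = torch.zeros(1, 256)   # recurrent state carrying previous body motion
z, hidden = regressor(shape, pose, hidden)
canonical_deformation = decoder(z)  # lies in the learned collision-free subspace
```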
Inventors
- Igor SANTESTEBAN GARAY
- Miguel Ángel OTADUY TRISTÁN
- Dan CASAS GUIX
- Nils Thuerey
Assignees
- SEDDI, INC.
Dates
- Publication Date: 2026-05-05
- Application Date: 2021-05-11
Claims (20)
- 1 . A computer-implemented method for generating a digital representation of clothing on a body, the method comprising: modeling a body with a body model; modeling the clothing with a garment model, the garment model configured to be modified based on a first set of garment model deformations due to, at least in part, the body model; finding a subspace representation of a plurality of garment model deformations, wherein each garment model deformation in the subspace representation, when applied to the garment model, does not cause the garment model to collide with the body model; training a regressor to generate an instance of a garment model deformation from the subspace representation as a function of the body model; subjecting the garment model to the regressor to deform the garment model with the instance of the garment model deformation from the subspace representation as a function of an input body shape and an input body motion; and outputting a predicted garment deformation corresponding to the input body shape and the input body motion, wherein the predicted garment deformation corresponds, at least in part, to a sliding of the clothing over the surface of the body modeled by the body model.
- 2 . The computer-implemented method of claim 1 , wherein the body model comprises body parameters, the body parameters including a body shape parameter and a motion parameter.
- 3 . The method of claim 2 , wherein the garment model is configured to be modified based on garment model deformations due to, at least in part, one or more of the body parameters.
- 4 . The computer-implemented method of claim 2 , wherein training the regressor to generate a subspace garment deformation is further as a function of the body shape parameter and the motion parameter.
- 5 . The computer-implemented method of claim 2 , wherein the body model further models a second set of garment deformations due to the body parameters and further wherein the first set of garment model deformations are represented in a canonical space that removes the second set of garment deformations from the garment model.
- 6 . The computer-implemented method of claim 5 , wherein the subspace representation comprises a generative subspace.
- 7 . The computer-implemented method of claim 6 , further comprising training the generative subspace based on training data obtained by projecting simulation data to the canonical space, wherein the simulation data comprises physically-simulated versions of the clothing using the garment model and the body model parameters.
- 8 . The computer-implemented method of claim 7 , wherein the projecting of simulation data to the canonical space comprises an optimization function that deforms the physically-simulated versions of the clothing.
- 9 . The computer-implemented method of claim 2 , further comprising: providing an output template mesh of the clothing reflecting the predicted garment deformation.
- 10 . The computer-implemented method of claim 9 , further comprising generating a digital image including the digital representation of the clothing on the body based on the output template mesh.
- 11 . The computer-implemented method of claim 9 , wherein the body model further models a second set of garment deformations due to the body parameters and further wherein the first set of garment deformations are represented in a canonical space that removes the second set of garment deformations from the garment model.
- 12 . The computer-implemented method of claim 11 , wherein providing the output template mesh further comprises projecting the predicted garment deformation from the canonical space to a body pose space.
- 13 . The computer-implemented method of claim 12 , wherein providing the output template mesh further comprises skinning the body model based on the body pose space projected predicted garment deformation by combining the second set of garment deformations from the body model with the body pose projected predicted garment deformation to generate the output template mesh.
- 14 . The computer-implemented method of claim 2 , wherein the predicted garment deformation corresponds, at least in part, to a fit of the clothing on the body based on the body shape parameter of the body model.
- 15 . The computer-implemented method of claim 1 , wherein the subspace representation comprises a generative subspace.
- 16 . The computer-implemented method of claim 15 , wherein the generative subspace is implemented, at least in part, with a variational autoencoder.
- 17 . The computer-implemented method of claim 1 , wherein the finding a subspace representation further comprises training a generative neural network.
- 18 . The computer-implemented method of claim 17 , wherein the neural network training further comprises fine-tuning the neural network based on a measure of collisions between the body model and the garment model deformed with randomly sampled garment deformations.
- 19 . The computer-implemented method of claim 1 , wherein the predicted garment deformation corresponds, at least in part, to wrinkles in the clothing due to the input body shape and the input body motion.
- 20 . The computer-implemented method of claim 1 , wherein the regressor includes one or more Gated Recurrent Units and further wherein the deforming of the garment model is based on a dynamic effect that depends on a previous input body motion.
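The generative subspace of claims 15-18 (a variational autoencoder fine-tuned with a measure of collisions against the body) can likewise be sketched briefly. The following is a minimal, hedged illustration only: the network sizes, the signed-distance query for the body, and the loss weighting are assumptions introduced for the example and are not details from the claims.

```python
# Minimal sketch of a generative subspace (variational autoencoder) over
# canonical-space garment deformations, fine-tuned with a collision penalty
# (assumed PyTorch-style; names, sizes, and the SDF query are illustrative).
import torch
import torch.nn as nn

class GarmentVAE(nn.Module):
    def __init__(self, num_vertices=4000, latent_dim=25):
        super().__init__()
        self.num_vertices = num_vertices
        self.encoder = nn.Sequential(
            nn.Linear(num_vertices * 3, 256), nn.ReLU(), nn.Linear(256, 2 * latent_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, num_vertices * 3)
        )

    def encode(self, x):
        # Returns mean and log-variance of the latent distribution.
        mu, logvar = self.encoder(x.flatten(1)).chunk(2, dim=-1)
        return mu, logvar

    def decode(self, z):
        return self.decoder(z).view(-1, self.num_vertices, 3)

def collision_penalty(garment_vertices, body_sdf):
    """Penalizes garment vertices that fall inside the body, i.e. where the
    body's signed distance is negative (body_sdf is an assumed callable)."""
    d = body_sdf(garment_vertices)   # signed distance per garment vertex
    return torch.relu(-d).mean()     # only penetrating vertices contribute

def fine_tune_step(vae, body_sdf, optimizer, batch_size=8, latent_dim=25, w_col=10.0):
    """One fine-tuning pass in the spirit of claim 18: decode randomly sampled
    latent codes and push the resulting deformations out of the body."""
    z = torch.randn(batch_size, latent_dim)
    garment = vae.decode(z)
    loss = w_col * collision_penalty(garment, body_sdf)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```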
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application is a national phase entry under 35 U.S.C. § 371 of International Patent Application No. PCT/ES2021/070325 titled "Learning of Garment Deformations in a Collision-Free Space," filed on May 11, 2021, which is incorporated herein by reference in its entirety.

BACKGROUND

This disclosure generally relates to computer modeling systems, and more specifically to a system and method for simulating clothing to provide a data-driven model for animation of clothing for virtual try-on.

Clothing plays a fundamental role in our everyday lives. When we choose clothing to buy or wear, we guide our decisions based on a combination of fit and style. For this reason, the majority of clothing is purchased at brick-and-mortar retail stores, after physical try-on to test the fit and style of several garments on our own bodies. Computer graphics technology promises an opportunity to support online shopping through virtual try-on animation, but to date virtual try-on solutions lack the responsiveness of a physical try-on experience. Beyond online shopping, responsive animation of clothing has an impact on fashion design, video games, and interactive graphics applications as a whole.

One approach to produce animations of clothing is to simulate the physics of garments in contact with the body. While this approach has proven capable of generating highly detailed results [85, 94, 89, 77], it comes at the expense of significant runtime computational cost. On the other hand, it bears little or no preprocessing cost, hence it can be quickly deployed on almost arbitrary combinations of garments, body shapes, and motions. To combat the high computational cost, interactive solutions sacrifice accuracy in the form of coarse cloth discretizations, simplified cloth mechanics, or approximate integration methods. Continued progress on the performance of solvers is bringing the approach closer to the performance needs of virtual try-on [59].

An alternative approach for cloth animation is to train a data-driven model that computes cloth deformation as a function of body motion [95, 78]. This approach succeeds in producing plausible cloth folds and wrinkles when there is a strong correlation between body pose and cloth deformation. However, it struggles to represent the nonlinear behavior of cloth deformation and contact in general. Most data-driven methods rely to a certain extent on linear techniques, hence the resulting wrinkles deform in a seemingly linear manner (e.g., with blending artifacts) and therefore lack realism. Most previous data-driven cloth animation methods work for a given garment-avatar pair, and are limited to representing the influence of body pose on cloth deformation. In virtual try-on, however, a garment may be worn by a diverse set of people, with corresponding avatar models covering a range of body shapes. Other methods that account for changes in body shape do not deform the garment in a realistic way, and either resize the garment while preserving its style [15, 76], or retarget cloth wrinkles to bodies of different shapes [42, 87]. These prior techniques, including physics-based simulation, early data-driven models, and the related work described further below, form the basis upon which the present virtual try-on disclosure improves.
For example, conventional physics-based simulation of clothing entails three major processes: computation of internal cloth forces, collision detection, and collision response; the total simulation cost results from the combined influence of the three processes. One attempt to limit the cost of simulation has been to approximate dynamics, such as in the case of position-based dynamics [3]. While approximate methods produce plausible and expressive results for video game applications, they cannot convey the realistic cloth behavior needed for virtual try-on. Another line of work, which tries to retain simulation accuracy, handles both internal forces and collision constraints efficiently during time integration. One example is a fast GPU-based Gauss-Seidel solver of constrained dynamics [12]. Another example is the efficient handling of nonlinearities and dynamically changing constraints as a superset of projective dynamics [90]. More recently, Tang et al. [59] designed a GPU-based solver of cloth dynamics with impact zones, efficiently integrated with GPU-based continuous collision detection. A different approach to speed up cloth simulation is to apply adaptive remeshing, focusing simulation complexity where needed [89]. Similar in spirit, Eulerian-on-Lagrangian cloth simulation applies remeshing with Eulerian coordinates to efficiently resolve the geometry of sharp sliding contacts [96].

Similarly, inspired by early works that model surface deformations as a function of pose [LCF00, SRIC01], some existing data-driven methods for clothing animation also use the unde