US-12626445-B1 - Graphics rendering using a neural network
Abstract
Apparatuses, systems, and techniques to generate a surface. In at least one embodiment, one or more neural networks are used to generate a surface of an object based, at least in part, on motion of the object.
Inventors
- Sameh Khamis
- Sourav Biswas
- Kangxue Yin
- Maria Shugrina
- Sanja Fidler
Assignees
- NVIDIA CORPORATION
Dates
- Publication Date: 2026-05-12
- Application Date: 2022-01-27
Claims (20)
- 1. One or more processors, comprising: circuitry to use one or more neural networks to generate a first surface of an object based, at least in part, on a plurality of second surfaces generated by the one or more neural networks, wherein each of the plurality of second surfaces corresponds to a specific joint involved in a motion of the object, and wherein the one or more neural networks aggregate the plurality of second surfaces to generate the first surface.
- 2. The one or more processors of claim 1, wherein the first surface is to be generated based, at least in part, on a signed distance field.
- 3. The one or more processors of claim 1, wherein the first surface is to be generated based, at least in part, on one or more joint positions of the object.
- 4. The one or more processors of claim 1, wherein the plurality of second surfaces are to be generated based, at least in part, on one or more randomly generated three-dimensional points.
- 5. The one or more processors of claim 1, wherein the first surface is to be generated based, at least in part, on one or more three-dimensional points located within a bounding volume of the object.
- 6. The one or more processors of claim 1, wherein the one or more neural networks include at least one signed distance field neural network.
- 7. The one or more processors of claim 1, wherein the one or more neural networks include at least one aggregation neural network.
- 8. The one or more processors of claim 1, wherein the first surface is to be generated based, at least in part, on minimizing one or more loss functions of the one or more neural networks.
- 9. The one or more processors of claim 1, wherein the first surface is to be generated based, at least in part, on a subject code associated with the object.
- 10. The one or more processors of claim 1, wherein the motion of the object is to be specified using one or more joint transformations associated with the object.
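Claims 1-10 describe a structure in which per-joint networks each produce a second surface (a signed distance field, per claims 2 and 6) that an aggregation network fuses into the first surface (claim 7), evaluated at three-dimensional points inside the object's bounding volume (claim 5). A minimal sketch of that aggregation pattern follows; it is not the disclosed implementation. Analytic sphere SDFs stand in for the per-joint neural networks, a smooth minimum stands in for the aggregation network, and all function names (`joint_sdf`, `aggregate`, `object_sdf`) are hypothetical:

```python
import math

# Stand-in for one per-joint "second surface" network of claim 1:
# the signed distance to a sphere around a joint (negative inside).
def joint_sdf(point, joint_pos, radius=0.5):
    return math.dist(point, joint_pos) - radius

# Stand-in for the aggregation network: a smooth minimum (log-sum-exp
# softmin), which approaches min() as k grows and never exceeds it.
def aggregate(sdf_values, k=8.0):
    m = min(sdf_values)
    s = sum(math.exp(-k * (v - m)) for v in sdf_values)
    return m - math.log(s) / k

# "First surface" SDF: aggregate the per-joint SDFs at a query point.
def object_sdf(point, joint_positions):
    return aggregate([joint_sdf(point, j) for j in joint_positions])

# Two joints of a posed "limb"; the zero level set of object_sdf is
# the generated surface.
joints = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
inside = object_sdf((0.0, 0.0, 0.0), joints)   # at a joint centre: negative
outside = object_sdf((0.0, 3.0, 0.0), joints)  # far from both joints: positive
```

A smooth minimum is chosen here only because it is differentiable, which is what makes an aggregation step trainable end-to-end; the patent's aggregation network is learned, not fixed.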
- 11. A computer-implemented method, comprising: using one or more neural networks to generate a first surface of an object based, at least in part, on a plurality of second surfaces generated by the one or more neural networks, wherein each of the plurality of second surfaces corresponds to a specific joint involved in a motion of the object, and wherein the one or more neural networks aggregate the plurality of second surfaces to generate the first surface.
- 12. The method of claim 11, wherein at least one neural network of the one or more neural networks is to be trained based, at least in part, on a set of meshes associated with the object.
- 13. The method of claim 11, wherein at least one neural network of the one or more neural networks is to be trained based, at least in part, on a set of joint data associated with the object.
- 14. The method of claim 11, wherein at least one neural network of the one or more neural networks is to be trained based, at least in part, on a latent pose associated with the object.
- 15. The method of claim 11, wherein using the one or more neural networks to generate the first surface of the object comprises: receiving a canonical pose associated with the object; generating a test pose of the object; and determining the motion of the object based, at least in part, on one or more transformations between the canonical pose and the test pose.
- 16. The method of claim 11, wherein using the one or more neural networks to generate the first surface of the object comprises: determining a joint position based, at least in part, on the motion of the object; selecting a point within a bounding volume of the object; generating a signed distance field of the point based, at least in part, on the joint position; and generating the first surface based, at least in part, on the signed distance field of the point.
- 17. The method of claim 11, wherein at least one neural network of the one or more neural networks is to be trained based, at least in part, on a loss function of the neural network.
- 18. The method of claim 11, wherein at least one neural network of the one or more neural networks is to be trained based, at least in part, on a least squares loss function of the neural network.
- 19. The method of claim 11, wherein at least one neural network of the one or more neural networks is to be trained based, at least in part, on an eikonal loss function of the neural network.
- 20. The method of claim 11, wherein at least one neural network of the one or more neural networks is to be trained based, at least in part, on: estimating one or more joint labels based, at least in part, on a geodesic distance; determining a per-joint least squares loss function of the neural network based, at least in part, on one or more estimated joint labels; and training the neural network of the one or more neural networks based, at least in part, on the per-joint least squares loss function.
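Claim 19 trains against an eikonal loss: a true signed distance field has a gradient of unit norm everywhere, so training penalizes deviations of the gradient norm from 1. The sketch below shows only what that loss measures, not how the patent computes it: an analytic sphere SDF replaces the neural network, and central finite differences replace automatic differentiation. All names (`grad_norm`, `eikonal_loss`) are hypothetical:

```python
import math

# Analytic SDF of a unit sphere at the origin (a valid SDF, so its
# gradient has unit norm everywhere away from the centre).
def sphere_sdf(p, radius=1.0):
    return math.dist(p, (0.0, 0.0, 0.0)) - radius

# Central finite-difference estimate of ||grad f|| at point p.
def grad_norm(f, p, eps=1e-4):
    g = []
    for axis in range(3):
        hi = list(p); hi[axis] += eps
        lo = list(p); lo[axis] -= eps
        g.append((f(tuple(hi)) - f(tuple(lo))) / (2 * eps))
    return math.sqrt(sum(c * c for c in g))

# Eikonal loss: mean squared deviation of the gradient norm from 1.
def eikonal_loss(f, points):
    return sum((grad_norm(f, p) - 1.0) ** 2 for p in points) / len(points)

# A valid SDF incurs (near) zero eikonal loss; a uniformly scaled copy
# of it (gradient norm 2) does not, so the loss pushes a trained
# network toward true signed-distance behaviour.
pts = [(0.5, 0.2, 0.1), (1.5, -0.3, 0.4), (0.0, 0.9, -0.2)]
good = eikonal_loss(sphere_sdf, pts)
bad = eikonal_loss(lambda p: 2.0 * sphere_sdf(p), pts)
```

In practice the sample points would be the randomly generated three-dimensional points within the object's bounding volume described in claims 4 and 5.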
Description
FIELD

At least one embodiment pertains to processing resources used to perform and facilitate artificial intelligence. For example, at least one embodiment pertains to processors or computing systems used to train neural networks to perform tasks using various techniques described herein.

BACKGROUND

Generating a three-dimensional (3D) surface from an underlying skeletal pose can use significant memory, time, computing resources, and human resources. In at least one embodiment, the amount of memory, time, computing resources, and human resources used can be reduced.

BRIEF DESCRIPTION OF DRAWINGS

- FIG. 1 illustrates an example computer system where an implicit pose is generated using a neural network, according to at least one embodiment;
- FIG. 2 illustrates an example computer system where values for an implicit pose are propagated between joint neural networks, according to at least one embodiment;
- FIG. 3 illustrates an example process for propagating implicit pose values, according to at least one embodiment;
- FIG. 4 illustrates an example computer system where an implicit pose neural network is trained, according to at least one embodiment;
- FIG. 5 illustrates an example process for training an implicit pose neural network, according to at least one embodiment;
- FIG. 6 illustrates an example computer system where a trained implicit pose neural network is used to generate signed distance field values for an implicit pose surface, according to at least one embodiment;
- FIG. 7 illustrates an example joint position representation, according to at least one embodiment;
- FIG. 8 illustrates an example graph representation of a set of joint positions, according to at least one embodiment;
- FIG. 9 illustrates an example process for generating a signed distance field value using implicit pose neural networks, according to at least one embodiment;
- FIG. 10 illustrates an example graph representation of poses used by an implicit pose neural network to generate pose data, according to at least one embodiment;
- FIG. 11 illustrates an example graph representation of poses used by an implicit pose neural network to generate pose data using a randomly selected data point, according to at least one embodiment;
- FIG. 12 illustrates an example graph representation of poses used by an implicit pose neural network to generate pose data using a randomly selected data point and multiple joints, according to at least one embodiment;
- FIG. 13 illustrates an example computer system where a loss function of an implicit pose neural network is computed, according to at least one embodiment;
- FIG. 14 illustrates an example computer system where a trained implicit pose neural network is used to generate pose data for a second skeletal structure, according to at least one embodiment;
- FIG. 15A illustrates inference and/or training logic, according to at least one embodiment;
- FIG. 15B illustrates inference and/or training logic, according to at least one embodiment;
- FIG. 16 illustrates training and deployment of a neural network, according to at least one embodiment;
- FIG. 17 illustrates an example data center system, according to at least one embodiment;
- FIG. 18A illustrates an example of an autonomous vehicle, according to at least one embodiment;
- FIG. 18B illustrates an example of camera locations and fields of view for the autonomous vehicle of FIG. 18A, according to at least one embodiment;
- FIG. 18C is a block diagram illustrating an example system architecture for the autonomous vehicle of FIG. 18A, according to at least one embodiment;
- FIG. 18D is a diagram illustrating a system for communication between cloud-based server(s) and the autonomous vehicle of FIG. 18A, according to at least one embodiment;
- FIG. 19 is a block diagram illustrating a computer system, according to at least one embodiment;
- FIG. 20 is a block diagram illustrating a computer system, according to at least one embodiment;
- FIG. 21 illustrates a computer system, according to at least one embodiment;
- FIG. 22 illustrates a computer system, according to at least one embodiment;
- FIG. 23A illustrates a computer system, according to at least one embodiment;
- FIG. 23B illustrates a computer system, according to at least one embodiment;
- FIG. 23C illustrates a computer system, according to at least one embodiment;
- FIG. 23D illustrates a computer system, according to at least one embodiment;
- FIGS. 23E and 23F illustrate a shared programming model, according to at least one embodiment;
- FIG. 24 illustrates exemplary integrated circuits and associated graphics processors, according to at least one embodiment;
- FIGS. 25A-25B illustrate exemplary integrated circuits and associated graphics processors, according to at least one embodiment;
- FIGS. 26A-26B illustrate additional exemplary graphics processor logic, according to at least one embodiment;
- FIG. 27 illustrates a computer system, according to at least one embodiment;
- FIG. 28A illustrates a parallel processor, acc