US-12620143-B2 - System and method for high-quality renderings of synthetic views of custom products

US 12620143 B2

Abstract

In some embodiments, a data processing method for generating a synthetic view rendering of a custom product comprises: initializing, for a digital asset, a coverage mapping array, wherein each member of the coverage mapping array represents a possible coverage of sub-pixel regions of a pixel and can be held in a scalar instruction register of a computer microprocessor; calculating a distance of each edge of a triangle from a center of a sub-pixel using a single scalar set instruction, operating on a pair of two scalar instruction set registers; using the distance and an angle of the edge to select a member of the array; combining pairs of found members of the coverage mapping array using the single scalar set instruction, operating on the pair, to assemble a sub-pixel coverage mapping array; and rendering the pixel using the sub-pixel coverage mapping array and the sub-pixel regions of the pixel.
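The register-level mask combination described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: an 8×8 sub-pixel grid is used so that a whole coverage mask fits in one 64-bit integer and each mask combination is a single scalar AND (the claims describe a 16×16 area, which would need several such registers), and the patent's (angle, distance)-indexed table lookup is approximated by evaluating each edge's half-plane directly. All function names here are assumptions.

```python
# Illustrative sketch: per-edge sub-pixel coverage masks combined with
# one scalar AND per pair, approximating the technique in the abstract.

def edge_mask(a: float, b: float, c: float) -> int:
    """64-bit mask of the 8x8 sub-pixel centers satisfying a*x + b*y + c >= 0."""
    mask = 0
    for j in range(8):
        for i in range(8):
            x, y = (i + 0.5) / 8.0, (j + 0.5) / 8.0  # sub-pixel center
            if a * x + b * y + c >= 0.0:
                mask |= 1 << (j * 8 + i)
    return mask

def triangle_coverage(edges) -> float:
    """Fraction of the pixel covered by a triangle given its inward edges."""
    mask = (1 << 64) - 1
    for a, b, c in edges:
        mask &= edge_mask(a, b, c)  # combining two masks: one scalar AND
    return bin(mask).count("1") / 64.0

# A triangle occupying the lower-left half of the pixel (x >= 0, y >= 0,
# x + y <= 1): the 8x8 sampling reports 36/64 = 0.5625 coverage.
half = triangle_coverage([(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (-1.0, -1.0, 1.0)])
```

In a production renderer the masks would be precomputed once per quantized (angle, distance) pair and fetched by table lookup, so only the AND remains in the inner loop.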

Inventors

  • Leslie Young Harvill

Assignees

  • ZAZZLE INC.

Dates

Publication Date
2026-05-05
Application Date
2024-01-19

Claims (20)

  1. A data processing method for generating a synthetic view rendering, the method comprising: initializing, for a digital asset, a coverage mapping array; wherein each member of the coverage mapping array represents a possible coverage of sub-pixel regions of a pixel; wherein each member of the coverage mapping array can be held in a scalar instruction register of a computer microprocessor; determining a contribution of a triangle to a pixel region by: calculating a distance of each edge of the triangle from a center of a sub-pixel using a single scalar set instruction, operating on a pair of two scalar instruction set registers; using the distance and an angle of the edge to select a member of the coverage mapping array; combining pairs of found members of the coverage mapping array using the single scalar set instruction, operating on the pair, to assemble a sub-pixel coverage mapping array; rendering the pixel using the sub-pixel coverage mapping array and the sub-pixel regions of the pixel.
  2. The data processing method of claim 1, further comprising: generating a final image by: initializing a coverage buffer; for each surface of a calibrated product rendering asset: binding one or more surface appearance assets to the surface to generate a converted surface; scanning the converted surface to generate a plurality of pixel samples and storing the plurality of pixel samples in the coverage buffer; evaluating the plurality of pixel samples to generate and update the coverage mapping array; converting the coverage mapping array to the final image.
  3. The data processing method of claim 2, wherein the coverage mapping array provides an evaluation of an ImplicitTriangle and provides a map for a 16×16 sub-pixel area; wherein the coverage buffer is a 2D array of 32-bit integers, each of which is the start of an indexed list of CoverPix structures.
  4. The data processing method of claim 2, wherein the calibrated product rendering asset is generated by creating a digital representation of a referenced physical product; wherein the referenced physical product is generated using designated colors and patterns; wherein the designated colors and patterns include markups; wherein a markup is used to construct geometry, design areas, masks, local surface shadings, and global luminance shadings in the digital representation of the referenced physical product; wherein the geometry and the global luminance shadings are captured using a plurality of product option parametric key-values.
  5. The data processing method of claim 4, further comprising applying the plurality of product option parametric key-values to the digital asset by: setting a substrate, trim, and key-values to the plurality of product option parametric key-values associated with color or textural appearance of the referenced physical product; wherein a substrate corresponds to a material from which the referenced physical product is made; wherein the color and textural appearance are set by references; wherein the trim is defined by trim geometry features and trim placements; wherein the trim geometry features and the trim placements are transformed by fitting a framing or an edging to the trim geometry features and setting physical profiles of the framing or the edging.
  6. The data processing method of claim 4, wherein applying the plurality of product option parametric key-values to the digital asset comprises: generating a plurality of design areas associated with the calibrated product rendering asset; and rendering an image in a design area, of the plurality of design areas, using a design U-V geometry specified by a markup.
  7. The data processing method of claim 4, wherein applying the plurality of product option parametric key-values to the digital asset comprises: setting the plurality of product option parametric key-values by transforming geometry features and placements associated with the calibrated product rendering asset; wherein keys of the plurality of product option parametric key-values include one or more of: a product height, a product width, a product depth, a product circumference, a placement of designs, a design height, or a design width; wherein setting the plurality of product option parametric key-values comprises setting geometry transforms to a specific view and setting in-situ transforms; wherein for each polygon in the calibrated product rendering asset, a hybrid scanline and an implicit structure are built; wherein building the implicit structure comprises determining triangle points, triangle deltas, and implicit triangles for the calibrated product rendering asset.
  8. A raster cover computer system, comprising: a memory unit; one or more processors; and a raster cover computer performing: initializing, for a digital asset, a coverage mapping array; wherein each member of the coverage mapping array represents a possible coverage of sub-pixel regions of a pixel; wherein each member of the coverage mapping array can be held in a scalar instruction register of a computer microprocessor; determining a contribution of a triangle to a pixel region by: calculating a distance of each edge of the triangle from a center of a sub-pixel using a single scalar set instruction, operating on a pair of two scalar instruction set registers; using the distance and an angle of the edge to select a member of the coverage mapping array; combining pairs of found members of the coverage mapping array using the single scalar set instruction, operating on the pair, to assemble a sub-pixel coverage mapping array; rendering the pixel using the sub-pixel coverage mapping array and the sub-pixel regions of the pixel.
  9. The raster cover computer system of claim 8, wherein the raster cover computer further generates a final image by: initializing a coverage buffer; for each surface of a calibrated product rendering asset: binding one or more surface appearance assets to the surface to generate a converted surface; scanning the converted surface to generate a plurality of pixel samples and storing the plurality of pixel samples in the coverage buffer; evaluating the plurality of pixel samples to generate and update the coverage mapping array; converting the coverage mapping array to the final image.
  10. The raster cover computer system of claim 9, wherein the coverage mapping array provides an evaluation of an ImplicitTriangle and provides a map for a 16×16 sub-pixel area; wherein the coverage buffer is a 2D array of 32-bit integers, each of which is the start of an indexed list of CoverPix structures.
  11. The raster cover computer system of claim 9, wherein the calibrated product rendering asset is generated by creating a digital representation of a referenced physical product; wherein the referenced physical product is generated using designated colors and patterns; wherein the designated colors and patterns include markups; wherein a markup is used to construct geometry, design areas, masks, local surface shadings, and global luminance shadings in the digital representation of the referenced physical product; wherein the geometry and the global luminance shadings are captured using a plurality of product option parametric key-values.
  12. The raster cover computer system of claim 11, wherein applying the plurality of product option parametric key-values to the digital asset comprises: setting a substrate, trim, and key-values to the plurality of product option parametric key-values associated with color or textural appearance of the referenced physical product; wherein a substrate corresponds to a material from which the referenced physical product is made; wherein the color and textural appearance are set by references; wherein the trim is defined by trim geometry features and trim placements; wherein the trim geometry features and the trim placements are transformed by fitting a framing or an edging to the trim geometry features and setting physical profiles of the framing or the edging.
  13. The raster cover computer system of claim 11, wherein applying the plurality of product option parametric key-values to the digital asset comprises: generating a plurality of design areas associated with the calibrated product rendering asset; and rendering an image in a design area, of the plurality of design areas, using a design U-V geometry specified by a markup.
  14. The raster cover computer system of claim 11, wherein applying the plurality of product option parametric key-values to the digital asset comprises: setting the plurality of product option parametric key-values by transforming geometry features and placements associated with the calibrated product rendering asset; wherein keys of the plurality of product option parametric key-values include one or more of: a product height, a product width, a product depth, a product circumference, a placement of designs, a design height, or a design width; wherein setting the plurality of product option parametric key-values comprises setting geometry transforms to a specific view and setting in-situ transforms; wherein for each polygon in the calibrated product rendering asset, a hybrid scanline and an implicit structure are built; wherein building the implicit structure comprises determining triangle points, triangle deltas, and implicit triangles for the calibrated product rendering asset.
  15. One or more computer-readable non-transitory data storage media storing one or more sequences of instructions which, when executed by one or more computer processors, cause the one or more computer processors to perform: initializing, for a digital asset, a coverage mapping array; wherein each member of the coverage mapping array represents a possible coverage of sub-pixel regions of a pixel; wherein each member of the coverage mapping array can be held in a scalar instruction register of a computer microprocessor; determining a contribution of a triangle to a pixel region by: calculating a distance of each edge of the triangle from a center of a sub-pixel using a single scalar set instruction, operating on a pair of two scalar instruction set registers; using the distance and an angle of the edge to select a member of the coverage mapping array; combining pairs of found members of the coverage mapping array using the single scalar set instruction, operating on the pair, to assemble a sub-pixel coverage mapping array; rendering the pixel using the sub-pixel coverage mapping array and the sub-pixel regions of the pixel.
  16. The one or more computer-readable non-transitory data storage media of claim 15, storing additional instructions for generating a final image by: initializing a coverage buffer; for each surface of a calibrated product rendering asset: binding one or more surface appearance assets to the surface to generate a converted surface; scanning the converted surface to generate a plurality of pixel samples and storing the plurality of pixel samples in the coverage buffer; evaluating the plurality of pixel samples to generate and update the coverage mapping array; converting the coverage mapping array to the final image.
  17. The one or more computer-readable non-transitory data storage media of claim 16, wherein the coverage mapping array provides an evaluation of an ImplicitTriangle and provides a map for a 16×16 sub-pixel area; wherein the coverage buffer is a 2D array of 32-bit integers, each of which is the start of an indexed list of CoverPix structures.
  18. The one or more computer-readable non-transitory data storage media of claim 16, wherein the calibrated product rendering asset is generated by creating a digital representation of a referenced physical product; wherein the referenced physical product is generated using designated colors and patterns; wherein the designated colors and patterns include markups; wherein a markup is used to construct geometry, design areas, masks, local surface shadings, and global luminance shadings in the digital representation of the referenced physical product; wherein the geometry and the global luminance shadings are captured using a plurality of product option parametric key-values.
  19. The one or more computer-readable non-transitory data storage media of claim 18, storing additional instructions for applying the plurality of product option parametric key-values to the digital asset by: setting a substrate, trim, and key-values to the plurality of product option parametric key-values associated with color or textural appearance of the referenced physical product; wherein a substrate corresponds to a material from which the referenced physical product is made; wherein the color and textural appearance are set by references; wherein the trim is defined by trim geometry features and trim placements; wherein the trim geometry features and the trim placements are transformed by fitting a framing or an edging to the trim geometry features and setting physical profiles of the framing or the edging.
  20. The one or more computer-readable non-transitory data storage media of claim 18, wherein applying the plurality of product option parametric key-values to the digital asset comprises: generating a plurality of design areas associated with the calibrated product rendering asset; and rendering an image in a design area, of the plurality of design areas, using a design U-V geometry specified by a markup.
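Claims 3, 10, and 17 describe the coverage buffer as a 2D array of 32-bit integers, each of which is the start of an indexed list of CoverPix structures. The layout below is a minimal sketch of one way to realize that description; the field names (coverage_mask, depth, next) and the LIFO insertion order are assumptions not stated in the patent text.

```python
# Hypothetical sketch of the claimed coverage buffer: a 2D array of
# integers, each the head index of a linked list of per-pixel CoverPix
# records kept in one shared pool (index-based rather than pointer-based).

from dataclasses import dataclass

NIL = 0  # pool slot 0 is reserved so that 0 can mean "empty pixel"

@dataclass
class CoverPix:
    coverage_mask: int      # packed sub-pixel coverage bits
    depth: float            # sample depth, for compositing order
    next: int = NIL         # pool index of the next record for this pixel

class CoverageBuffer:
    def __init__(self, width: int, height: int):
        self.heads = [[NIL] * width for _ in range(height)]  # 2D int array
        self.pool = [None]                                   # index 0 unused

    def push(self, x: int, y: int, pix: CoverPix) -> None:
        """Prepend a CoverPix record to pixel (x, y)'s indexed list."""
        pix.next = self.heads[y][x]
        self.pool.append(pix)
        self.heads[y][x] = len(self.pool) - 1

    def samples(self, x: int, y: int):
        """Walk the indexed list of CoverPix records at pixel (x, y)."""
        i = self.heads[y][x]
        while i != NIL:
            yield self.pool[i]
            i = self.pool[i].next

buf = CoverageBuffer(4, 4)
buf.push(1, 2, CoverPix(coverage_mask=0xFF, depth=0.5))
buf.push(1, 2, CoverPix(coverage_mask=0x0F, depth=0.25))
depths = [p.depth for p in buf.samples(1, 2)]  # most recent sample first
```

Storing indices instead of pointers keeps each buffer entry within the 32-bit integer the claims call for, and lets the whole pool be allocated and cleared in one operation.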

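The "implicit structure" of claims 7 and 14 (triangle points, triangle deltas, implicit triangles) admits a standard reading: from the three triangle points, the per-edge deltas (dx, dy) define implicit edge functions a·x + b·y + c, and point-in-triangle becomes three sign tests. The sketch below is an assumed reconstruction of that construction, not the patent's actual implementation.

```python
# Assumed reconstruction: implicit triangle built from points and deltas,
# so that containment reduces to three sign tests of edge functions.

def implicit_triangle(p0, p1, p2):
    """Return three (a, b, c) edge coefficients with a*x + b*y + c >= 0 inside."""
    pts = (p0, p1, p2)
    edges = []
    for k in range(3):
        (x0, y0), (x1, y1) = pts[k], pts[(k + 1) % 3]
        dx, dy = x1 - x0, y1 - y0                   # triangle deltas
        edges.append((-dy, dx, dy * x0 - dx * y0))  # implicit edge line
    # Normalize orientation so the interior is the positive side,
    # whichever winding the caller used.
    cx = (p0[0] + p1[0] + p2[0]) / 3.0
    cy = (p0[1] + p1[1] + p2[1]) / 3.0
    if any(a * cx + b * cy + c < 0.0 for a, b, c in edges):
        edges = [(-a, -b, -c) for a, b, c in edges]
    return edges

def inside(edges, x, y):
    """Three sign tests decide whether (x, y) lies inside the triangle."""
    return all(a * x + b * y + c >= 0.0 for a, b, c in edges)

tri = implicit_triangle((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
```

Precomputing these coefficients once per triangle is what makes the per-sub-pixel work cheap: each sample needs only multiplies, adds, and comparisons, which maps directly onto the scalar register operations recited in claim 1.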
Description

FIELD OF THE DISCLOSURE

One technical field of the disclosure is an approach for automatically configuring custom product options based on user actions monitored and tracked by collaborative computer platforms. Another technical field is tracking the user actions to generate options for customizing products available from the collaborative computer platforms and generating, based on the options, high-quality renderings of synthetic views of custom products.

BACKGROUND

The approaches described in this section are approaches that could be pursued, but not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by their inclusion.

Many systems on the market offer the opportunity to order products with customized attributes. For example, in the case of manufactured framed products such as photos, digital images, artwork, and other frameable products, the systems may offer the opportunity to order the images and frames in customized sizes and colors. Customizing products that have many customizable parameters may be quite challenging: the selection of customization values affects both the appearance of the final custom products and how those products are rendered. Therefore, the systems often provide functionalities for displaying depictions, i.e., synthetic views, of the customized products to help users visualize their customized products before ordering them. An example of the product customization process is described in, for example, U.S. Pat. No. 8,175,931 B2, which includes a description of an example user product renderer in FIG. 1B (element 109b), FIG. 1C (element 124), and FIG. 2 (element 134). However, the product customization process described in U.S. Pat. No. 8,175,931 B2 does not disclose the high-quality sub-pixel rendering approach.
Generally, synthetic views are digital depictions of objects displayed on computer-based display devices. In the context of digital customization of products, it is useful to render synthetic views of the products before the products are manufactured. This allows a user to visually check the product features and decorations before actually ordering the product. Synthetic views are often a combination of imagery from digital photography. They may include, for example, digital markups and synthetic renderings derived from, for example, 2D, 2.5D, and 3D geometry of the objects. Algorithms for high-quality digital rendering of geometry have been researched and studied for some time. They typically use simulation of light, texture, and color. Major advancements in this technology include work on scanline rendering, binary space partitioning, the z-buffer, the A-buffer, Pixar's Reyes rendering system (culminating in the RenderMan tool), the wide availability of hardware supporting OpenGL and Direct3D, and improvements in hardware-assisted ray tracing, as implemented in, for example, Intel's Embree rendering system. Synthetic digital rendering methods may be grouped by application area, rendering speed, and quality needs. For example, real-time rendering applications for simulation and games typically use carefully designed content and geometry rendered with optimized spatial partitioning on hardware using OpenGL or Direct3D. The rendering time for a frame in a real-time rendering application must be rapid, and latency is usually the key barrier to supporting user interactions with the application. In the entertainment industry, production of imagery for films and prints usually has strict requirements for quality and creative control. Sometimes it might be difficult to meet the high quality and artistic requirements, and even when those requirements are met, they are met at the expense of longer rendering times per image.
Rendering synthetic views for custom products falls between these two applications. Such renderings need to be crafted without the expense incurred by optimizing game assets, and they must be produced as a user interacts with a product: with a latency budget longer than a twitch game allows, but much shorter than a movie frame permits. Therefore, there is a need for high-quality rendering techniques for rendering synthetic views of custom products at relatively low latencies.

BRIEF DESCRIPTION OF DRAWINGS

FIG. 1A illustrates an example system for implementing a method for automatically configuring custom product options based on user actions, according to some embodiments;
FIG. 1B illustrates an example color map;
FIG. 1C illustrates an example color map;
FIG. 1D illustrates an example color map;
FIG. 1E illustrates an example color map;
FIG. 1F illustrates an example color map;
FIG. 2 illustrates an example processing flow for rendering synthetic views, according to some embodiments;
FIG. 3 illustrates an example of a user interface of a p