US-12626298-B2 - Rendering 3D model data for prioritized placement of 3D models in a 3D virtual environment

US 12626298 B2

Abstract

Systems and methods for composing a virtual environment are provided. The virtual environment may represent a room or other space having specified parameters. The system may facilitate placement of 3D models of objects in the virtual environment, where the 3D models correspond to one or more objects depicted in a 2D image, and the placement of the 3D models is algorithmically determined based on placement rules to generate an arrangement of the 3D models based on a layout of the one or more objects in the 2D image. The system can execute the placement rules to algorithmically determine placement locations of the 3D models corresponding to objects depicted in the 2D image.

Inventors

  • Barry Besecker
  • Anna Wittekind
  • Ryan Roche
  • Matthew Robert Mooney
  • Ian Newland
  • Jonathan Jekeli
  • Jayson Alan Hilborn

Assignees

  • MARXENT LABS LLC

Dates

Publication Date
2026-05-12
Application Date
2023-11-15

Claims (20)

  1. A computer-implemented method for composing a virtual environment representing a room or other space having specified parameters, facilitating placement of 3D models of objects in the virtual environment, where the 3D models correspond to objects depicted in a 2D image, and the placement of the 3D models is algorithmically determined based on placement rules to generate an arrangement of the 3D models based on a layout of the objects in the 2D image, the computer-implemented method being performed by one or more processors programmed with computer instructions which, when executed, cause the one or more processors to perform the steps of: executing the placement rules to algorithmically determine placement locations of the 3D models corresponding to the objects depicted in the 2D image, wherein the placement locations of the 3D models include placement of at least one 3D model in relation to a mounting point in the room or other space, wherein the placement of the at least one 3D model maintains a spatial relationship between the 3D models when at least one 3D model is moved in the virtual environment, wherein the 3D models having the spatial relationship are automatically associated as a group with stored spatial relationships, such that in response to a user input to move or manipulate one 3D model of the group within the virtual environment, other 3D models of the group are moved in a manner that maintains the stored spatial relationships among all 3D models in the group; and wherein executing the placement rules algorithmically determines the placement of other 3D models corresponding to the objects depicted in the 2D image.
  2. The computer-implemented method of claim 1, further comprising: storing, in a computer memory, a set of 2D images; storing, in a computer memory, 3D model data including wireframes for rendering 3D models that correspond to the objects and physical attributes of the objects; preprocessing the set of 2D images to identify the objects, physical attributes of the objects and layouts of the objects in individual ones of the 2D images and mapping the objects to the stored 3D model data; and storing, in a computer memory, a set of placement rules for maintaining spatial relationships that limit placement of the layout of the 3D models.
  3. The computer-implemented method of claim 1, further comprising: receiving a user selection of one 2D image of a set of 2D images; and algorithmically determining an arrangement of 3D models that correspond to the objects based at least in part on the layout of the objects in the user selection of the one 2D image and specified parameters of the virtual environment to facilitate an automated initial arrangement of the 3D models in the virtual environment, without further input by the user after the user selection of the one 2D image of the set of 2D images.
  4. The computer-implemented method of claim 1, further comprising: generating 3D models corresponding to the objects depicted in the 2D image based at least in part on 3D model data mapped to at least some of the objects depicted in the 2D image; and displaying in the virtual environment, via a graphical user interface, the generated 3D models at corresponding determined placement locations, wherein one portion of the graphical user interface is configured to display the virtual environment and another portion of the graphical user interface is configured to display the 2D image.
  5. The computer-implemented method of claim 1, further comprising: algorithmically determining an arrangement of 3D models that correspond to the objects based at least in part on stored physical attributes of the objects and specified parameters of the virtual environment.
  6. The computer-implemented method of claim 5, further comprising: generating an on-demand render of the 3D models for display on a graphical user interface based on the stored physical attributes of corresponding objects, wherein generating an on-demand render comprises: displaying, via the graphical user interface, the 3D models corresponding to the objects depicted in the 2D image based at least in part on 3D model data mapped to at least some of the objects depicted in the 2D image; providing, via the graphical user interface, options for altering the generated on-demand render of the 3D models according to the stored physical attributes of the corresponding objects; and in response to receiving a selection of an option, generating, via a rendering platform, an on-demand render of the 3D model based on the selected option.
  7. The computer-implemented method of claim 1, further comprising: algorithmically determining an arrangement of 3D models that correspond to the objects based at least in part on the layout of the objects from the 2D image and specified parameters of the virtual environment to facilitate automated arrangement of the 3D models in the virtual environment.
  8. The computer-implemented method of claim 1, further comprising: storing, in a product information database, product information for products corresponding to the objects depicted in a set of 2D images, the product information including a product identifier to relate an object to a known purchasable object, information to associate the object in the 2D image with a product in the product information database, and purchase information indicating where the product is available for purchase.
  9. The computer-implemented method of claim 1, wherein a set of 2D images includes visual representations of rooms depicted in an image, including objects in the room, layouts of the objects including a location of the objects within the room and relative locations of the objects with respect to one another, and the method further comprising: storing, in an image database, image metadata information in association with the 2D image, the image metadata information including information identifying the objects in the image and layout information of the objects in the image.
  10. The computer-implemented method of claim 9, wherein image metadata information comprises attribute information about the image, including a category type of the room or other space depicted in the image and décor style information.
  11. The computer-implemented method of claim 9, wherein image metadata information comprises information for identifying an object in the 2D image and metadata about the 2D image or objects, including physical attributes of the room in the 2D image and physical attributes of the objects.
  12. The computer-implemented method of claim 1, wherein the placement rules comprise computer-readable instructions to place the 3D models according to stored physical attributes of corresponding objects and specified parameters of the virtual environment.
  13. The computer-implemented method of claim 1, further comprising: applying rendering rules based on stored object information, including rules for rendering a 3D model of an object with respect to that of another object.
  14. The computer-implemented method of claim 1, further comprising: executing a preprocessing module to process a set of 2D images to create and store image information, identify objects, physical attributes of the objects and a layout of the objects in the 2D image, map identified objects to 3D model data, and store the mapped objects in one or more storage devices.
  15. The computer-implemented method of claim 1, further comprising: displaying one or more graphical user interfaces configured to display, in a first user interface portion, user interface tools to enable the user to search and/or scroll through a set of 2D images and select at least one of the set of 2D images, and to automatically display, in a second user interface portion, the virtual environment and the objects from the 2D image.
  16. The computer-implemented method of claim 1, further comprising: algorithmically determining the placement of rendered 3D models of the objects in the virtual environment by identifying usable spaces in the virtual environment, selecting a starting object, identifying a mounting point for placing a 3D model of the starting object, determining remaining space and selecting a second object, and identifying a location for placement of a 3D model of the second object, wherein the placement of the 3D models corresponding to the starting object and the second object is determined in accordance with a stored set of placement rules.
  17. The computer-implemented method of claim 1, further comprising: storing metadata in association with a stored 2D image, wherein the metadata comprises physical attributes of the objects in the stored 2D image, and wherein the placement locations are based at least in part on the physical attributes of the objects.
  18. The computer-implemented method of claim 1, further comprising: displaying, via a graphical user interface, objects identified from one or more 2D images; in response to receiving user selection of the objects identified from the one or more 2D images, identifying 3D model data for the objects; and displaying in the virtual environment, via a graphical user interface, a 3D model of the objects.
  19. The computer-implemented method of claim 1, further comprising: identifying, by an image analyzer, limitations of the virtual environment, wherein limitations of the virtual environment comprise non-usable space or gaps in the virtual environment; in response to identifying limitations of the virtual environment, identifying one or more resources to fill gaps in the virtual environment between the 3D models placed in the virtual environment; automatically detecting, after placement of a group of 3D models in the virtual environment, any spatial gaps or non-usable space within the arrangement of grouped 3D models; and in response to detecting a gap, inserting one or more additional 3D models or objects to fill the gap, wherein the inserted one or more additional 3D models are positioned to maintain the stored spatial relationships of the group.
  20. The computer-implemented method of claim 1, further comprising: automatically generating different resolutions for the 3D models placed in the virtual environment based on rendering rules for room layouts.
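The grouped-movement behavior recited in claim 1 (moving one 3D model of a group moves the others so that stored spatial relationships are maintained) can be illustrated with a minimal sketch. The class names, the anchor-plus-offset scheme, and the Python implementation are editorial assumptions for illustration, not taken from the patent:

```python
from dataclasses import dataclass


@dataclass
class Model3D:
    name: str
    position: tuple  # (x, y, z) in virtual-environment coordinates


class ModelGroup:
    """Group of 3D models whose relative offsets are stored so that
    moving one member repositions the rest, preserving the arrangement."""

    def __init__(self, models):
        self.models = models
        anchor = models[0].position
        # Stored spatial relationships: offset of each model from the first.
        self.offsets = [tuple(p - a for p, a in zip(m.position, anchor))
                        for m in models]

    def move_member(self, index, new_position):
        """Move one member; shift every other member by the same amount
        so all stored offsets among the group are maintained."""
        base_offset = self.offsets[index]
        new_anchor = tuple(n - o for n, o in zip(new_position, base_offset))
        for model, offset in zip(self.models, self.offsets):
            model.position = tuple(a + o for a, o in zip(new_anchor, offset))
```

For example, if a sofa at (0, 0, 0) and a table at (2, 0, 1) are grouped, moving the sofa to (5, 0, 0) carries the table to (7, 0, 1), keeping their stored offset intact.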

Description

CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 18/063,365, filed Dec. 8, 2022, which claims the benefit of U.S. Patent Application No. 63/287,306, filed Dec. 8, 2021, the entire contents of which are incorporated herein by reference in their entirety. This application is related to U.S. Pat. No. 10,672,191 issued Jun. 6, 2020, U.S. Pat. No. 10,600,255 issued Mar. 24, 2020, U.S. Pat. No. 11,049,317 issued Jun. 29, 2021, and U.S. Pat. No. 11,544,901 issued Jan. 3, 2023, the entire contents of each of which are incorporated herein by reference in their entirety. Aspects of the invention described below use concepts described in the above-referenced patents and applications to implement features described herein. Additionally, the concepts described in the above-referenced patents and applications may be used in various combinations with the concepts described herein.

TECHNICAL FIELD

The disclosed technology relates to a computer-implemented system including a three-dimensional (3D) room designer application that computer-generates an algorithmically determined arrangement of 3D models of objects in a virtual environment, based on using image analysis (or other technology) to identify objects in a user-selected two-dimensional (2D) image depicting a layout of objects (e.g., furniture or other objects). The arrangement determination corresponds at least in part to stored physical attributes of the corresponding objects and specified parameters of the virtual environment, where the algorithm can adapt the depicted layout of objects from the image based on the parameters of the virtual environment to facilitate automated arrangement, even when the size, shape and/or other parameters of the room in the 2D image differ from the size, shape and/or other parameters of the virtual environment.

BACKGROUND

Various room design tools exist.
These design tools suffer from various well-known drawbacks and limitations. Efforts to simplify the process of using these tools, in part, by basing a room design on a 2D photograph have been proposed, but this too has several technical limitations. For example, often the exact configuration of the objects in the photo will not work in the room being designed: the size, shape and/or other parameters of the room in the 2D photograph may differ from the size, shape and/or other parameters of the virtual environment. Additionally, in some cases, users need to manually select individual objects within a room and select a location at which each object should be placed in the virtual environment. This can be tedious, time consuming and frustrating for users. These and other technical limitations adversely impact tools that propose basing room design on a layout in a photograph. As used herein, a photograph can include a digital image.

BRIEF SUMMARY OF EMBODIMENTS

Systems, methods, and computer-readable media are disclosed for facilitating room design, including computer-generating an algorithmically determined arrangement of 3D models in a virtual environment, based on a 2D image depicting a layout of objects (e.g., furniture or other objects in a 2D image), where the determination is based at least in part on physical attributes of the corresponding objects, the size, shape and/or other parameters of the room in the 2D image, and the size, shape and/or other parameters of the virtual environment. In some embodiments, upon user selection of a 2D image, the system will create (if not already created) 3D models of objects in the selected image and automatically render the 3D models in a virtual environment at locations determined by placement and/or rendering rules, all without requiring the user to select individual objects in the image and/or locations for the objects as is typical with prior systems.
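The automated placement summarized above (and detailed in claim 16: identify usable space, select a starting object, anchor it at a mounting point, then place subsequent objects in the remaining space) can be sketched as a greedy one-dimensional placement along a wall. The function name, the (name, width) object representation, and the fixed spacing value are assumptions for illustration only, not the patent's actual placement rules:

```python
def place_models(models, wall_length, spacing=0.1):
    """Greedy placement sketch: anchor the starting object at the wall's
    mounting point (x = 0), then place each subsequent model in the
    remaining usable space, keeping a fixed spacing between models.

    `models` is a list of (name, width) pairs; widths and wall_length
    share the same unit. Returns {name: x_offset}; models that no
    longer fit in the remaining space are skipped.
    """
    placements = {}
    cursor = 0.0
    for name, width in models:
        if cursor + width > wall_length:  # no usable space left
            break
        placements[name] = cursor
        cursor += width + spacing
    return placements
```

With a 4.0-unit wall, a 2.0-unit sofa lands at x = 0, a 1.0-unit table at x = 2.1, and a 3.0-unit shelf is dropped for lack of remaining space. A real implementation would also adapt placement when the virtual room's dimensions differ from the source image's room, as the summary describes.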
The system includes a remote computing device with a master resource database (e.g., for storing 3D model data) and a master layout database (e.g., for storing images and/or image information for images depicting layouts of objects), and a local computing device with a resource manager, local resource database, rendering engine, and 3D room designer application. The remote computing device and/or the local computing device may include one or more storage devices for storing 2D images and image metadata, 3D model data, and product or other object information. The stored 2D images may include visual representations of rooms depicted in an image, including objects in the room, layouts of the objects, and/or other information. The layout may include the location of the objects within the room and the relative location of the objects with respect to one another. Image metadata information stored in association with the 2D image may include information about objects in the image, layout information of the objects in the image, and attribute information about the image, including, for example, a category type of the room or other space depicted in the image and décor style information.
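The image metadata just described, identified objects with their locations and relative locations plus room category and décor style, might be represented with simple records like the following sketch. The field names, the normalized 2D coordinate convention, and the helper method are editorial assumptions, not the patent's actual schema:

```python
from dataclasses import dataclass, field


@dataclass
class ObjectEntry:
    object_id: str   # maps the detected object to stored 3D model data
    position: tuple  # location within the room (normalized image coordinates)
    physical_attributes: dict = field(default_factory=dict)


@dataclass
class ImageMetadata:
    image_id: str
    room_type: str    # e.g., "living room" (category type of the space)
    decor_style: str  # e.g., "mid-century modern"
    objects: list = field(default_factory=list)  # ObjectEntry records

    def relative_offset(self, a: str, b: str):
        """Relative location of object b with respect to object a,
        as stored in the layout information."""
        lookup = {o.object_id: o.position for o in self.objects}
        return tuple(q - p for p, q in zip(lookup[a], lookup[b]))
```

Storing both absolute positions and derivable relative offsets lets placement rules reuse the source layout even when the target room's dimensions differ.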