US-12626379-B1 - Modeling an environment based on limited data

Abstract

In some implementations, a method includes obtaining environmental data corresponding to an environment. The method includes determining that the environmental data corresponding to the environment includes environmental data that corresponds to a first portion of an object that represents a sub-portion of the object and not an entirety of the object. The method includes generating a plurality of candidate point clouds for a second portion of the object based on the environmental data corresponding to the first portion of the object. The plurality of candidate point clouds are associated with corresponding confidence scores. The method includes synthesizing a model of the environment that includes a point cloud representing the first portion of the object, at least a subset of the plurality of candidate point clouds for the second portion of the object and the corresponding confidence scores associated with the subset of the plurality of candidate point clouds.
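As a rough illustration of the pipeline the abstract describes, the sketch below generates several candidate completions for the hidden portion of a partially observed object, each tagged with a confidence score, and keeps the subset above a threshold in the synthesized model. All function names, the mirroring heuristic, and the confidence formula are hypothetical stand-ins; in the patent, the candidates and scores are produced by a neural network system.

```python
import random

def generate_candidate_point_clouds(observed_points, num_candidates=3):
    """Produce hypothetical candidate point clouds for the unobserved
    portion of an object by mirroring the observed points with jitter.
    This stands in for the neural network system the patent describes."""
    candidates = []
    for i in range(num_candidates):
        rng = random.Random(i)
        # Mirror observed points across x=0 with small jitter so each
        # candidate is a distinct guess at the hidden portion.
        cloud = [(-x + rng.uniform(-0.05, 0.05), y, z)
                 for (x, y, z) in observed_points]
        # Later candidates are more speculative, so score them lower.
        confidence = 1.0 / (i + 1)
        candidates.append((cloud, confidence))
    return candidates

def synthesize_model(observed_points, candidates, min_confidence=0.3):
    """Combine the observed point cloud with the subset of candidate
    clouds whose confidence meets the threshold, as in the abstract."""
    kept = [(cloud, score) for (cloud, score) in candidates
            if score >= min_confidence]
    return {"observed": observed_points, "candidates": kept}

observed = [(1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
model = synthesize_model(observed, generate_candidate_point_clouds(observed))
print(len(model["candidates"]))  # → 3 (all scores 1.0, 0.5, 0.33… pass 0.3)
```

Downstream consumers (such as the planner described later in this document) can then weigh each candidate by its confidence score rather than committing to a single completion.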

Inventors

  • Daniel L. Kovacs
  • Mark E. Drummond
  • Payal Jotwani

Assignees

  • APPLE INC.

Dates

Publication Date
2026-05-12
Application Date
2022-05-27

Claims (20)

  1. A method comprising: at a device including an environmental sensor, a non-transitory memory and one or more processors coupled with the environmental sensor and the non-transitory memory: obtaining, via the environmental sensor, environmental data corresponding to an environment; determining that the environmental data corresponds to a first portion of an object that represents a sub-portion of the object and not an entirety of the object; generating a plurality of candidate point clouds for a second portion of the object based on the environmental data corresponding to the first portion of the object, wherein the environmental data does not correspond to the second portion of the object and wherein the first portion of the object is in a field of detection of the environmental sensor and the second portion of the object is outside the field of detection of the environmental sensor, wherein generating the plurality of candidate point clouds comprises: providing a portion of the environmental data corresponding to the first portion of the object to a neural network system; and receiving the plurality of candidate point clouds and corresponding confidence scores as outputs of the neural network system; and synthesizing a model of the environment that includes a point cloud representing the first portion of the object and at least a subset of the plurality of candidate point clouds for the second portion of the object.
  2. The method of claim 1, wherein generating the plurality of candidate point clouds comprises generating at least one of the plurality of candidate point clouds by extrapolating a geometric feature of the first portion of the object.
  3. The method of claim 1, wherein generating the plurality of candidate point clouds comprises generating at least one of the plurality of candidate point clouds based on a proportion of the first portion of the object.
  4. The method of claim 1, wherein generating the plurality of candidate point clouds comprises generating at least one of the plurality of candidate point clouds based on an interaction of the object with another object.
  5. The method of claim 1, wherein the second portion of the object is occluded.
  6. The method of claim 1, wherein the environment is a graphical environment, and wherein obtaining the environmental data comprises receiving image data corresponding to the graphical environment.
  7. The method of claim 1, wherein the environmental sensor comprises a depth sensor, and wherein obtaining the environmental data comprises receiving depth data from the depth sensor.
  8. The method of claim 1, wherein the environmental sensor comprises an image sensor, and wherein obtaining the environmental data comprises receiving image data from the image sensor.
  9. The method of claim 1, wherein the determination that the first portion represents a sub-portion of the object and not the entirety of the object is based on a proportion of the first portion of the object.
  10. The method of claim 1, wherein the determination that the first portion represents a sub-portion of the object and not the entirety of the object is based on at least one of: (i) one or more characteristics of the first portion of the object or (ii) a position of another object in the environment.
  11. A device comprising: an environmental sensor; one or more processors; a non-transitory memory; and one or more programs stored in the non-transitory memory, which, when executed by the one or more processors, cause the device to: obtain, via the environmental sensor, environmental data corresponding to an environment; determine that the environmental data corresponds to a first portion of an object that represents a sub-portion of the object and not an entirety of the object, wherein the determination that the first portion represents a sub-portion of the object and not the entirety of the object is based on at least one of: (i) one or more characteristics of the first portion of the object, (ii) a position of another object in the environment, or (iii) a proportion of the first portion of the object; generate a plurality of candidate point clouds for a second portion of the object based on the environmental data corresponding to the first portion of the object, wherein the environmental data does not correspond to the second portion of the object and wherein the first portion of the object is in a field of detection of the environmental sensor and the second portion of the object is outside the field of detection of the environmental sensor; and synthesize a model of the environment that includes a point cloud representing the first portion of the object and at least a subset of the plurality of candidate point clouds for the second portion of the object.
  12. The device of claim 11, wherein generating the plurality of candidate point clouds comprises generating at least one of the plurality of candidate point clouds based on stored knowledge regarding the environment.
  13. The device of claim 11, wherein generating the plurality of candidate point clouds comprises generating at least one of the plurality of candidate point clouds based on a position of the object within the environment.
  14. The device of claim 11, wherein generating the plurality of candidate point clouds comprises: providing a portion of the environmental data corresponding to the first portion of the object to a neural network system; and receiving the plurality of candidate point clouds and corresponding confidence scores as outputs of the neural network system.
  15. The device of claim 11, wherein generating the plurality of candidate point clouds comprises generating at least one of the plurality of candidate point clouds based on at least one of: (i) a proportion of the first portion of the object or (ii) an interaction of the object with another object.
  16. A non-transitory memory storing one or more programs, which, when executed by one or more processors of a device, cause the device to: obtain, via an environmental sensor, environmental data corresponding to an environment; determine that the environmental data corresponds to a first portion of an object that represents a sub-portion of the object and not an entirety of the object; generate a plurality of candidate point clouds for a second portion of the object based on the environmental data corresponding to the first portion of the object, wherein the environmental data does not correspond to the second portion of the object and wherein the first portion of the object is in a field of detection of the environmental sensor and the second portion of the object is outside the field of detection of the environmental sensor, wherein generating the plurality of candidate point clouds comprises generating at least one of the plurality of candidate point clouds based on at least one of: (i) a proportion of the first portion of the object or (ii) an interaction of the object with another object; and synthesize a model of the environment that includes a point cloud representing the first portion of the object and at least a subset of the plurality of candidate point clouds for the second portion of the object.
  17. The non-transitory memory of claim 16, wherein generating the plurality of candidate point clouds comprises generating at least one of the plurality of candidate point clouds by extrapolating a geometric feature of the first portion of the object.
  18. The non-transitory memory of claim 16, wherein the determination that the first portion represents a sub-portion of the object and not the entirety of the object is based on a position of another object in the environment.
  19. The non-transitory memory of claim 16, wherein the environment is a physical environment, and wherein obtaining the environmental data comprises receiving image data or depth data corresponding to the physical environment.
  20. The non-transitory memory of claim 16, wherein generating the plurality of candidate point clouds comprises: providing a portion of the environmental data corresponding to the first portion of the object to a neural network system; and receiving the plurality of candidate point clouds and corresponding confidence scores as outputs of the neural network system.
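Claims 3 and 9 recite a proportion-based test for deciding that only a sub-portion of an object has been observed. A minimal sketch of such a heuristic follows; the function name, the per-class expected extent, and the threshold are all hypothetical, since the claims leave the exact mechanism to the implementation.

```python
def is_partial_observation(observed_extent, expected_extent, threshold=0.9):
    """Proportion-based check: if the observed extent of an object covers
    less than `threshold` of the extent expected for its class, treat the
    observation as a sub-portion of the object rather than its entirety."""
    return (observed_extent / expected_extent) < threshold

# A chair expected to be ~1.0 m tall but sensed only up to 0.4 m is likely
# occluded, so candidate point clouds for its hidden portion are warranted.
print(is_partial_observation(0.4, 1.0))   # → True
print(is_partial_observation(0.95, 1.0))  # → False
```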

Description

CROSS-REFERENCE TO RELATED APPLICATION

This application claims the benefit of U.S. Provisional Patent App. No. 63/211,646, filed on Jun. 17, 2021, which is incorporated by reference in its entirety.

TECHNICAL FIELD

The present disclosure generally relates to modeling an environment based on limited data.

BACKGROUND

Some devices are capable of generating and presenting graphical environments that include many objects. These objects may mimic real-world objects. These environments may be presented on mobile communication devices.

BRIEF DESCRIPTION OF THE DRAWINGS

So that the present disclosure can be understood by those of ordinary skill in the art, a more detailed description may be had by reference to aspects of some illustrative implementations, some of which are shown in the accompanying drawings.

FIG. 1A is a diagram of an example environment in accordance with some implementations.

FIGS. 1B-1F are diagrams of example models of the environment in accordance with some implementations.

FIG. 2 is a block diagram of a modeling system that generates a model of an environment in accordance with some implementations.

FIG. 3 is a flowchart representation of a method of synthesizing a model of an environment in accordance with some implementations.

FIG. 4 is a block diagram of a device that synthesizes a model of an environment in accordance with some implementations.

FIGS. 5A and 5B are diagrams that illustrate example conditional plans in accordance with some implementations.

FIGS. 6A and 6B are block diagrams of a planner that generates a set of conditional plans in accordance with some implementations.

FIG. 7 is a flowchart representation of a method of generating a set of conditional plans in accordance with some implementations.

FIG. 8 is a block diagram of a device that generates a set of conditional plans in accordance with some implementations.

In accordance with common practice, the various features illustrated in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some of the drawings may not depict all of the components of a given system, method, or device. Finally, like reference numerals may be used to denote like features throughout the specification and figures.

SUMMARY

Various implementations disclosed herein include devices, systems, and methods for synthesizing a model of an environment based on limited environmental data corresponding to the environment. In some implementations, a device includes an environmental sensor, a non-transitory memory and one or more processors coupled with the environmental sensor and the non-transitory memory. In some implementations, a method includes obtaining, via the environmental sensor, environmental data corresponding to an environment. In some implementations, the method includes determining that the environmental data corresponding to the environment includes environmental data that corresponds to a first portion of an object that represents a sub-portion of the object and not an entirety of the object. In some implementations, the method includes generating a plurality of candidate point clouds for a second portion of the object based on the environmental data corresponding to the first portion of the object. In some implementations, the plurality of candidate point clouds are associated with corresponding confidence scores. In some implementations, the method includes synthesizing a model of the environment that includes a point cloud representing the first portion of the object, at least a subset of the plurality of candidate point clouds for the second portion of the object and the corresponding confidence scores associated with the subset of the plurality of candidate point clouds.
Various implementations disclosed herein include devices, systems, and methods for generating a set of conditional plans for an agent based on a model that includes a plurality of candidate point clouds. In some implementations, a device includes a non-transitory memory and one or more processors coupled with the non-transitory memory. In some implementations, a method includes obtaining a model of an environment that includes a point cloud representing a first portion of an object, a plurality of candidate point clouds for a second portion of the object and corresponding confidence scores associated with the plurality of candidate point clouds. In some implementations, the method includes generating a set of conditional plans for an agent that is associated with an objective. In some implementations, each conditional plan in the set of conditional plans corresponds to a respective candidate point cloud of the plurality of point clouds in the model. In some implementations, the method includes selecting a first conditional plan from the set of conditional plans based on the corresponding confidence scores associated with the plurality of candidate point clouds. In some imp
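The second summary paragraph describes selecting a conditional plan based on the confidence scores of the candidate point clouds each plan corresponds to. A minimal sketch of that selection step follows; the function name and the mapping from plan to score are hypothetical structure, not the patent's, which ties each plan to a candidate point cloud in the model.

```python
def select_conditional_plan(plans):
    """Select the conditional plan tied to the highest-confidence candidate
    point cloud. `plans` maps a plan identifier to the confidence score of
    the candidate point cloud that plan was generated for."""
    return max(plans, key=plans.get)

# Three plans for an agent navigating around a partially observed obstacle,
# each keyed to the confidence of the completion it assumes.
plans = {"walk_around_left": 0.7, "walk_around_right": 0.2, "step_over": 0.1}
print(select_conditional_plan(plans))  # → walk_around_left
```

Keeping the lower-scored plans available lets the agent fall back to them if later sensor data contradicts the completion the selected plan assumed.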