US-12620165-B2 - Systems, methods, and computer program products for populating environment models
Abstract
Systems, methods, and computer program products for managing and populating environment models are described. An environment model representing an environment is accessed, and the environment model is populated with instances of object models. Locations where the instances of object models should be positioned in the environment model are identified by determining where in the environment model a respective size of each instance, when viewed from a vantage point at the environment model, matches a size of the object represented by the respective instance when viewed from a corresponding vantage point at the environment.
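The abstract's size-matching idea can be illustrated with a pinhole-camera sketch. This is an assumption-laden illustration, not the patent's method: the function names, the use of a pinhole model, and the focal-length parameter are all introduced here for exposition. Under a pinhole model, an object of real height H metres at depth d appears with pixel height h ≈ f·H/d, so the depth at which an instance's apparent size matches the observed size is d = f·H/h.

```python
import numpy as np

def depth_for_matching_size(real_height_m, observed_pixel_height, focal_length_px):
    """Depth along the view ray at which an object of the given real height
    projects to the observed pixel height (pinhole model: h = f * H / d)."""
    return focal_length_px * real_height_m / observed_pixel_height

def place_instance(vantage_point, unit_view_direction, real_height_m,
                   observed_pixel_height, focal_length_px):
    """Candidate position for the object-model instance: step the matching
    depth along the unit view direction from the vantage point."""
    d = depth_for_matching_size(real_height_m, observed_pixel_height, focal_length_px)
    return np.asarray(vantage_point, dtype=float) + d * np.asarray(unit_view_direction, dtype=float)
```

For example, a 2 m tall object that spans 100 pixels in a camera with a 1000-pixel focal length would be placed 20 m along the view ray.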
Inventors
- Suzanne Gildert
- Geordie Rose
Assignees
- SANCTUARY COGNITIVE SYSTEMS CORPORATION
Dates
- Publication Date
- 20260505
- Application Date
- 20230912
Claims (17)
- 1 . A system comprising: at least one processor; and at least one non-transitory processor-readable storage medium communicatively coupled to the at least one processor, the at least one non-transitory processor-readable storage medium storing processor-executable instructions and/or data that, when executed by the at least one processor, cause the system to: access, by the at least one processor, an environment model representation of an environment; access, by the at least one processor, a first view of the environment from a first vantage point, the first vantage point having a position and a perspective in relation to the environment, wherein the first view comprises first image data having a first resolution and the first view includes an object in the environment; access, in a library of object models, an object model representation of the object, the object model including dimension data indicative of spatial dimensions of the object; and populate the environment model with an instance of the object model at a location in the environment model, wherein the processor-executable instructions which cause the system to populate the environment model with the instance of the object model at the location cause the system to: generate a second view of the environment model from a second vantage point, wherein the second view comprises second image data having a second resolution and a position and a perspective of the second vantage point in relation to the environment model substantially match the position and the perspective of the first vantage point in relation to the environment; identify the location in the environment model where a number of pixels occupied by the instance of the object model in the second image data corresponds to a number of pixels occupied by the object in the first image data; and position the instance of the object model at the location.
- 2 . The system of claim 1 , wherein: the first resolution is equal to the second resolution; and the processor-executable instructions which cause the system to identify the location in the environment model where a number of pixels occupied by the instance of the object model in the second image data corresponds to a number of pixels occupied by the object in the first image data cause the system to: identify the location in the environment model where a number of pixels occupied by the instance of the object model in the second image data is equal to a number of pixels occupied by the object in the first image data.
- 3 . The system of claim 1 , wherein: the first resolution is different from the second resolution by a fixed ratio; the processor-executable instructions which cause the system to identify the location in the environment model where a number of pixels occupied by the instance of the object model in the second image data corresponds to a number of pixels occupied by the object in the first image data cause the system to: identify the location in the environment model where a number of pixels occupied by the instance of the object model in the second image data is equal to a number of pixels occupied by the object in the first image data multiplied by the fixed ratio.
- 4 . The system of claim 1 , wherein the processor-executable instructions further cause the system to generate the object model representing the object in the library of object models.
- 5 . The system of claim 4 , wherein the processor-executable instructions which cause the system to generate the object model representing the object cause the system to: generate the object model representing the object, including the dimension data indicative of the spatial dimensions of the object.
- 6 . The system of claim 4 , further comprising at least one image sensor, wherein: the processor-executable instructions further cause the system to capture, by the at least one image sensor, image data representing the object from multiple viewpoints; and the processor-executable instructions which cause the system to generate the object model representing the object in the library of object models cause the system to generate the object model based on the captured image data from multiple viewpoints.
- 7 . The system of claim 4 , further comprising: an actuatable member which contacts the object; and at least one haptic sensor positioned at the actuatable member, wherein: the processor-executable instructions further cause the system to capture, by the at least one haptic sensor, haptic data representing the object; and the processor-executable instructions which cause the system to generate the object model representing the object in the library of object models cause the system to generate the object model based on the captured haptic data.
- 8 . The system of claim 1 , wherein: the environment is a three-dimensional environment; the environment model is a three-dimensional environment model; the first view comprises first two-dimensional image data representing the environment from the first vantage point; the second view comprises second two-dimensional image data representing the environment model from the second vantage point; and the processor-executable instructions which cause the system to populate the environment model with the instance of the object model at the location further cause the system to, prior to identifying the location, position the instance of the object model in the second image data to correspond to a position of the object in the first image data.
- 9 . The system of claim 8 , wherein the processor-executable instructions which cause the system to populate the environment model with the instance of the object model at the location further cause the system to, prior to identifying the location, orient the instance of the object model in the second image data to correspond to an orientation of the object in the first image data.
- 10 . The system of claim 8 , wherein the processor-executable instructions further cause the system to determine a distance in the environment model between the second vantage point and the instance of the object model at the location.
- 11 . The system of claim 1 , wherein the environment is a physical environment, and the environment model is a representation of the physical environment.
- 12 . The system of claim 1 , wherein the environment is a virtual environment, and the environment model is a representation of the virtual environment.
- 13 . The system of claim 1 , further comprising a robot body positioned at the environment, wherein: the at least one processor is carried by the robot body; the at least one non-transitory processor-readable storage medium is carried by the robot body; the at least one non-transitory processor-readable storage medium stores the library of object models; and the at least one non-transitory processor-readable storage medium stores the environment model.
- 14 . The system of claim 1 , further comprising a robot controller remote from the environment, wherein: the at least one processor is positioned at the robot controller; the at least one non-transitory processor-readable storage medium is positioned at the robot controller; the at least one non-transitory processor-readable storage medium stores the library of object models; and the at least one non-transitory processor-readable storage medium stores the environment model.
- 15 . The system of claim 1 , further comprising: a robot body positioned at the environment; a robot controller remote from the robot body, the robot controller operable to provide control data to the robot body; and a communication interface which communicatively couples the robot body and the robot controller, wherein: the at least one processor is carried by the robot body; the at least one non-transitory processor-readable storage medium includes a first at least one non-transitory processor-readable storage medium carried by the robot body and a second at least one non-transitory processor-readable storage medium positioned at the robot controller; the first at least one non-transitory processor-readable storage medium stores the environment model; the second at least one non-transitory processor-readable storage medium stores the library of object models; the processor-executable instructions which cause the system to access the environment model representation of the environment cause the system to: access, by the at least one processor, the environment model stored at the first at least one non-transitory processor-readable storage medium; the processor-executable instructions which cause the system to access the first view of the environment cause the system to access, by the at least one processor, the first view of the environment stored at the first at least one non-transitory processor-readable storage medium; the processor-executable instructions which cause the system to access, in the library of object models, the object model cause the system to: access the object model in the library of object models stored at the second at least one non-transitory processor-readable storage medium, via the communication interface; the processor-executable instructions which cause the system to generate a second view of the environment model from the second vantage point cause the system to: generate, by the at least one processor, the second view of the environment model; the processor-executable instructions which cause the system to identify the location in the environment model cause the system to: identify, by the at least one processor, the location in the environment model; and the processor-executable instructions which cause the system to position the instance of the object model at the location cause the system to: update, by the at least one processor, the environment model stored at the first at least one non-transitory processor-readable storage medium to include the instance of the object model at the location.
- 16 . The system of claim 1 , further comprising at least one image sensor, wherein the processor-executable instructions further cause the system to capture, by the at least one image sensor, image data representing the first view of the environment from the first vantage point.
- 17 . A system comprising: at least one processor; and at least one non-transitory processor-readable storage medium communicatively coupled to the at least one processor, the at least one non-transitory processor-readable storage medium storing processor-executable instructions and/or data that, when executed by the at least one processor, cause the system to: access, by the at least one processor, an environment model representation of an environment; access, by the at least one processor, a first view of the environment from a first vantage point, the first vantage point having a position and a perspective in relation to the environment, wherein the first view includes an object in the environment; capture, by at least one haptic sensor positioned at an actuatable member which contacts the object, haptic data representing the object; generate, based on the captured haptic data, an object model representing the object; access, in a library of object models, the object model representation of the object, the object model including dimension data indicative of spatial dimensions of the object; and populate the environment model with an instance of the object model at a location in the environment model, wherein the processor-executable instructions which cause the system to populate the environment model with the instance of the object model at the location cause the system to: generate a second view of the environment model from a second vantage point, wherein a position and a perspective of the second vantage point in relation to the environment model substantially match the position and the perspective of the first vantage point in relation to the environment; identify the location in the environment model where at least one spatial dimension of the instance of the object model in the second view of the environment model from the second vantage point substantially matches a corresponding spatial dimension of the object in the first view of the environment from the first vantage point; and position the instance of the object model at the location.
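Claims 1 through 3 identify the location by comparing the number of pixels the instance occupies in the second (rendered) view against the number the object occupies in the first (observed) view, optionally scaled by a fixed resolution ratio. The sketch below is illustrative only: the function names, the caller-supplied render callback, and the bisection strategy are assumptions, not claimed by the patent. It assumes the instance's on-screen pixel count decreases monotonically as it is moved farther from the second vantage point.

```python
def pixel_counts_correspond(first_count, second_count, fixed_ratio=1.0, tolerance=0.0):
    """Claim 2 (equal resolutions, fixed_ratio=1) and claim 3 (resolutions
    differing by a fixed ratio): the counts correspond when the second-view
    count equals the first-view count multiplied by the fixed ratio.
    Note: the claims apply the ratio directly to pixel counts; if the ratio
    were a linear resolution ratio, an area count would scale with its square."""
    return abs(second_count - first_count * fixed_ratio) <= tolerance

def find_matching_depth(target_count, count_at_depth, d_min=0.1, d_max=1000.0, iters=60):
    """Bisect along the view ray from the second vantage point for the depth
    at which the rendered instance occupies the target number of pixels.
    count_at_depth(d) is a caller-supplied function that renders the instance
    at depth d and returns its pixel count; it must decrease monotonically
    with d (nearer instance -> more pixels)."""
    for _ in range(iters):
        mid = 0.5 * (d_min + d_max)
        if count_at_depth(mid) > target_count:
            d_min = mid   # instance renders too large: move it farther away
        else:
            d_max = mid   # instance renders too small: move it closer
    return 0.5 * (d_min + d_max)
```

With an idealized renderer where the count falls off as 1000/d, a target of 50 pixels bisects to a depth near 20, matching the closed-form pinhole relation.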
Description
TECHNICAL FIELD
The present systems, methods, and computer program products generally relate to managing simulated environments, and particularly relate to populating environment models with object models.
DESCRIPTION OF THE RELATED ART
Simulated environments are useful in a variety of applications, including virtual or augmented reality, video games, and robotics, to name a few examples. Robots are machines that may be deployed to perform work. General purpose robots (GPRs) can be deployed in a variety of different environments, to achieve a variety of objectives or perform a variety of tasks. Robots can utilize simulated environments to operate within a physical environment. Such simulated environments should be as robust as possible through effective and selective updating of environment models, to provide information that results in optimal performance in a given environment.
BRIEF SUMMARY
According to a broad aspect, the present disclosure describes a method comprising: accessing, by at least one processor, an environment model representation of an environment; accessing, by the at least one processor, a first view of the environment from a first vantage point, the first vantage point having a position and a perspective in relation to the environment, wherein the first view includes an object in the environment; accessing, in a library of object models, an object model representation of the object, the object model including dimension data indicative of spatial dimensions of the object; and populating the environment model with an instance of the object model at a location in the environment model, wherein populating the environment model with the instance of the object model at the location includes: generating a second view of the environment model from a second vantage point, wherein a position and a perspective of the second vantage point in relation to the environment model substantially match the position and the perspective of the first vantage point in relation to the environment; identifying the location in the environment model where at least one spatial dimension of the instance of the object model in the second view of the environment model from the second vantage point substantially matches a corresponding spatial dimension of the object in the first view of the environment from the first vantage point; and positioning the instance of the object model at the location.
The first view may comprise first image data having a first resolution; the second view may comprise second image data having a second resolution; and identifying the location in the environment model where at least one spatial dimension of the instance of the object model in the second view of the environment model from the second vantage point substantially matches a corresponding spatial dimension of the object in the first view of the environment from the first vantage point may comprise: identifying the location in the environment model where a number of pixels occupied by the instance of the object model in the second image data corresponds to a number of pixels occupied by the object in the first image data.
The first resolution may be equal to the second resolution; and identifying the location in the environment model where a number of pixels occupied by the instance of the object model in the second image data corresponds to a number of pixels occupied by the object in the first image data may comprise: identifying the location in the environment model where a number of pixels occupied by the instance of the object model in the second image data is equal to a number of pixels occupied by the object in the first image data.
The first resolution may be different from the second resolution by a fixed ratio; and identifying the location in the environment model where a number of pixels occupied by the instance of the object model in the second image data corresponds to a number of pixels occupied by the object in the first image data may comprise: identifying the location in the environment model where a number of pixels occupied by the instance of the object model in the second image data is equal to a number of pixels occupied by the object in the first image data multiplied by the fixed ratio.
The method may further comprise generating the object model representing the object in the library of object models. Generating the object model representing the object may comprise generating the object model representing the object, including the dimension data indicative of the spatial dimensions of the object.
The method may further comprise capturing, by at least one image sensor, image data representing the object from multiple viewpoints, and generating the object model representing the object in the library of object models may comprise generating the object model based on the captured image data from multiple viewpoints.
The method may further comprise capturing, by at least one haptic sensor positioned at an actuatable member