US-12619240-B1 - Reality capture robot for generating a map of an environment
Abstract
A robotic device scans at least a portion of an environment. The robotic device includes one or more LiDAR sensors to scan the environment. A management system receives sensor data generated by the one or more LiDAR sensors to determine a three dimensional (3D) point cloud representing at least the portion of the environment. A layout of the environment is determined, where the layout represents a location of objects within the environment. At least one discrepancy between the 3D point cloud and the layout is determined. Based at least in part on the at least one discrepancy, a map of the environment is generated.
Inventors
- Aayush Aggarwal
- Vishnu R. Ayyagari
- Kyle Thomas Auger
- Loan Thi Tuong Le
Assignees
- AMAZON TECHNOLOGIES, INC.
Dates
- Publication Date
- 20260505
- Application Date
- 20220329
Claims (15)
- 1 . A method comprising: determining, by one or more computer processors coupled to memory, at least a portion of an environment to be scanned by a robotic device comprising one or more three dimensional (3D) imaging sensors; causing the robotic device to autonomously scan the at least the portion of the environment using the one or more 3D imaging sensors; receiving, from the robotic device, sensor data generated by the one or more 3D imaging sensors; generating, based at least in part on the sensor data, a 3D spatial map representing the at least the portion of the environment; identifying, based at least in part on layout data, a layout of the environment, the layout including: a first location of a first object within the environment, and a first representation associated with the first object; determining at least one discrepancy between the 3D spatial map and the layout, the at least one discrepancy including a second location of a second object within the environment, the second object being the same as the first object; determining, based at least in part on the first representation, a second representation associated with the second object within the 3D spatial map; determining a third object and a fourth object within the 3D spatial map; determining that the third object represents a static object within the environment; determining that the fourth object represents a dynamic object within the environment; generating, based at least in part on the at least one discrepancy, a map of the environment, the map including the second representation associated with the second object and the third object, wherein the map omits the fourth object; determining, based at least in part on the map, a route for a second robotic device to complete a task within the environment; and sending route data associated with the route to the second robotic device, wherein the second robotic device uses the route data to travel along the route.
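The map-generation steps recited in claim 1 can be sketched as follows. This is a minimal, illustrative sketch only, not the patented implementation: the names (`SpatialObject`, `build_map`) and the name-based matching of scanned objects to layout objects are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class SpatialObject:
    name: str
    location: tuple   # (x, y, z) coordinates within the environment
    static: bool      # True for frames, structures, pillars, etc.

def build_map(spatial_map, layout):
    """Build the output map: keep static objects from the scanned 3D
    spatial map, omit dynamic objects, and where the scan disagrees with
    the layout (a discrepancy), record the scanned location."""
    layout_by_name = {obj.name: obj for obj in layout}
    result = []
    for obj in spatial_map:
        if not obj.static:
            continue  # dynamic objects are omitted from the map
        ref = layout_by_name.get(obj.name)
        if ref is not None and ref.location != obj.location:
            # discrepancy: same object, different location -> use scan
            result.append(SpatialObject(obj.name, obj.location, True))
        else:
            result.append(obj)
    return result
```

A downstream route planner would then consume the returned objects when computing a route for a second robotic device.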
- 2 . The method of claim 1 , further comprising sending map data associated with the map to a second robotic device, wherein the second robotic device is configured to utilize the map data to traverse the environment.
- 3 . The method of claim 1 , wherein: the 3D spatial map represents a first 3D map of the environment; the layout represents a second 3D map of the environment; and the map represents a third 3D map of the environment.
- 4 . The method of claim 1 , wherein the sensor data is based at least in part on: first sensor data generated by the one or more 3D imaging sensors at a third location of the robotic device within the at least the portion of the environment; and second sensor data generated by the one or more 3D imaging sensors at a fourth location of the robotic device within the at least the portion of the environment.
- 5 . The method of claim 1 , wherein the static object is one of a frame, structure, or pillar.
- 6 . The method of claim 1 , further comprising: determining an amount of time that has elapsed since generating the map; determining that the amount of time is greater than a threshold; and based at least in part on the amount of time being greater than the threshold, causing the robotic device or a second robotic device to scan at least a second portion of the environment.
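The elapsed-time check of claim 6 amounts to a simple threshold comparison. A minimal sketch, assuming a daily rescan threshold (the threshold value and function name are illustrative, not from the patent):

```python
import time

RESCAN_THRESHOLD_S = 24 * 3600  # illustrative: rescan after one day

def needs_rescan(map_generated_at, now=None):
    """Return True when the time elapsed since the map was generated
    exceeds the threshold, triggering a new scan of the environment."""
    if now is None:
        now = time.time()
    return (now - map_generated_at) > RESCAN_THRESHOLD_S
```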
- 7 . A method comprising: determining at least a portion of an environment to be scanned by a first robotic device; receiving, from the first robotic device, sensor data generated by one or more sensors of the first robotic device; generating, based at least in part on the sensor data, a three dimensional (3D) spatial map representing the at least the portion of the environment; identifying, based at least in part on layout data, a layout of the environment, the layout being representative of one or more objects within the environment; aligning the 3D spatial map and at least a portion of the layout; determining, based at least in part on aligning the 3D spatial map and the at least the portion of the layout, a discrepancy between a first location of a first instance of an object in the 3D spatial map and a second location of a second instance of the object in the layout; determining a second object and a third object within the 3D spatial map, wherein the second object represents a static object within the environment, and the third object represents a dynamic object within the environment; generating, based at least in part on the discrepancy, a map of the environment including the object and the second object, and wherein the map omits the third object; determining a first classifier associated with the object in the layout; determining, based at least in part on the first classifier, a second classifier of the object; sending, to a second robotic device, data associated with the map, the second robotic device being configured to utilize the data to traverse the environment; determining, based at least in part on the map, a route for the second robotic device to complete a task within the environment; and sending route data associated with the route to the second robotic device, wherein the second robotic device uses the route data to travel along the route.
- 8 . The method of claim 7 , further comprising: determining a route along which the first robotic device is to travel to scan the at least the portion of the environment; and sending route data associated with the route to the first robotic device, wherein the first robotic device uses the route data to autonomously travel along the route.
- 9 . The method of claim 8 , wherein: the sensor data is received as the first robotic device travels along the route; or the sensor data is received upon the first robotic device completing the route.
- 10 . The method of claim 7 , wherein the map includes the second classifier of the object.
- 11 . The method of claim 7 , further comprising: providing, as an input to a machine-learned model, the sensor data; receiving, as an output of the machine-learned model, an indication of a first classifier of the first instance of the object; and determining, based at least in part on the layout data, a second classifier of the second instance of the object, wherein aligning the 3D spatial map with the at least the portion of the layout is based at least in part on the first classifier and the second classifier.
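Claim 11 uses classifiers from a machine-learned model and from the layout data to seed the alignment. The sketch below hedges heavily: it assumes the model inference has already produced a classifier per scanned object, and simply pairs scanned and layout objects that share a classifier to form candidate correspondences; the tuple layout `(name, classifier, location)` is an assumption for the example.

```python
def correspondences(scanned, layout):
    """Pair scanned objects with layout objects that share the same
    classifier; such pairs can seed the map/layout alignment of claim 7.
    Each item is a (name, classifier, location) tuple."""
    by_classifier = {}
    for name, classifier, location in layout:
        by_classifier.setdefault(classifier, []).append(location)
    pairs = []
    for name, classifier, location in scanned:
        for ref_location in by_classifier.get(classifier, []):
            pairs.append((location, ref_location))
    return pairs
```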
- 12 . The method of claim 7 , further comprising: determining at least one of: a first origin associated with the 3D spatial map, or a first reference point within the 3D spatial map; and determining at least one of: a second origin associated with the at least the portion of the layout, or a second reference point within the at least the portion of the layout, wherein aligning the 3D spatial map and the at least the portion of the layout is based at least in part on at least one of: aligning the first origin with the second origin, or aligning the first reference point with the second reference point.
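The origin/reference-point alignment of claim 12 can be illustrated as a translation that brings the point cloud's reference point onto the layout's reference point. This sketch ignores rotation and scale, which a full registration would also solve for; all names are illustrative.

```python
def align_by_reference(points, cloud_ref, layout_ref):
    """Translate every scanned point so that the point cloud's reference
    point (or origin) coincides with the layout's reference point.
    Rotation is deliberately ignored in this minimal sketch."""
    dx, dy, dz = (l - c for l, c in zip(layout_ref, cloud_ref))
    return [(x + dx, y + dy, z + dz) for x, y, z in points]
```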
- 13 . The method of claim 7 , wherein the first instance of the object represents a first pillar extending from a floor of the environment, further comprising: determining a second object within the 3D spatial map, the second object representing a second pillar extending from the floor of the environment; and determining, based at least in part on the map of the environment, a distance extending between the first pillar and the second pillar.
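The pillar-to-pillar distance of claim 13, measured on the floor plane, is a Euclidean distance between the two pillar base points. A minimal sketch (the function name and 2D base-point representation are assumptions):

```python
import math

def pillar_distance(base1, base2):
    """Euclidean floor-plane distance between two pillar base points,
    each given as (x, y) coordinates in the map of the environment."""
    return math.hypot(base2[0] - base1[0], base2[1] - base1[1])
```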
- 14 . The method of claim 7 , wherein determining the second classifier of the object is based at least in part on aligning the 3D spatial map and the at least the portion of the layout.
- 15 . The method of claim 7 , wherein determining the discrepancy is based at least in part on first coordinates associated with the first location of the first instance of the object in the 3D spatial map and second coordinates associated with the second location of the second instance of the object in the layout.
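The coordinate-based discrepancy test of claim 15 reduces to comparing the distance between the two sets of coordinates against some tolerance. A minimal sketch; the tolerance value and units are assumptions, not from the patent:

```python
def location_discrepancy(first_coords, second_coords, tolerance=0.05):
    """Return True when the scanned location of an object differs from
    its layout location by more than the tolerance (meters, illustrative)."""
    distance = sum((a - b) ** 2
                   for a, b in zip(first_coords, second_coords)) ** 0.5
    return distance > tolerance
```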
Description
BACKGROUND Maps are commonly used to understand relationships between objects in space. For example, within a building, a map may indicate a layout of the building, where objects are located, and their relative spacing between one another. In some instances, humans may utilize maps, or in other instances, robotic devices may navigate according to the map. Creating a map is often a tedious and time consuming process, and may be prone to error. In such instances, humans may find the map difficult to use and/or robotic drives may be unable to traverse about the building given inaccuracies within the map. This may lead to injury, damage of the robotic drives, and/or damage to other objects in the building. BRIEF DESCRIPTION OF THE DRAWINGS The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical components or features. The systems depicted in the accompanying figures are not to scale and components within the figures may be depicted not to scale with each other. FIG. 1 illustrates an example environment that includes a reality capture robot that scans or images an environment, using one or more LiDAR or three dimensional (3D) imaging sensors, to generate a 3D point cloud or 3D spatial map of the environment, according to an embodiment of the present disclosure. A management system compares the 3D point cloud of the environment to a layout of the environment to determine one or more differences therebetween. Therein, a static map of the environment is generated for use by one or more robotic drives when traversing about the environment. FIG. 2 illustrate select components of the reality capture robot, the management system, and the robotic drives of FIG. 
1, as well as a device that may be used for controlling the reality capture robot, according to an embodiment of the present disclosure. FIG. 3 illustrates an example scenario of the reality capture robot of FIG. 1 scanning an environment to generate a portion of the 3D point cloud of the environment, according to an embodiment of the present disclosure. FIG. 4 illustrates an example scenario of the reality capture robot of FIG. 1 scanning a portion of the environment along a route, according to an embodiment of the present disclosure. FIG. 5A illustrates an example route of the reality capture robot of FIG. 1 to scan the environment, according to an embodiment of the present disclosure. FIG. 5B illustrates an example route of the robotic drive of FIG. 1 traversing the environment, according to an embodiment of the present disclosure. FIG. 6 illustrates an example of aligning a 3D point cloud of the environment with a layout of the environment, according to an embodiment of the present disclosure. FIG. 7 illustrates an example of aligning a portion of a 3D point cloud of the environment with a portion of a layout of the environment, according to an embodiment of the present disclosure. FIG. 8 illustrate an example static map of the environment, according to an embodiment of the present disclosure. FIG. 9 illustrates an example scenario of using a static map of the environment to determine locations of structures within the environment, according to an embodiment of the present disclosure. FIG. 10 illustrates an example process for generating a static map of the environment, according to an embodiment of the present disclosure. FIG. 11 illustrates an example process for comparing a 3D point cloud of the environment with a layout of the environment to generate a static map of the environment, according to an embodiment of the present disclosure. FIG. 
12 illustrates an example process for generating a 3D point cloud of the environment, according to an embodiment of the present disclosure. FIG. 13 illustrates an example process for scanning the environment using the reality capture robot of FIG. 1, according to an embodiment of the present disclosure. DETAILED DESCRIPTION This application is directed, at least in part, to systems and methods that determine a static map of an environment for use by robotic drives and/or scheduling tasks within the environment. In some instances, the environment may represent a facility, warehouse, or other building in which items are packaged, sorted, or otherwise processed for shipment. As part of this, the robotic drives may traverse about the environment for delivering items, restocking items, or otherwise assisting in the processing of items. To enable the robotic drives to move about the environment, the static map of the environment may be generated. In some instances, the static map represents a three-dimensional (3D) representation of the environment that indicates locations of stations, items, structures, and so forth within the environment. The static map may be generated using a reali