US-12625506-B1 - Procedures for docking with stations by aerial vehicles

US 12625506 B1

Abstract

A procedure for docking an aerial vehicle at a docking station includes multiple phases. In a first phase, the aerial vehicle begins a descent at a position over a prior position of the docking station, and monitors a pose using navigation sensors and images captured from below the aerial vehicle. The docking station includes visual markings in a pattern. The aerial vehicle detects the markings within the images, and determines poses based on the pattern. When the poses converge, the docking evolution proceeds to a second phase, during which the aerial vehicle determines poses using images and inertial sensors. When the aerial vehicle reaches an altitude below which the markings do not appear within the field of view of the camera, the docking evolution proceeds to a third phase, during which the aerial vehicle determines poses using images, inertial sensors and range data, until the aerial vehicle is docked.
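The abstract describes three phases with distinct transition cues: pose convergence ends the first phase, loss of the markings from the camera's field of view ends the second, and touchdown ends the third. This can be summarized as a small state machine; a minimal sketch, in which the names (`Phase`, `next_phase`) and the boolean cues are illustrative rather than taken from the patent:

```python
from enum import Enum, auto

class Phase(Enum):
    PHASE_1 = auto()  # navigation sensors + downward camera images
    PHASE_2 = auto()  # camera images + inertial sensors
    PHASE_3 = auto()  # camera images + inertial sensors + range data
    DOCKED = auto()

def next_phase(phase, poses_converged, markings_in_view, touched_down):
    """Advance the docking evolution using the transition cues in the
    abstract: pose convergence ends phase 1, loss of the markings from the
    camera's field of view ends phase 2, and touchdown ends phase 3."""
    if phase is Phase.PHASE_1 and poses_converged:
        return Phase.PHASE_2
    if phase is Phase.PHASE_2 and not markings_in_view:
        return Phase.PHASE_3
    if phase is Phase.PHASE_3 and touched_down:
        return Phase.DOCKED
    return phase
```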

Inventors

  • Hsiao-Chieh Yen

Assignees

  • AMAZON TECHNOLOGIES, INC.

Dates

Publication Date
2026-05-12
Application Date
2023-12-15

Claims (20)

  1. A method for executing a docking evolution at a docking station by an aerial vehicle, wherein the aerial vehicle comprises: a plurality of propulsion motors, wherein each of the plurality of propulsion motors is coupled to a propeller and configured to rotate the propeller at one or more selected speeds; a camera, wherein the camera has a field of view extending normal to a surface on an underside of the aerial vehicle; a navigation sensor; an inertial measurement unit comprising at least one gyroscope, at least one accelerometer and at least one magnetometer; a range sensor, wherein the range sensor is configured to emit light below the aerial vehicle and capture reflections of the emitted light; and one or more computer processors, wherein the docking station comprises a frame having a plurality of visual markings arranged in an asymmetric pattern on an upper surface of the frame, and wherein the method comprises: operating, by the aerial vehicle, one or more of the propulsion motors to cause the aerial vehicle to travel to a first position associated with the docking evolution; initiating, by the aerial vehicle, a first phase of the docking evolution with the aerial vehicle within a vicinity of the first position, wherein initiating the first phase of the docking evolution comprises causing the aerial vehicle to descend in altitude; during the first phase, capturing, by the camera, a first plurality of images; detecting, by the aerial vehicle, the plurality of visual markings depicted within at least one of the first plurality of images; calculating, by the aerial vehicle, at least a first pose of the aerial vehicle based at least in part on the plurality of visual markings depicted within the at least one of the first plurality of images; capturing data by the navigation sensor; calculating, by the aerial vehicle, at least a second pose of the aerial vehicle based at least in part on the data captured by the navigation sensor; and prior to arriving at a second position, determining that the first pose is consistent with the second pose; and initiating, by the aerial vehicle, a second phase of the docking evolution with the aerial vehicle within a vicinity of the second position, wherein initiating the second phase of the docking evolution comprises causing the aerial vehicle to descend in altitude; during the second phase, capturing, by the camera, a second plurality of images; detecting, by the aerial vehicle, the plurality of visual markings depicted within at least one of the second plurality of images; capturing first data by the inertial measurement unit; calculating, by the aerial vehicle, at least a third pose of the aerial vehicle based at least in part on the plurality of visual markings depicted within the at least one of the second plurality of images and the first data; and prior to arriving at a third position, determining that the third pose is consistent with the second pose; and initiating, by the aerial vehicle, a third phase of the docking evolution with the aerial vehicle within a vicinity of the third position, wherein initiating the third phase of the docking evolution comprises causing the aerial vehicle to descend in altitude; and during the third phase, capturing, by the camera, a third plurality of images; detecting, by the aerial vehicle, at least one of the plurality of visual markings depicted within at least one of the third plurality of images; capturing second data by the inertial measurement unit; determining, by the range sensor, ranges to at least a portion of the docking station below the aerial vehicle; calculating, by the aerial vehicle, at least a fourth pose of the aerial vehicle based at least in part on the plurality of visual markings depicted within the at least one of the third plurality of images and the second data.
  2. The method of claim 1, wherein detecting the plurality of visual markings depicted within at least one of the first plurality of images comprises: identifying portions of the at least one of the first plurality of images corresponding to one of the plurality of visual markings by one of binarization, thresholding or segmentation; projecting the portions of the at least one of the first plurality of images onto a horizontal plane corresponding to a fourth position; and determining that sizes and positions of the portions of the at least one of the first plurality of images projected onto the horizontal plane corresponding to the fourth position are consistent with the asymmetric pattern.
  3. The method of claim 1, wherein each of the plurality of visual markings is one of a light-emitting diode, a reflector, a symbol or a character arranged in the asymmetric pattern on the upper surface of the frame within a depression.
  4. The method of claim 1, wherein the frame of the docking station is disposed within a housing having a flat base and walls extending normal to the base, wherein the walls define an upper rim of the housing, wherein the frame defines a depression having an edge with a size and a shape corresponding to the upper rim of the housing, a bottom section, and angled edge sections descending from the edge to the bottom section, wherein the bottom section has a substantially square shape, and wherein the frame further comprises a plurality of slit openings on each of the angled edge sections and the bottom section.
  5. A method comprising: causing an aerial vehicle to descend below a first position; capturing first data by at least one navigation sensor of the aerial vehicle; determining, by the aerial vehicle, a first pose of the aerial vehicle based at least in part on the first data; capturing at least a first image by at least a first camera of the aerial vehicle, wherein the first camera has a field of view that extends below the aerial vehicle; detecting a plurality of markings depicted within the first image, wherein the markings are provided in an asymmetric pattern on at least one surface of a docking station; calculating a second pose of the aerial vehicle based at least in part on the plurality of markings depicted within the first image; determining that the first pose is consistent with the second pose, wherein that the first pose is consistent with the second pose is determined with the aerial vehicle above a second position; in response to determining that the first pose is consistent with the second pose with the aerial vehicle above the second position, causing the aerial vehicle to descend below the second position; capturing second data by at least one inertial sensor of the aerial vehicle; capturing at least a second image by at least the first camera; detecting the plurality of markings depicted within the second image; calculating a third pose of the aerial vehicle based at least in part on the second data and the plurality of markings depicted within the second image, wherein the third pose is calculated with the aerial vehicle above a third position, and wherein the third position is above the docking station; and causing the aerial vehicle to land on the docking station based at least in part on the third pose.
  6. The method of claim 5, comprising: prior to causing the aerial vehicle to descend below the first position, causing the aerial vehicle to take off from the docking station in a fourth position, wherein the method further comprises: selecting the first position based at least in part on the fourth position.
  7. The method of claim 6, wherein selecting the first position comprises: selecting a vertical offset above the fourth position based at least in part on at least an attribute of the at least one navigation sensor or at least an attribute of the first camera, wherein the first position is provided at an altitude corresponding to the vertical offset above the fourth position.
  8. The method of claim 6, further comprising: determining a fifth position corresponding to at least a portion of an obstruction within a vicinity of the docking station, wherein the first position is provided at not less than a predetermined distance from the fifth position.
  9. The method of claim 6, wherein detecting the plurality of markings depicted within the first image comprises: identifying portions of the first image corresponding to one of the plurality of markings by one of binarization, thresholding or segmentation; projecting the portions of the first image onto a horizontal plane corresponding to the fourth position; and determining that sizes and positions of the portions of the first image projected onto the horizontal plane corresponding to the fourth position are consistent with the asymmetric pattern.
  10. The method of claim 5, wherein the docking station comprises: a housing; and a frame disposed within the housing, wherein the frame defines a depression having an edge with a size and a shape corresponding to an upper rim of the housing, a bottom section, and angled edge sections descending from the edge to the bottom section, wherein the bottom section has a substantially square shape, and wherein the plurality of markings are arranged in the asymmetric pattern on an upper surface of the frame within the depression.
  11. The method of claim 10, wherein each of the plurality of markings is one of a light-emitting diode, a reflector, a symbol or a character arranged in the asymmetric pattern on the upper surface of the frame within the depression.
  12. The method of claim 5, wherein determining that the first pose is consistent with the second pose comprises: calculating a distance between at least a portion of the first pose and at least a portion of the second pose; and determining that the distance is less than a predetermined threshold.
  13. The method of claim 5, comprising: capturing at least a third image by at least the first camera of the aerial vehicle, wherein the third image is captured with the aerial vehicle at a fourth position, and wherein the fourth position is below the first position and above the second position; detecting the plurality of markings depicted within the third image; calculating a fourth pose of the aerial vehicle based at least in part on the plurality of markings depicted within the third image; and causing the aerial vehicle to hover above the second position, wherein the first image is captured with the aerial vehicle hovering above the second position.
  14. The method of claim 5, wherein determining that the first pose is consistent with the second pose comprises: determining that first data corresponding to the first pose is consistent with second data corresponding to the second pose, wherein the first data comprises a first set of coordinates of the aerial vehicle and a first yaw angle of the aerial vehicle, and wherein the second data comprises a second set of coordinates of the aerial vehicle and a second yaw angle of the aerial vehicle.
  15. The method of claim 5, wherein causing the aerial vehicle to land on the docking station based at least in part on the third pose comprises: determining an orientation of the docking station based at least in part on the plurality of markings depicted within the second image; selecting an angle based at least in part on the orientation of the docking station and the third pose; and causing the aerial vehicle to rotate about a yaw axis of the aerial vehicle by the angle.
  16. The method of claim 5, wherein the aerial vehicle comprises an inertial measurement unit comprising at least one gyroscope, at least one accelerometer and at least one magnetometer, and wherein the at least one inertial sensor comprises at least one of the at least one gyroscope, the at least one accelerometer or the at least one magnetometer.
  17. The method of claim 5, wherein the at least one navigation sensor is a time-of-flight sensor oriented to emit light below the aerial vehicle and to capture reflections of the emitted light off one or more surfaces.
  18. The method of claim 5, wherein the first camera is one of a visual camera or an infrared camera.
  19. The method of claim 5, wherein the at least one navigation sensor comprises at least one of a light detection and ranging sensor, a time-of-flight sensor, an inertial measurement unit or a second camera.
  20. An aerial vehicle comprising: a plurality of propulsion motors, wherein each of the plurality of propulsion motors is coupled to a propeller and configured to rotate the propeller at one or more selected speeds; a first camera, wherein the first camera has a field of view extending normal to a surface on an underside of the aerial vehicle; a navigation sensor; an inertial measurement unit comprising at least one gyroscope, at least one accelerometer and at least one magnetometer; a range sensor, wherein the range sensor is configured to emit light below the aerial vehicle and capture reflections of the emitted light; at least one memory component having one or more sets of instructions stored thereon; and one or more computer processors, wherein the one or more sets of instructions, when executed by the aerial vehicle, cause the aerial vehicle to at least: operate the one or more propulsion motors to cause the aerial vehicle to travel to a first position associated with a docking evolution; initiate a first phase of the docking evolution with the aerial vehicle within a vicinity of the first position; during the first phase, operate the one or more propulsion motors to cause the aerial vehicle to descend in altitude below the first position; capture a first plurality of images by the first camera; detect a plurality of visual markings depicted within at least one of the first plurality of images; calculate at least a first pose of the aerial vehicle based at least in part on the plurality of visual markings depicted within the at least one of the first plurality of images; capture first data by the navigation sensor; calculate at least a second pose of the aerial vehicle based at least in part on the first data; and prior to descending below a second position, determine that the first pose is consistent with the second pose; and initiate a second phase of the docking evolution; during the second phase, capture a second plurality of images by the first camera; detect the plurality of visual markings depicted within at least one of the second plurality of images; capture second data by the inertial measurement unit; calculate at least a third pose of the aerial vehicle based at least in part on the plurality of visual markings depicted within the at least one of the second plurality of images and the second data; and prior to descending below a third position, determine that the third pose is consistent with the second pose; and initiate a third phase of the docking evolution; and during the third phase, capture a third plurality of images; detect at least one of the plurality of visual markings depicted within at least one of the third plurality of images; capture third data by the inertial measurement unit; determine ranges to at least a portion of a docking station below the aerial vehicle by the range sensor; and calculate at least a fourth pose of the aerial vehicle based at least in part on the plurality of visual markings depicted within the at least one of the third plurality of images and the third data.
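Claims 2 and 9 describe detecting the markings by thresholding an image, projecting the candidate portions onto a horizontal plane, and checking whether the projected sizes and positions are consistent with the known asymmetric pattern. The projection and consistency steps can be sketched under a pinhole camera model; the function names, the translation-only matching (centroid alignment), the assumption that markings arrive in corresponding order, and the tolerance are all illustrative, and a real implementation would also handle rotation and blob extraction:

```python
import math

def project_to_ground(pixels, principal_point, focal_length_px, altitude_m):
    """Pinhole projection of pixel coordinates onto a horizontal plane
    altitude_m metres below the camera: X = (u - cu) * Z / f."""
    cu, cv = principal_point
    return [((u - cu) * altitude_m / focal_length_px,
             (v - cv) * altitude_m / focal_length_px) for u, v in pixels]

def matches_pattern(projected, pattern_m, tol_m=0.05):
    """Compare projected marking positions against the expected asymmetric
    pattern (both in metres), allowing an arbitrary translation by aligning
    centroids; markings must be supplied in corresponding order."""
    if len(projected) != len(pattern_m):
        return False

    def centred(points):
        cx = sum(p[0] for p in points) / len(points)
        cy = sum(p[1] for p in points) / len(points)
        return [(x - cx, y - cy) for x, y in points]

    a, b = centred(projected), centred(pattern_m)
    return all(math.hypot(ax - bx, ay - by) < tol_m
               for (ax, ay), (bx, by) in zip(a, b))
```

Because the pattern is asymmetric, a full check of this kind can also recover the station's orientation unambiguously, which is what lets the later claims align the vehicle's yaw with the station.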
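Claim 7 selects the vertical offset of the first position from an attribute of the navigation sensor or the camera. One natural reading, sketched here as an assumption rather than anything the patent specifies, is to pick an altitude at which the camera's angular field of view covers the whole marking pattern with some margin:

```python
import math

def first_position_altitude(pattern_width_m, fov_deg, margin=1.25):
    """Minimum height above the docking station at which a downward camera
    with full angular field of view fov_deg sees the entire marking pattern,
    scaled by a safety margin: h = margin * (w / 2) / tan(fov / 2).
    The margin value is illustrative."""
    half_fov = math.radians(fov_deg) / 2.0
    return margin * (pattern_width_m / 2.0) / math.tan(half_fov)
```

A narrower field of view forces a higher first position, which is consistent with the claim's tying of the offset to a camera attribute.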
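Claims 12 and 14 define pose consistency as a thresholded distance over coordinates plus yaw, and claim 15 selects a yaw rotation that aligns the vehicle with the station's orientation. Both reduce to a wrapped angle difference; a minimal sketch, with threshold values that are illustrative rather than from the patent:

```python
import math

def poses_consistent(pose_a, pose_b, dist_thresh=0.10, yaw_thresh=0.087):
    """Claims 12 and 14 style check. Each pose is ((x, y, z), yaw_rad):
    coordinates are compared by Euclidean distance, and yaw angles by their
    difference wrapped to (-pi, pi]."""
    coords_a, yaw_a = pose_a
    coords_b, yaw_b = pose_b
    dist = math.dist(coords_a, coords_b)
    dyaw = math.atan2(math.sin(yaw_a - yaw_b), math.cos(yaw_a - yaw_b))
    return dist < dist_thresh and abs(dyaw) < yaw_thresh

def alignment_rotation(vehicle_yaw, station_orientation):
    """Claim 15 style: the signed yaw rotation, wrapped to (-pi, pi], that
    aligns the vehicle with the docking station's orientation."""
    delta = station_orientation - vehicle_yaw
    return math.atan2(math.sin(delta), math.cos(delta))
```

The `atan2(sin, cos)` wrapping matters near ±π: without it, a vehicle at a yaw of 3.1 rad and a station at -3.1 rad would appear 6.2 rad apart instead of roughly 0.08 rad.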

Description

BACKGROUND

Aerial vehicles are most commonly operated in outdoor spaces. When an aerial vehicle operates in an outdoor space, the aerial vehicle may take off from a fixed or mobile location, e.g., a runway, a landing pad, or any like facility or station, by causing motors to generate lift and elevate the aerial vehicle to a selected altitude or position. The aerial vehicle may then travel on any selected courses, speeds or altitudes. Prior to taking off, or while in flight, an aerial vehicle operating outdoors may determine its position in three-dimensional space using a position sensor, e.g., a Global Positioning System (“GPS”) receiver that captures signals from one or more satellites or other sources, as well as an inertial measurement unit (or “IMU”), one or more altimeters, barometers, or other components. An aerial vehicle may rely on such sensors to travel to a specific location, which may be the same location from which the aerial vehicle took off, or a different location, before completing a landing evolution. Operating an aerial vehicle, or drone, within indoor spaces presents a unique set of challenges for the aerial vehicle, and creates unique risks for occupants or contents of the indoor spaces. In particular, whereas aerial vehicles that operate outdoors may commonly utilize large, open areas to maneuver during takeoff and landing evolutions, an aerial vehicle that operates indoors, in spaces that are often constrained by narrow hallways or other passageways and feature limited operating areas between floors and ceilings, must usually maneuver and execute takeoff or landing evolutions with precision.

BRIEF DESCRIPTION OF THE DRAWINGS

FIGS. 1A through 1L are views of aspects of one system in accordance with embodiments of the present disclosure. FIG. 2 is a block diagram of one system including an aerial vehicle in accordance with embodiments of the present disclosure. FIGS. 3A and 3B are views of aspects of one aerial vehicle in accordance with embodiments of the present disclosure. FIGS. 4A and 4B are a flow chart of one process to be performed by an aerial vehicle in accordance with embodiments of the present disclosure. FIG. 5 is a view of aspects of one system in accordance with implementations of the present disclosure.

DETAILED DESCRIPTION

As is set forth in greater detail below, the present disclosure is directed to landing procedures for aerial vehicles (e.g., drones) that are configured for operation within indoor spaces. More specifically, the present disclosure is directed to processes or techniques for causing an aerial vehicle to land on, or dock on, a docking station following operations within indoor spaces. The processes or techniques include multiple phases during which an aerial vehicle may rely on data captured or generated by multiple sensors to aid the aerial vehicle in safe operations at various altitudes prior to landing. In some implementations, the aerial vehicle may travel to an initial position that is located at a selected altitude over a last known position of a docking station from which the aerial vehicle departed, or onto which the aerial vehicle intends to land. The aerial vehicle may travel to the initial position using one or more navigation sensors, e.g., LIDAR sensors, time-of-flight sensors, imaging devices, or others, which may capture data and interpret the data to cause the aerial vehicle to travel along or in a manner consistent with navigation maps that were generated during prior operations of the aerial vehicle or at any other time. In a first phase, the aerial vehicle begins a descent from the initial position toward a transition position, and determines poses (e.g., positions and orientations) during the descent using not only navigation sensors but also imaging data captured using downward-oriented cameras of the aerial vehicle.
For example, where the aerial vehicle is programmed with a navigation map, and captures data regarding an environment using the one or more navigation sensors, the aerial vehicle may determine a position and an orientation with respect to the navigation map, or another reference frame, based on the captured data and the navigation map. Likewise, where the docking station includes a plurality of visual markings arranged in a predetermined pattern, the aerial vehicle may be programmed with information or data regarding the visual markings, including their appearance and locations with respect to one another within the predetermined pattern. When the aerial vehicle captures images using a downward-oriented camera during a descent, the aerial vehicle may determine a position and an orientation with respect to the docking station by detecting the visual markings within the images. The aerial vehicle may then determine a pose of the docking station with respect to the navigation map based on the pose of the aerial vehicle with respect to the navigation map, as determined using the navigation sensors, and the pose of the aerial vehicle