US-12620115-B2 - Surface profile estimation and bump detection for autonomous machine applications
Abstract
In various examples, surface profile estimation and bump detection may be performed based on a three-dimensional (3D) point cloud. The 3D point cloud may be filtered in view of a portion of an environment including drivable free-space, and within a threshold height to factor out other objects or obstacles other than a driving surface and protuberances thereon. The 3D point cloud may be analyzed—e.g., using a sliding window of bounding shapes along a longitudinal or other heading direction—to determine one-dimensional (1D) signal profiles corresponding to heights along the driving surface. The profile itself may be used by a vehicle—e.g., an autonomous or semi-autonomous vehicle—to help in navigating the environment, and/or the profile may be used to detect bumps, humps, and/or other protuberances along the driving surface, in addition to a location, orientation, and geometry thereof.
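The pipeline described in the abstract — filter a point cloud down to near-surface points, then slide overlapping windows along the heading direction to obtain a one-dimensional height signal — can be sketched in a few lines. This is a minimal illustration under assumed conventions (vehicle frame with x as the heading direction and z as height; `window_len`, `stride`, and the height cutoff are hypothetical parameters), not the patented implementation.

```python
import numpy as np

def height_profile(points, window_len=2.0, stride=1.0, x_max=30.0, z_cutoff=0.5):
    """1D surface height profile from a point cloud.

    points: (N, 3) array in vehicle coordinates (x forward, y left, z up),
    assumed already restricted to drivable free-space. Points above the
    height cutoff are discarded to factor out objects other than the
    driving surface and protuberances on it; overlapping windows then
    slide along x, and the median z of the points falling inside each
    window is taken as the surface height at that window's position.
    """
    points = points[np.abs(points[:, 2]) <= z_cutoff]  # keep near-surface points
    heights = []
    x0 = 0.0
    while x0 + window_len <= x_max:
        in_win = (points[:, 0] >= x0) & (points[:, 0] < x0 + window_len)
        # Median is robust to stray returns; NaN marks an empty window.
        heights.append(np.median(points[in_win, 2]) if in_win.any() else np.nan)
        x0 += stride
    return np.asarray(heights)
```

With a 2 m window and a 1 m stride, consecutive windows overlap by half, so a short bump contributes to several adjacent samples of the 1D signal rather than to a single one.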
Inventors
- Minwoo Park
- Yue Wu
- Michael Grabner
- Cheng-Chieh Yang
Assignees
- NVIDIA CORPORATION
Dates
- Publication Date: 2026-05-05
- Application Date: 2023-11-08
Claims (20)
- 1. A method comprising: obtaining, based at least on sensor data obtained using one or more sensors of a machine, a point cloud associated with an environment; determining, using sliding windows that at least partially overlap and based at least on the point cloud, heights associated with a surface within the environment; determining, based at least on the heights, one or more locations of at least one of a protuberance or a cavity within the environment; and causing the machine to perform one or more operations based at least on the one or more locations of the at least one of the protuberance or the cavity.
- 2. The method of claim 1, further comprising: determining, using the sliding windows, at least a first location along the surface and a second location along the surface, the second location being separated from the first location by a distance associated with the sliding windows, wherein: the heights include at least a first height associated with the first location and a second height associated with the second location; and the one or more locations include at least one of the first location or the second location.
- 3. The method of claim 1, wherein the determining the one or more locations of the at least one of the protuberance or the cavity comprises: determining that the heights are equal to or greater than a threshold height; and determining, based at least on the heights being equal to or greater than the threshold height, the one or more locations of the at least one of the protuberance or the cavity within the environment.
- 4. The method of claim 1, further comprising applying the sliding windows along the surface and in a direction of travel associated with the machine.
- 5. The method of claim 1, wherein: the determining the one or more locations of the at least one of the protuberance or the cavity comprises determining a plurality of locations of the at least one of the protuberance or the cavity within the environment; the method further comprises determining, based at least on the plurality of locations, at least one of a profile or an orientation of the at least one of the protuberance or the cavity; and the causing the machine to perform the one or more operations is based at least on the at least one of the profile or the orientation of the at least one of the protuberance or the cavity.
- 6. The method of claim 1, further comprising: determining, based at least on the one or more locations of the at least one of the protuberance or the cavity, a surface profile associated with the surface, wherein the causing the machine to perform the one or more operations is based at least on the surface profile.
- 7. The method of claim 1, wherein a first portion of the heights are associated with a first lane of the surface and determined using a first sliding window of the sliding windows; and a second portion of the heights are associated with a second lane of the surface and determined using a second sliding window of the sliding windows, the second sliding window at least partially overlapping with the first sliding window.
- 8. The method of claim 1, further comprising: determining that a portion of the point cloud is associated with the surface within the environment, wherein the determining the heights associated with the surface within the environment is based at least on the portion of the point cloud.
- 9. A system comprising: one or more processors to: obtain, based at least on sensor data obtained using one or more sensors of a machine, a point cloud associated with an environment; determine, using a sliding window and based at least on the point cloud, a first set of points from the point cloud and a second set of points from the point cloud, the second set of points including at least one or more points from the first set of points; determine, based at least on the first set of points and the second set of points, heights associated with a surface within the environment; determine, based at least on the heights, a surface profile associated with the surface; and cause the machine to perform one or more operations based at least on the surface profile.
- 10. The system of claim 9, wherein the one or more processors are further to: determine a first location, the first set of points being associated with the first location; and determine a second location based at least on separating the second location from the first location by a distance that is associated with the sliding window, the second set of points being associated with the second location.
- 11. The system of claim 9, wherein the one or more processors are further to: determine, based at least on the heights, one or more locations that are associated with one or more surface deviations, wherein the surface profile indicates at least the one or more locations associated with the one or more surface deviations.
- 12. The system of claim 11, wherein the determination of the one or more locations that are associated with one or more surface deviations comprises: determining that the heights are equal to or greater than a threshold height; and determining, based at least on the heights being equal to or greater than the threshold height, the one or more locations that are associated with one or more surface deviations.
- 13. The system of claim 9, wherein the one or more processors are further to: determine, based at least on the heights, at least one of a profile or an orientation associated with a surface deviation associated with the surface, wherein the surface profile indicates the at least one of the profile or the orientation associated with the surface deviation.
- 14. The system of claim 9, wherein the one or more processors are further to apply the sliding window along the surface and in a direction of travel associated with the machine in order to determine the first set of points and the second set of points.
- 15. The system of claim 9, wherein the one or more processors are further to: determine that a portion of the point cloud is associated with the surface within the environment, wherein the first set of points and the second set of points are determined based at least on the portion of the point cloud.
- 16. The system of claim 9, wherein the system is implemented within at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing deep learning operations; a system implemented using an edge device; a system incorporating one or more virtual machines (VMs); a system implemented using a robot; a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources.
- 17. One or more processors comprising processing circuitry to: obtain, based at least on sensor data obtained using one or more depth sensors of a machine, depth information associated with an environment; determine, based at least on applying a sliding window, one or more locations within the environment; determine, based at least on the depth information, one or more one-dimensional (1D) signals associated with the one or more locations and indicating one or more heights associated with a surface within the environment; generate, based at least on the one or more 1D signals, a surface profile associated with the surface; and cause the machine to perform one or more operations based at least on the surface profile.
- 18. The one or more processors of claim 17, wherein the surface profile indicates one or more surface deviations based at least on the one or more 1D signals, the one or more surface deviations comprising at least one of one or more protuberances, one or more bumps, one or more humps, one or more cavities, one or more dips, or one or more holes.
- 19. The one or more processors of claim 17, wherein the processing circuitry is further to: determine, based at least on the one or more 1D signals, that the one or more locations are associated with one or more surface deviations within the environment, wherein the surface profile is generated to indicate the one or more surface deviations.
- 20. The one or more processors of claim 17, wherein the one or more processors are implemented within at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing deep learning operations; a system implemented using an edge device; a system incorporating one or more virtual machines (VMs); a system implemented using a robot; a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources.
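As an illustrative sketch of the thresholding recited in claims 3 and 12 (and the deviation categories of claim 18), the following hypothetical helper flags the locations of a 1D height profile whose height magnitude is equal to or greater than a threshold, labeling positive deviations as protuberances and negative ones as cavities. The function name, the magnitude comparison for cavities, and the default threshold are assumptions for illustration, not language from the claims.

```python
def detect_surface_deviations(locations, heights, threshold=0.04):
    """Return (location, height, kind) tuples for profile samples whose
    height magnitude meets the threshold: positive heights are treated as
    protuberances (bumps, humps), negative heights as cavities (dips, holes)."""
    deviations = []
    for loc, h in zip(locations, heights):
        if abs(h) >= threshold:
            deviations.append((loc, h, "protuberance" if h > 0 else "cavity"))
    return deviations
```

A downstream planner could then use the flagged locations and heights to decelerate or adjust suspension parameters before each deviation is reached.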
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation of U.S. patent application Ser. No. 18/174,770, filed Feb. 27, 2023, which is a continuation of U.S. patent application Ser. No. 17/103,680, filed Nov. 24, 2020, which claims the benefit of U.S. Provisional Application No. 62/946,689, filed on Dec. 11, 2019. Each of the foregoing applications is hereby incorporated by reference in its entirety.
BACKGROUND
When navigating an environment, vehicles—such as autonomous vehicles, semi-autonomous vehicles, non-autonomous vehicles, and the like—may encounter perturbations (such as bumps, humps, or other protuberances, or dips, holes, or other cavities) along the surface of movement that, if not accounted for, may result in an uncomfortable experience for passengers and may cause wear and tear on the vehicle itself. For example, if the vehicle does not decelerate or make suspension adjustments prior to traversing a protuberance in the driving surface, excess force(s) may be put on components of the vehicle—such as the chassis, suspension, axles, treads, joints, and/or the like. As a result, early detection of bumps, dips, or other perturbations in the driving surface may help to smooth the ride for the passengers, as well as increase the longevity of the vehicles and their components.
Some conventional approaches to bump detection rely on prior detections of protuberances and correlating these prior detections with location information—e.g., with a map, such as a high-definition (HD) map. However, relying on prior detections presents a number of challenges: a perturbation may be new or otherwise may not have been previously accounted for; a perturbation must actually have been detected, meaning that unsafe conditions may be present in unmapped locations; and location information accuracy issues may result in previously detected perturbations being inaccurately identified, causing perturbations to be accounted for too early or too late.
Other conventional approaches rely on deep neural networks (DNNs) trained to predict occurrences and locations of bumps in driving surfaces. However, DNNs not only require an immense amount of relevant training data to converge to an acceptable accuracy, but also require proper labeling and ground truth information to do so. In addition, a DNN must heavily rely on the data—e.g., images—being processed to determine occurrences and locations of bumps in the environment. As a result, and because perturbations in a driving surface often blend into the driving surface or are otherwise difficult to distinguish visually (e.g., speed bumps may be a same or similar color to the surrounding driving surface, the curvature of a perturbation may be difficult to ascertain using a single two-dimensional (2D) image applied to a DNN due to a lack of context or contrast, etc.), training a DNN to accurately and continually predict occurrences and locations of perturbations—in addition to the geometry and orientation thereof—is a difficult task. For example, because the geometry and the orientation of a perturbation may be difficult to predict using an image-based DNN, determinations of the necessary adjustments (e.g., slowing down, loosening suspension, etc.) to the vehicle to account for the perturbation may prove challenging, resulting in issues similar to those that bump detection is tasked to mitigate—e.g., an uncomfortable experience for passengers and damage to the vehicle and its components.
SUMMARY
Embodiments of the present disclosure relate to surface profile estimation and bump detection for autonomous machine applications. Systems and methods are disclosed that analyze a three-dimensional (3D) point cloud to determine a surface profile, as well as to determine an occurrence, location, orientation, and/or geometry of perturbations along the surface within the environment.
For example, by accurately identifying, locating, and determining a geometry of bumps, humps, dips, holes, and/or other perturbations of a surface, a vehicle—such as an autonomous or semi-autonomous vehicle (e.g., one employing one or more advanced driver assistance systems (ADAS))—may account for the perturbations by slowing down and/or adjusting parameters of the suspension. In addition, using a determined surface profile, the vehicle may more safely and cautiously navigate through the environment by accounting for gaps in visual completeness—e.g., where a road curves sharply from an incline to a decline, creating a blind region beyond the apex, a determination may be made to slow the vehicle down until the driving surface beyond the apex becomes visible. In contrast to conventional systems, such as those described above, the systems and methods of the present disclosure generate a 3D point cloud using sensor data from one or more sensors of a vehicle. For example, sequences of images from an image sensor—e.g., of a monocular camera—may be analyzed using structure from motion (SfM) techniques to generate the 3D point cloud.
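The SfM step mentioned above ultimately reduces to triangulating 3D points from pixel correspondences across posed views. A minimal linear (DLT) triangulation of a single point from two views might look as follows; the projection matrices and pixel coordinates are hypothetical inputs, and a production SfM pipeline would additionally perform feature matching, pose estimation, and bundle adjustment. This is a generic textbook construction, not the specific method of the disclosure.

```python
import numpy as np

def triangulate_point(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation: recover the 3D point X such that
    uv1 ~ P1 @ [X, 1] and uv2 ~ P2 @ [X, 1], where P1 and P2 are 3x4
    camera projection matrices and uv1, uv2 are pixel coordinates."""
    # Each pixel observation contributes two linear constraints on X.
    A = np.stack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector associated
    # with the smallest singular value of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

Applied to many matched features across an image sequence, such triangulations yield the 3D point cloud that the surface profile estimation then filters and scans with sliding windows.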