EP-4229446-B1 - TECHNIQUES FOR POINT CLOUD FILTERING
Inventors
- TOSHNIWAL, KRISHNA
- REZK, MINA
- HEXSEL, BRUNO
- VISWANATHA, KUMAR BHARGAV
- KRAUSE PERIN, JOSE
- MOORTI, RAJENDRA TUSHAR
- NAKAMURA, JAMES
Dates
- Publication Date: 2026-05-06
- Application Date: 2021-10-08
Claims (14)
- A method (700) of filtering points in a light detection and ranging, LiDAR, system, comprising: receiving (701), at a first filter, a set of points of interest, POIs, of a point cloud, wherein each POI of the set of POIs comprises one or more points; filtering (702) each POI of the set of POIs, comprising: selecting (703) a set of neighborhood points of a POI; computing (704) a metric for the set of neighborhood points based on a property of the set of neighborhood points and the POI, wherein the property comprises a velocity; determining (705), based on the metric, whether to accept the POI, modify the POI, reject the POI, or transmit the POI to a second filter to extract at least one of range or velocity information related to the target; provided (706) the POI is accepted or modified, transmitting the POI to a filtered point cloud to extract the at least one of range or velocity information related to the target; provided (707) the POI is rejected at the first filter, preventing the POI from reaching the filtered point cloud; and provided (708) the POI is not accepted, modified, or rejected at the first filter, transmitting the POI to the second filter to determine whether to accept, modify, or reject the POI to extract the at least one of range or velocity information related to the target.
- The method (700) of claim 1, wherein the selecting a set of neighborhood points of a POI comprises selecting a window of data points in a same azimuth or elevation around the POI.
- The method (700) of claim 1, wherein the selecting a set of neighborhood points of a POI comprises selecting a set of points in a 2-D grid neighborhood around the POI; or wherein the selecting a set of neighborhood points of a POI comprises selecting a set of points in a 3-D space neighborhood around the POI; or wherein the selecting a set of neighborhood points of a POI comprises selecting a set of points in a 3-D spatio-temporal neighborhood from previous frames around the POI.
- The method (700) of claim 1, wherein the computing a metric for the set of neighborhood points comprises computing the metric based on a variance of a property of the set of neighborhood points and the POI, wherein the property further comprises an intensity, or a range.
- The method (700) of claim 4, wherein the metric is further computed based on a higher order moment of the property of the set of neighborhood points and the POI, and wherein the higher order moment comprises a skewness or a kurtosis.
- The method (700) of claim 1, wherein the computing a metric for the set of neighborhood points comprises computing the metric based on a confidence according to a similarity of a velocity, an intensity, or a range of the set of neighborhood points and the POI.
- The method (700) of claim 1, wherein the computing a metric for the set of neighborhood points comprises computing the metric based on a threshold of a velocity, an intensity, or a range of the set of neighborhood points and the POI.
- The method (700) of claim 1, wherein the computing a metric for the set of neighborhood points comprises computing the metric based on a variance of up-chirp frequencies or down-chirp frequencies of the set of neighborhood points and the POI.
- A light detection and ranging, LiDAR, system (100), comprising: a processor (112); and a memory (140) to store instructions that, when executed by the processor, cause the system to: receive (701), at a first filter, a set of points of interest, POIs, of a first point cloud, wherein each POI of the set of POIs comprises one or more points; filter (702) each POI of the set of POIs, wherein the system is to: select (703) a set of neighborhood points of a POI; compute (704) a metric for the set of neighborhood points based on a property of the set of neighborhood points and the POI, wherein the property comprises a velocity; determine (705), based on the metric, whether to accept the POI, modify the POI, reject the POI, or transmit the POI to a second filter to extract at least one of range or velocity information related to the target; provided (706) the POI is accepted or modified, transmit the POI to a filtered point cloud to extract the at least one of range or velocity information related to the target; provided (707) the POI is rejected, prevent the POI from reaching the filtered point cloud; and provided (708) the POI is not accepted, modified, or rejected, transmit the POI to the second filter to determine whether to accept, modify, or reject the POI to extract the at least one of range or velocity information related to the target.
- The system (100) of claim 9, wherein the system is to select a window of data points in a same azimuth or elevation around the POI.
- The system (100) of claim 9, wherein the system is to select a set of points in a 2-D grid neighborhood, a 3-D space neighborhood, or a 3-D spatio-temporal neighborhood from previous frames around the POI.
- The system (100) of claim 9, wherein the system is to compute the metric based on a variance of a property of the set of neighborhood points and the POI, wherein the property further comprises an intensity, or a range, or wherein the system is to compute the metric based on a variance of up-chirp frequencies or down-chirp frequencies of the set of neighborhood points and the POI.
- The system (100) of claim 9, wherein the system is to compute the metric based on a confidence according to a similarity of a velocity, an intensity, or a range of the set of neighborhood points and the POI.
- The system (100) of claim 9, wherein the system is to compute the metric based on a threshold of a velocity, an intensity, or a range of the set of neighborhood points and the POI.
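The two-stage filtering recited in claims 1 and 9 can be sketched as follows. This is a minimal illustration only, assuming a 1-D window along the scan line (claim 2) and a velocity-variance metric with two thresholds (claims 1 and 4); the function names, data layout, and threshold values are illustrative assumptions, not taken from the patent.

```python
# Hedged sketch of the claimed two-stage point-cloud filter (claims 1 and 9).
# The specific metric (velocity variance) and all thresholds are assumptions.
import statistics

def neighborhood(points, i, half_window):
    """Claim 2 style selection: a 1-D window of points around the
    point of interest (POI) along the same azimuth/elevation line."""
    lo = max(0, i - half_window)
    hi = min(len(points), i + half_window + 1)
    return [points[j] for j in range(lo, hi)]

def velocity_variance(nbhd, poi):
    """Claim 4 style metric: variance of a property (here, velocity)
    over the neighborhood points together with the POI itself."""
    vels = [p["velocity"] for p in nbhd] + [poi["velocity"]]
    return statistics.pvariance(vels)

def first_filter(points, accept_thr=0.5, reject_thr=4.0, half_window=2):
    """Classify each POI as accepted, rejected, or deferred to the
    second-stage filter (claim 1, steps 703-708)."""
    accepted, rejected, deferred = [], [], []
    for i, poi in enumerate(points):
        metric = velocity_variance(neighborhood(points, i, half_window), poi)
        if metric <= accept_thr:      # consistent with neighbors
            accepted.append(poi)      # -> transmit to the filtered point cloud
        elif metric >= reject_thr:    # inconsistent: likely a ghost/FA point
            rejected.append(poi)      # -> prevent from reaching the point cloud
        else:
            deferred.append(poi)      # -> transmit to the second filter
    return accepted, rejected, deferred
```

A second-stage filter would apply the same accept/modify/reject decision to the deferred points, for example using a different neighborhood (claim 3's 2-D, 3-D, or spatio-temporal variants) or a higher-order moment such as skewness or kurtosis (claim 5).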
Description
TECHNICAL FIELD
The present disclosure relates generally to point set or point cloud filtering techniques and, more particularly, to point set or point cloud filtering techniques for use in a light detection and ranging (LiDAR) system.
BACKGROUND
Frequency-Modulated Continuous-Wave (FMCW) LiDAR systems are subject to several possible phase impairments, such as laser phase noise, circuitry phase noise, flicker noise injected on the laser by the driving electronics, drift over temperature and weather, and chirp rate offsets. FMCW LiDAR point clouds may exhibit distinct noise patterns, which may arise from incorrect peak matching that produces falsely detected points in the scene even when nothing is present. For example, when an FMCW LiDAR points at a fence or a bush, a number of ghost points may appear in the scene between the LiDAR and the fence. These ghost points or noisy points, also classified as False Alarm (FA) points, may introduce ghost objects and cause errors in the estimated target range/velocity if left unfiltered.
EP 3 361 278 A1 discloses a system in which the location of an autonomous driving vehicle (ADV) is determined with respect to a high-definition map. On-board sensors of the ADV obtain a 3D point cloud of objects surrounding the ADV. The 3D point cloud is organized into an ADV feature space of cells. Each cell has a median intensity value and a variance in elevation. A set of candidate cells that surround the ADV is determined. For each candidate, a set of cells of the ADV feature space that surround the candidate cell is projected onto the map feature space using kernel projection, for one or more dimensions. Kernels can be Walsh-Hadamard vectors. Candidates having insufficient similarity are rejected. When a threshold number of non-rejected candidates remain, candidate similarity can be determined using a similarity metric. The coordinates of the most similar candidate cell are used to determine the position of the vehicle with respect to the map.
US 2020/256999 A1 discloses intelligent photography with machine learning. The disclosure includes receiving a video stream from a control camera, providing inputs to a trained machine learning model based on the video stream, determining, based on data output by the trained machine learning model in response to the inputs, at least a first time for capturing a first picture during a session, and programmatically instructing a first camera to capture the first picture at the first time during the session.
US 2012/294482 A1 discloses an environment recognition device and an environment recognition method. The environment recognition device includes: a position information obtaining unit that obtains position information of a target portion in a detection area, the position information including a relative distance to a subject vehicle; a grouping unit that groups the target portions as a target object based on the position information; a luminance obtaining unit that obtains a luminance of an image of the target object; a luminance distribution generating unit that generates a histogram of the luminance of the image of the target object; and a floating substance determining unit that determines whether or not the target object is a floating substance based on a statistical analysis of the histogram.
WO 2018/160240 discloses a method that includes obtaining a first set of one or more ranges based on corresponding frequency differences between a return optical signal and a first chirped transmitted optical signal. The first chirped transmitted optical signal includes an up chirp that increases in frequency with time. The method further includes obtaining a second set of one or more ranges based on corresponding frequency differences between a return optical signal and a second chirped transmitted optical signal. The second chirped transmitted optical signal includes a down chirp that decreases in frequency with time. The distributions of the detected peaks on the up-chirp and down-chirp returns are combined in an effort to reduce the variance of the Doppler estimate.
SUMMARY
The present disclosure describes various examples of point cloud filters in LiDAR systems. The scope of the present invention is defined by the appended claims. It should be appreciated that, although one or more embodiments in the present disclosure depict the use of point clouds, embodiments of the present invention are not limited as such and may include, but are not limited to, the use of point sets and the like. These and other aspects of the present disclosure will be apparent from a reading of the following detailed description together with the accompanying figures, which are briefly described below. The present disclosure includes any combination of two, three, four or more features or elements set forth in this disclosure, regardless of whether such features or elements are expressly combined or otherwise recited in a specific example implementation described
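The up-chirp/down-chirp combination summarized in the background can be illustrated with a short sketch of the standard triangular-FMCW beat-frequency relations: the Doppler shift cancels in the sum of the two beat frequencies and the range term cancels in their difference. The function name and the numeric values used are illustrative assumptions, not taken from the patent.

```python
# Hedged sketch: extracting range and radial velocity from the up-chirp and
# down-chirp beat frequencies of a triangular FMCW waveform. Assumes the
# textbook relations f_up = f_r - f_d and f_down = f_r + f_d for an
# approaching target; slope and carrier values below are illustrative.
C = 299_792_458.0  # speed of light, m/s

def range_and_velocity(f_up, f_down, slope, f_carrier):
    """f_up / f_down: beat frequencies (Hz) on the up/down chirp.
    slope: chirp rate (Hz/s). f_carrier: optical carrier frequency (Hz)."""
    f_range = (f_up + f_down) / 2.0        # Doppler term cancels in the sum
    f_doppler = (f_down - f_up) / 2.0      # range term cancels in the difference
    rng = C * f_range / (2.0 * slope)      # R = c * f_r / (2 S)
    vel = C * f_doppler / (2.0 * f_carrier)  # v = c * f_d / (2 f_c)
    return rng, vel
```

When a peak on one chirp is matched to the wrong peak on the other, the recovered (range, velocity) pair is wrong, which is the incorrect-peak-matching mechanism behind the ghost points that the claimed filters are designed to remove.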