
US-12618945-B2 - Motion-based object detection in a vehicle radar using convolutional neural network systems

US 12618945 B2

Abstract

Examples disclosed herein relate to a radar system in an autonomous vehicle for object detection and classification. The radar system has a radar module having a dynamically controllable beam steering antenna and a perception module. The perception module includes a machine learning module trained on a first set of data and retrained on a second set of data to generate a set of object locations and classifications, and a classifier to use velocity information combined with the set of object locations and classifications to output a set of classified data.

Inventors

  • Matthew Paul Harrison

Assignees

  • BDCM A2 LLC

Dates

Publication Date
2026-05-05
Application Date
2023-02-21

Claims (12)

  1. A radar comprising: a perception module comprising: a machine learning module trained on a first set of data to generate a list of perceived objects and retrained on a second set of data to generate a set of object locations and classifications, wherein the second set of data is less than the first set of data and the second set of data corresponds to data obtained by a different type of sensor than the first set of data, wherein the second set of data comprises velocity information comprising a set of velocity vectors corresponding to respective objects in the list of perceived objects; and a classifier to use the velocity information combined with the set of object locations and classifications to output a set of classified data, wherein the set of classified data comprises detected objects and respective classifications; and a radar transmission module, wherein the radar transmission module is controlled by the perception module based on the set of classified data.
  2. The radar of claim 1, further comprising a motion-based object detection means.
  3. The radar of claim 2, further comprising a convolutional neural network (CNN) trained on lidar data to identify objects from radar data.
  4. The radar of claim 3, wherein the CNN is retrained on the radar data.
  5. The radar of claim 4, wherein the radar data is four dimensional data including range, velocity, azimuthal angle and elevation angle of radar beams radiated off the objects.
  6. The radar of claim 1, further comprising a camera sensor to detect visible objects and conditions.
  7. The radar of claim 1, adapted to provide a 360 degree view of a vehicle.
  8. The radar of claim 7, further comprising a meta-structure based antenna.
  9. The radar of claim 8, adapted to receive control from a sensor fusion module.
  10. The radar of claim 9, adapted to communicate with the sensor fusion module.
  11. The radar of claim 10, adapted to identify multiple radar signals that may interfere with the radar.
  12. The radar of claim 11, wherein the sensor fusion module provides feedback information to the perception module.
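Claim 5 describes the radar data as a four-dimensional data set over range, velocity (Doppler), azimuthal angle and elevation angle. As a minimal illustrative sketch (not from the patent itself; the bin counts, threshold and helper names are hypothetical), such data can be held as a 4D tensor from which a per-cell velocity estimate and a spatial occupancy grid, of the kind combined in FIG. 10, can be extracted:

```python
import numpy as np

# Hypothetical 4D radar data cube: (range, Doppler/velocity, azimuth, elevation) bins.
# The bin counts below are illustrative only, not taken from the patent.
N_RANGE, N_DOPPLER, N_AZ, N_EL = 64, 32, 16, 8
rng = np.random.default_rng(0)
cube = rng.random((N_RANGE, N_DOPPLER, N_AZ, N_EL))

def peak_velocity_bin(cube, r, az, el):
    """Return the Doppler bin with the strongest return at a given
    range/azimuth/elevation cell (a crude per-object velocity estimate)."""
    return int(np.argmax(cube[r, :, az, el]))

def occupancy_map(cube, threshold):
    """Collapse the Doppler axis into a 3D occupancy grid by thresholding
    the peak return in each spatial cell."""
    return cube.max(axis=1) > threshold
```

In this sketch the occupancy grid carries the "where" and the peak Doppler bin the "how fast"; keeping both is what allows velocity information to be paired with object locations as recited in claim 1.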

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority from U.S. Non-Provisional application Ser. No. 16/781,152, titled “Motion-Based Object Detection in a Radar Using Convolutional Neural Network Systems,” filed on Feb. 4, 2020, and incorporated herein by reference in its entirety; which claims priority from U.S. Provisional Application No. 62/801,056, titled “Motion-Based Object Detection in a Radar Using Convolutional Neural Network Systems,” filed on Feb. 4, 2019, and incorporated herein by reference in its entirety.

BACKGROUND

Autonomous driving is quickly moving from the realm of science fiction to becoming an achievable reality. Already in the market are Advanced Driver-Assistance Systems (“ADAS”) that automate, adapt and enhance vehicles for safety and better driving. The next step will be vehicles that increasingly assume control of driving functions such as steering, accelerating, braking and monitoring the surrounding environment and driving conditions to respond to events, such as changing lanes or speed when needed to avoid traffic, crossing pedestrians, animals, and so on. The requirements for object and image detection are critical and specify the time required to capture data, process it and turn it into action, all while ensuring accuracy, consistency and cost optimization. An aspect of making this work is the ability to detect and classify objects in the surrounding environment at the same or possibly an even better level than humans. Humans are adept at recognizing and perceiving the world around them with an extremely complex human visual system that essentially has two main functional parts: the eye and the brain. In autonomous driving technologies, the eye may include a combination of multiple sensors, such as camera, radar and lidar, while the brain may involve multiple artificial intelligence, machine learning and deep learning systems.
The goal is to have full understanding of a dynamic, fast-moving environment in real time and human-like intelligence to act in response to changes in the environment.

BRIEF DESCRIPTION OF THE DRAWINGS

The present application may be more fully appreciated in connection with the following detailed description taken in conjunction with the accompanying drawings, which are not drawn to scale and in which like reference characters refer to like parts throughout, and wherein:

FIG. 1 illustrates an example environment in which a radar in an autonomous vehicle is used to detect and identify objects, according to various implementations of the subject technology;
FIG. 2 is a schematic diagram of an autonomous driving system for an autonomous vehicle, according to various implementations of the subject technology;
FIG. 3 is a schematic diagram of a radar as in FIG. 2, according to various implementations of the subject technology;
FIG. 4 is a schematic diagram for training the machine learning module (“MLM”) as in FIG. 3, according to various implementations of the subject technology;
FIG. 5 is a flowchart for training an MLM implemented as in FIG. 4, according to various implementations of the subject technology;
FIG. 6 illustrates the first training data sets for training the MLM, according to various implementations of the subject technology;
FIG. 7 is a schematic diagram illustrating the training performed by the MLM on lidar data, according to various implementations of the subject technology;
FIG. 8 illustrates the second training data sets for training the MLM, according to various implementations of the subject technology;
FIG. 9 is a schematic diagram illustrating the training performed by the MLM on radar data, according to various implementations of the subject technology;
FIG. 10 shows the combination of occupancy data with extracted velocity information to generate micro-Doppler information, according to various implementations of the subject technology;
FIG. 11 is a schematic diagram illustrating the training of an MLM and a classifier on radar data, according to various implementations of the subject technology;
FIG. 12 is a flowchart for operation of a radar to detect and identify objects, according to various implementations of the subject technology;
FIG. 13 illustrates the training data pairs for a motion-based training of the MLM, according to various implementations of the subject technology;
FIG. 14 illustrates the motion-based training of the MLM on the radar data, according to various implementations of the subject technology; and
FIG. 15 illustrates the operation of the MLM to detect objects while distinguishing stationary and moving objects, according to various implementations of the subject technology.

DETAILED DESCRIPTION

Methods and apparatuses for motion-based object detection in a vehicle radar using convolutional neural network systems are disclosed. The methods and apparatuses include the acquisition of raw data from a radar in an autonomous vehicle and the processing of that data through a perception module.
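The classifier recited in the abstract and claim 1 combines per-object velocity vectors with the machine learning module's object locations and classifications. A minimal sketch of that combining step, assuming a simple speed threshold to separate moving from stationary objects (the `Detection` type, field names and threshold value are hypothetical, not taken from the patent):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # object class produced by the machine learning module
    position: tuple    # (x, y) object location in metres
    velocity: tuple    # (vx, vy) velocity vector from Doppler processing

def classify_motion(detections, speed_threshold=0.5):
    """Combine each object's velocity vector with its label and location,
    tagging it 'moving' or 'stationary' via a hypothetical speed threshold."""
    classified = []
    for d in detections:
        speed = (d.velocity[0] ** 2 + d.velocity[1] ** 2) ** 0.5
        state = "moving" if speed > speed_threshold else "stationary"
        classified.append((d.label, d.position, state))
    return classified
```

The output tuples correspond to the "set of classified data" of claim 1: detected objects with their respective classifications, enriched with motion state that the perception module can use to control the radar transmission module.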