CN-121989982-A - Method for controlling the operation of an autonomous vehicle and electronic device

CN121989982A

Abstract

Methods and electronic devices for controlling operation of a self-driving car (SDC) are disclosed. The method includes receiving sensor data, generating a map of an environment using the sensor data, and generating a grid structure having a plurality of cells corresponding to respective portions of the map. A given cell is associated with a probability value that indicates a probability that an object is present in the respective portion of the map. The method includes generating a boundary shape that covers the given cell in response to the probability value being above a detection threshold. The method includes determining that an undetected object is potentially present in response to the probability value being between the detection threshold and a second threshold. The method includes triggering the SDC to perform a remedial action in response to the determination that the undetected object is potentially present.
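The two-threshold logic summarized above can be sketched as a minimal illustration. Note that the function name, threshold values, and return labels below are hypothetical choices for exposition, not taken from the patent:

```python
# Hypothetical sketch of the two-threshold cell classification described
# in the abstract. Threshold values are assumed for illustration only.
DETECTION_THRESHOLD = 0.7
SECOND_THRESHOLD = 0.3  # lower than the detection threshold

def classify_cell(probability):
    """Classify a grid cell by its object-presence probability value."""
    if probability > DETECTION_THRESHOLD:
        # High confidence: generate a boundary shape for this cell.
        return "detected"
    if SECOND_THRESHOLD <= probability <= DETECTION_THRESHOLD:
        # Intermediate confidence: an undetected object is potentially
        # present, so a remedial action (e.g., slowing down) is triggered.
        return "potentially_undetected"
    return "empty"

print(classify_cell(0.9))  # detected
print(classify_cell(0.5))  # potentially_undetected
print(classify_cell(0.1))  # empty
```

The key design point is that cells in the intermediate band are not simply discarded as non-detections; they still influence vehicle behavior through the remedial action.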

Inventors

  • A. Solovyev

Assignees

  • Y.E. Hub Armenia LLC

Dates

Publication Date
2026-05-08
Application Date
2025-10-28
Priority Date
2024-11-01

Claims (20)

  1. A method of controlling operation of a self-driving car (SDC), the method comprising: receiving sensor data regarding an environment of the SDC; generating, using a neural network (NN), a map of the environment using the sensor data; generating, using the NN, a grid structure having a plurality of cells corresponding to respective portions of the map, a given cell from the plurality of cells being associated with a probability value indicating a probability that an object is present in the respective portion of the map; in response to the probability value being above a detection threshold: generating a boundary shape covering the given cell, the boundary shape indicating that a detected object is present in the respective portion of the map; in response to the probability value being between the detection threshold and a second threshold, the second threshold being lower than the detection threshold: determining that an undetected object is potentially present in the respective portion of the map; and in response to the determining that the undetected object is potentially present in the respective portion of the map: triggering the SDC to perform a remedial action.
  2. The method of claim 1, wherein the sensor data comprises first sensor data from a first sensor and second sensor data from a second sensor.
  3. The method of claim 2, wherein the first sensor data is a point cloud and the first sensor is a LIDAR sensor.
  4. The method of claim 2, wherein the method further comprises generating fused sensor data by combining the first sensor data and the second sensor data, and wherein the generating the map of the environment comprises generating the map of the environment using the fused sensor data.
  5. The method of claim 1, wherein the map of the environment is a bird's eye view (BEV) map of the environment.
  6. The method of claim 1, wherein the boundary shape is a bounding box.
  7. The method of claim 1, wherein the remedial action is to reduce a speed of the SDC.
  8. The method of claim 1, wherein the triggering the SDC to perform the remedial action is performed independently of one or more path planning operations.
  9. The method of claim 1, wherein the generating the boundary shape comprises performing a non-maximum suppression (NMS) algorithm on a plurality of candidate boundary shapes.
  10. A method of controlling operation of a self-driving car (SDC), the method comprising: receiving sensor data regarding an environment of the SDC; generating, using a neural network (NN), a map of the environment using the sensor data; generating, using the NN, a grid structure having a plurality of cells corresponding to respective portions of the map, the plurality of cells being associated with respective probability values indicating a probability that an object is present in the respective portion of the map; performing a two-stage object detection process on the grid structure, comprising: during a first stage: generating a boundary shape covering a first cell of the plurality of cells based on a first probability value of the first cell, the boundary shape indicating that a detected object is present in a first portion of the map corresponding to the first cell, the first cell being a bounded cell; and during a second stage: determining that an undetected object is potentially present in a second portion of the map corresponding to an unbounded cell based on a second probability value of the unbounded cell; and triggering control of the SDC based on the presence of the detected object in the first portion and the potential presence of the undetected object in the second portion.
  11. An electronic device for controlling operation of a self-driving car (SDC), the electronic device being configured to: receive sensor data regarding an environment of the SDC; generate, using a neural network (NN), a map of the environment using the sensor data; generate, using the NN, a grid structure having a plurality of cells corresponding to respective portions of the map, a given cell from the plurality of cells being associated with a probability value indicating a probability that an object is present in the respective portion of the map; in response to the probability value being above a detection threshold: generate a boundary shape covering the given cell, the boundary shape indicating that a detected object is present in the respective portion of the map; in response to the probability value being between the detection threshold and a second threshold, the second threshold being lower than the detection threshold: determine that an undetected object is potentially present in the respective portion of the map; and in response to determining that the undetected object is potentially present in the respective portion of the map: trigger the SDC to perform a remedial action.
  12. The electronic device of claim 11, wherein the sensor data comprises first sensor data from a first sensor and second sensor data from a second sensor.
  13. The electronic device of claim 12, wherein the first sensor data is a point cloud and the first sensor is a LIDAR sensor.
  14. The electronic device of claim 12, wherein the electronic device is further configured to generate fused sensor data by combining the first sensor data and the second sensor data, and wherein generating the map of the environment comprises the electronic device being configured to generate the map of the environment using the fused sensor data.
  15. The electronic device of claim 11, wherein the map of the environment is a bird's eye view (BEV) map of the environment.
  16. The electronic device of claim 11, wherein the boundary shape is a bounding box.
  17. The electronic device of claim 11, wherein the remedial action is to reduce a speed of the SDC.
  18. The electronic device of claim 11, wherein triggering the SDC to perform the remedial action comprises the electronic device performing the remedial action independently of one or more path planning operations.
  19. The electronic device of claim 11, wherein generating the boundary shape comprises the electronic device being configured to perform a non-maximum suppression (NMS) algorithm on a plurality of candidate boundary shapes.
  20. The electronic device of claim 11, wherein the electronic device is a local electronic device of the SDC.
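Claims 9 and 19 recite a non-maximum suppression (NMS) step over candidate boundary shapes. The following minimal sketch shows the standard greedy NMS technique for axis-aligned boxes; the box format, score values, and IoU threshold are assumptions for illustration and are not specified by the patent:

```python
# Illustrative greedy NMS over candidate bounding boxes, each given as
# (x1, y1, x2, y2) with an associated confidence score. The IoU
# threshold of 0.5 is an assumed value, not taken from the patent.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box, drop overlapping candidates, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in kept):
            kept.append(i)
    return kept

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2] -- the second box overlaps the first
```

In the claimed method, the surviving boxes would become the "boundary shapes" indicating detected objects, while suppressed duplicates are discarded.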

Description

Method for controlling the operation of an autonomous vehicle and electronic device

Cross Reference

The present application claims priority to Russian patent application No. 2024132913, entitled "Methods and Electronic Devices for Controlling Operation of a Self-Driving Car", filed on November 1, 2024, which application is incorporated herein by reference in its entirety.

Technical Field

The present technology relates generally to autonomous driving and, more particularly, to a method and electronic device for controlling operation of a self-driving car (SDC).

Background

Autonomous driving is a technology that enables a vehicle to drive itself without (or with little) human intervention through the use of various sensors, computer systems, and algorithms. For example, sensors used for autonomous driving include cameras, lidar, radar, and GPS, among others. Cameras are optical devices that capture images of the surrounding environment. They may provide visual information such as the color, texture, shape, and motion of objects in the scene. Cameras may also recognize road signs, traffic lights, and lane markings. Lidar is a sensor that emits a laser beam and measures the time required for the beam to bounce back from objects in the environment. Lidar may create a 3D point cloud that represents the shape, size, and location of objects in a scene. Lidar may also measure the distance and speed of objects. Radar is a sensor that emits radio waves and measures the time required for the waves to bounce back from objects in the environment. GPS is a system that uses satellites to determine the geographic location and altitude of a vehicle. GPS may provide rough information about the position and orientation of the vehicle. To achieve autonomous driving, the computer system needs to perform at least three functions: sensing, planning, and control.
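The lidar ranging principle mentioned above is standard time-of-flight physics and can be shown in a short sketch for context (the function name is a hypothetical choice, not from the patent):

```python
# Illustrative time-of-flight range calculation for a single lidar return.
# This is standard physics background, not a method claimed by the patent.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def lidar_range(round_trip_seconds):
    """Distance to an object from the laser pulse's round-trip time.

    The pulse travels to the object and back, so the one-way distance
    is half of the total path length.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after ~667 ns corresponds to roughly 100 m.
print(round(lidar_range(667e-9)))  # 100
```

Repeating this computation for millions of pulses per second, each with a known beam direction, is what yields the 3D point cloud described above.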
These functions may be implemented via separate computer modules that communicate and cooperate with each other to achieve the desired behavior of the vehicle. Each module may use different sensors, models, and algorithms to perform its respective tasks, depending on, among other things, the level of autonomy and the requirements of a given scenario. Designing a system to drive a vehicle autonomously and safely is difficult. An autonomous vehicle should be able to behave as the functional equivalent of an attentive driver, who utilizes a perception and action system with the ability to recognize moving and static obstacles and to react in a complex environment to avoid collisions with other objects or structures along the path of the vehicle. Thus, the ability to detect instances of activity (e.g., objects, automobiles, pedestrians, etc.) and other parts of the environment is necessary for a self-driving perception system. Conventional perception methods rely on cameras or lidar sensors to detect objects in the environment, and various methods have been developed that use Deep Neural Networks (DNNs) to perform object detection. Some DNNs perform "bird's eye view" (BEV) object detection. A BEV map is the result of transforming a multi-dimensional representation of the surrounding environment into a 2D image showing the scene from a top-down perspective. This may help reduce the complexity of the data and make it easier to apply computer vision techniques for object detection and localization. U.S. patent publication 2022/0289237 discloses map-free general obstacle detection for a collision avoidance system.

Disclosure of the Invention

The developers of the present technology have recognized at least some drawbacks of known solutions for object detection in the context of a self-driving car (SDC). In general, the object detection module of an SDC is configured to, among other things, locate and classify objects in the environment of the SDC.
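The BEV transformation described above can be illustrated with a minimal sketch that bins 3D lidar points into a top-down grid. The grid extents, cell size, and hit-count representation below are assumed values for illustration; the patent's NN-generated grid instead carries per-cell probability values:

```python
import numpy as np

# Minimal sketch: bin 3D points into a top-down (BEV) hit-count grid.
# Extents and resolution are assumptions, not taken from the patent.
X_RANGE = (0.0, 50.0)    # metres ahead of the vehicle
Y_RANGE = (-25.0, 25.0)  # metres to either side
CELL = 0.5               # cell size in metres

def points_to_bev(points):
    """points: (N, 3) array of (x, y, z); returns a 2D hit-count grid."""
    nx = int((X_RANGE[1] - X_RANGE[0]) / CELL)
    ny = int((Y_RANGE[1] - Y_RANGE[0]) / CELL)
    grid = np.zeros((nx, ny), dtype=np.int32)
    ix = ((points[:, 0] - X_RANGE[0]) / CELL).astype(int)
    iy = ((points[:, 1] - Y_RANGE[0]) / CELL).astype(int)
    # Discard points outside the grid extents before accumulating.
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    np.add.at(grid, (ix[keep], iy[keep]), 1)
    return grid

pts = np.array([[10.0, 0.0, 1.2], [10.1, 0.1, 0.8], [60.0, 0.0, 1.0]])
bev = points_to_bev(pts)
print(bev.sum())  # 2 -- the third point lies outside the grid
```

Collapsing the height axis in this way is what reduces the 3D scene to a 2D image on which ordinary computer vision techniques can operate.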
An object is said to be "detected" when the object detection module generates a boundary shape for a portion of the map of the environment. The object detection module may also assign a label/category to the boundary shape that indicates the category of the object located in the corresponding portion of the map. Initially, the object detection module gathers data indicative of the surrounding environment of the vehicle from various sensors, such as cameras, lidar, and radar. This data may undergo preprocessing to correct for distortion and/or remove noise, ensuring that the information is accurate and synchronized across different sensor types. The data from the different sensors may be combined, or "fused", into a combined representation of the surrounding environment. This combined representation may include a plurality of features, such as edges, shapes, colors, and patterns, and the features may be used to distinguish objects in an environment. This combined representation comprising a plurality of features is analyzed by a Neural Network (NN