DE-102024132880-A1 - Locating an emergency vehicle and at least partially automated driving of a vehicle
Abstract
To locate an emergency vehicle, first sensor data (8), which includes camera images (8a, 8b) generated by a camera (5, 6) of an ego vehicle (1) and depicts the environment of the ego vehicle (1), and second sensor data (9), which includes radar sensor data generated by a radar system (4) of the ego vehicle (1) and depicts the environment of the ego vehicle (1), are acquired. By applying a machine learning model (MLM) (10), trained at least for object recognition, to the first sensor data (8), a light pattern of an activated emergency lighting unit is identified. The position of the emergency vehicle is determined by applying at least one trained recurrent neural network (RNN) (11) to the second sensor data (9). A lane in which the emergency vehicle is located is determined based on the determined position.
Inventors
- Satyajit Nayak
- Sankaralingam Madasamy
- Harishankar Chinnathambi
- Nagarajan Balmukundan
Assignees
- VALEO SCHALTER UND SENSOREN GMBH
Dates
- Publication Date
- 2026-05-13
- Application Date
- 2024-11-11
Claims (18)
- A computer-implemented method for locating an emergency vehicle, wherein: - first sensor data (8), comprising camera images (8a, 8b) generated by a camera (5, 6) of the ego vehicle (1) according to a sequence of camera frames and depicting the environment of the ego vehicle (1), are obtained; - second sensor data (9), comprising radar sensor data (9a) generated by a radar system (4) of the ego vehicle (1) according to a sequence of radar frames and depicting the environment of the ego vehicle (1), are obtained; - by applying a machine learning model (MLM) (10), trained at least for object recognition, to the first sensor data (8), a light pattern of an activated emergency lighting unit of the emergency vehicle is identified; - a position of the emergency vehicle is determined by applying at least one trained recurrent neural network (RNN) (11) to the second sensor data (9); - a lane in which the emergency vehicle is located is determined depending on the determined position.
- Computer-implemented method according to Claim 1, wherein the radar sensor data (9a) are Doppler radar sensor data, a speed of the emergency vehicle is determined by applying the at least one RNN (11) to the second sensor data (9), and the lane in which the emergency vehicle is located is determined depending on the speed of the emergency vehicle.
- Computer-implemented method according to one of the preceding claims, wherein - the camera images (8a, 8b) are camera images (8a) generated by a visible-range camera (5) of the ego vehicle (1); and - the first sensor data (8) comprise thermal imaging camera images (8b) generated by a thermal imaging camera (6) of the ego vehicle (1) according to a sequence of thermal imaging camera frames and depicting the environment of the ego vehicle (1).
- Computer-implemented method according to Claim 3, wherein - predictions (13a) for a region of interest, ROI, containing the emergency lighting unit are generated by applying a first object detection module (10a) of the MLM (10) to the visible-range camera images (8a); - further predictions (13b) for the ROI containing the emergency lighting unit are generated by applying a second object detection module (10b) of the MLM (10) to the thermal imaging camera images (8b); and - the light pattern is identified depending on the predictions (13a) and the further predictions (13b) for the ROI.
- Computer-implemented method according to Claim 4 , wherein - features in the visible range are generated by applying a trained first encoder module (19, 20) of the first object recognition module (10a) to the camera images (8a) in the visible range; and - the predictions (13a) for the ROI are generated by applying a trained first object recognition decoder module (21) of the first object recognition module (10a) to the features in the visible range.
- Computer-implemented method according to Claim 5, wherein - a segmented representation of a road in the environment is generated by applying a trained semantic segmentation decoder module (16) to the visible-range features; and - the lane in which the emergency vehicle is located is determined depending on the segmented representation of the road.
- Computer-implemented method according to one of Claims 4 to 6, wherein - thermal features are generated by applying a trained second encoder module of the second object detection module (10b) to the thermal imaging camera images (8b); and - the further predictions (13b) for the ROI are generated by applying a trained second object detection decoder module of the second object detection module (10b) to the thermal features.
- Computer-implemented method according to one of Claims 4 to 7, wherein semantically segmented ROIs are generated based on the predictions (13a) and the further predictions (13b) for the ROI, and the light pattern is identified by tracking a state of the emergency lighting unit based on the segmented ROIs.
- Computer-implemented method according to Claim 8, wherein a Bayesian model (14) is used for the tracking.
- Computer-implemented method according to one of the preceding claims, wherein the second sensor data (9) comprise at least one audio sequence (9b) generated by at least one microphone (7a, 7b) of the ego vehicle (1) and representing ambient sounds.
- Computer-implemented method according to one of the preceding claims, wherein the RNN (11) is a bidirectional long short-term memory network, LSTM network.
- A method for at least partially automated operation of a vehicle, wherein a computer-implemented method according to one of the preceding claims is carried out and - at least one control signal for at least partially automated operation of the ego vehicle (1) is generated depending on the determined lane in which the emergency vehicle is located; and/or - assistance information for assisting a driver of the ego vehicle (1) is generated depending on the determined lane in which the emergency vehicle is located.
- Method according to Claim 12, wherein, depending on the determined lane in which the emergency vehicle is located, a lane change of the ego vehicle (1) is automatically initiated by means of the at least one control signal.
- Method according to Claim 12, wherein the assistance information includes a request to initiate a lane change if the determined lane in which the emergency vehicle is located corresponds to a lane in which the ego vehicle (1) is located.
- Data processing system (3) which is configured to carry out a computer-implemented method according to one of Claims 1 to 11.
- Electronic vehicle guidance system (2) which includes a data processing system (3) according to Claim 15 and a control system designed to - generate at least one control signal for at least partially automated operation of the ego vehicle (1) depending on the determined lane in which the emergency vehicle is located; and/or - generate assistance information for assisting a driver of the ego vehicle (1) depending on the determined lane in which the emergency vehicle is located.
- Electronic vehicle guidance system (2) according to Claim 16, which includes the camera (5, 6) and/or the radar system (4).
- Computer program product comprising - instructions which, when executed by a data processing system (3), cause the data processing system (3) to carry out a computer-implemented method according to one of Claims 1 to 11; and/or - further instructions which, when executed by the electronic vehicle guidance system (2) according to one of Claims 16 or 17, cause the electronic vehicle guidance system (2) to carry out a method according to one of Claims 12 to 14.
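The state tracking with a Bayesian model referred to in Claims 8 and 9 can be illustrated, purely as a sketch, by a discrete Bayes filter over the on/off state of the emergency lighting unit. The transition and measurement probabilities below are assumed example values, not taken from the patent, and the per-frame "bright ROI" measurement is a hypothetical simplification of the segmented ROIs.

```python
# Illustrative sketch only: a discrete Bayes filter tracking the on/off state
# of an emergency lighting unit from noisy per-frame ROI brightness readings.
# p_switch and p_hit are assumed values, not specified by the patent.

def bayes_filter_light_state(measurements, p_switch=0.3, p_hit=0.9):
    """Return the posterior probability that the light is ON after each frame.

    measurements: iterable of booleans, True if the segmented ROI is bright.
    p_switch: assumed probability that the light toggles between frames.
    p_hit: assumed probability of observing a bright ROI when the light is on.
    """
    belief_on = 0.5  # uninformative prior
    posteriors = []
    for bright in measurements:
        # Prediction step: the flashing light may have toggled since last frame.
        predicted_on = belief_on * (1 - p_switch) + (1 - belief_on) * p_switch
        # Update step: weigh the prediction by the measurement likelihood.
        like_on = p_hit if bright else 1 - p_hit
        like_off = 1 - p_hit if bright else p_hit
        norm = like_on * predicted_on + like_off * (1 - predicted_on)
        belief_on = like_on * predicted_on / norm
        posteriors.append(belief_on)
    return posteriors
```

A periodically alternating sequence of bright and dark ROIs keeps the posterior oscillating, which is one way the characteristic flash pattern could be distinguished from a statically lit or unlit region.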
Description
The present invention relates to a computer-implemented method for locating an emergency vehicle, wherein first sensor data, comprising camera images depicting the environment of an ego vehicle, are obtained. The invention further relates to a method for at least partially automated operation of a vehicle, wherein the aforementioned computer-implemented method is carried out, to a data processing system configured to carry out the aforementioned computer-implemented method, to an electronic vehicle guidance system incorporating the aforementioned data processing system, and to corresponding computer program products.

The smooth and unimpeded movement of emergency vehicles, including motorcycles, cars, and trucks, on roads is a crucial consideration for advanced driver assistance systems (ADAS) and other functions that enable at least partially automated vehicle operation. Emergency vehicles, such as ambulances, fire engines, and police cars, should be accurately detected in real time so that appropriate safety measures can be taken when necessary. In particular, locating an emergency vehicle is advantageous for determining whether a safety action, such as changing lanes, is required.

Numerous approaches exist that use visible-range cameras and their images as the basis for emergency vehicle detection with a trained machine learning model (MLM). A challenge lies in reliably detecting the emergency vehicle in both bright and dark scenarios, for example due to sensor limitations related to camera contrast. Furthermore, reliability and accuracy are often limited to close-range detection, for example 0 to 25 meters from the ego vehicle.

One objective of the present invention is to provide a means of locating an emergency vehicle which exhibits increased robustness with respect to varying environmental conditions. This objective is achieved by the subject matter of the independent claims.
Further embodiments and preferred embodiments are the subject matter of the dependent claims.

The invention is based on the idea of using a multimodal approach in which at least one type of camera is used to identify a light pattern of an activated emergency lighting unit of the emergency vehicle and a radar system is used to determine the position of the emergency vehicle.

According to one aspect of the invention, a computer-implemented method for locating an emergency vehicle, in particular an emergency vehicle in the vicinity of an ego vehicle, is provided. First sensor data, comprising camera images generated by a camera of the ego vehicle according to a sequence of camera frames and depicting the environment of the ego vehicle, are obtained, in particular from the camera. Second sensor data, comprising radar sensor data generated by a radar system of the ego vehicle and likewise depicting the environment of the ego vehicle, are obtained. By applying a machine learning model (MLM), trained at least for object recognition, to the first sensor data, a light pattern of an activated emergency lighting unit of the emergency vehicle is identified. The position of the emergency vehicle is determined by applying at least one trained recurrent neural network (RNN), for example a long short-term memory network (LSTM), in particular a bidirectional LSTM, to the second sensor data. The lane in which the emergency vehicle is located is determined depending on the determined position. In other words, determining the lane in which the emergency vehicle is located amounts to locating the emergency vehicle.

Unless otherwise specified, all steps of the computer-implemented method can be performed by a data processing system that includes at least one data processing device, in particular a data processing system of the ego vehicle. Specifically, the at least one data processing device is configured or adapted to perform the steps of the computer-implemented method.
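The last step of the method described above, assigning the emergency vehicle to a lane based on its determined position, can be sketched as a simple lookup of the vehicle's lateral offset against lane boundaries (which, per Claim 6, could come from a segmented road representation). The function name, the lane geometry, and the boundary values below are hypothetical examples, not taken from the patent.

```python
# Minimal sketch, assuming the emergency vehicle's lateral offset (in meters,
# relative to the ego vehicle's reference line) has already been estimated,
# e.g. by the recurrent network from the radar frame sequence. The boundary
# offsets stand in for lane edges extracted from a segmented road image.

def lane_of(lateral_offset_m, lane_boundaries_m):
    """Return the index of the lane containing the given lateral offset.

    lane_boundaries_m: sorted boundary offsets; e.g. [-1.75, 1.75, 5.25]
    defines lane 0 as [-1.75, 1.75) and lane 1 as [1.75, 5.25).
    Returns None if the offset lies outside all lanes.
    """
    for i in range(len(lane_boundaries_m) - 1):
        if lane_boundaries_m[i] <= lateral_offset_m < lane_boundaries_m[i + 1]:
            return i
    return None
```

For example, with the boundaries above, an offset of 0.0 m falls into lane 0 (the ego lane in this hypothetical geometry), which in the method of Claims 12 to 14 would trigger a lane-change request or an automatically initiated lane change.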
For this purpose, the at least one data processing device can, for example, store a computer program containing instructions which, when executed by the at least one data processing device, cause it to perform the computer-implemented method. The terms "data processing system" and "at least one data processing device" may be used synonymously.

All data processing devices of the at least one data processing device can be contained within the vehicle. However, it is also possible that all data processing devices of the at least one data processing device are part of an external computing system located outside the vehicle, for example a mobile electronic device, a backend server, or a cloud computing system. It is also possible that the at least one data processing device comprises at least one vehicle data processing device of the vehicle as well as at least one external data processing device contained within the external computing system. The at least one vehicle data processing device can, for example, include one or more electronic control u