WO-2026093239-A1 - ENHANCEMENT OF SEMANTIC PERCEPTION IN AN IMAGE BY CONTROLLING VEHICLE ILLUMINATION
Abstract
The invention relates to a method comprising, on obtaining (500; 501) a first image representative of a scene facing the vehicle, determining (502) a light intensity map, on the basis of a light intensity map generation model and of the first image obtained, the light intensity map indicating light intensity values for controlling lighting elements of a matrix source of the illumination module of the vehicle. The determined light intensity map is transmitted (503) to the illumination module for the projection of a pixelated illuminating light beam onto the scene facing the vehicle. A second image representative of the scene facing the vehicle is obtained following the projection of the pixelated illuminating beam, and an image processing operation is carried out on the second image obtained, the image processing operation being a semantic perception image processing operation.
Inventors
- DE-MOREAU, Simon
- ALMEHIO, Yasser
- MOUTARDE, Fabien
- STANCIULESCU, Bogdan
Assignees
- VALEO VISION
- ASSOCIATION POUR LA RECHERCHE ET LE DÉVELOPPEMENT DES MÉTHODES ET PROCESSUS INDUSTRIELS - ARMINES
- ECOLE NATIONALE SUPERIEURE DES MINES DE PARIS
Dates
- Publication Date: 2026-05-07
- Application Date: 2025-10-27
- Priority Date: 2024-10-29
Claims (10)
- 1. Method for controlling a lighting module (120) of a vehicle (100), the method comprising, during a current phase, the following steps: - obtaining (500; 501) a first image representative of a scene facing the vehicle; - determination (502) of a light intensity map, from a light intensity map generation model and the first image obtained, the light intensity map generation model (131) being configured to receive as input at least the first image representative of the scene and to generate as output the light intensity map, the light intensity map indicating light intensity values to control light elements (210) of a matrix source (200) of the vehicle lighting module; - transmission (503) of the determined light intensity map to the lighting module, for projection of a pixelated lighting beam into the scene facing the vehicle; - obtaining a second image representative of the scene facing the vehicle, following the projection of the pixelated lighting beam; - application (505) of an image processing to the second image obtained, the image processing being a semantic perception processing of the image.
- 2. Method according to claim 1, wherein the second image processed by the image processing is transmitted (506) to a driver assistance module (160) of the vehicle, capable of implementing at least one driver assistance function as a function of said second processed image.
- 3. A method according to claim 1 or 2, wherein the light intensity map generation model (131) has one of the following structures: - a convolutional neural network; - an artificial neural network of the autoencoder or variational autoencoder type; - a self-attention or transformer model; or - a generator network of a generative adversarial network system.
- 4. A method according to any one of the preceding claims, further comprising a training phase of the light intensity map generation model, comprising a modification (606) of at least one parameter of the light intensity map generation model as a function of a loss evaluated (604) from an output of a training image processing algorithm applied by a training image processing module (403), the training image processing being a semantic perception processing.
- 5. Method according to claim 4, wherein the training image processing algorithm of the training phase is identical to the image processing applied during the current phase.
- 6. A method according to claim 4 or 5, wherein the training phase comprises the following steps: - obtaining (600) an association of training data, the association comprising a training image representative of a scene and reference data; - application (601) of the light intensity map generation model to the training image, to obtain a training light intensity map; - obtaining (602) a synthetic image representative of the scene of the training image into which is projected a lighting beam obtained according to the training light intensity map; - application (603) of the training image processing algorithm to the synthetic image, to obtain a processed training image; - evaluation (604) of a loss by comparison of the processed training image with the reference data; - modification (606) of at least one parameter of the light intensity map generation model (131) as a function of the evaluated loss.
- 7. A method according to claim 6, wherein the training phase further comprises a modification (607) of at least one parameter of the training image processing algorithm, and wherein the image processing applied during the current phase corresponds to the training image processing algorithm at the end of the training phase.
- 8. A method according to any one of the preceding claims, wherein the matrix source (200) of the lighting module (120) comprises electroluminescent semiconductor elements (210) of submillimeter dimensions, grown epitaxially directly on a common substrate.
- 9. Vehicle assembly (100) comprising: - a camera (140) arranged to obtain a first image representative of a scene facing the vehicle; - a lighting module (120) comprising a matrix source (200) comprising a plurality of individually controllable light elements (210); - a control device (130) configured to determine a light intensity map, from a light intensity map generation model (131) and at least the first image obtained, the light intensity map generation model being configured to receive as input at least the first image representative of the scene and to generate as output the light intensity map, the light intensity map indicating light intensity values to control the light elements of the matrix source of the vehicle lighting module; the control device being capable of transmitting the determined light intensity map to the lighting module, for projection of a pixelated lighting beam into the scene facing the vehicle; wherein, upon obtaining a second image representative of the scene facing the vehicle by the camera, following the projection of the pixelated lighting beam, an image processing module (150) is configured to apply image processing to the second image obtained, the image processing being a semantic perception processing of the image.
- 10. A vehicle (100) comprising an assembly according to claim 9, further comprising a driver assistance module (160), wherein the driver assistance module is configured to implement at least one driver assistance function based on the second image processed by the processing module.
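The training phase recited in claims 4 to 7 can be illustrated with a deliberately minimal sketch. Everything here is a hypothetical stand-in, not the patented implementation: the one-parameter `gain` model replaces the neural light intensity map generation model (131), the additive `render` function replaces the synthetic image generation (602), the soft-threshold `perceive` function replaces the training semantic perception algorithm (603), and a finite-difference update replaces gradient-based modification (606) of the model parameters.

```python
import numpy as np

# Illustrative stand-ins for the training phase of claim 6.
# All function names and the scalar "gain" model are hypothetical.

def model(image, gain):
    """Toy intensity-map model (601): one scalar parameter, lighting dark areas more."""
    return gain * (1.0 - image)

def render(image, intensity_map):
    """Synthetic image (602): additive beam projection, clipped to [0, 1]."""
    return np.clip(image + intensity_map, 0.0, 1.0)

def perceive(image):
    """Toy differentiable semantic perception (603): soft per-pixel foreground score."""
    return 1.0 / (1.0 + np.exp(-10.0 * (image - 0.5)))

def loss(prediction, reference):
    """Evaluation step (604): mean squared mismatch with the reference data."""
    return float(np.mean((prediction - reference) ** 2))

def train_step(image, reference, gain, lr=0.5, eps=1e-3):
    """One modification (606) of the model parameter, via a finite-difference gradient."""
    def objective(g):
        return loss(perceive(render(image, model(image, g))), reference)
    grad = (objective(gain + eps) - objective(gain - eps)) / (2.0 * eps)
    return gain - lr * grad
```

Run on a dark training image whose reference mask is entirely foreground, repeated `train_step` calls increase the gain, brightening the synthetic image until the perception output matches the reference, which is the loop the claims describe at neural-network scale.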
Description
Title: Improving semantic perception in an image by controlling vehicle lighting

The present invention relates to the field of controlling a lighting module in a motor vehicle. More specifically, the invention concerns a method and a control module for a motor vehicle lighting module, to improve a semantic perception function, for example object detection or semantic segmentation of an image captured in the motor vehicle.

Most motor vehicles are now equipped with a driver assistance module, also called an ADAS module, for "Advanced Driver-Assistance Systems", capable of implementing at least one driver assistance function, allowing the driver to be assisted in driving or allowing certain driving parameters of the vehicle to be controlled in an automated manner, without driver input. Such functions are based on data captured by sensors on the vehicle, such as lidar, radar, one or more cameras, etc. Several driver assistance functions implemented by ADAS modules use images captured by a camera capable of producing images representative of a scene facing the vehicle. However, some ADAS functions require these images to be processed before they can be used. Such processing can include, as is known:
- semantic segmentation, which aims to segment each captured image into several pixel regions, each region being labeled with a class from a set of predefined classes. In automotive applications, the following classes might be used: car, pedestrian, sign, road, etc. A segmentation score can be associated with the image segmentation, or with each segment identified within the image, the score representing the degree of certainty associated with the segmentation; and/or
- object detection, which aims to detect one or more objects in the scene represented by each image, to identify the category of each detected object from among several predefined categories, and to determine the position of each detected object (or a part of the image in which the object is located).
Each detected object is also associated with a detection score representing the certainty associated with the detection.

However, such processing is less effective in low or very low ambient light, such as when the camera captures images representative of a night scene. In such situations, the segmentation or detection score is significantly lower than in daytime driving conditions, which can prevent the implementation of certain driver assistance functions and can even lead to safety issues. There is therefore a need to improve the accuracy associated with semantic segmentation or object detection in a scene facing a motor vehicle when ambient light is low.

The present invention improves the situation.

A first aspect of the invention relates to a method for controlling a vehicle lighting module, the method comprising, during a current phase, the following steps:
- obtaining a first image representative of a scene facing the vehicle;
- determination of a light intensity map, from a light intensity map generation model and at least the first image obtained, the light intensity map generation model being configured to receive as input at least the first image representative of the scene and to generate as output the light intensity map, the light intensity map indicating light intensity values to control light elements of a matrix source of the vehicle's lighting module;
- transmission of the determined light intensity map to the lighting module, for projection of a pixelated lighting beam into the scene facing the vehicle;
- obtaining a second image representative of the scene facing the vehicle, following the projection of the pixelated lighting beam;
- application of image processing to the second image obtained, the image processing being a semantic perception processing.

Thus, the invention makes it possible to project a light beam to obtain a second image representative of a scene.
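The current-phase steps of the first aspect can be sketched as the following control loop. This is an illustrative outline only: the matrix-source resolution, the `camera`, `lighting_module` and `perception` interfaces, and the darkness-boosting heuristic standing in for the trained light intensity map generation model are all assumptions for the sketch, not the patented implementation.

```python
import numpy as np

# Hypothetical matrix-source resolution; real pixelated headlamps vary.
BEAM_ROWS, BEAM_COLS = 32, 64

def generate_intensity_map(first_image: np.ndarray) -> np.ndarray:
    """Stand-in for the trained light intensity map generation model.

    A real implementation would run a neural network; here we simply
    boost illumination toward the darkest regions of the first image.
    Assumes a grayscale image whose dimensions divide the beam grid evenly.
    """
    h, w = first_image.shape[:2]
    # Average luminance per beam cell.
    cells = first_image.reshape(BEAM_ROWS, h // BEAM_ROWS,
                                BEAM_COLS, w // BEAM_COLS).mean(axis=(1, 3))
    # Darker cells receive more light; values normalised to [0, 1].
    return 1.0 - cells / 255.0

def control_loop(camera, lighting_module, perception):
    first_image = camera.capture()                       # obtaining the first image
    intensity_map = generate_intensity_map(first_image)  # determination of the map
    lighting_module.project(intensity_map)               # transmission and projection
    second_image = camera.capture()                      # obtaining the second image
    return perception(second_image)                      # semantic perception processing
```

One intensity value per individually controllable light element is the key design point: the model's output can be sent to the matrix source directly, with no intermediate decision stage, which is what gives the short latency between the two captures.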
This second image is better suited than the first image, obtained before the lighting was controlled, to semantic perception processing, for example semantic segmentation or object detection, particularly in night scenes. As a result, semantic perception processing is improved. Indeed, the light intensity map generation model can be designed, for example trained by machine learning, particularly supervised learning, specifically to improve the performance of the image processing implemented in the vehicle. That is to say, the performance level of the semantic perception processing (for example, an object detection score or a semantic segmentation score) is higher for the second image than for the first.

Moreover, a high degree of responsiveness is allowed for lighting control (represented by the time between obtaining the first image and the second image), since the light intensity map generation model is capable of directly producing, as output, a command that can be used to control the matrix source of the lighting module.

According to some embodiments, the second image processed by the