DE-102024138167-A1 - METHOD FOR DYNAMIC CALIBRATION OF A CAMERA IN A VEHICLE AND SYSTEM FOR THIS
Abstract
The present disclosure provides a method and a system for the dynamic calibration of an in-vehicle camera (102). In particular, the present disclosure provides a system comprising a processor (214) that extracts three-dimensional (3D) coordinates of a plurality of vehicle contours from a pre-stored design model (208) of a vehicle. The processor (214) further detects, using a trained model (210), two-dimensional (2D) pixel coordinates corresponding to a plurality of edges of the plurality of vehicle contours from a plurality of input images. Furthermore, based on the 3D coordinates and the 2D pixel coordinates corresponding to the plurality of edges of the plurality of vehicle contours, the processor (214) calculates one or more camera parameters for the dynamic calibration of the in-vehicle camera (102).
Inventors
- Manthan Sharma
- Raghavendra Dakshinamurthy Lellapalli
- Raghul Venkataraman
- Ashwini Poovaiah
Assignees
- Mercedes-Benz Group AG
Dates
- Publication Date: 2026-05-13
- Application Date: 2024-12-17
- Priority Date: 2024-11-11
Claims (10)
- Method for dynamically calibrating an in-vehicle camera (102), comprising: extracting (402) three-dimensional (3D) coordinates of a plurality of vehicle contours from a pre-stored design model (208) of a vehicle; detecting (404), using a trained model (210), two-dimensional (2D) pixel coordinates corresponding to a plurality of edges (104, 106, 108) of the plurality of vehicle contours from a plurality of input images, wherein the trained model (210) preserves a feature space dimension of each input image of the plurality of input images while detecting the 2D pixel coordinates; and computing (406), based on the 3D coordinates and the 2D pixel coordinates corresponding to the plurality of edges of the plurality of vehicle contours, one or more camera parameters for dynamically calibrating the in-vehicle camera.
- Method according to Claim 1, wherein each of the plurality of input images is captured by the in-vehicle camera (102).
- Method according to Claim 1, wherein the one or more camera parameters are associated with a position and an orientation of the in-vehicle camera and comprise at least one of a rotation value and a translation value.
- Method according to Claim 1, wherein computing the one or more camera parameters comprises: estimating a correspondence between the 3D coordinates and the 2D pixel coordinates corresponding to the plurality of edges of the plurality of vehicle contours; and computing the one or more camera parameters based on the estimated correspondence using a pose estimation technique.
- Method according to Claim 1, wherein the trained model (210) is trained by: providing a training image to a plurality of convolution blocks (304-322) of the model, the training image being associated with a corresponding feature space dimension; predicting, at each convolution block, one or more 2D pixel coordinates associated with one or more edges corresponding to one or more vehicle contours while preserving the feature space dimension of the training image; computing a loss function value at each convolution block based on a prediction accuracy of the respective convolution block, each loss function value comprising one or more components configured to enable accurate prediction of the one or more 2D pixel coordinates associated with the one or more edges; and computing a net loss function value based on the loss function values computed at the convolution blocks.
- Method according to Claim 5, further comprising iteratively training the model on a plurality of training images to minimize the net loss function value.
- System for dynamically calibrating an in-vehicle camera (102), the system comprising: a memory (206); and a processor (214) operatively coupled to the memory (206), the processor (214) being configured to: extract three-dimensional (3D) coordinates of a plurality of vehicle contours from a pre-stored design model (208) of a vehicle; detect, using a trained model (210), two-dimensional (2D) pixel coordinates corresponding to a plurality of edges of the plurality of vehicle contours from a plurality of input images, the trained model (210) preserving a feature space dimension of each input image of the plurality of input images while detecting the 2D pixel coordinates; and compute, based on the 3D coordinates and the 2D pixel coordinates corresponding to the plurality of edges of the plurality of vehicle contours, one or more camera parameters for dynamically calibrating the in-vehicle camera (102).
- System according to Claim 7, wherein each of the plurality of input images is captured by the in-vehicle camera (102).
- System according to Claim 7, wherein the one or more camera parameters are associated with a position and an orientation of the in-vehicle camera and comprise at least one of a rotation value and a translation value.
- System according to Claim 7, wherein, to compute the one or more camera parameters, the processor (214) is configured to: estimate a correspondence between the 3D coordinates and the 2D pixel coordinates corresponding to the plurality of edges of the plurality of vehicle contours; and compute the one or more camera parameters based on the estimated correspondence using a pose estimation technique.
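The claims leave the pose estimation technique open. As one illustrative, non-authoritative sketch of such a technique (not necessarily the one used in the disclosed system), the following Python example recovers a camera's rotation and translation from known 3D-2D correspondences using a Direct Linear Transform; the function name, intrinsic matrix, and point data are hypothetical:

```python
import numpy as np

def dlt_pose(points_3d, points_2d, K):
    """Estimate rotation R and translation t from 3D-2D correspondences
    via a Direct Linear Transform (illustrative sketch, not the patented
    method). Needs at least six non-coplanar points."""
    # Normalize pixel coordinates with the inverse intrinsic matrix K.
    pts = np.column_stack([points_2d, np.ones(len(points_2d))])
    norm = (np.linalg.inv(K) @ pts.T).T  # rows ~ (u, v, 1) in normalized coords

    # Each correspondence contributes two rows to the homogeneous system
    # A p = 0, where p holds the 12 entries of the 3x4 pose matrix [R | t].
    A = []
    for (X, Y, Z), (u, v, _) in zip(points_3d, norm):
        P = [X, Y, Z, 1.0]
        A.append([*P, 0.0, 0.0, 0.0, 0.0, *(-u * np.array(P))])
        A.append([0.0, 0.0, 0.0, 0.0, *P, *(-v * np.array(P))])
    A = np.asarray(A)

    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    M = Vt[-1].reshape(3, 4)

    # Fix the overall sign so the points lie in front of the camera.
    homog = np.column_stack([points_3d, np.ones(len(points_3d))])
    if np.mean(M[2] @ homog.T) < 0:
        M = -M

    # Project the left 3x3 block onto the rotation group; rescale t to match.
    U, S, Vt2 = np.linalg.svd(M[:, :3])
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt2))])
    R = U @ D @ Vt2
    t = M[:, 3] / S.mean()
    return R, t
```

Given correspondences between design-model contour points and detected edge pixels, the recovered extrinsics could then be compared against the factory-stored parameters to detect camera drift.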
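The training claims recite a loss function value computed at each convolution block and combined into a net loss that is minimized over many training images. The specific loss components are not fixed by the claims; the following Python sketch assumes a binary cross-entropy term per block as one plausible component (a deep-supervision pattern), with all names hypothetical:

```python
import numpy as np

def block_loss(pred_heatmap, target_heatmap, eps=1e-7):
    """Binary cross-entropy between a predicted edge heatmap of one
    convolution block and the ground-truth edge map (hypothetical choice
    of loss component; not fixed by the claims)."""
    p = np.clip(pred_heatmap, eps, 1.0 - eps)
    return float(-np.mean(target_heatmap * np.log(p)
                          + (1.0 - target_heatmap) * np.log(1.0 - p)))

def net_loss(per_block_preds, target_heatmap, weights=None):
    """Net loss: one loss term per convolution block, combined into a
    single scalar (here a weighted sum) for gradient-based minimization."""
    if weights is None:
        weights = [1.0] * len(per_block_preds)
    return sum(w * block_loss(p, target_heatmap)
               for w, p in zip(weights, per_block_preds))
```

Supervising every block (rather than only the final output) encourages each intermediate feature map to localize the contour edges while the feature space dimension is preserved.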
Description
TECHNICAL FIELD

The present invention relates generally to the field of vehicle cameras and, in particular, to a method and a system for the dynamic calibration of a camera mounted in the interior of a vehicle.

BACKGROUND

The following description contains information that may be useful for understanding the present invention. It does not constitute an admission that the information contained herein forms part of the prior art or is relevant to the present invention, or that any publication referred to expressly or implicitly forms part of the prior art.

Modern vehicles are equipped with numerous driver assistance and safety features, such as driver distraction detection, eye tracking, occupant detection, and the like. These features rely on images captured by a camera mounted inside the vehicle (hereinafter referred to as the "in-vehicle camera"). For such features to operate efficiently, the position and orientation of the in-vehicle camera are crucial attributes, as they ensure high-quality, accurate images and three-dimensional (3D) transformations within the desired field of view.

During vehicle manufacturing, the position and orientation of an in-vehicle camera are calibrated, and the corresponding position and orientation parameters are stored in a memory connected to the vehicle's electronic control unit (ECU). However, vibrations and shocks caused by prolonged and varying driving conditions can cause the in-vehicle camera to shift from its original position. This leads to a discrepancy between the position and orientation parameters stored in the memory and the real-time parameters, which in turn results in malfunctions of the vehicle's driver assistance and/or safety features. Furthermore, such misalignments are only detected when the vehicle is taken to a workshop and a customer reports their dissatisfaction.
This is inconvenient for the customer because, until the misalignment is detected, the vehicle may be operated with the misaligned in-vehicle camera, leading to malfunctions in safety-critical features such as eye tracking.

Solutions for dynamically estimating the position of an in-vehicle camera exist in the prior art. One such solution is described in EP 3479353 B1 (hereinafter referred to as publication '353). Publication '353 describes a method for determining the position of a camera mounted at a known location within or near a vehicle scene. The method includes capturing an image of the vehicle scene by the camera from a current position of the camera, wherein the vehicle scene comprises the interior of a vehicle. In one exemplary aspect, the current camera position comprises a three-dimensional position and orientation of the camera and is defined relative to a predefined frame of reference. The method further comprises loading reference data characteristic of the vehicle scene, wherein the reference data comprise a three-dimensional model containing positions and orientations of known features within the vehicle scene, the known features comprising objects within a vehicle cabin that are fixed in time and space relative to a frame of reference defined with respect to a region of the vehicle. The method further includes identifying the geometric appearance of one or more of the known features within the image, determining the three-dimensional position and orientation of the camera relative to the identified known features from that geometric appearance, and calculating a position of the camera within the vehicle scene in the frame of reference. Publication '353 is therefore based solely on determining an error between the geometric appearance of the vehicle features captured by the camera and the values obtained from the 3D model of the vehicle in order to estimate the camera position.
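Publication '353, as summarized above, determines an error between the geometric appearance of captured vehicle features and the 3D model. As a minimal illustration of one such error measure, the following Python sketch computes a plain reprojection error (assumed here for illustration rather than taken from the publication; all names and values are hypothetical):

```python
import numpy as np

def reprojection_error(points_3d, observed_2d, R, t, K):
    """Mean pixel distance between observed feature locations and the
    projections of the corresponding 3D model points under pose (R, t),
    where K holds the camera intrinsics."""
    cam = (R @ np.asarray(points_3d, float).T).T + t  # model points in camera frame
    proj = (K @ cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]                 # perspective division to pixels
    return float(np.mean(np.linalg.norm(proj - np.asarray(observed_2d, float),
                                        axis=1)))
```

A larger error indicates that the camera has drifted from the pose assumed by the stored parameters.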
However, the camera position estimated in publication '353 may not be very accurate, as it relies only on the geometric 3D appearance of vehicle features, which may not adequately capture the geometric appearance of small vehicle features. Therefore, there is a need for a system and a method that overcome the aforementioned limitations.

SUMMARY

The present disclosure remedies one or more deficiencies of the prior art and offers additional advantages. The embodiments and aspects of the disclosure described in detail herein are considered to be part of the claimed disclosure.

In a non-limiting embodiment of the present disclosure, a method for dynamically calibrating an in-vehicle camera is disclosed. The method comprises extracting three-dimensional (3D) coordinates of a plurality of vehicle contours from a pre-stored design model of a vehicle. Furthermore, the method comprises detecting, using a trained model, two-dimensional (2D) pixel coordinates corresponding to a plurality of edges of the plurality of vehicle contours from a plurality of