CN-121414856-B - Vehicle-mounted camera external parameter calibration method, electronic equipment and program product
Abstract
The disclosure provides a vehicle-mounted camera external parameter calibration method, electronic equipment and a program product, and relates to the technical field of vehicles. The method comprises: acquiring a road image sequence for each camera of a vehicle when a camera external parameter dynamic calibration starting condition is met, each group of road image sequences containing at least one image of a structured road and its acquisition time; determining the vanishing point position and horizon position corresponding to each camera from each group of road image sequences by using an end-to-end vanishing point/horizon detection model, wherein the model outputs either the vanishing point positions in the road image sequences, or both the vanishing point positions and the horizon positions; and determining the external parameters of each camera according to the corresponding vanishing point position and horizon position. The method and the device provide an efficient and concise solution for external parameter calibration of the vehicle-mounted camera.
Inventors
- LI ZHIPENG
- LI BAOXIANG
Assignees
- Zhejiang Geely Holding Group Co., Ltd.
- Geely Automobile Research Institute (Ningbo) Co., Ltd.
Dates
- Publication Date: 2026-05-12
- Application Date: 2025-12-24
Claims (8)
- 1. A vehicle-mounted camera external parameter calibration method, characterized by comprising the following steps: acquiring a road image sequence corresponding to each camera of a vehicle when a camera external parameter dynamic calibration starting condition is met, wherein each camera corresponds to one group of road image sequences, each group of road image sequences comprises at least one image of a structured road and its acquisition time, and the images of the at least one structured road are arranged in order of acquisition time; preprocessing each road image sequence; determining, based on each group of the preprocessed image sequences, the vanishing point position and the horizon position corresponding to each camera by using an end-to-end vanishing point/horizon detection model, wherein the end-to-end vanishing point/horizon detection model outputs either the vanishing point positions in the road image sequences, or both the vanishing point positions and the horizon positions; and determining the external parameters of each camera according to the corresponding vanishing point position and horizon position, which comprises: dividing the preprocessed image sequence into a plurality of image subsequences; grouping the vanishing point positions and horizon positions of the cameras into a plurality of combinations according to the time periods of the image subsequences to which they correspond, wherein each combination comprises one vanishing point position and one horizon position per camera; for each combination, determining initial external parameters of each camera based on the vanishing point position and horizon position included in that combination; jointly optimizing the initial external parameters of the cameras based on the image association among the cameras; and, for each camera, determining the final external parameters based on that camera's initial external parameters across the combinations.
- 2. The method of claim 1, wherein determining the vanishing point position and the horizon position for each camera using an end-to-end vanishing point/horizon detection model based on each group of the preprocessed image sequences comprises: dividing the preprocessed image sequence into a plurality of image subsequences; inputting each image subsequence into the end-to-end vanishing point/horizon detection model to obtain the vanishing point position that the model outputs for each image subsequence of the corresponding camera; and fitting the vanishing point positions corresponding to each camera to obtain the horizon position corresponding to that camera.
- 3. The method of claim 1, wherein determining the vanishing point position and the horizon position for each camera using an end-to-end vanishing point/horizon detection model based on each group of the preprocessed image sequences comprises: inputting each image subsequence into the end-to-end vanishing point/horizon detection model to obtain the vanishing point position and the horizon position that the model outputs for each image subsequence of the corresponding camera.
- 4. The method according to claim 3, wherein the training process of the end-to-end vanishing point/horizon detection model comprises: training a deep learning model with first training data to obtain the end-to-end vanishing point/horizon detection model, wherein the first training data comprises a first training image sequence, each image of which is annotated with a ground-truth vanishing point position and a ground-truth horizon position, the ground-truth vanishing point position lying on the straight line expressed by the ground-truth horizon position; or training the deep learning model with second training data to obtain the end-to-end vanishing point/horizon detection model, wherein the second training data comprises a second training image sequence, each image of which is annotated with a ground-truth vanishing point position.
- 5. The method according to claim 2, wherein acquiring the road image sequence corresponding to each camera of the vehicle when the camera external parameter dynamic calibration starting condition is met comprises: when the condition is met, controlling the vehicle to change lanes while the vehicle is running, and acquiring the images captured by each camera on the structured road at different moments during the lane change to obtain the road image sequence corresponding to each camera.
- 6. The method according to claim 1, further comprising: updating the parameters of the camera with the determined external parameters when those external parameters meet an updating condition; wherein the updating condition comprises: the absolute value of the difference between the roll angle in the camera external parameters and a reference roll angle is smaller than a first threshold, the absolute value of the difference between the pitch angle and a reference pitch angle is smaller than a second threshold, and the absolute value of the difference between the yaw angle and a reference yaw angle is smaller than a third threshold.
- 7. An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor, wherein the memory stores at least one computer program executable by the at least one processor to enable the at least one processor to perform the vehicle-mounted camera external parameter calibration method according to any one of claims 1-6.
- 8. A computer program product, characterized in that the computer program product comprises a computer program which, when executed by a processor, implements the vehicle-mounted camera external parameter calibration method according to any one of claims 1-6.
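The geometry underlying claims 1 and 6 can be sketched in code. The following is a minimal illustration, not the patent's actual algorithm: assuming a pinhole camera with known intrinsics (fx, fy, cx, cy), yaw and pitch follow from the offset of the forward-direction vanishing point from the principal point, roll from the slope of the horizon line, and claim 6's updating condition is a per-angle threshold check. All function and parameter names here are hypothetical.

```python
import math

def extrinsics_from_vp_and_horizon(vp, horizon_pts, fx, fy, cx, cy):
    """Estimate camera rotation angles (radians) from a forward-direction
    vanishing point and two points on the horizon line.

    Standard pinhole geometry: yaw/pitch from the vanishing point's offset
    from the principal point, roll from the horizon slope.
    """
    u, v = vp
    yaw = math.atan2(u - cx, fx)          # horizontal offset -> yaw
    pitch = math.atan2(cy - v, fy)        # vertical offset -> pitch (image v grows downward)
    (u1, v1), (u2, v2) = horizon_pts
    roll = math.atan2(v2 - v1, u2 - u1)   # horizon slope -> roll
    return roll, pitch, yaw

def meets_update_condition(angles, reference, thresholds):
    """Claim 6's updating condition: each of roll/pitch/yaw must lie
    within its threshold of the corresponding reference angle."""
    return all(abs(a - r) < t for a, r, t in zip(angles, reference, thresholds))
```

For example, a vanishing point at the principal point with a level horizon yields zero roll, pitch and yaw, while a vanishing point shifted to the right of the principal point yields a positive yaw under this sign convention.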
Description
Vehicle-mounted camera external parameter calibration method, electronic equipment and program product

Technical Field

The disclosure relates to the technical field of vehicles, in particular to a vehicle-mounted camera external parameter calibration method, electronic equipment and a program product.

Background

The vehicle-mounted camera is the most important sensor of the vehicle-mounted visual perception system and provides the basic perception data required for decision making by the intelligent driver-assistance system. The accuracy and effectiveness of this perception data depend on an accurate calibration of the vehicle-mounted camera. During operation after delivery, the camera pose can drift due to vibration of the vehicle body, vehicle maintenance, firmware aging and the like, so that the camera external parameters degrade and the accuracy and effectiveness of the visual perception system are affected. Current pure-vision external parameter calibration of vehicle-mounted cameras (without joint laser and camera calibration) relies mainly on static calibration with fixed targets and dynamic calibration based on structured-road lane lines. The lane-line-based dynamic calibration method detects lane lines with a deep learning model, solves for line equations or vanishing points through a series of filtering steps, and then continuously optimizes the camera external parameters using the priors that multiple lane lines are parallel in the physical world or that lanes have equal width.
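The vanishing point solving mentioned in the lane-line-based approach above can be illustrated with homogeneous coordinates: two image lines are intersected via the cross product, and lane lines that are parallel on the road meet at the vanishing point in the image. A minimal sketch (the helper names are illustrative, not from the patent):

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line (a, b, c) with a*u + b*v + c = 0 through two image points."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(line1, line2):
    """Intersection of two homogeneous lines. For two detected lane lines
    that are parallel on the road, this intersection is the vanishing point."""
    x = np.cross(line1, line2)
    if abs(x[2]) < 1e-12:
        return None  # lines parallel in the image: no finite intersection
    return (x[0] / x[2], x[1] / x[2])
```

For instance, a left lane line through (100, 480) and (300, 200) and a right lane line through (540, 480) and (340, 200) intersect at the vanishing point (320, 172).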
Disclosure of Invention

In view of this, embodiments of the present disclosure provide a vehicle-mounted camera external parameter calibration method, an electronic device and a program product, so as to provide an efficient and concise solution for external parameter calibration of the vehicle-mounted camera.

In a first aspect, the present disclosure provides a vehicle-mounted camera external parameter calibration method, including: acquiring a road image sequence corresponding to each camera of a vehicle when a camera external parameter dynamic calibration starting condition is met, wherein each group of road image sequences comprises at least one image of a structured road and its acquisition time; determining the vanishing point position and horizon position corresponding to each camera by using an end-to-end vanishing point/horizon detection model based on each group of the road image sequences, wherein the model outputs either the vanishing point positions in the road image sequences, or both the vanishing point positions and the horizon positions; and determining the external parameters of each camera according to the corresponding vanishing point position and horizon position.

In a second aspect, the present disclosure provides an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor, wherein the memory stores at least one computer program executable by the at least one processor to enable the at least one processor to perform the vehicle-mounted camera external parameter calibration method of the first aspect.

In a third aspect, the present disclosure provides a computer program product comprising a computer program which, when executed by a processor, implements the vehicle-mounted camera external parameter calibration method of the first aspect.
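The step of fitting the per-subsequence vanishing points of one camera to obtain its horizon position (claim 2) can be sketched as an ordinary least-squares line fit v = a·u + b, assuming a nearly horizontal horizon. The function name is illustrative, not from the patent:

```python
import numpy as np

def fit_horizon(vps):
    """Least-squares fit of a horizon line v = a*u + b through the
    vanishing points collected from one camera's image subsequences.
    Returns (slope a, intercept b) in image coordinates."""
    u = np.array([p[0] for p in vps], dtype=float)
    v = np.array([p[1] for p in vps], dtype=float)
    A = np.vstack([u, np.ones_like(u)]).T  # design matrix [u, 1]
    (a, b), *_ = np.linalg.lstsq(A, v, rcond=None)
    return a, b
```

As a sanity check, vanishing points lying exactly on v = 0.1·u + 150 recover slope 0.1 and intercept 150; with noisy detections the fit averages out per-subsequence errors, which is presumably the point of collecting vanishing points over multiple subsequences.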
Alternatively, the computer program may be stored on a readable storage medium or in the cloud, from which the processor of the computer device reads it. According to the embodiments provided by the disclosure, when the camera external parameter dynamic calibration starting condition is met, images of a structured road are acquired to obtain a road image sequence for each camera; the vanishing point position and horizon position corresponding to each camera are determined from each group of road image sequences by the end-to-end vanishing point/horizon detection model; and the external parameters of each camera can then be determined from the vanishing point position and the horizon position.

Drawings

In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings required by the embodiments or by the description of the prior art are briefly introduced below. It is apparent that the following drawings show only embodiments of the present disclosure, and that other drawings may be obtained from them by those of ordinary skill in the art without inventive effort.

Fig. 1