CN-122029579-A - Readable storage medium, vehicle, image ranging method and system
Abstract
The application belongs to the field of intelligent driving and provides an image ranging method in which an image is acquired by a camera. The method comprises: identifying a vehicle target frame based on the image acquired by the camera; judging whether the vehicle target frame meets a fusion weight condition; in response to the vehicle target frame not meeting the fusion weight condition, obtaining the distance in a world coordinate system between the camera and the vehicle corresponding to the vehicle target frame according to the landing point of the vehicle target frame; and in response to the vehicle target frame meeting the fusion weight condition, performing a fusion calculation on a first ranging value and a second ranging value to obtain the distance in the world coordinate system between the camera and the vehicle corresponding to the vehicle target frame, wherein the first ranging value is obtained based on the landing point of the vehicle target frame and the second ranging value is obtained based on the width and/or the height of the vehicle target frame, so that an accurate and reliable ranging result is obtained. The application also provides an image ranging system, a readable storage medium and a vehicle.
Inventors
- PENG MINGLONG
- ZHANG XIAOTENG
- LIU YANG
Assignees
- HASCO Vision Technology (Shanghai) Co., Ltd. (华域视觉科技(上海)有限公司)
Dates
- Publication Date: 2026-05-12
- Application Date: 2024-09-12
Claims (15)
- A method of ranging images acquired by a camera, the method comprising: identifying a vehicle target frame based on the image acquired by the camera; judging whether the vehicle target frame meets a fusion weight condition; in response to the vehicle target frame not meeting the fusion weight condition, obtaining the distance in a world coordinate system between the camera and the vehicle corresponding to the vehicle target frame according to the landing point of the vehicle target frame; and in response to the vehicle target frame meeting the fusion weight condition, performing a fusion calculation on a first ranging value and a second ranging value to obtain the distance in the world coordinate system between the camera and the vehicle corresponding to the vehicle target frame, wherein the first ranging value is obtained based on the landing point of the vehicle target frame, and the second ranging value is obtained based on the width and/or the height of the vehicle target frame.
- The image ranging method of claim 1, wherein the fusion weight condition comprises one or more of the following: a first fusion weight condition, that the intersection-over-union ratio between a tracking frame obtained from the historical track information of the vehicle target frame and the vehicle target frame obtained at the current moment is smaller than a set threshold; a second fusion weight condition, that the difference between the first ranging value and the second ranging value exceeds a set threshold; and a third fusion weight condition, that the historical coordinate parameters of the landing point, obtained from the historical track information of the vehicle target frame, are unstable.
- The image ranging method according to claim 1, wherein fusing the first ranging value and the second ranging value to calculate the distance in the world coordinate system between the camera and the vehicle corresponding to the vehicle target frame comprises: based on the average of the first ranging value and the second ranging value, calculating the fusion weights of the first ranging value and the second ranging value according to a fusion weight formula; and calculating the distance in the world coordinate system between the camera and the vehicle corresponding to the vehicle target frame according to a weighted ranging formula, based on the first ranging value, the second ranging value and their fusion weights.
- The image ranging method as claimed in claim 3, wherein the fusion weight formula is k1 = 1 − Z/Z0 and k2 = Z/Z0, where Z is the average of the first ranging value and the second ranging value, Z0 is the set fusion-weight range value, k1 is the fusion weight of the first ranging value, and k2 is the fusion weight of the second ranging value; and the weighted ranging formula is Zlast = k1×Z1 + k2×Z2, where Zlast is the distance in the world coordinate system between the camera and the vehicle corresponding to the vehicle target frame, Z1 is the first ranging value, and Z2 is the second ranging value.
- The image ranging method as defined in claim 1, wherein identifying a vehicle target frame based on the image acquired by the camera comprises: acquiring attribute information of a car light detection frame in the image acquired by the camera, wherein the attribute information of the car light detection frame comprises at least the car light width and position; and obtaining the vehicle target frame corresponding to the car light detection frame based on the attribute information of the car light detection frame.
- The image ranging method as set forth in claim 5, wherein obtaining the vehicle target frame corresponding to the car light detection frame based on the attribute information of the car light detection frame comprises: judging the category of the vehicle based on the attribute information of the car light detection frame; and acquiring the width and height of the corresponding vehicle target frame according to the vehicle category, and obtaining the coordinate information of the vehicle target frame based on the coordinate information of the car light detection frame.
- The image ranging method according to claim 1, wherein, in the case that a system to which the image ranging method is applied has a delay, a delay prediction is performed on the distance in the world coordinate system between the camera and the vehicle corresponding to the vehicle target frame, the delay prediction comprising: acquiring track information of the vehicle target frame, the track information comprising at least ranging values and time stamps; obtaining a ranging-value-versus-time linear fitting equation based on the time stamps and ranging values of the vehicle target frame, and performing a parameter stability analysis on the fitting equation; in response to the parameters of the fitting equation being stable, calculating the first ranging value at the system delay moment according to the fitting equation, and predicting, based on that first ranging value, the distance in the world coordinate system between the camera and the vehicle corresponding to the vehicle target frame at the system delay moment; and in response to the parameters of the fitting equation being unstable, predicting the distance in the world coordinate system between the camera and the vehicle corresponding to the vehicle target frame at the system delay moment according to the ranging value of the vehicle target frame at the current moment; wherein the system delay moment is the moment obtained by adding the system delay time to the current moment.
- The image ranging method as claimed in claim 7, wherein the parameter stability analysis of the ranging-value-versus-time linear fitting equation is performed according to a maximum-difference method.
- The method of claim 7, wherein the delay prediction further comprises smoothing the predicted ranging value of the distance in the world coordinate system between the camera and the vehicle corresponding to the vehicle target frame at the system delay moment.
- The image ranging method as set forth in claim 9, wherein smoothing the predicted ranging value of the distance in the world coordinate system between the camera and the vehicle corresponding to the vehicle target frame at the system delay moment comprises: acquiring the predicted ranging value at the moment before the system delay moment; smoothing based on the predicted ranging value at the moment before the system delay moment and the predicted ranging value at the system delay moment to obtain a predicted smoothed ranging value, where the smoothing formula is Value = Zp_back×s + Zp_cur×(1 − s), where Value is the predicted smoothed ranging value, Zp_back is the predicted ranging value at the moment before the system delay moment, Zp_cur is the predicted ranging value at the system delay moment, and s is a scale factor; and predicting the distance in the world coordinate system between the camera and the vehicle corresponding to the vehicle target frame at the system delay moment based on the predicted smoothed ranging value.
- The image ranging method according to any one of claims 1 to 10, further comprising: calibrating the internal parameters, external parameters and distortion parameters of the camera, and correcting the car light detection frame or the vehicle target frame based on the internal parameters, external parameters and distortion parameters of the camera.
- The image ranging method as claimed in claim 11, further comprising: updating vanishing-point coordinate parameters according to a vanishing-point coordinate set collected over a set time period, wherein the vanishing-point coordinates in the set are calculated in real time, in response to the vehicle speed being greater than a set threshold, from lane lines detected in the images acquired by the camera; and updating the external parameters of the camera based on the updated vanishing-point coordinate parameters.
- An image ranging system, comprising: a camera (100) for capturing images; an identification module (200) for identifying a vehicle target frame based on the image acquired by the camera; a judging module (300) for judging whether the vehicle target frame meets a fusion weight condition; and a ranging module (400) for selecting a ranging mode according to the judgment result of the fusion weight condition so as to obtain the distance in a world coordinate system between the camera and the vehicle corresponding to the vehicle target frame; wherein the ranging module (400) is configured to obtain that distance according to the landing point of the vehicle target frame in response to the vehicle target frame not meeting the fusion weight condition, and to calculate that distance by fusing a first ranging value and a second ranging value in response to the vehicle target frame meeting the fusion weight condition, the first ranging value being obtained based on the landing point of the vehicle target frame and the second ranging value being obtained based on the width and/or the height of the vehicle target frame.
- A readable storage medium, characterized in that, when instructions stored on the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the image ranging method of any one of claims 1 to 12.
- A vehicle comprising the image ranging system of claim 13 and/or the readable storage medium of claim 14.
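The weight formulas recited in claims 4 and 10 can be sketched in Python as follows. This is an illustrative reading of the claims, not reference code from the application; the function names are hypothetical, and the smoothing formula is read as a blend weighted by the scale factor s:

```python
def fuse_ranges(z1: float, z2: float, z0: float) -> float:
    """Fused distance per claim 4: the weights depend on the mean of the
    two ranging values relative to the set fusion-weight range value z0."""
    z = (z1 + z2) / 2.0       # average of first and second ranging values
    k1 = 1.0 - z / z0         # fusion weight of the first (landing-point) value
    k2 = z / z0               # fusion weight of the second (size-based) value
    return k1 * z1 + k2 * z2  # Zlast = k1*Z1 + k2*Z2


def smooth_prediction(zp_back: float, zp_cur: float, s: float) -> float:
    """Smoothed delayed prediction per claim 10: blend the prediction for
    the previous moment with the one for the system delay moment."""
    return zp_back * s + zp_cur * (1.0 - s)
```

Note how the claim-4 weights behave: for close targets (small mean Z relative to Z0) the landing-point value dominates, while for distant targets the weight shifts toward the size-based value, consistent with the landing point becoming less reliable at long range.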
Description
Readable storage medium, vehicle, image ranging method and system

Technical Field

The application belongs to the field of intelligent driving, and particularly relates to an image ranging method, an image ranging system, a readable storage medium and a vehicle.

Background

With the continuous upgrading and iteration of the software and hardware of automotive intelligent systems, intelligent car lamps with interaction functions have become common, and accurately measuring the position of a perceived target relative to the car lamp is critical to these interaction functions, for example the ADB shielding function of an LED headlight or the projection function of a DLP digital projection headlight; if accurate perception and measurement cannot be achieved, the intelligent headlight interacts poorly. At present, considering cost and practicality together, monocular measurement is the main method for ranging a perceived target: a picture captured by a monocular camera is used to identify the position of the target on the image with a perception algorithm, and the actual spatial coordinate position of the target is mapped according to the imaging principle of the camera. Monocular ranging relies primarily on geometric information of objects in the image for distance calculation. The common method is triangulation. As shown in fig. 1, A is the ego vehicle, B and C are the front vehicles, P is the camera, f is the focal length of the camera, H is the mounting height of the camera, Z1 and Z2 are the horizontal straight-line distances between the camera and the front vehicles B and C, and y1 and y2 are the projections on the image of the landing points of the detection frames; by similar triangles, y1 = f×H/Z1 and y2 = f×H/Z2, which rearrange to Z1 = f×H/y1 and Z2 = f×H/y2.
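The similar-triangles relation above can be sketched as a one-line computation; the function name and the example values are purely illustrative:

```python
def landing_point_range(f: float, H: float, y: float) -> float:
    """Monocular landing-point ranging by similar triangles:
    y = f * H / Z  =>  Z = f * H / y, where f is the focal length (in
    pixels), H the camera mounting height, and y the image-plane offset
    of the detection frame's landing point."""
    if y <= 0:
        raise ValueError("landing-point projection must be positive")
    return f * H / y
```

For example, with f = 1000 px, H = 1.5 m and y = 50 px this gives Z = 30 m. The formula also makes the background's concern concrete: because Z varies as 1/y, a small pixel error in y (e.g. from camera shake) produces a large distance error at long range.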
Therefore, the coordinates of the landing point of the detection frame corresponding to the object in the image must be known in order to calculate the actual spatial distance of the target. However, with the current distance detection method, bumps of the vehicle may cause camera shake, so that the imaging quality of the target on the camera is poor; the landing-point measurement result will then carry a large error, leading to wrong car light interaction.

Content of the application

The technical problem the application aims to solve is to provide an image ranging method, an image ranging system, a readable storage medium and a vehicle that can accurately measure a front vehicle target and ensure a reliable and accurate ranging result. To solve the above technical problem, a first aspect of the present application provides an image ranging method in which an image is acquired by a camera, the method comprising: identifying a vehicle target frame based on the image acquired by the camera; judging whether the vehicle target frame meets a fusion weight condition; in response to the vehicle target frame not meeting the fusion weight condition, obtaining the distance in a world coordinate system between the camera and the vehicle corresponding to the vehicle target frame according to the landing point of the vehicle target frame; and in response to the vehicle target frame meeting the fusion weight condition, performing a fusion calculation on a first ranging value and a second ranging value to obtain the distance in the world coordinate system between the camera and the vehicle corresponding to the vehicle target frame, wherein the first ranging value is obtained based on the landing point of the vehicle target frame, and the second ranging value is obtained based on the width and/or the height of the vehicle target frame.
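The first-aspect method above can be sketched as a small decision routine. The four callables are hypothetical stand-ins for the identification, judgment and ranging steps the text describes, injected here so the control flow is self-contained:

```python
def range_vehicle(frame, meets_fusion_condition, range_from_landing,
                  range_from_size, fuse):
    """First-aspect flow: use plain landing-point ranging unless the
    fusion weight condition is met, in which case fuse the landing-point
    and size-based ranging values."""
    z1 = range_from_landing(frame)       # first ranging value (landing point)
    if not meets_fusion_condition(frame):
        return z1                        # condition not met: landing point only
    z2 = range_from_size(frame)          # second ranging value (width/height)
    return fuse(z1, z2)                  # fused world-coordinate distance
```

In other words, fusion is only engaged when the landing-point estimate is suspect (e.g. the conditions of claim 2: poor tracking overlap, a large disagreement between the two values, or unstable landing-point history); otherwise the cheaper landing-point result is returned directly.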
In some embodiments, the fusion weight condition comprises one or more of the following: a first fusion weight condition, that the intersection-over-union ratio between a tracking frame obtained from the historical track information of the vehicle target frame and the vehicle target frame obtained at the current moment is smaller than a set threshold; a second fusion weight condition, that the difference between the first ranging value and the second ranging value exceeds a set threshold; and a third fusion weight condition, that the historical coordinate parameters of the landing point, obtained from the historical track information of the vehicle target frame, are unstable. In some embodiments, fusing the first ranging value and the second ranging value to calculate the distance in the world coordinate system between the camera and the vehicle corresponding to the vehicle target frame includes: based on the average value of the first ranging value and the second ranging value, respectively calculating the fusion weigh