CN-117475393-B - Lane line identification processing method and device and vehicle terminal

CN117475393B

Abstract

The application provides a lane line identification processing method and device and a vehicle terminal, relating to vehicle driving technology. The method comprises: scaling an original image to obtain a reduced first image, and inputting the first image into a preset resize model for lane line detection to obtain a first detection image; cropping the original image to obtain a cropped second image, and inputting the second image into a preset crop model for lane line detection to obtain a second detection image; and performing fusion recognition processing on the first detection image and the second detection image to obtain a target detection result for the lane lines to be identified. The method greatly improves the detection accuracy of far-end lane lines and solves the technical problem of low far-end lane line detection accuracy.

Inventors

  • Zhai Jindong
  • Lin Ming

Assignees

  • 国汽智控(北京)科技有限公司

Dates

Publication Date
2026-05-12
Application Date
2023-10-31

Claims (9)

  1. A lane line identification processing method, characterized by comprising the following steps: collecting an original image in front of a road, wherein the original image comprises a plurality of lane lines to be identified; scaling the original image to obtain a reduced first image, and inputting the first image into a preset resize model for lane line detection to obtain a first detection image; cropping the original image to obtain a cropped second image, and inputting the second image into a preset crop model for lane line detection to obtain a second detection image, wherein the frame to be cut used in the cropping is determined on the principle of ensuring that a lane line has a preset pixel width in the second image, wherein the preset resize model and the preset crop model are deep learning networks designed in a backbone+head form, the backbone adopts ERFNet, the head fixedly outputs 6 lane lines of different types, and the two models differ in how their input images are obtained; restoring the first detection image and the second detection image to the size of the original image; fusing the size-restored first detection image and the size-restored second detection image to obtain a fused image; performing fusion recognition processing on the plurality of lane lines to be identified in the fused image; and, if a lane line in the size-restored second detection image is confirmed to be the same lane line as a lane line in the size-restored first detection image, replacing the lane line in the size-restored first detection image with the lane line in the size-restored second detection image, so as to obtain a target detection result for the lane lines to be identified.
  2. The method according to claim 1, wherein the collecting the original image in front of the road comprises: acquiring the original image in front of the road by a camera preset at the vehicle terminal.
  3. The method according to claim 2, wherein the cropping the original image to obtain a cropped second image comprises: determining, according to a preset formula, the distance between the camera and a target object in the original image, wherein the target object is a lane line having the preset pixel width; determining the lower edge coordinates of the frame to be cut according to the distance; determining the coordinates of the frame to be cut according to the lower edge coordinates and the preset length and width of the frame to be cut; and cropping the original image according to the coordinates of the frame to be cut to obtain the cropped second image.
  4. The method according to claim 3, wherein the preset formula comprises: Δu = f·ΔX / Z, wherein Δu represents the lateral difference between two pixels in the second image, the two pixels being points on the two sides of a lane line respectively, and the lateral difference being the preset pixel width; f represents the focal length in a preset camera intrinsic matrix; ΔX represents the preset width of a lane line in the camera coordinate system; and Z represents the distance, in the camera coordinate system, between the camera and the target object in the original image.
  5. The method according to claim 3, wherein the determining the coordinates of the frame to be cut according to the lower edge coordinates and the preset length and width of the frame to be cut comprises: in a pixel coordinate system, taking the coincident center point of the second image and the frame to be cut as a reference, determining the coordinates of the frame to be cut according to the lower edge coordinates and the preset length and width of the frame to be cut.
  6. The method according to claim 1, wherein the performing fusion recognition processing on the plurality of lane lines to be identified in the fused image comprises: for each lane line corresponding to the first detection image in the fused image, determining first coordinates of that lane line; acquiring the partial first coordinates located inside and/or outside the frame to be cut; fitting a quadratic curve equation to the partial first coordinates; for each lane line corresponding to the second detection image in the fused image, determining second coordinates of that lane line; determining the partial second coordinates at the same positions as the partial first coordinates; and, if the distance between the partial second coordinates and the quadratic curve equation is smaller than a preset threshold, determining that the lane line in the size-restored second detection image and the lane line in the size-restored first detection image are the same lane line.
  7. A lane line recognition processing apparatus, comprising: an acquisition unit, configured to collect an original image in front of a road, wherein the original image comprises a plurality of lane lines to be identified; a scaling unit, configured to scale the original image to obtain a reduced first image; a first detection unit, configured to input the first image into a preset resize model for lane line detection to obtain a first detection image; a cropping unit, configured to crop the original image to obtain a cropped second image, wherein the frame to be cut used in the cropping is determined on the principle of ensuring that a lane line has a preset pixel width in the second image; a second detection unit, configured to input the second image into a preset crop model for lane line detection to obtain a second detection image, wherein the preset resize model and the preset crop model are deep learning networks designed in a backbone+head form, the backbone adopts ERFNet, the head fixedly outputs 6 lane lines of different types, and the two models differ in how their input images are obtained; and an identification unit, configured to perform fusion recognition processing on the first detection image and the second detection image to obtain a target detection result for the lane lines to be identified; wherein the identification unit comprises: a restoration module, configured to restore the first detection image and the second detection image to the size of the original image; a fusion module, configured to fuse the size-restored first detection image and the size-restored second detection image to obtain a fused image; a recognition module, configured to perform fusion recognition processing on the plurality of lane lines to be identified in the fused image; and a replacing module, configured to, if a lane line in the size-restored second detection image is the same lane line as a lane line in the size-restored first detection image, replace the lane line in the size-restored first detection image with the lane line in the size-restored second detection image, so as to obtain the target detection result for the lane lines to be identified.
  8. A vehicle terminal, comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor implements the method of any one of claims 1-6 when executing the computer program.
  9. A computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the method of any one of claims 1-6.
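The fusion recognition of claim 6 can be sketched in code. Below is a minimal Python sketch, assuming lane lines are given as arrays of (u, v) pixel coordinates already restored to original-image scale; the function name, the mean-distance test, and the default threshold are illustrative assumptions, not specified by the patent:

```python
import numpy as np

def same_lane_line(first_pts, second_pts, crop_box, threshold=5.0):
    """Decide whether a lane line from the resize branch (first_pts) and one
    from the crop branch (second_pts) are the same physical lane line,
    following the matching rule of claim 6.

    first_pts, second_pts: (N, 2) arrays of (u, v) pixel coordinates,
    both already restored to original-image scale.
    crop_box: (u_min, v_min, u_max, v_max) of the frame that was cropped.
    """
    first_pts = np.asarray(first_pts, dtype=float)
    second_pts = np.asarray(second_pts, dtype=float)

    # Keep the first-branch points that fall inside the frame to be cut.
    u_min, v_min, u_max, v_max = crop_box
    inside = ((first_pts[:, 0] >= u_min) & (first_pts[:, 0] <= u_max) &
              (first_pts[:, 1] >= v_min) & (first_pts[:, 1] <= v_max))
    part_first = first_pts[inside]
    if len(part_first) < 3:
        return False  # not enough points to fit a quadratic

    # Fit u = a*v^2 + b*v + c through the selected points
    # (claim 6's "quadratic curve equation").
    a, b, c = np.polyfit(part_first[:, 1], part_first[:, 0], 2)

    # Measure how far the second-branch points lie from the fitted curve;
    # using the mean horizontal residual is an assumption of this sketch.
    pred_u = a * second_pts[:, 1] ** 2 + b * second_pts[:, 1] + c
    mean_dist = np.mean(np.abs(second_pts[:, 0] - pred_u))
    return bool(mean_dist < threshold)
```

When the function returns True, the replacing step of claims 1 and 7 would substitute the crop-branch line for the resize-branch line in the fused result.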

Description

Lane line identification processing method and device and vehicle terminal

Technical Field

The present application relates to vehicle driving technologies, and in particular to a lane line recognition processing method and apparatus, and a vehicle terminal.

Background

At present, the accuracy of lane line detection, and particularly of far-end lane line detection, is especially important in an automatic driving system. It plays an important role in achieving the safety, stability, and practicality of automated driving, and directly influences the vehicle's navigation, positioning, and interaction with the surrounding environment. It is therefore necessary to ensure the accuracy of lane line detection. In the prior art, a model is generally used to detect lane lines and output their point-set information. However, the closer a lane line is to the vanishing point in the image domain, the fewer pixels it occupies, and since the original image must be resized down to the model's input size before inference, the already sparse far-end features become even sparser. This seriously limits the expressive capacity of the model, so the detection accuracy of far-end points is far worse than that of near-end points. A method that can improve the detection accuracy of far-end lane lines is therefore needed.

Disclosure of Invention

The application provides a lane line identification processing method and device and a vehicle terminal, which are used to solve the technical problem of low detection accuracy of far-end lane lines.
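The pixel-budget problem described in the background can be illustrated with simple arithmetic; the image and model resolutions below are illustrative, not taken from the patent:

```python
# A distant lane line that is 6 px wide in a 1920-px-wide original image
# shrinks proportionally when the image is resized to a typical network
# input width such as 640 px (illustrative numbers).
orig_w, model_w = 1920, 640
lane_px_original = 6
lane_px_resized = lane_px_original * model_w / orig_w
print(lane_px_resized)  # 2.0 -- too few pixels for reliable detection
```

This is exactly the loss the crop branch avoids: the second image is cut from the original at full resolution, so the far-end lane line keeps its preset pixel width.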
In a first aspect, the present application provides a lane line recognition processing method, comprising: collecting an original image in front of a road, wherein the original image comprises a plurality of lane lines to be identified; scaling the original image to obtain a reduced first image, and inputting the first image into a preset resize model for lane line detection to obtain a first detection image; cropping the original image to obtain a cropped second image, and inputting the second image into a preset crop model for lane line detection to obtain a second detection image; and performing fusion recognition processing on the first detection image and the second detection image to obtain a target detection result for the lane lines to be identified. Further, the collecting the original image in front of the road comprises: acquiring the original image in front of the road by a camera preset at the vehicle terminal. Further, the cropping the original image to obtain a cropped second image comprises: determining, according to a preset formula, the distance between the camera and a target object in the original image, wherein the target object is a lane line having a preset pixel width; determining the lower edge coordinates of the frame to be cut according to the distance; determining the coordinates of the frame to be cut according to the lower edge coordinates and the preset length and width of the frame to be cut; and cropping the original image according to the coordinates of the frame to be cut to obtain the cropped second image.
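The crop-box determination above can be sketched as follows, taking the preset formula to be the pinhole relation Δu = f·ΔX / Z described in the text. The flat-ground projection used to place the lower edge, and all numeric parameters, are assumptions of this sketch; the patent does not give that projection:

```python
def crop_box(f_px, lane_width_m, target_px, cam_height_m,
             cx, cy, box_w, box_h):
    """Determine the frame to be cut so that a lane line at the chosen
    distance still spans target_px pixels in the cropped second image.

    f_px: focal length in pixels (from the camera intrinsic matrix)
    lane_width_m: preset physical lane-line width (camera coordinates)
    target_px: preset pixel width the line must keep in the second image
    cam_height_m, cx, cy: flat-ground / principal-point assumptions used
    here to place the box (not specified in the patent)
    box_w, box_h: preset length and width of the frame to be cut
    """
    # Distance at which the lane line projects to target_px pixels,
    # rearranged from the pinhole relation  delta_u = f * delta_X / Z.
    Z = f_px * lane_width_m / target_px

    # Lower edge of the box: project the ground point at distance Z
    # (simple flat-ground model, an assumption of this sketch).
    v_lower = cy + f_px * cam_height_m / Z

    # Box resting on v_lower and centered horizontally, approximating
    # claim 5's coincident-center-point construction.
    u0 = cx - box_w / 2
    v0 = v_lower - box_h
    return (u0, v0, u0 + box_w, v_lower)
```

For example, with f = 1000 px, a 0.15 m lane-line width, and a 5 px target width, the box is placed for a lane line about Z = 30 m away.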
Further, the preset formula comprises: Δu = f·ΔX / Z, wherein Δu represents the lateral difference between two pixels in the second image, the two pixels being points on the two sides of the lane line, and the lateral difference being the preset pixel width; f represents the focal length in a preset camera intrinsic matrix; ΔX represents the preset width of the lane line in the camera coordinate system; and Z represents the distance, in the camera coordinate system, between the camera and the target object in the original image. Further, the determining the coordinates of the frame to be cut according to the lower edge coordinates and the preset length and width of the frame to be cut comprises: in a pixel coordinate system, taking the coincident center point of the second image and the frame to be cut as a reference, determining the coordinates of the frame to be cut according to the lower edge coordinates and the preset length and width of the frame to be cut. Further, the performing fusion recognition processing on the first detection image and the second detection image to obtain a target detection result for the lane lines to be identified comprises: restoring the first detection image and the second detection image to the size of the original image; fusing the size-restored first detection image and the size-restored second detection image to obtain a fused image; performing fusion recognition processing on the plurality of lane lines to be identified in the fused image; and, if a lane line in the size-restored second detection image is the same lane line as a lane line in the size-restored first detection image, replacing the lane line in the size-restored first detection image with the lane line in the size-restored second detection image, so as to obtain the target detection result for the lane lines to be identified.