
US-20260127761-A1 - VEHICLE CONTROL APPARATUS AND VEHICLE CONTROL METHOD

US 20260127761 A1

Abstract

A vehicle control apparatus includes a first camera, a second camera, and a processor that recognizes first and second images captured by the cameras to assist driving of a vehicle. The processor is configured to: predict a movement of a reference point belonging to the first image captured at a first timing, based on behavior information of the vehicle, to determine an estimated position of the reference point at a second timing; determine a slope of a road surface around the vehicle based on a difference between the estimated position and a detected position of the reference point in the first image captured at the second timing; and, when the slope of the road surface is greater than or equal to a predetermined level, perform calibration with respect to a difference between first position coordinates of the reference point in the first image and second position coordinates of the reference point in the second image.
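The prediction step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes the reference point is expressed in 2D vehicle-frame ground coordinates, that the behavior information reduces to a constant yaw rate and speed over the interval, and that the function name and signature are invented for this example.

```python
import numpy as np

def predict_reference_point(p, yaw_rate, speed, dt):
    """Predict where a ground reference point (vehicle-frame coordinates,
    meters) should appear after dt seconds, given the vehicle's yaw rate
    (rad/s) and speed (m/s). A stationary point moves opposite to the
    vehicle: translate back by the distance traveled, then rotate by the
    negative of the vehicle's heading change."""
    theta = yaw_rate * dt                      # heading change of the vehicle
    R = np.array([[ np.cos(theta), np.sin(theta)],   # rotation by -theta
                  [-np.sin(theta), np.cos(theta)]])
    t = np.array([speed * dt, 0.0])            # vehicle advances along its x axis
    return R @ (np.asarray(p, dtype=float) - t)
```

A large gap between this estimate and the position actually detected at the second timing is what the apparatus treats as evidence of a sloped road surface.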

Inventors

  • Seok Beom Song
  • Deuk Hyeon Kim

Assignees

  • HYUNDAI MOTOR COMPANY
  • KIA CORPORATION

Dates

Publication Date
2026-05-07
Application Date
2025-05-21
Priority Date
2024-11-04

Claims (20)

  1. A vehicle control apparatus comprising: a first camera obtaining a first image in a range of a first field of view; a second camera obtaining a second image in a range of a second field of view; and a processor operatively connected to the first camera and the second camera and configured to recognize the first and second images to assist driving of a vehicle, wherein the processor is configured to: predict a movement of a reference point belonging to the first image captured at a first timing based on behavior information of the vehicle to determine an estimated position of the reference point at a second timing; determine a slope of a road surface around the vehicle based on a difference between the estimated position and a detected position of the reference point in the first image captured at the second timing; and based on that the slope of the road surface is greater than or equal to a predetermined level, perform calibration with respect to a difference between first position coordinates of the reference point in the first image and second position coordinates of the reference point in the second image.
  2. The vehicle control apparatus of claim 1, wherein the processor is further configured to: convert the first image into a top-view image; and determine specified coordinates in the top-view image as the reference point.
  3. The vehicle control apparatus of claim 1, wherein the behavior information of the vehicle includes a yaw rate of the vehicle and a speed of the vehicle.
  4. The vehicle control apparatus of claim 3, wherein the processor is further configured to determine the estimated position based on a rotation matrix based on the yaw rate and a motion transformation matrix based on the speed.
  5. The vehicle control apparatus of claim 1, wherein the processor is further configured to: count a number of errors in which a difference between the estimated position and the detected position of the reference point in the first image, obtained at the second timing, falls within a predetermined range; and determine the slope of the road surface based on that the number of errors exceeds a threshold count.
  6. The vehicle control apparatus of claim 5, wherein the processor is further configured to reduce the threshold count as a speed of the vehicle decreases.
  7. The vehicle control apparatus of claim 1, wherein the processor is further configured to determine a mismatch between the first position coordinates and the second position coordinates based on a difference between a height value of the first position coordinates and a height value of the second position coordinates.
  8. The vehicle control apparatus of claim 7, wherein the processor is further configured to: obtain first normalized coordinates based on the first position coordinates; obtain second normalized coordinates based on the second position coordinates; and determine the mismatch between the first position coordinates and the second position coordinates based on a height error between a first actual height of first world coordinates obtained by converting the first normalized coordinates and a second actual height of second world coordinates obtained by converting the second normalized coordinates.
  9. The vehicle control apparatus of claim 1, wherein the processor is further configured to determine calibrated coordinates of the reference point based on a weighted average by a distance between the first position coordinates and the second position coordinates.
  10. The vehicle control apparatus of claim 9, wherein the processor is further configured to determine a third axis coordinate of the calibrated coordinates based on a distance difference between the first position coordinates and the second position coordinates in a plane determined by a first axis and a second axis.
  11. A vehicle control method comprising: predicting, by a processor, a movement of a reference point belonging to a first image captured by a first camera operatively connected to the processor at a first timing based on behavior information of a vehicle to determine an estimated position of the reference point at a second timing; determining, by the processor, a slope of a road surface around the vehicle based on a difference between the estimated position and a detected position of the reference point in the first image captured at the second timing; and based on that the slope of the road surface is greater than or equal to a predetermined level, performing, by the processor, calibration with respect to a difference between first position coordinates of the reference point in the first image and second position coordinates of the reference point in a second image captured by a second camera operatively connected to the processor to generate calibrated coordinates.
  12. The vehicle control method of claim 11, wherein the determining of the estimated position of the reference point at the second timing includes: converting the first image into a top-view image; and determining specified coordinates in the top-view image as the reference point.
  13. The vehicle control method of claim 11, wherein the behavior information of the vehicle includes a yaw rate of the vehicle and a speed of the vehicle.
  14. The vehicle control method of claim 13, wherein the determining of the estimated position of the reference point at the second timing includes determining the estimated position based on a rotation matrix based on the yaw rate and a motion transformation matrix based on the speed.
  15. The vehicle control method of claim 11, wherein the determining of the slope of the road surface includes: counting a number of errors in which a difference between the estimated position and the detected position of the reference point in the first image, obtained at the second timing, falls within a predetermined range; and determining the slope of the road surface based on that the number of errors exceeds a threshold count.
  16. The vehicle control method of claim 15, wherein the counting of the number of errors includes reducing the threshold count as a speed of the vehicle decreases.
  17. The vehicle control method of claim 11, wherein the generating of the calibrated coordinates includes determining a mismatch between the first position coordinates and the second position coordinates based on a difference between a height value of the first position coordinates and a height value of the second position coordinates.
  18. The vehicle control method of claim 17, wherein the determining of the mismatch between the first position coordinates and the second position coordinates includes: obtaining first normalized coordinates based on the first position coordinates; obtaining second normalized coordinates based on the second position coordinates; and determining the mismatch between the first position coordinates and the second position coordinates based on a height error between a first actual height of first world coordinates obtained by converting the first normalized coordinates and a second actual height of second world coordinates obtained by converting the second normalized coordinates.
  19. The vehicle control method of claim 11, wherein the generating of the calibrated coordinates includes generating calibrated coordinates of the reference point based on a weighted average by a distance between the first position coordinates and the second position coordinates.
  20. The vehicle control method of claim 19, wherein the generating of the calibrated coordinates includes determining a third axis coordinate of the calibrated coordinates based on a distance difference between the first position coordinates and the second position coordinates in a plane determined by a first axis and a second axis.
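The slope test of claims 5-6 and the calibration of claims 9-10 can be sketched as follows. This is an illustrative reading of the claims, not the patented implementation: the error range, the threshold values, the speed scaling, the inverse-distance weighting, and the mapping from in-plane distance difference to the third-axis value are all assumptions made for the example.

```python
import numpy as np

def count_slope_errors(predicted, detected, err_range=(0.2, 2.0)):
    """Claim 5: count frames whose prediction-vs-detection distance falls
    inside a predetermined range (bounds here are illustrative, in meters)."""
    diffs = np.linalg.norm(np.asarray(predicted) - np.asarray(detected), axis=1)
    return int(np.sum((diffs >= err_range[0]) & (diffs <= err_range[1])))

def slope_threshold(speed, base=10, floor=3, ref_speed=20.0):
    """Claim 6: the threshold count is reduced as vehicle speed decreases
    (linear scaling with a floor; both values are assumed)."""
    return max(floor, int(base * min(speed, ref_speed) / ref_speed))

def calibrate_point(p1, p2, d1, d2):
    """Claims 9-10: blend the two cameras' in-plane coordinates with weights
    based on each camera's distance to the point (closer camera trusted
    more), and derive the third-axis coordinate from the distance difference
    between the two detections in the first-axis/second-axis plane."""
    w1, w2 = d2 / (d1 + d2), d1 / (d1 + d2)            # inverse-distance weights
    xy = w1 * np.asarray(p1[:2], float) + w2 * np.asarray(p2[:2], float)
    z = np.linalg.norm(np.asarray(p1[:2], float) - np.asarray(p2[:2], float))
    return np.array([xy[0], xy[1], z])
```

In this sketch, a road surface would be treated as sloped when `count_slope_errors(...)` over a window of frames exceeds `slope_threshold(speed)`, at which point `calibrate_point` reconciles the two cameras' coordinates for the reference point.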

Description

CROSS-REFERENCE TO RELATED APPLICATION

The present application claims priority to Korean Patent Application No. 10-2024-0154581, filed on Nov. 4, 2024, the entire contents of which are incorporated herein for all purposes by this reference.

BACKGROUND OF THE PRESENT DISCLOSURE

Field of the Present Disclosure

The present disclosure relates to a vehicle control apparatus and a method thereof, and to a technique for determining the slope of a road surface based on images obtained by a vehicle.

Description of Related Art

An autonomous vehicle refers to a vehicle capable of operating independently without any input from a driver or passenger. An autonomous driving system (Automated Vehicle & Highway Systems) refers to a system that monitors and controls such autonomous vehicles to enable their self-driving operation. Additionally, technologies have been proposed that monitor the exterior of the vehicle and operate various driving assistance systems based on the monitored external environment to assist the driver.

Autonomous vehicles or vehicles equipped with driving assistance systems may monitor their surroundings by performing artificial intelligence learning on images of the vehicle's surroundings. For example, a camera mounted on the vehicle may obtain images of the surroundings, and artificial intelligence learning is performed on the obtained images. The artificial intelligence network trained on the images may perform object detection, semantic segmentation, depth map estimation, lane detection, or the like, depending on the purpose.

To minimize blind spots when capturing images of the external environment, vehicles may utilize two or more cameras. Using two or more cameras results in overlapping areas where the fields of view of different cameras overlap.
Within an overlapping area, the same object needs to be matched to the same coordinates. However, when the road surface is sloped, the same object may be represented by different coordinates in different cameras. When the coordinate values obtained by different cameras disagree, the accuracy of image recognition is reduced, and errors may occur in autonomous driving or driving assistance systems that rely on image recognition.

The information included in this Background of the present disclosure is only for enhancement of understanding of the general background of the present disclosure and may not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.

BRIEF SUMMARY

Various aspects of the present disclosure are directed to providing a vehicle control apparatus and method capable of enhancing the accuracy of image recognition-based autonomous driving or driver assistance systems, and of improving the reliability of image recognition by more accurately matching images obtained from a plurality of cameras. The technical problems to be solved by the present disclosure are not limited to the aforementioned problems; any other technical problems not mentioned herein will be clearly understood from the following description by those skilled in the art to which the present disclosure pertains.

According to an aspect of the present disclosure, a vehicle control apparatus includes a first camera that obtains a first image in a range of a first field of view, a second camera that obtains a second image in a range of a second field of view, and a processor that recognizes the first and second images to assist driving of a vehicle.
The processor is configured to predict a movement of a reference point belonging to the first image captured at a first timing based on behavior information of the vehicle to determine an estimated position of the reference point at a second timing, determine a slope of a road surface around the vehicle based on a difference between the estimated position and a detected position of the reference point in the first image captured at the second timing, and, when the slope of the road surface is greater than or equal to a predetermined level, perform calibration with respect to a difference between first position coordinates of the reference point in the first image and second position coordinates of the reference point in the second image.

In an exemplary embodiment of the present disclosure, the processor may convert the first image into a top-view image, and determine specified coordinates in the top-view image as the reference point.

In an exemplary embodiment of the present disclosure, the behavior information of the vehicle may include a yaw rate of the vehicle and a speed of the vehicle.

In an exemplary embodiment of the present disclosure, the processor is configured to