
CN-121999185-A - AR-HUD picture output method, device and calibration method

CN121999185A

Abstract

The invention discloses an AR-HUD picture output method, an AR-HUD picture output device, and an AR-HUD picture calibration method. The output method comprises: obtaining the coordinate P_vehicle of a virtual object in the vehicle coordinate system; converting P_vehicle into the coordinate P_view in the observation coordinate system through an observation matrix, where the observation coordinate system is constructed in real time from the driver's eye position coordinate P_eye in the vehicle coordinate system; converting P_view into the homogeneous coordinate P_clip in clip space through a projection matrix constructed from fixed geometric parameters of the AR-HUD; converting P_clip into the normalized device coordinate P_NDC in the normalized device coordinate system; mapping P_NDC to rendering pixel coordinates; and driving the AR-HUD projector to output the AR-HUD picture according to the rendering pixel coordinates. By tracking the driver's eye position in real time and dynamically adjusting the rendering parameters of the AR-HUD picture, the invention keeps virtual information aligned with the real road scene, improving navigation accuracy and driving safety.
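As a non-limiting editorial sketch, the pipeline summarized in the abstract (vehicle coordinates → observation matrix → projection matrix → perspective division → viewport mapping) can be traced end to end in Python. All numeric values here, including the 1920×720 render resolution, the eye height, the field of view, and the OpenGL-style conventions (camera looking along -Z, symmetric frustum), are illustrative assumptions, not values from the patent.

```python
import math

def mat_vec(m, v):
    # 4x4 matrix (row-major list of rows) times a 4-vector.
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Observation (view) matrix: identity rotation, eye at (0, 0, 1.2) m -- assumed values.
view = [
    [1, 0, 0,  0.0],
    [0, 1, 0,  0.0],
    [0, 0, 1, -1.2],
    [0, 0, 0,  1.0],
]

# Symmetric perspective projection (assumed: 40 deg vertical FOV, 2:1 aspect, near 2 m, far 100 m).
fov, aspect, near, far = math.radians(40.0), 2.0, 2.0, 100.0
f = 1.0 / math.tan(fov / 2.0)
proj = [
    [f / aspect, 0, 0, 0],
    [0, f, 0, 0],
    [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
    [0, 0, -1, 0],
]

p_vehicle = [0.0, 0.0, -10.0, 1.0]            # virtual object 10 m ahead (-Z forward here)
p_view = mat_vec(view, p_vehicle)             # P_vehicle -> P_view (observation matrix)
p_clip = mat_vec(proj, p_view)                # P_view -> P_clip (projection matrix)
p_ndc = [c / p_clip[3] for c in p_clip[:3]]   # P_clip -> P_NDC (perspective division)
w, h = 1920, 720                              # assumed HUD render resolution
px = (p_ndc[0] + 1) / 2 * w                   # P_NDC -> rendering pixel coordinates
py = (1 - p_ndc[1]) / 2 * h
```

With these toy values, an object directly ahead of the eye lands at the center of the render target.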

Inventors

  • WANG ZHIQUAN
  • ZOU LIBAO
  • ZHANG YI
  • LIANG CHENCHENG

Assignees

  • Wuhan Jiangxia Chuneng Automotive Technology R&D Co., Ltd. (武汉江夏楚能汽车技术研发有限公司)

Dates

Publication Date
2026-05-08
Application Date
2026-01-14

Claims (10)

  1. An AR-HUD picture output method, characterized in that the output method comprises the steps of: acquiring a coordinate P_vehicle of a virtual object in a vehicle coordinate system; converting the coordinate P_vehicle into a coordinate P_view in an observation coordinate system through an observation matrix, wherein the observation coordinate system is constructed in real time from an eye position coordinate P_eye of a driver in the vehicle coordinate system; converting the coordinate P_view into a homogeneous coordinate P_clip in clip space through a projection matrix, wherein the projection matrix is constructed based on fixed geometric parameters of the AR-HUD; converting the homogeneous coordinate P_clip into a normalized device coordinate P_NDC in the normalized device coordinate system; mapping the normalized device coordinate P_NDC to rendering pixel coordinates; and driving the AR-HUD projector to output an AR-HUD picture according to the rendering pixel coordinates.
  2. The AR-HUD picture output method according to claim 1, wherein constructing the observation matrix comprises the steps of: taking the real-time eye position coordinate as the camera position; selecting an observation target point aligned with the heading ahead of the vehicle; taking the Z-axis direction of the vehicle coordinate system as the up vector; and calculating the observation matrix from the camera position, the observation target point, and the up vector.
  3. The AR-HUD picture output method according to claim 1, wherein acquiring the eye position coordinate P_eye comprises: acquiring eye data of the driver from a video stream of a binocular vision device; and calculating the eye position coordinate P_eye in the vehicle coordinate system from the eye data.
  4. The AR-HUD picture output method according to claim 3, wherein acquiring the eye position coordinate P_eye comprises: acquiring a synchronized video stream of the binocular vision device, the synchronized video stream comprising at least one frame of a left image and a right image; acquiring conjugate point pairs of the driver's eyes in the left image and the right image; obtaining a three-dimensional coordinate P_camera of the driver's eyes in the camera coordinate system from the conjugate point pairs and the calibration parameters of the binocular vision device; and converting the three-dimensional coordinate P_camera into the eye position coordinate P_eye in the vehicle coordinate system through a transformation matrix.
  5. The AR-HUD picture output method according to claim 4, wherein acquiring the conjugate point pairs comprises: acquiring at least one eye feature point of the driver in each of the left image and the right image; and obtaining, through keypoint matching, the eye feature points that are identical in the left image and the right image as the conjugate point pairs.
  6. The AR-HUD picture output method according to claim 4, wherein acquiring the three-dimensional coordinate P_camera comprises: calculating the three-dimensional coordinate P_camera in the camera coordinate system from the coordinates of the conjugate point pair, the disparity, the focal length of the binocular vision device, the baseline distance, and the principal point coordinates.
  7. The AR-HUD picture output method according to claim 1, wherein constructing the projection matrix comprises the steps of: acquiring the fixed geometric parameters of the HUD projection surface in the vehicle coordinate system; constructing a view frustum from the eye position coordinate P_eye and the fixed geometric parameters; and constructing the projection matrix from the view frustum.
  8. The AR-HUD picture output method according to claim 1, wherein converting the homogeneous coordinate P_clip into the normalized device coordinate P_NDC comprises: performing perspective division on the homogeneous coordinate P_clip to obtain the normalized device coordinate P_NDC.
  9. An AR-HUD picture output device, characterized in that the output device comprises an eye position module, a projection module, and a rendering module; the eye position module is configured to acquire a coordinate P_vehicle in a vehicle coordinate system; the projection module is configured to convert the coordinate P_vehicle into a coordinate P_view in an observation coordinate system through an observation matrix, wherein the observation coordinate system is constructed in real time from an eye position coordinate P_eye of a driver in the vehicle coordinate system, and to convert the coordinate P_view into a homogeneous coordinate P_clip in clip space through a projection matrix, wherein the projection matrix is constructed based on fixed geometric parameters of the AR-HUD; and the rendering module is configured to map the normalized device coordinate P_NDC to rendering pixel coordinates and to drive the AR-HUD projector to output an AR-HUD picture according to the rendering pixel coordinates.
  10. A method for calibrating an AR-HUD picture, characterized in that the calibration method comprises the steps of: dynamically acquiring an eye position coordinate P_eye of a driver in a vehicle coordinate system; updating an observation matrix according to the eye position coordinate P_eye; converting a coordinate P_vehicle of a virtual object in the vehicle coordinate system into a coordinate P_view in the observation coordinate system through the observation matrix; converting the coordinate P_view into a homogeneous coordinate P_clip in clip space through a projection matrix, wherein the projection matrix is constructed based on fixed geometric parameters of the AR-HUD; converting the homogeneous coordinate P_clip into a normalized device coordinate P_NDC in the normalized device coordinate system; mapping the normalized device coordinate P_NDC to rendering pixel coordinates; and driving the AR-HUD projector to output an AR-HUD picture according to the rendering pixel coordinates.
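Claim 7 builds the projection matrix from a view frustum spanned by the tracked eye position and the fixed HUD geometry. One common way to realize such an eye-dependent frustum is an asymmetric ("off-axis") perspective matrix in the OpenGL glFrustum convention. The sketch below is an assumption about how that construction could look, modeling the HUD virtual-image rectangle as an axis-aligned rectangle in a fixed plane in front of the eye; it is not taken from the patent.

```python
def offaxis_projection(eye, plane_z, x_min, x_max, y_min, y_max, near, far):
    """Asymmetric (off-axis) perspective matrix, glFrustum convention.

    eye: (x, y, z) eye position. The fixed HUD virtual-image rectangle lies in
    the plane z = plane_z (with plane_z < eye z), spanning
    [x_min, x_max] x [y_min, y_max]. All geometry is assumed, for illustration.
    """
    ex, ey, ez = eye
    dist = ez - plane_z                       # eye-to-plane distance (> 0)
    # Project the plane extents (relative to the eye) onto the near plane.
    l = (x_min - ex) * near / dist
    r = (x_max - ex) * near / dist
    b = (y_min - ey) * near / dist
    t = (y_max - ey) * near / dist
    return [
        [2 * near / (r - l), 0, (r + l) / (r - l), 0],
        [0, 2 * near / (t - b), (t + b) / (t - b), 0],
        [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0, 0, -1, 0],
    ]

# With the eye centered on the rectangle, the frustum degenerates to a symmetric one.
m = offaxis_projection((0.0, 0.0, 0.0), -5.0, -1.0, 1.0, -0.5, 0.5, 1.0, 100.0)
```

As the eye moves laterally, the off-center terms (row 0/1, column 2) become nonzero, which is exactly the per-frame dependence on P_eye that the claims describe.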

Description

AR-HUD picture output method, device and calibration method

Technical Field

The invention relates to the technical field of intelligent cockpits, and in particular to an AR-HUD picture output method, device, and calibration method based on driver gaze tracking.

Background

Augmented reality head-up display (AR-HUD) technology fuses virtual information (such as navigation arrows) with the real road scene to improve driving convenience and safety. However, conventional AR-HUD systems rely on a preset, fixed driver eye position (eyebox); when the driver's head moves and the eye position deviates, the virtual image can exhibit parallax drift and mislead the driver. Existing solutions, such as enlarging the eyebox or static calibration, suffer from large system volume, high cost, and inability to adapt to dynamic posture changes.

Disclosure of Invention

In view of the above, a first aspect of the invention discloses an AR-HUD picture output method. The output method comprises the steps of: acquiring a coordinate P_vehicle of a virtual object in a vehicle coordinate system; converting the coordinate P_vehicle into a coordinate P_view in an observation coordinate system through an observation matrix, wherein the observation coordinate system is constructed in real time from an eye position coordinate P_eye of the driver in the vehicle coordinate system; converting the coordinate P_view into a homogeneous coordinate P_clip in clip space through a projection matrix, wherein the projection matrix is constructed based on fixed geometric parameters of the AR-HUD; converting the homogeneous coordinate P_clip into a normalized device coordinate P_NDC in the normalized device coordinate system; mapping the normalized device coordinate P_NDC to rendering pixel coordinates; and driving the AR-HUD projector to output an AR-HUD picture according to the rendering pixel coordinates.
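The observation-matrix construction described in this disclosure (camera placed at the tracked eye position, a target point ahead along the heading, and the vehicle Z axis as the up vector) matches the classic look-at construction from computer graphics. A minimal sketch follows, assuming a right-handed OpenGL convention (camera looks along -Z in view space) and a vehicle frame with X forward and Z up; both conventions and all numbers are illustrative assumptions.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def look_at(eye, target, up):
    """Right-handed look-at view (observation) matrix, OpenGL convention."""
    fwd = normalize([t - e for t, e in zip(target, eye)])   # toward target point
    right = normalize(cross(fwd, up))                       # camera right axis
    cam_up = cross(right, fwd)                              # orthogonal camera up
    return [
        [right[0],  right[1],  right[2],  -dot(right, eye)],
        [cam_up[0], cam_up[1], cam_up[2], -dot(cam_up, eye)],
        [-fwd[0],   -fwd[1],   -fwd[2],    dot(fwd, eye)],
        [0, 0, 0, 1],
    ]

# Assumed vehicle frame: X forward, Z up; eye 1.2 m above the floor, target 19 m ahead.
p_eye = (1.0, 0.0, 1.2)
target = (20.0, 0.0, 1.2)
view = look_at(p_eye, target, (0.0, 0.0, 1.0))
```

Because `p_eye` changes every frame, the observation matrix is recomputed per frame, which is what keeps the rendered overlay aligned as the driver's head moves.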
In the present invention, constructing the observation matrix comprises the steps of: taking the real-time eye position coordinate as the camera position; selecting an observation target point aligned with the heading ahead of the vehicle; taking the Z-axis direction of the vehicle coordinate system as the up vector; and calculating the observation matrix from the camera position, the observation target point, and the up vector.

In the present invention, acquiring the eye position coordinate P_eye comprises: acquiring eye data of the driver from a video stream of a binocular vision device; and calculating the eye position coordinate P_eye in the vehicle coordinate system from the eye data.

In the present invention, acquiring the eye position coordinate P_eye comprises: acquiring a synchronized video stream of the binocular vision device, the synchronized video stream comprising at least one frame of a left image and a right image; acquiring conjugate point pairs of the driver's eyes in the left image and the right image; obtaining a three-dimensional coordinate P_camera of the driver's eyes in the camera coordinate system from the conjugate point pairs and the calibration parameters of the binocular vision device; and converting the three-dimensional coordinate P_camera into the eye position coordinate P_eye in the vehicle coordinate system through a transformation matrix.

In the present invention, acquiring the conjugate point pairs comprises: acquiring at least one eye feature point of the driver in each of the left image and the right image; and obtaining, through keypoint matching, the eye feature points that are identical in the left image and the right image as the conjugate point pairs.
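The binocular step above (conjugate point pair plus calibration parameters → P_camera) corresponds to standard rectified-stereo triangulation: Z = f·B/d, X = (u − cx)·Z/f, Y = (v − cy)·Z/f. The sketch below assumes the usual pinhole model with a rectified pair; the focal length, baseline, and principal point are made-up illustrative values, not calibration data from the patent.

```python
def triangulate(uv_left, uv_right, fx, baseline, cx, cy):
    """Recover a 3D point in the camera frame from a rectified conjugate pair.

    uv_left / uv_right: pixel coordinates of the same eye feature in the
    left and right images; fx: focal length in pixels; baseline: camera
    separation in meters; (cx, cy): principal point in pixels.
    """
    u_l, v_l = uv_left
    u_r, _ = uv_right
    d = u_l - u_r                      # disparity in pixels (left minus right)
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    z = fx * baseline / d              # depth from disparity
    x = (u_l - cx) * z / fx            # back-project through the pinhole model
    y = (v_l - cy) * z / fx
    return (x, y, z)

# Illustrative numbers only: 1000 px focal length, 10 cm baseline, principal point (640, 240).
p_camera = triangulate((660.0, 250.0), (640.0, 250.0),
                       fx=1000.0, baseline=0.10, cx=640.0, cy=240.0)
```

The resulting `p_camera` would then be mapped into the vehicle frame by the extrinsic transformation matrix mentioned in the text to yield P_eye.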
In the present invention, acquiring the three-dimensional coordinate P_camera comprises: calculating the three-dimensional coordinate P_camera in the camera coordinate system from the coordinates of the conjugate point pair, the disparity, the focal length of the binocular vision device, the baseline distance, and the principal point coordinates. In the present invention, constructing the projection matrix comprises the steps of: acquiring the fixed geometric parameters of the HUD projection surface in the vehicle coordinate system; constructing a view frustum from the eye position coordinate P_eye and the fixed geometric parameters; and constructing the projection matrix from the view frustum. In the present invention, converting the homogeneous coordinate P_clip into the normalized device coordinate P_NDC comprises: performing perspective division on the homogeneous coordinate P_clip to obtain the normalized device coordinate P_NDC. Further, a second aspect of the invention discloses an AR-HUD picture output device. The output device comprises an eye position module, a projection module, and a rendering module; the eye position module is configured to acquire a coordinate P