CN-122018165-A - Augmented reality head-up display device and method

CN122018165A

Abstract

The invention provides an augmented reality head-up display device and method. The device comprises an acquisition module, a processing module, and a fusion module. The acquisition module acquires a road scene image, a driver face image, and parameters of an optical component. The processing module determines target scene information and driver eye position information from the road scene image and the driver face image, and generates an auxiliary information hologram corresponding to the target scene from the parameters of the optical component, the target scene information, and the driver eye position information. The fusion module fuses the auxiliary information hologram into the road scene through the optical component to perform augmented reality head-up display. The invention achieves accurate virtual-real fusion of auxiliary information with real road scenes at any depth, requires no mechanical zoom component, has a simple and compact system structure, and is suitable for an intelligent cockpit environment.

Inventors

  • CAO Liangcai
  • HE Zehao
  • GAO Yunhui
  • LUO Yingjun

Assignees

  • Tsinghua University (清华大学)
  • Capital Normal University (首都师范大学)

Dates

Publication Date
2026-05-12
Application Date
2026-02-05

Claims (10)

  1. An augmented reality head-up display device, comprising: an acquisition module for acquiring a road scene image, a driver face image, and parameters of an optical component; a processing module for determining target scene information and driver eye position information from the road scene image and the driver face image, and for generating an auxiliary information hologram corresponding to a target scene from the parameters of the optical component, the target scene information, and the driver eye position information; and a fusion module for fusing the auxiliary information hologram into the road scene through the optical component to perform augmented reality head-up display.
  2. The augmented reality head-up display device of claim 1, wherein the processing module comprises: a first information determining module for segmenting a target scene from the road scene image based on a monocular depth estimation algorithm and determining the target scene information, the target scene information comprising an intensity distribution of the target scene and three-dimensional coordinates of the target scene; a second information determining module for obtaining three-dimensional coordinates of the driver's eyes from the driver face image based on the monocular depth estimation algorithm; and a hologram generation module for generating, based on a hologram generation algorithm, auxiliary information describing the target scene from the intensity distribution of the target scene, the three-dimensional coordinates of the target scene, and the three-dimensional coordinates of the driver's eyes, and for generating the auxiliary information hologram from the auxiliary information, the three-dimensional coordinates of the target scene, the three-dimensional coordinates of the driver's eyes, and the parameters of the optical component.
  3. The augmented reality head-up display device of claim 2, wherein the first information determining module is configured to classify the road scene image into a far-view image, a mid-view image, and a near-view image according to hybrid features comprising color, edge, contour, vanishing line, and vanishing point; and to obtain the target scene information by performing depth estimation on the far-view image with a horizontal edge gradient method, on the mid-view image with a vanishing line gradient method, and on the near-view image with a weighted depth superposition method.
  4. The augmented reality head-up display device of claim 2, wherein the second information determining module is configured to input the driver face image into a pre-trained, depth-gradient-assisted monocular depth estimation network model and to obtain the three-dimensional coordinates of the driver's eyes output by the model; the monocular depth estimation network model is trained on depth estimation training samples and depth gradient auxiliary samples.
  5. The augmented reality head-up display device of any one of claims 2 to 4, wherein the hologram generation module is configured to: perform image feature recognition based on the intensity distribution and determine type features of the target scene; generate text or graphic auxiliary information containing a type identifier according to the type features, and generate numerical auxiliary information containing a distance parameter according to the depth components of the three-dimensional coordinates of the target scene and of the driver's eyes; and input the text or graphic auxiliary information, the numerical auxiliary information, the three-dimensional coordinates of the target scene, the three-dimensional coordinates of the driver's eyes, and the parameters of the optical component into a pre-trained, convolution-error-free self-encoding deep learning network, obtaining the auxiliary information hologram output by the network; the self-encoding deep learning network compensates for encoding-stage phase convolution errors in its decoding stage by means of phase unwrapping.
  6. The augmented reality head-up display device of claim 5, wherein the fusion module comprises: a light source module for generating collimated polarized illumination light; an optical modulation module arranged at the output end of the light source module, the input end of which is electrically connected to the output end of the hologram generation module, for loading the auxiliary information hologram, diffracting the polarized illumination light that irradiates the hologram, and projecting the diffracted waves to a preset position adjacent to the target scene; and a display module arranged at the diffracted-wave output end of the optical modulation module, for reflecting the diffracted light waves to the driver's eye region so as to fuse the auxiliary information hologram with the road scene; the parameters of the optical components of the light source module, the optical modulation module, and the display module are used to fine-tune the calculated parameters of the auxiliary information hologram.
  7. The augmented reality head-up display device of claim 6, wherein the light source module comprises: a laser light source for emitting divergent spherical waves; a collimating lens arranged at the output end of the laser light source, for converging the divergent spherical waves and outputting collimated plane waves; and a polarizer arranged at the collimated-plane-wave output end of the collimating lens, the output end of which serves as the output end of the light source module, for changing the polarization state of the collimated plane waves and outputting the collimated polarized illumination light.
  8. The augmented reality head-up display device of claim 6, wherein the optical modulation module is a spatial light modulator for loading a phase hologram corresponding to the auxiliary information, the phase hologram changing the surface phase distribution of the spatial light modulator.
  9. An augmented reality head-up display method, comprising: acquiring a road scene image, a driver face image, and parameters of an optical component; determining target scene information and driver eye position information from the road scene image and the driver face image, and generating an auxiliary information hologram corresponding to a target scene from the parameters of the optical component, the target scene information, and the driver eye position information; and fusing the auxiliary information hologram into the road scene through the optical component to perform augmented reality head-up display.
  10. The augmented reality head-up display method of claim 9, wherein determining the target scene information and the driver eye position information from the road scene image and the driver face image, and generating the auxiliary information hologram corresponding to the target scene from the parameters of the optical component, the target scene information, and the driver eye position information, comprises: segmenting a target scene from the road scene image based on a monocular depth estimation algorithm and determining the target scene information, the target scene information comprising an intensity distribution of the target scene and three-dimensional coordinates of the target scene; obtaining three-dimensional coordinates of the driver's eyes from the driver face image based on the monocular depth estimation algorithm; and generating, based on a hologram generation algorithm, auxiliary information describing the target scene from the intensity distribution of the target scene, the three-dimensional coordinates of the target scene, and the three-dimensional coordinates of the driver's eyes, and generating the auxiliary information hologram from the auxiliary information, the three-dimensional coordinates of the target scene, the three-dimensional coordinates of the driver's eyes, and the parameters of the optical component.
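Claims 5 and 8 describe computing a phase hologram that, once loaded on the spatial light modulator, reconstructs the auxiliary information at the depth of the target scene. As a rough illustration of that idea only (the patent itself uses a trained self-encoding network with convolution-error compensation, which is not reproduced here), the sketch below back-propagates a target amplitude to the SLM plane with the angular spectrum method and keeps only the phase; the function names and parameter values are hypothetical.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, distance):
    """Propagate a complex field by `distance` metres (angular spectrum method)."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=pitch)   # spatial frequencies along x
    fy = np.fft.fftfreq(n, d=pitch)   # spatial frequencies along y
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent components clamped
    transfer = np.exp(1j * kz * distance)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

def phase_hologram(target_intensity, wavelength, pitch, depth):
    """Back-propagate the target amplitude to the SLM plane; keep phase only."""
    amplitude = np.sqrt(target_intensity).astype(complex)
    slm_field = angular_spectrum_propagate(amplitude, wavelength, pitch, -depth)
    return np.angle(slm_field)  # phase-only hologram, values in [-pi, pi]

# Example: a 64x64 uniform target at 0.5 m, 532 nm laser, 8 um pixel pitch
holo = phase_hologram(np.ones((64, 64)), 532e-9, 8e-6, 0.5)
```

In a real system the resulting phase map would be quantized to the modulator's phase levels and refined iteratively; this sketch stops at the single back-propagation step.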

Description

Augmented reality head-up display device and method

Technical Field

The invention relates to the technical field of display, and in particular to an augmented reality head-up display device and method.

Background

Head-up display (HUD) technology projects driving information into the field of view in front of the driver, allowing the driver to acquire key driving information without lowering their head, which remarkably improves driving safety. Augmented reality (AR) head-up display technology goes further: by fusing virtual auxiliary information with the real road scene, it provides the driver with more intuitive and richer driving assistance. Existing augmented reality head-up display devices mainly use single- or dual-focal-plane projection, cover only a limited depth range, and have poor virtual-real fusion capability. To achieve multi-depth coverage, prior schemes generally add a mechanical zoom component or a high-precision optical compensation element, which increases the engineering integration difficulty of the device and reduces its long-term reliability. A few schemes based on oblique projection can construct a continuously variable focal plane, but generally only allow a navigation arrow to be attached to the ground; accurate virtual-real fusion with other real objects (such as roadside trees or vehicles ahead) is difficult to achieve.

Disclosure of the Invention

The invention provides an augmented reality head-up display device and method to overcome the defects of existing head-up display devices: a limited number of focal planes, inability to achieve virtual-real fusion with real scenes at different depths, a complex system structure, and low reliability.
The invention achieves accurate virtual-real fusion of auxiliary information with real road scenes at any depth, requires no mechanical zoom component, has a simple and compact system structure, and is suitable for an intelligent cockpit environment. The invention provides an augmented reality head-up display device comprising an acquisition module, a processing module, and a fusion module. The acquisition module acquires a road scene image, a driver face image, and parameters of an optical component. The processing module determines target scene information and driver eye position information from the road scene image and the driver face image, and generates an auxiliary information hologram corresponding to the target scene from the parameters of the optical component, the target scene information, and the driver eye position information. The fusion module fuses the auxiliary information hologram into the road scene through the optical component to perform augmented reality head-up display.
The processing module of the augmented reality head-up display device comprises a first information determining module, a second information determining module, and a hologram generation module. The first information determining module segments a target scene from the road scene image based on a monocular depth estimation algorithm and determines the target scene information, which comprises the intensity distribution of the target scene and the three-dimensional coordinates of the target scene. The second information determining module obtains the three-dimensional coordinates of the driver's eyes from the driver face image based on the monocular depth estimation algorithm. The hologram generation module generates, based on a hologram generation algorithm, auxiliary information describing the target scene from the intensity distribution of the target scene, the three-dimensional coordinates of the target scene, and the three-dimensional coordinates of the driver's eyes, and generates the auxiliary information hologram from the auxiliary information, the three-dimensional coordinates of the target scene, the three-dimensional coordinates of the driver's eyes, and the parameters of the optical component.

The first information determining module classifies the road scene image according to hybrid features comprising color, edge, contour, vanishing line, and vanishing point; the image categories comprise a far-view image, a mid-view image, and a near-view image. Depth estimation is then performed on the far-view image with a horizontal edge gradient method, on the mid-view image with a vanishing line gradient method, and on the near-view image with a weighted depth superposition method, thereby obtaining the target scene information.
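The far/mid/near split that drives the three depth estimators can be pictured with a toy sketch. The row-based thresholds and depth values below are hypothetical stand-ins, not the patent's horizontal-edge-gradient, vanishing-line-gradient, or weighted-superposition methods; the point is only the structure of classify-then-estimate-per-region.

```python
import numpy as np

def classify_region(row, height):
    """Crude far/mid/near split by image row (hypothetical thresholds):
    upper third ~ far view, middle third ~ mid view, lower third ~ near view."""
    if row < height // 3:
        return "far"
    if row < 2 * height // 3:
        return "mid"
    return "near"

def estimate_depth(image):
    """Assign each pixel a depth by region; a single linear ramp per region
    stands in for the three distinct estimation methods in the patent."""
    h, w = image.shape
    base = {"far": 100.0, "mid": 30.0, "near": 5.0}  # illustrative depths (m)
    depth = np.zeros((h, w))
    for r in range(h):
        depth[r, :] = base[classify_region(r, h)] * (1.0 - r / h) + 1.0
    return depth

# Example on a dummy 90x40 road image: upper rows map to larger depths
img = np.zeros((90, 40))
d = estimate_depth(img)
```

A real system would replace the row-based classifier with the hybrid color/edge/contour/vanishing-line features described above.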
According to the augmented reality head-up