
EP-4204930-B1 - AUTOMATIC POSITIONING OF HEAD-UP DISPLAY BASED ON GAZE TRACKING


Inventors

  • WIGGESHOFF, ELKE

Dates

Publication Date
2026-05-13
Application Date
2021-08-17

Claims (14)

  1. A method for placement of a head-up display, HUD, comprising: rendering a view of a virtual environment for display to a user; tracking (502) a gaze of the user as the user engages in interactivity with the view of the virtual environment, wherein tracking the gaze of the user generates gaze data, the gaze data identifying locations within the view that the gaze of the user is directed towards during the interactivity; using the gaze data to determine a preferred location (506) for positioning of a HUD in the view, which includes: analyzing the gaze data to identify a preferred gaze region within the view for the user; and determining the preferred location for positioning of the HUD in the view based on the identified preferred gaze region; and positioning the HUD in the view at the preferred location.
  2. The method of claim 1, wherein the method is for arranging elements of the head-up display, HUD, wherein the step of using the gaze data comprises using the gaze data to determine a ranked order of predefined locations in the view, the predefined locations configured for respective placement of elements of the HUD in the view, and the step of positioning the HUD comprises positioning the elements of the HUD, respectively, in the predefined locations according to the ranked order.
  3. The method of claim 2, wherein the positioning of the elements of the HUD is configured to place the elements of the HUD in order of importance and according to the ranked order of the predefined locations.
  4. The method of claim 1, wherein identifying the preferred gaze region of the user includes identifying an area of the view where an amount of the gaze of the user exceeds a predefined threshold.
  5. The method of claim 1, wherein determining the preferred location for positioning of the HUD is configured to be outside of the preferred gaze region of the user.
  6. The method of claim 1 or 2, wherein the step of using the gaze data includes determining a centroid of the tracked gaze of the user.
  7. The method of claim 1, wherein positioning the HUD in the view includes moving the HUD from an existing location to the preferred location.
  8. The method of claim 1, comprising: tracking, during the interactivity by the user with the view of the virtual environment, one or more features of the interactivity as the user engages in the interactivity, wherein tracking the one or more features of the interactivity generates feature data; wherein determining the preferred location for positioning of the HUD includes using the feature data.
  9. The method of claim 8, wherein using the feature data to determine the preferred location includes analyzing the feature data to identify patterns indicative of the preferred location for the positioning of the HUD.
  10. The method of claim 9, wherein analyzing the feature data is performed by a machine learning model.
  11. The method of claim 1 or 8, wherein determining the preferred location for positioning of the HUD includes selecting one of a plurality of predefined locations for the positioning of the HUD.
  12. The method of claim 8, wherein the features of the interactivity include one or more of gestures by the user, controller inputs, biometric inputs.
  13. The method of claim 1, 2 or 8, wherein the interactive application is a video game, and wherein the virtual environment is defined for interactive gameplay of the video game.
  14. The method of claim 13, wherein the HUD is configured to display information or statistics relating to the interactive gameplay of the video game.
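Claims 2 and 3 describe ranking predefined locations in the view from the gaze data and then assigning HUD elements to those locations in order of importance. The claims do not specify how gaze samples are attributed to locations or which end of the ranking the most important element receives, so the following sketch is illustrative only: the radius, the example locations, and the assumption that the most important element takes the top-ranked (least-gazed) slot are all invented for the example.

```python
from collections import Counter

def rank_locations(gaze_samples, predefined_locations, radius=100.0):
    """Rank predefined locations from least-gazed to most-gazed, counting a
    gaze sample against a location when it falls within `radius` of it.
    (An illustrative attribution rule; the claims leave this unspecified.)"""
    counts = Counter()
    for gx, gy in gaze_samples:
        for loc in predefined_locations:
            if (gx - loc[0]) ** 2 + (gy - loc[1]) ** 2 <= radius ** 2:
                counts[loc] += 1
    # Counter returns 0 for locations that received no gaze samples.
    return sorted(predefined_locations, key=lambda loc: counts[loc])

def place_elements(hud_elements_by_importance, ranked_locations):
    """Pair HUD elements, most important first, with locations in ranked
    order, so the most important element gets the top-ranked slot
    (assumed here to be the least-gazed location)."""
    return dict(zip(hud_elements_by_importance, ranked_locations))
```

For example, if the user's gaze clusters near one candidate location, that location sinks in the ranking and the most important HUD element is assigned elsewhere.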

Description

Field of the Disclosure

The present disclosure relates to systems and methods for automatic positioning of a head-up display based on gaze tracking of a user.

BACKGROUND

Description of the Related Art

An area of continued development in the gaming industry is that of multi-player gaming, which is capable of providing collective gaming experiences to players that are geographically remote from each other.

An expanding area of the gaming industry is that of sharing gameplay video and spectating gameplay. Users are now able to record and share their gameplay through websites, social media, etc. Furthermore, users may live-stream their gameplay, so that others can view their gameplay as it occurs in substantially real time.

Another current trend in the gaming industry is a move towards cloud gaming. Cloud gaming provides advantages to the end user by enabling remote execution of a video game in a data center where the resources for the video game can be guaranteed. The video generated by the remotely executed video game is streamed to the user's equipment, and inputs from the user are sent back to the data center. This frees the end user from the need to own specific hardware in order to execute the game itself. Rather, the end user need only possess sufficient hardware to stream the gameplay, and may still enjoy a high quality gaming experience. Furthermore, in theory, cloud gaming enables gaming from any location where network connectivity is available.

A continuing trend in the video game industry is the increased sophistication of graphics and the availability of computing resources to meet the demands of modern game engines. As video games evolve, their resolutions and frame rates continue to increase, enabling rendering of very realistic and detailed virtual environments. Additionally, the popularity of cloud gaming continues to grow, and the shift to cloud-executed video games enables even greater access to high quality gaming experiences.
Previously proposed arrangements are disclosed by EP 3 457 251 A1, CA 2 883 560 A1 and US 2020/134867 A1. It is within this context that embodiments of the disclosure arise.

SUMMARY OF THE DISCLOSURE

The invention is set out in the independent claims. Preferred embodiments are defined by the dependent claims.

In some implementations, a method for placement of a head-up display (HUD) is provided, including: rendering a view of a virtual environment for display to a user; tracking a gaze of the user as the user engages in interactivity with the view of the virtual environment, wherein tracking the gaze of the user generates gaze data, the gaze data identifying locations within the view that the gaze of the user is directed towards during the interactivity; using the gaze data to determine a preferred location for positioning of a HUD in the view; and positioning the HUD in the view at the preferred location.

In some implementations, using the gaze data to determine the preferred location includes analyzing the gaze data to identify a main gaze region of the user.

In some implementations, identifying the main gaze region of the user includes identifying an area of the view where an amount of the gaze of the user exceeds a predefined threshold.

In some implementations, the preferred location for positioning of the HUD is determined to be outside of the main gaze region of the user.

In some implementations, using the gaze data to determine a preferred location for positioning of the HUD includes determining a centroid of the tracked gaze of the user.

In some implementations, determining the preferred location for positioning of the HUD includes selecting one of a plurality of predefined locations for the positioning of the HUD.

In some implementations, positioning the HUD in the view includes moving the HUD from an existing location to the preferred location.
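The threshold-based idea above can be sketched roughly as follows. The grid resolution, the threshold value, and the candidate cells are assumptions made for this example; the disclosure does not specify them.

```python
from collections import Counter

GRID = 4          # divide the view into a GRID x GRID set of cells (assumed)
THRESHOLD = 0.15  # fraction of samples marking a cell as main gaze region (assumed)

def _cell(x, y, width, height):
    """Map a gaze point to its grid cell, clamping points on the far edge."""
    return (min(GRID - 1, int(x * GRID / width)),
            min(GRID - 1, int(y * GRID / height)))

def main_gaze_region(gaze_samples, width, height):
    """Cells where the amount of gaze exceeds the predefined threshold."""
    counts = Counter(_cell(x, y, width, height) for x, y in gaze_samples)
    total = len(gaze_samples)
    return {cell for cell, n in counts.items() if n / total > THRESHOLD}

def place_hud(gaze_samples, candidate_cells, width, height):
    """Choose a candidate cell outside the main gaze region; if every
    candidate lies inside it, fall back to the least-gazed candidate."""
    region = main_gaze_region(gaze_samples, width, height)
    counts = Counter(_cell(x, y, width, height) for x, y in gaze_samples)
    outside = [c for c in candidate_cells if c not in region]
    return min(outside or candidate_cells, key=lambda c: counts[c])
```

In this reading, the HUD migrates to a predefined location the user rarely looks at, keeping the main gaze region unobstructed.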
In some implementations, the interactive application is a video game, and the virtual environment is defined for interactive gameplay of the video game.

In some implementations, the HUD is configured to display information or statistics relating to the interactive gameplay of the video game.

In some implementations, a method for placement of a head-up display (HUD) is provided, including: rendering a view of a virtual environment for display to a user; during interactivity by the user with the view of the virtual environment, tracking one or more features of the interactivity as the user engages in the interactivity, wherein tracking the one or more features of the interactivity generates feature data; using the feature data to determine a preferred location for positioning of a HUD in the view; and positioning the HUD in the view at the preferred location.

In some implementations, using the feature data to determine the preferred location includes analyzing the feature data to identify patterns indicative of the preferred location for the positioning of the HUD.

In some implementations, analyzing the feature data is performed by a machine learning model.
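The disclosure delegates the actual pattern analysis of feature data to a machine learning model without specifying it. As a minimal non-ML stand-in, a frequency heuristic over tracked feature events might look like the following; the event names, region labels, and candidate list are invented purely for illustration.

```python
from collections import Counter

def preferred_location_from_features(feature_events, candidates):
    """Each feature event is a (event_type, screen_region) pair, e.g. a
    gesture, controller input, or biometric input (cf. claim 12) tagged
    with the region of the view it involved. Regions that are busy during
    interaction are avoided; the least-busy candidate wins."""
    busy = Counter(region for _, region in feature_events)
    # Counter returns 0 for regions with no recorded interaction events.
    return min(candidates, key=lambda region: busy[region])
```

A trained model would replace this heuristic with learned patterns, but the interface — feature data in, preferred HUD location out — stays the same.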