US-12625375-B1 - Dual screen extended reality headset device
Abstract
Methods and apparatuses are described for a dual screen extended reality (XR) headset device, including a first display screen viewable by a wearer of the headset device and a second display screen on an external surface of the headset device and viewable by other persons in proximity to the wearer. The headset device executes a software application to generate a virtual environment for display to the wearer via the first display screen. The headset device captures data associated with execution of the software application, data associated with the virtual environment being displayed to the wearer, and data associated with the wearer. The headset device displays a user interface based upon at least a portion of the captured data on the second display screen, and detects input associated with the user interface displayed on the second display screen. The headset device updates the virtual environment and access to software features based upon the detected input.
Inventors
- Hangyu Wang
- Yiming Li
- Sai Priya Jyothula
Assignees
- FMR LLC
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2025-07-28
Claims (20)
- 1. An extended reality (XR) headset device comprising: a first display screen embedded in the headset device and viewable by a wearer of the headset device; a second display screen integrated into an external surface of the headset device and viewable by one or more other persons in proximity to the wearer of the headset device; a memory that stores computer-executable instructions; and a processor that executes the computer-executable instructions to: execute a software application to generate a virtual environment and display the virtual environment to the wearer of the headset device via the first display screen; capture one or more of (i) data associated with execution of the software application, (ii) data associated with the virtual environment being displayed to the wearer, and (iii) data associated with the wearer of the headset device; generate a user interface based upon at least a portion of the captured data and display the user interface on the second display screen; detect input associated with the user interface displayed on the second display screen; and update one or more of (i) the virtual environment and (ii) access to one or more features of the software application based upon the detected input, wherein the first display screen and the second display screen are part of a single XR headset device.
- 2. The headset device of claim 1, wherein the data associated with execution of the software application comprises a frame rate associated with the software application, processor usage associated with the software application, memory usage associated with the software application, network bandwidth usage associated with the software application, a version number of the software application, a time remaining associated with the software application, an application mode of the software application, a user role associated with the software application, a narrative progress of the software application, a state of the software application, and an error message associated with the software application.
- 3. The headset device of claim 1, wherein data associated with the virtual environment being displayed to the wearer comprises indicia associated with one or more objects in the virtual environment, indicia associated with user position in the virtual environment, and indicia associated with user orientation in the virtual environment.
- 4. The headset device of claim 1, wherein data associated with the wearer of the headset device comprises input submitted by the wearer of the headset device to an interface of the headset device.
- 5. The headset device of claim 4, wherein the interface of the headset device comprises a microphone and the input submitted by the wearer comprises a spoken phrase.
- 6. The headset device of claim 4, wherein the interface of the headset device comprises a user interface element displayed on the first display screen and the input submitted by the wearer comprises an interaction with the user interface element.
- 7. The headset device of claim 1, wherein the user interface displayed on the second display screen comprises at least a portion of a screen capture of the virtual environment as displayed to the wearer of the headset device via the first display screen.
- 8. The headset device of claim 7, wherein the user interface displayed on the second display screen comprises a user interface element requesting input from the one or more other persons in proximity to the wearer of the headset device.
- 9. The headset device of claim 1, wherein the user interface displayed on the second display screen comprises status indicia associated with one or more of the software application, the virtual environment, and the wearer of the headset device.
- 10. The headset device of claim 1, wherein the input associated with the user interface displayed on the second display screen comprises one or more of a touch interaction and a scan interaction with the second display screen by either the wearer of the headset device or the one or more other persons in proximity to the wearer of the headset device.
- 11. The headset device of claim 10, wherein updating access to one or more features of the software application based upon the detected input comprises: authenticating the wearer of the headset device based upon the touch interaction or the scan interaction; and enabling access to the one or more features of the software application based upon the authentication of the wearer.
- 12. The headset device of claim 1, wherein updating access to one or more features of the software application based upon the detected input comprises modifying one or more objects in the virtual environment.
- 13. A computerized method, comprising: executing, by an extended reality (XR) headset device, a software application to generate a virtual environment and display the virtual environment via a first display screen embedded in the headset device and viewable by a wearer of the headset device; capturing, by the headset device, one or more of (i) data associated with execution of the software application, (ii) data associated with the virtual environment being displayed to the wearer, and (iii) data associated with the wearer of the headset device; generating, by a second display screen integrated into an exterior surface of the headset device and viewable by one or more other persons in proximity to the wearer of the headset device, a user interface based upon at least a portion of the captured data and displaying the user interface on the second display screen; detecting, by the headset device, input associated with the user interface displayed on the second display screen; and updating, by the headset device, one or more of (i) the virtual environment and (ii) access to one or more features of the software application based upon the detected input, wherein the first display screen and the second display screen are part of a single XR headset device.
- 14. The method of claim 13, wherein the data associated with execution of the software application comprises a frame rate associated with the software application, processor usage associated with the software application, memory usage associated with the software application, network bandwidth usage associated with the software application, a version number of the software application, a time remaining associated with the software application, an application mode of the software application, a user role associated with the software application, a narrative progress of the software application, a state of the software application, and an error message associated with the software application.
- 15. The method of claim 13, wherein data associated with the virtual environment being displayed to the wearer comprises indicia associated with one or more objects in the virtual environment, indicia associated with user position in the virtual environment, and indicia associated with user orientation in the virtual environment.
- 16. The method of claim 13, wherein data associated with the wearer of the headset device comprises input submitted by the wearer of the headset device to an interface of the headset device.
- 17. The method of claim 16, wherein the interface of the headset device comprises a microphone and the input submitted by the wearer comprises a spoken phrase.
- 18. The method of claim 16, wherein the interface of the headset device comprises a user interface element displayed on the first display screen and the input submitted by the wearer comprises an interaction with the user interface element.
- 19. The method of claim 13, wherein the user interface displayed on the second display screen comprises a screen capture of the virtual environment as displayed to the wearer of the headset device via the first display screen.
- 20. The method of claim 19, wherein the user interface displayed on the second display screen comprises a user interface element requesting input from the one or more other persons in proximity to the wearer of the headset device.
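The device loop recited in claim 1 (execute the application, capture data, build a second-screen user interface, detect external input, update the environment or feature access) can be sketched in outline. This is a minimal illustration only; every class, field, and event shape below is a hypothetical assumption, not the patent's implementation:

```python
from dataclasses import dataclass, field


@dataclass
class CapturedData:
    """The three capture categories enumerated in claim 1 (names assumed)."""
    app_metrics: dict = field(default_factory=dict)   # frame rate, state, etc. (claim 2)
    environment: dict = field(default_factory=dict)   # object/position indicia (claim 3)
    wearer_input: list = field(default_factory=list)  # spoken phrases, UI taps (claims 4-6)


class DualScreenHeadset:
    """Hypothetical controller coordinating the internal and external screens."""

    def __init__(self):
        self.enabled_features = set()

    def capture(self, app_state: dict) -> CapturedData:
        # Capture one or more of the data categories from claim 1.
        return CapturedData(
            app_metrics={"frame_rate": app_state.get("frame_rate"),
                         "state": app_state.get("state")},
            environment={"position": app_state.get("position")},
        )

    def external_ui(self, data: CapturedData) -> dict:
        # Build the second-screen interface from a portion of the captured
        # data, here as status indicia (claim 9) plus an input request (claim 8).
        return {"status": data.app_metrics, "request_input": True}

    def on_external_input(self, event: dict, app_state: dict) -> None:
        # Update feature access and/or the virtual environment (claims 10-12).
        if event.get("kind") in ("touch", "scan") and event.get("authenticated"):
            self.enabled_features.add(event.get("feature"))
        for obj, value in event.get("object_updates", {}).items():
            app_state["objects"][obj] = value
```

One pass of the loop would call `capture`, display the result of `external_ui` on the second screen, and route any touch or scan events into `on_external_input`.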
Description
TECHNICAL FIELD

This application relates generally to methods and apparatuses, including computer program products, for a dual screen extended reality (XR) headset device and methods of operating the same.

BACKGROUND

Generally, extended reality (XR) software applications provide an experience in which a user's real-world viewing perspective is replaced by or enhanced with a virtual 3D environment. In the context of this application, the term "extended reality" encompasses all different types of virtual experiences, including but not limited to virtual reality (VR), augmented reality (AR), mixed reality (MR), and others. A user wears a headset, glasses, or similar apparatus that includes specialized internal display devices to render the virtual environment to the user, and the headset can include certain components (e.g., gyroscope(s), accelerometer(s), magnetometer(s), etc.) that detect and capture the user's head movements in order to update the virtual environment in response to those movements in a seamless, real-time manner. Exemplary headsets include the Meta Quest™ 3S available from Meta Platforms, Inc., the Apple Vision Pro™ from Apple Inc., and the HTC Vive™ Pro 2 headset available from HTC Corp.

Most current-generation XR headset devices are limited to a single display screen that is visible only to the person wearing the headset. This configuration limits the ability of others in proximity to the wearer to interact with the headset and/or view the virtual environment being displayed to the wearer. For example, the wearer may encounter difficulties in operating the headset device, launching or interacting with a software application installed on the headset device, or navigating through the virtual environment. An assistant may be in proximity to the wearer in order to monitor the wearer and provide guidance and assistance as needed.
Because the assistant cannot see the internal display screen, the assistant is unable to precisely determine the wearer's current state in relation to the virtual environment, which reduces the assistant's capability to provide helpful input or troubleshooting. As a result, the wearer usually must remove the headset to get assistance, interrupting the user experience in the virtual environment. Some headset devices, such as the Apple Vision Pro™, incorporate external display features that enable a person who is not wearing the headset to see certain visual aspects on the front of the device (e.g., a representation of the wearer's eyes, a boot indicator when powering up, or setup guidance during persona configuration). However, these visual aspects are limited and do not provide any information about the virtual environment being displayed to the wearer or about the operation of other application software executing on the headset device for the purpose of assisting the wearer. In addition, these types of display features lack a mechanism for the other person to interact with the headset device and provide instructions or commands to modify the operation of the headset device, the virtual environment, and/or the application software executing on the headset.

In addition, some XR software application workflows can be cumbersome or undesirable on devices with a single internal display. As an example, typical authentication routines handled in XR software require a user to manually provide authentication credentials via a user interface displayed on the internal screen. Multi-factor authentication (MFA) routines are particularly problematic because they often require a user to operate a separate computing device to respond to a secondary authentication challenge. A user wearing an XR headset must remove the headset to complete the MFA process, which degrades the immersion and natural flow of the virtual experience.
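A dual-screen device in the spirit of claims 10-11 could instead surface the secondary challenge on the external screen for an assistant to complete via a touch or scan interaction. A minimal sketch, assuming an HMAC-derived confirmation code; the function names and the overall flow are illustrative assumptions, not the patent's protocol:

```python
import hashlib
import hmac


def challenge_code(secret: bytes, challenge: bytes) -> str:
    # Derive a short confirmation code to show on the external screen;
    # using an HMAC keeps the underlying secret off the display.
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()[:6]


def verify_external_response(secret: bytes, challenge: bytes, response: str) -> bool:
    # Compare the code captured via touch/scan on the second screen against
    # the expected value, using a constant-time comparison.
    return hmac.compare_digest(challenge_code(secret, challenge), response)
```

On a successful verification, the headset could enable the gated application features without the wearer ever removing the device.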
Existing solutions that attempt to overcome the above-described challenges may utilize screen casting techniques that replicate what the wearer sees on the headset device on a separate desktop or tablet computing device. However, this type of screen casting is typically not effective for aiding a headset wearer due to latency between the wearer's actions and experience and what is displayed on the secondary device. In addition, traditional screen casting systems do not provide the other person with a capability to interact with the headset device and provide instructions or commands to modify the operation of the headset device, the virtual environment, and/or the application software executing on the headset. Screen casting techniques also cannot be used effectively in an environment where a person is tasked with assisting multiple different headset wearers simultaneously.

SUMMARY

The methods and systems described herein overcome the above-described technical deficiencies using a dual screen XR headset device, which includes a second, externally-facing display screen that is either embedded in the headset device