EP-4736406-A1 - PASSTHROUGH VIEWING OF REAL-WORLD ENVIRONMENT FOR EXTENDED REALITY HEADSET TO SUPPORT USER SAFETY AND IMMERSION

EP 4736406 A1

Abstract

A method includes obtaining real-time video captured using one or more imaging sensors of an immersive headset. The method also includes processing the real-time video to identify one or more real-world objects and render the one or more real-world objects on at least one display of the immersive headset. The method further includes allowing a user of the immersive headset to select at least one of the one or more real-world objects or at least one spatial volume containing the at least one real-world object. The method still further includes displaying an extended reality view on the at least one display while overlaying, on the extended reality view, a representation of at least a portion of the at least one real-world object or the at least one spatial volume in one of multiple modes that each show the representation differently.
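The abstract describes a four-step pipeline: capture passthrough video, identify real-world objects, let the user select objects or spatial volumes, then overlay their representations on the XR view. A minimal sketch of that flow follows; the types and function names (`WorldObject`, `identify_objects`, `overlay_selection`) are invented for illustration and do not come from the patent.

```python
from dataclasses import dataclass
from typing import List

# Illustrative stand-ins for headset pipeline stages; these names are
# invented for this sketch and are not defined in the patent.
@dataclass
class WorldObject:
    label: str
    distance_m: float  # estimated distance from the headset

def identify_objects(frame) -> List[WorldObject]:
    # Placeholder for the vision step that detects real-world objects
    # in the real-time passthrough video.
    return [WorldObject("table", 1.2), WorldObject("doorway", 3.0)]

def overlay_selection(objects: List[WorldObject],
                      selected: List[str]) -> List[str]:
    # Keep only the user-selected objects (or spatial volumes) so their
    # representations can be drawn over the extended reality view.
    return [o.label for o in objects if o.label in selected]

frame = None  # stands in for one real-time video frame
detected = identify_objects(frame)
to_draw = overlay_selection(detected, selected=["table"])
```

In a real headset, `identify_objects` would run a detection or segmentation model on each frame, and the overlay step would render per the active display mode rather than return labels.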

Inventors

  • EVANGELISTA, Edgar Charles
  • HORII, Hiroshi
  • WAGHULDE, Ajinkya Kiran
  • JIANG, Yujie

Assignees

  • Samsung Electronics Co., Ltd.

Dates

Publication Date
2026-05-06
Application Date
2024-08-30

Claims (15)

  1. A method comprising: obtaining real-time video captured using one or more imaging sensors of an immersive headset; processing the real-time video to identify one or more real-world objects and render the one or more real-world objects on at least one display of the immersive headset; allowing a user of the immersive headset to select at least one of the one or more real-world objects or at least one spatial volume containing the at least one real-world object; and displaying an extended reality view on the at least one display while overlaying, on the extended reality view, a representation of at least a portion of the at least one real-world object or the at least one spatial volume in one of multiple modes that each show the representation differently.
  2. The method of Claim 1, wherein the multiple modes include: a safety mode in which, when one of the one or more real-world objects is within a first distance from the immersive headset, the representation includes an entirety of the one of the one or more real-world objects; a moderate mode in which, when the one of the one or more real-world objects is within a second distance from the immersive headset, the representation includes only an outline of the one of the one or more real-world objects, the first distance shorter than the second distance; and a balanced mode in which, when the one of the one or more real-world objects is within the second distance from the immersive headset, the representation includes the entirety of the one of the one or more real-world objects.
  3. The method of Claim 1, wherein the multiple modes include: a safety mode in which the representation includes an entirety of one of the one or more real-world objects; a moderate mode in which the representation includes only an outline of the one of the one or more real-world objects, wherein the one of the one or more real-world objects is within a specified distance from the immersive headset; and a balanced mode in which the representation includes the entirety of the one of the one or more real-world objects, wherein the one of the one or more real-world objects is within the specified distance from the immersive headset.
  4. The method of Claim 1, further comprising: providing a user interface that allows the user to select the one of the multiple modes.
  5. The method of Claim 4, wherein the user interface comprises a slider that allows the user to adjust a degree of displaying the at least one selected real-world object or the at least one selected spatial volume.
  6. The method of Claim 1, wherein processing the real-time video to identify the one or more real-world objects comprises at least one of: enabling the user to manually place at least one volume shape over at least one of the one or more real-world objects; suggesting at least one volume shape for at least one of the one or more real-world objects to the user; or automatically placing at least one volume shape over at least one of the one or more real-world objects.
  7. The method of Claim 1, wherein processing the real-time video to identify the one or more real-world objects comprises: providing a user interface that allows the user to modify at least one volume shape placed over at least one of the one or more real-world objects by at least one of translation, rotation, or scaling of the at least one volume shape.
  8. An electronic device comprising: one or more imaging sensors; at least one display; and at least one processing device configured to: obtain real-time video captured using the one or more imaging sensors; process the real-time video to identify one or more real-world objects; render the one or more real-world objects on the at least one display; allow a user to select at least one of the one or more real-world objects or at least one spatial volume containing the at least one real-world object; and initiate display of an extended reality view on the at least one display while overlaying, on the extended reality view, a representation of at least a portion of the at least one real-world object or the at least one spatial volume in one of multiple modes that each show the representation differently.
  9. The electronic device of Claim 8, wherein the multiple modes include: a safety mode in which, when one of the one or more real-world objects is within a first distance from the electronic device, the representation includes an entirety of the one of the one or more real-world objects; a moderate mode in which, when the one of the one or more real-world objects is within a second distance from the electronic device, the representation includes only an outline of the one of the one or more real-world objects, the first distance shorter than the second distance; and a balanced mode in which, when the one of the one or more real-world objects is within the second distance from the electronic device, the representation includes the entirety of the one of the one or more real-world objects.
  10. The electronic device of Claim 8, wherein the multiple modes include: a safety mode in which the representation includes an entirety of one of the one or more real-world objects; a moderate mode in which the representation includes only an outline of the one of the one or more real-world objects, wherein the one of the one or more real-world objects is within a specified distance from the electronic device; and a balanced mode in which the representation includes the entirety of the one of the one or more real-world objects, wherein the one of the one or more real-world objects is within the specified distance from the electronic device.
  11. The electronic device of Claim 8, wherein the at least one processing device is further configured to provide a user interface that allows the user to select the one of the multiple modes.
  12. The electronic device of Claim 11, wherein the user interface comprises a slider that allows the user to adjust a degree of displaying the at least one selected real-world object or the at least one selected spatial volume.
  13. The electronic device of Claim 8, wherein, to process the real-time video to identify the one or more real-world objects, the at least one processing device is configured to at least one of: enable the user to manually place at least one volume shape over at least one of the one or more real-world objects; suggest at least one volume shape for at least one of the one or more real-world objects to the user; or automatically place at least one volume shape over at least one of the one or more real-world objects.
  14. The electronic device of Claim 8, wherein, to process the real-time video to identify the one or more real-world objects, the at least one processing device is configured to provide a user interface that allows the user to modify at least one volume shape placed over at least one of the one or more real-world objects by at least one of translation, rotation, or scaling of the at least one volume shape.
  15. A non-transitory machine readable medium containing instructions that when executed cause at least one processor of an electronic device to: obtain real-time video captured using one or more imaging sensors; process the real-time video to identify one or more real-world objects; render the one or more real-world objects on at least one display; allow a user to select at least one of the one or more real-world objects or at least one spatial volume containing the at least one real-world object; and initiate display of an extended reality view on the at least one display while overlaying, on the extended reality view, a representation of at least a portion of the at least one real-world object or the at least one spatial volume in one of multiple modes that each show the representation differently.
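Claims 2 and 9 distinguish the three modes by two distance thresholds, with the first distance shorter than the second. The decision logic can be sketched as below; the threshold values are invented for illustration, since the claims specify only their ordering.

```python
from typing import Optional

def representation(mode: str, distance_m: float,
                   first_m: float = 1.0,
                   second_m: float = 2.5) -> Optional[str]:
    """Return how a detected object is represented under each mode,
    following the distance-gated form of Claims 2 and 9. The threshold
    values here are illustrative, not from the patent."""
    assert first_m < second_m  # the claims require this ordering
    if mode == "safety":
        # Entirety of the object, but only within the shorter distance.
        return "entirety" if distance_m <= first_m else None
    if mode == "moderate":
        # Outline only, out to the longer distance.
        return "outline" if distance_m <= second_m else None
    if mode == "balanced":
        # Entirety of the object, out to the longer distance.
        return "entirety" if distance_m <= second_m else None
    raise ValueError(f"unknown mode: {mode}")
```

Note that Claims 3 and 10 state a variant in which the safety mode shows the entirety of the object without a distance condition; the sketch above follows the distance-gated wording of Claims 2 and 9.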

Description

PASSTHROUGH VIEWING OF REAL-WORLD ENVIRONMENT FOR EXTENDED REALITY HEADSET TO SUPPORT USER SAFETY AND IMMERSION

This disclosure relates generally to extended reality (XR) systems and processes. More specifically, this disclosure relates to passthrough viewing of a real-world environment for an XR headset to support user safety and immersion.

Virtual reality (VR) and other extended reality (XR) technologies have seen rapid advancements in recent years, in some cases allowing users to be immersed in fully-digital or computer-simulated environments. While these environments can be richly detailed and engaging, users are often completely isolated from the real world around them when wearing XR headsets. This isolation can lead to potential safety issues, especially if the user needs to move in a physical space or if there are obstacles or other individuals nearby.

For a more complete understanding of this disclosure and its advantages, reference is now made to the following description taken in conjunction with the accompanying drawings, in which like reference numerals represent like parts:

FIG. 1 illustrates an example network configuration providing a passthrough view of obstacles in a surrounding real-world environment for an immersive headset user in accordance with this disclosure;
FIGS. 2a through 2c illustrate, using images representing a real-world environment for an immersive headset user, an example provision of a passthrough view of the surrounding real-world environment in accordance with this disclosure;
FIGS. 3a through 3c illustrate, using images representing a real-world environment for an immersive headset user, an example object registration for provision of a passthrough view of the surrounding real-world environment in accordance with this disclosure;
FIGS. 4a through 4c illustrate, using images representing an immersive reality overlay on a real-world environment, an example manual placement of a volume on an object in a real-world environment in accordance with this disclosure;
FIGS. 5a through 5c illustrate an example shape modification during manual placement of a spatial volume (FIG. 5a) and an example modification of a suggested spatial volume (FIG. 5b) overlaid on a real-world object in accordance with this disclosure;
FIGS. 6a through 6c illustrate an example shape recognition for a real-world object in accordance with this disclosure;
FIG. 7 illustrates example training and deployment of a machine learning model for shape recognition of a real-world object in accordance with this disclosure;
FIG. 8 illustrates example spatial volumes for provision of a passthrough view of objects in a real-world environment following object registration in accordance with this disclosure;
FIGS. 9a and 9b illustrate, using images representing a real-world environment for an immersive headset user, example spatial volume activations for provision of a passthrough view of the surrounding real-world environment in accordance with this disclosure;
FIGS. 10a through 10c illustrate an example toggling of spatial volume activation for a passthrough view of objects in the surrounding real-world environment in accordance with this disclosure;
FIGS. 11a through 11c illustrate, using images representing an immersive environment, an example tracking of real-world objects for provision of a passthrough view in accordance with this disclosure;
FIG. 12 illustrates example immersion safety modes for a passthrough view of real-world objects in an immersive environment in accordance with this disclosure;
FIGS. 13a and 13b illustrate, using images representing an immersive environment, an example passthrough view of obstacles in the real-world environment surrounding an immersive headset user while in a safe immersion safety mode in accordance with this disclosure;
FIGS. 14a and 14b illustrate, using images representing an immersive environment, an example passthrough view of obstacles in the real-world environment surrounding an immersive headset user while in a balanced immersion safety mode in accordance with this disclosure;
FIGS. 15a and 15b illustrate, using images representing an immersive environment, an example passthrough view of obstacles in the real-world environment surrounding an immersive headset user while in a moderate immersion safety mode in accordance with this disclosure;
FIG. 16 illustrates an example process for providing a passthrough view of obstacles in the surrounding real-world environment for an immersive headset user in accordance with this disclosure; and
FIG. 17 illustrates another example process for providing a passthrough view of obstacles in the surrounding real-world environment for an immersive headset user in accordance with this disclosure.

FIGS. 1 through 17, discussed below, and the various embodiments of this disclosure are described with reference to the accompanying drawings. However, it should be appreciated that this disclosure is not limited to these embodim