US-12620183-B2 - Moving media in extended reality
Abstract
A method of a system of one or more electronic devices supports an extended reality application at a user device. The method includes receiving location and pose information of the user device related to an extended reality environment, determining at least one dynamic content unit relevant to the location and pose information, determining a range of motion of the at least one dynamic content unit, determining semantic information for the location and pose of the user device, generating a semantic map from the semantic information, applying at least one access control to the semantic map to prevent display of dynamic content on the dynamic content unit at a location in the semantic map, querying a dynamic content manager for dynamic content to be displayed as an extended reality overlay, and returning the dynamic content to the user device.
Inventors
- Paul McLachlan
- Héctor Caltenco
Assignees
- TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
Dates
- Publication Date: 2026-05-05
- Application Date: 2021-03-11
Claims (18)
- 1. A method of a system of one or more electronic devices to support an extended reality application at a user device, the method comprising: receiving, from the user device, location and pose information of the user device related to an extended reality environment of the extended reality application; determining at least one dynamic content unit, in the extended reality environment, relevant to the location and pose information of the user device, wherein the at least one dynamic content unit is defined in three dimensions, and can be projected or rendered in extended reality in two or three dimensions based on perspective and distance relative to the user device; determining a range of motion of the at least one dynamic content unit, wherein the at least one dynamic content unit moves through the extended reality environment; determining semantic information for the location and pose of the user device; generating a semantic map from the semantic information; determining at least one access control that applies to the semantic map; applying the at least one access control to the semantic map to prevent display of dynamic content on the at least one dynamic content unit at a location in the semantic map; querying a dynamic content manager for dynamic content to be displayed on the at least one dynamic content unit; and returning the dynamic content to the user device.
- 2. The method of claim 1, further comprising: receiving dynamic environment information from the user device; determining semantic information from the dynamic environment information; and generating the semantic map from the semantic information including the semantic information from the dynamic environment information.
- 3. The method of claim 1, wherein the at least one access control prevents display of dynamic content based on semantic information from static environment information.
- 4. The method of claim 1, wherein the at least one access control prevents display of dynamic content based on semantic information from dynamic environment information.
- 5. The method of claim 1, further comprising: monitoring dynamic environment information including location and position information of the user device and location information of an extended reality overlay; and applying at least one access control to the dynamic content based on dynamic environment information.
- 6. The method of claim 5, wherein applying the at least one access control prevents display of the dynamic content over one or more static or dynamic objects in the semantic map that belong to one or more restricted classes of objects.
- 7. A system of one or more electronic devices to support an extended reality application at a user device, the system comprising: a non-transitory machine-readable medium having stored therein dynamic content unit services; and one or more processors coupled to the non-transitory machine-readable medium, the one or more processors to execute the dynamic content unit services, the dynamic content unit services to receive, from the user device, location and pose information of the user device related to an extended reality environment of the extended reality application, determine at least one dynamic content unit, in the extended reality environment, relevant to the location and pose information of the user device, wherein the at least one dynamic content unit is defined in three dimensions, and can be projected or rendered in extended reality in two or three dimensions based on perspective and distance relative to the user device, determine a range of motion of the at least one dynamic content unit, wherein the at least one dynamic content unit moves through the extended reality environment, determine semantic information for the location and pose of the user device, generate a semantic map from the semantic information, determine at least one access control that applies to the semantic map, apply the at least one access control to the semantic map to prevent display of dynamic content on the at least one dynamic content unit at a location in the semantic map, query a dynamic content manager for dynamic content to be displayed as an extended reality overlay, and return the dynamic content to the user device.
- 8. The system of one or more electronic devices to support an extended reality application at a user device of claim 7, wherein the dynamic content unit services are further to receive dynamic environment information from the user device, determine semantic information from the dynamic environment information, and generate the semantic map from the semantic information including the semantic information from the dynamic environment information.
- 9. The system of one or more electronic devices to support an extended reality application at a user device of claim 7, wherein the at least one access control prevents display of dynamic content based on semantic information from static environment information.
- 10. The system of one or more electronic devices to support an extended reality application at a user device of claim 7, wherein the at least one access control prevents display of dynamic content based on semantic information from dynamic environment information.
- 11. The system of one or more electronic devices to support an extended reality application at a user device of claim 7, wherein the dynamic content unit services are further to monitor dynamic environment information including location and position information of the user device and location information of the extended reality overlay, and apply at least one access control to the dynamic content based on dynamic environment information.
- 12. The system of one or more electronic devices to support an extended reality application at a user device of claim 11, wherein applying the at least one access control prevents display of the dynamic content over one or more static or dynamic objects in the semantic map that belong to one or more restricted classes of objects.
- 13. A non-transitory machine-readable medium having stored therein a set of instructions, which when executed by an electronic device cause the electronic device to perform a set of operations, the set of operations comprising: receiving, from a user device, location and pose information of the user device related to an extended reality environment of an extended reality application; determining at least one dynamic content unit, in the extended reality environment, relevant to the location and pose information of the user device, wherein the at least one dynamic content unit is defined in three dimensions, and can be projected or rendered in extended reality in two or three dimensions based on perspective and distance relative to the user device; determining a range of motion of the at least one dynamic content unit, wherein the at least one dynamic content unit moves through the extended reality environment; determining semantic information for the location and pose of the user device; generating a semantic map from the semantic information; determining at least one access control that applies to the semantic map; applying the at least one access control to the semantic map to prevent display of dynamic content on the at least one dynamic content unit at a location in the semantic map; querying a dynamic content manager for dynamic content to be displayed as an extended reality overlay; and returning the dynamic content to the user device.
- 14. The non-transitory machine-readable medium of claim 13, having further instructions stored therein, which when executed by the electronic device cause the electronic device to perform further operations comprising: receiving dynamic environment information from the user device; determining semantic information from the dynamic environment information; and generating the semantic map from the semantic information including the semantic information from the dynamic environment information.
- 15. The non-transitory machine-readable medium of claim 13, wherein the at least one access control prevents display of dynamic content based on semantic information from static environment information.
- 16. The non-transitory machine-readable medium of claim 13, wherein the at least one access control prevents display of dynamic content based on semantic information from dynamic environment information.
- 17. The non-transitory machine-readable medium of claim 13, having further instructions stored therein, which when executed by the electronic device cause the electronic device to perform further operations comprising: monitoring dynamic environment information including location and position information of the user device and location information of the extended reality overlay; and applying at least one access control to the dynamic content based on dynamic environment information.
- 18. The non-transitory machine-readable medium of claim 17, wherein applying the at least one access control prevents display of the dynamic content over one or more static or dynamic objects in the semantic map that belong to one or more restricted classes of objects.
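The flow recited in independent claims 1, 7, and 13 (select dynamic content units near the user, consult a semantic map, apply access controls, then query the content manager) can be sketched in a few lines of Python. Everything below is a hypothetical illustration: the type names, the 10-meter relevance radius, and the restricted object classes are assumptions for this sketch, not anything prescribed by the claims.

```python
import math
from dataclasses import dataclass

# Hypothetical types for illustration; the claims do not prescribe
# concrete data structures, distances, or class names.

@dataclass
class DynamicContentUnit:
    unit_id: str
    x: float
    y: float
    z: float
    range_of_motion: float  # radius the moving unit may sweep

@dataclass
class SemanticObject:
    label: str  # semantic class of an object in the semantic map
    x: float
    y: float
    z: float

# Assumed examples of restricted object classes (cf. claims 6, 12, 18).
RESTRICTED_CLASSES = {"person", "traffic_sign"}

def relevant_units(user_pos, units, max_dist=10.0):
    """Determine content units relevant to the user's location (pose handling omitted)."""
    return [u for u in units if math.dist(user_pos, (u.x, u.y, u.z)) <= max_dist]

def apply_access_controls(units, semantic_map):
    """Suppress a unit whose range of motion overlaps a restricted object,
    so no dynamic content is displayed at that location in the semantic map."""
    allowed = []
    for u in units:
        blocked = any(
            obj.label in RESTRICTED_CLASSES
            and math.dist((u.x, u.y, u.z), (obj.x, obj.y, obj.z)) <= u.range_of_motion
            for obj in semantic_map
        )
        if not blocked:
            allowed.append(u)
    return allowed

# A unit whose sweep reaches a detected person is filtered out; a clear unit passes.
units = [
    DynamicContentUnit("billboard", 2.0, 0.0, 0.0, range_of_motion=3.0),
    DynamicContentUnit("drone_ad", 5.0, 0.0, 0.0, range_of_motion=1.0),
]
semantic_map = [SemanticObject("person", 3.0, 0.0, 0.0)]
shown = apply_access_controls(relevant_units((0.0, 0.0, 0.0), units), semantic_map)
```

Here the "billboard" unit is suppressed because its 3-meter range of motion sweeps over a detected person, while the "drone_ad" unit clears the check; a real system would additionally track pose, perspective, and the unit's motion over time, and would then query the dynamic content manager only for the allowed units.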
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a National Stage of International Application No. PCT/IB2021/052054, filed Mar. 11, 2021, which is hereby incorporated by reference.

TECHNICAL FIELD

Embodiments of the invention relate to the field of extended reality; and more specifically, to support for moving dynamic content in extended reality and the restriction of dynamic content in extended reality.

BACKGROUND ART

Augmented reality (AR) augments the real world and the physical objects in the real world by overlaying virtual content. This virtual content is often produced digitally and may incorporate sound, graphics, and video. For example, a shopper wearing augmented reality glasses while shopping in a supermarket might see nutritional information for each object as they place it in their shopping cart. The glasses augment reality with information.

Virtual reality (VR) uses digital technology to create an entirely simulated environment. Unlike AR, which augments reality, VR immerses users inside an entirely simulated experience. In a fully VR experience, all visuals and sounds are produced digitally and do not include input from the user's actual physical environment. For example, VR may be integrated into manufacturing, where trainees practice building machinery in virtual reality before starting on the real production line.

Mixed reality (MR) combines elements of both AR and VR. In the same vein as AR, MR environments overlay digital effects on top of the user's physical environment. MR also integrates additional, richer information about the user's physical environment such as depth, dimensionality, and surface textures. In MR environments, the end-user experience more closely resembles the real world. As an example, consider two users hitting an MR tennis ball on a real-world tennis court.
MR incorporates information about the hardness of the surface (grass versus clay), the direction and force with which the racket struck the ball, and the players' heights. Augmented reality and mixed reality are often used to refer to the same idea. As used herein, “augmented reality” also refers to mixed reality.

Extended reality (XR) is an umbrella term referring to all real-and-virtual combined environments, such as AR, VR, and MR. XR refers to a wide variety and vast number of levels in the reality-virtuality continuum of the perceived environment, consolidating AR, VR, MR, and other types of environments (e.g., augmented virtuality, mediated reality, etc.) under one term.

An XR user device is the device used as an interface for the user to perceive virtual and/or real content in the context of extended reality. Example embodiments of XR user devices are described herein in reference to FIGS. 5, 6, and 8-10 as devices (501, 810A-C, 900, and 1100A-F). An XR user device typically has a display that may be opaque, displaying both the environment (real or virtual) and virtual content together (i.e., video see-through), or that overlays virtual content through a semi-transparent display (optical see-through). The XR user device may acquire information about the environment through the use of sensors (typically cameras and inertial sensors) to map the environment while simultaneously tracking the device's location within the environment.

Object recognition in extended reality is mostly used to detect real-world objects and to trigger the display of digital content. For example, a consumer could look at a fashion magazine with augmented reality glasses and a video of a catwalk event would play instantly. Sound, smell, and touch are also considered objects subject to object recognition. For example, a diaper advertisement could be displayed when the sound or mood of a crying baby is detected. The mood could be deduced from machine learning applied to the sound data.
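The recognition-triggered display described above reduces to a lookup from detected labels to overlay assets. The labels, file names, and `select_overlays` helper below are invented for this sketch; the recognition model that produces the labels (a camera or audio classifier) is out of scope here.

```python
# Hypothetical mapping from detection labels to overlay assets; a real
# system would receive the labels from a visual/audio recognition model.
TRIGGERS = {
    "fashion_magazine": "catwalk_video.mp4",  # visual trigger
    "crying_baby_sound": "diaper_ad.mp4",     # audio trigger
}

def select_overlays(detections):
    """Return the overlay assets triggered by recognized objects, sounds, or moods."""
    return [TRIGGERS[label] for label in detections if label in TRIGGERS]

# "coffee_cup" has no registered trigger, so only the catwalk video is selected.
overlays = select_overlays(["fashion_magazine", "coffee_cup"])
```

The same table-driven shape extends naturally to mood labels deduced by machine learning, as in the crying-baby example.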
SUMMARY

In one embodiment, a method of a system of one or more electronic devices supports an extended reality application at a user device. The method includes receiving, from the user device, location and pose information of the user device related to an extended reality environment of the extended reality application; determining at least one dynamic content unit, in the extended reality environment, relevant to the location and pose information of the user device; determining a range of motion of the at least one dynamic content unit, wherein the at least one dynamic content unit moves through the extended reality environment; determining semantic information for the location and pose of the user device; generating a semantic map from the semantic information; determining at least one access control that applies to the semantic map; applying the at least one access control to the semantic map to prevent display of dynamic content on the dynamic content unit at a location in the semantic map; querying a dynamic content manager for dynamic content to be displayed as an extended reality overlay; and returning the dynamic content to the user device. In another