US-12627952-B2 - Methods and apparatus for supporting collaborative extended reality (XR)
Abstract
A method performed by an anchor wireless transmit/receive unit (WTRU) having an Extended Reality (XR) application includes transmitting to a base station (BS) a request for a candidate set of collaborative WTRUs, the request including pose information to identify the candidate set of collaborative WTRUs, receiving from the BS a first set of collaborative WTRUs, determining, by the anchor WTRU, a second set of WTRUs from the first set of WTRUs that align with at least one of a Field of View (FoV) requirement, the pose information, and XR application parameters of the anchor WTRU, and transmitting, to the BS, an indication of the second set of WTRUs as selected collaborative WTRUs.
Inventors
- Jaya Rao
- Anthony Laurent
- Tejaswinee Lutchoomun
- Senay Negusse
- Ghyslain Pelletier
- Janet Stern-Berkowitz
- Benoit Pelletier
- Caroline Baillard
- Vincent Alleaume
- Nicolas Mollet
Assignees
- INTERDIGITAL PATENT HOLDINGS, INC.
Dates
- Publication Date: 2026-05-12
- Application Date: 2022-11-02
Claims (20)
- 1 . A method performed by an anchor wireless transmit/receive unit (WTRU) having an Extended Reality (XR) application, the method comprising: transmitting, to a base station (BS), a request for a candidate set of collaborative WTRUs, the request including location parameters to identify the candidate set of collaborative WTRUs; receiving, from the BS, a first set of collaborative WTRUs associated with the location parameters; selecting, by the anchor WTRU, a second set of WTRUs from the first set of WTRUs that are compatible with Field of View (FoV), pose information, and the XR application of the anchor WTRU; and transmitting, to the BS, an indication of the second set of WTRUs as selected collaborative WTRUs.
- 2 . The method of claim 1 , further comprising: transmitting, to the selected collaborative WTRUs, an indication of FoV dimensions used in conjunction with the XR application of the anchor WTRU.
- 3 . The method of claim 1 , wherein receiving, from the BS, the first set of collaborative WTRUs further comprises receiving a first set of WTRUs that align with pose information of the anchor WTRU.
- 4 . The method of claim 1 , wherein selecting, by the anchor WTRU, the second set of WTRUs is preceded by: transmitting, to one or more WTRUs of the first set of WTRUs, a request to provide FoV information to the anchor WTRU; and receiving FoV information from the one or more WTRUs of the first set of WTRUs.
- 5 . The method of claim 4 , wherein receiving FoV information comprises receiving the FoV information via sidelink from a responding WTRU.
- 6 . The method of claim 4 , wherein transmitting, to the one or more WTRUs of the first set of WTRUs, a request to provide FoV information to the anchor WTRU further comprises transmitting a requirement of (i) FoV parameters expected to be captured during acquisition of sensing data and (ii) latency reporting of the sensing data in an uplink communication to the selected collaborative WTRUs.
- 7 . The method of claim 1 , wherein transmitting to the BS the request for the candidate set of collaborative WTRUs further comprises transmitting pose information of the anchor WTRU, and at least one of a minimum distance and orientation requirement relative to the anchor WTRU.
- 8 . The method of claim 1 , wherein receiving, from the BS, the first set of collaborative WTRUs comprises receiving a set of WTRU sidelink identifiers and associated location parameters of respective WTRUs in the first set of collaborative WTRUs.
- 9 . The method of claim 1 , wherein transmitting, to the BS, an indication of the second set of WTRUs comprises transmitting sidelink identifiers of the second set of WTRUs.
- 10 . An anchor wireless transmit/receive unit (WTRU) having an Extended Reality (XR) application, the anchor WTRU comprising circuitry, including a transmitter, a receiver, a processor, and memory, the anchor WTRU configured to: transmit, to a base station (BS), a request for a candidate set of collaborative WTRUs, the request including location parameters to identify the candidate set of collaborative WTRUs; receive, from the BS, a first set of collaborative WTRUs associated with the location parameters; select, by the anchor WTRU, a second set of WTRUs from the first set of WTRUs that are compatible with Field of View (FoV), pose information, and XR application parameters of the anchor WTRU; and transmit, to the BS, an indication of the second set of WTRUs as selected collaborative WTRUs.
- 11 . The anchor WTRU of claim 10 , further configured to transmit, to the selected collaborative WTRUs, an indication of FoV dimensions used in conjunction with the XR application of the anchor WTRU.
- 12 . The anchor WTRU of claim 10 , further configured to receive, from the BS, the first set of collaborative WTRUs that align with the pose information.
- 13 . The anchor WTRU of claim 10 , further configured to, before determining a second set of WTRUs: transmit, to one or more of the WTRUs of the first set of WTRUs, a request to provide FoV information to the anchor WTRU; and receive FoV information from the one or more of the WTRUs of the first set of WTRUs.
- 14 . The anchor WTRU of claim 13 , configured to receive FoV information via sidelink from a responding WTRU.
- 15 . The anchor WTRU of claim 13 , configured to transmit, to the one or more WTRUs of the first set of WTRUs, a request to provide FoV information to the anchor WTRU by transmitting a requirement of (i) FoV parameters expected to be captured during acquisition of sensing data and (ii) latency reporting of the sensing data in an uplink communication to the selected collaborative WTRUs.
- 16 . The anchor WTRU of claim 10 , configured to transmit the request for the candidate set of collaborative WTRUs by transmitting pose information of the anchor WTRU and at least one of a minimum distance and orientation requirement relative to the anchor WTRU.
- 17 . The anchor WTRU of claim 10 , configured to receive the first set of collaborative WTRUs by receiving a set of WTRU sidelink identifiers and associated location parameters of respective WTRUs in the first set of collaborative WTRUs.
- 18 . The anchor WTRU of claim 10 , configured to transmit, to the BS, an indication of the selected collaborative WTRUs by transmitting sidelink identifiers of the selected WTRUs to the BS.
- 19 . A non-transient computer-readable storage medium comprising instructions which when executed by a computer cause the computer to carry out the method comprising: transmitting, to a base station (BS), a request for a candidate set of collaborative WTRUs, the request including location parameters to identify the candidate set of collaborative WTRUs; receiving, from the BS, a first set of collaborative WTRUs associated with the location parameters; selecting, by the anchor WTRU, a second set of WTRUs from the first set of WTRUs that are compatible with Field of View (FoV), pose information, and an Extended Reality (XR) application of the anchor WTRU; and transmitting, to the BS, an indication of the second set of WTRUs as selected collaborative WTRUs.
- 20 . The non-transient computer-readable storage medium of claim 19 , wherein transmitting to the BS the request for the candidate set of collaborative WTRUs further comprises transmitting pose information of the anchor WTRU, and at least one of a minimum distance and orientation requirement relative to the anchor WTRU.
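Claims 1, 7, and 8 describe the anchor WTRU filtering a BS-provided first set of candidate WTRUs down to a second set using distance and orientation requirements relative to its own pose and the FoV needs of its XR application. The sketch below is a minimal, non-authoritative illustration of that filtering step; the function and field names, the flat 2-D geometry, and the specific thresholds are assumptions for illustration only and are not part of the claimed method.

```python
from dataclasses import dataclass
import math

@dataclass
class Candidate:
    """One entry of the BS-provided first set (claim 8): a sidelink
    identifier plus location parameters. Fields are illustrative."""
    sidelink_id: str
    x: float
    y: float
    heading_deg: float   # reported device orientation
    fov_deg: float       # reported field-of-view width

def select_collaborative_wtrus(candidates, anchor_x, anchor_y,
                               anchor_heading_deg, min_distance_m,
                               max_heading_offset_deg, required_fov_deg):
    """Reduce the first set to a second set of sidelink identifiers
    (claim 9) that meet the anchor's distance, orientation, and FoV
    requirements (claims 1 and 7). Thresholds are assumed inputs."""
    selected = []
    for c in candidates:
        # Minimum-distance requirement relative to the anchor (claim 7).
        dist = math.hypot(c.x - anchor_x, c.y - anchor_y)
        if dist < min_distance_m:
            continue  # too close to add useful viewpoint diversity
        # Orientation requirement: smallest signed angle to the anchor's heading.
        offset = abs((c.heading_deg - anchor_heading_deg + 180) % 360 - 180)
        if offset > max_heading_offset_deg:
            continue  # facing too far from the anchor's view direction
        # FoV compatibility with the anchor's XR application (claim 1).
        if c.fov_deg < required_fov_deg:
            continue  # cannot cover the FoV the XR application needs
        selected.append(c.sidelink_id)
    return selected
```

As a usage example, with the anchor at the origin facing heading 0°, a minimum distance of 2 m, a 45° heading tolerance, and a 60° FoV requirement, only a candidate satisfying all three tests survives into the second set that would then be indicated to the BS.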
Description
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a U.S. National Stage Application under 35 U.S.C. § 371 of International Patent Application No. PCT/US2022/048687, filed 2 Nov. 2022, which is incorporated herein by reference in its entirety. This application claims the benefit of U.S. Provisional Patent Application Nos. 63/275,316 filed on 3 Nov. 2021, 63/326,631 filed on 1 Apr. 2022, and 63/395,176 filed on 4 Aug. 2022, each of which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

The descriptions to follow include methods and apparatus for supporting collaborative eXtended Reality (XR), in which multiple devices contribute to an immersive Virtual Reality (VR) experience.

BACKGROUND

The term eXtended Reality (XR) is an umbrella term for different types of immersive experiences including Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), and the realities interpolated among them. Virtual Reality (VR) is a rendered version of a delivered visual and audio scene. The rendering is designed to mimic the visual, stereoscopic (3D), and audio sensory stimuli of the real world as naturally as possible to an observer or user as they move within the limits defined by the application. Augmented Reality (AR) is when a user is provided with additional information or artificially generated items or content overlaid upon their current environment. Mixed Reality (MR) is an advanced form of AR in which some virtual elements are inserted into the physical scene with the intent to provide the illusion that these elements are part of the real scene. XR may include all real-and-virtual combined environments and human-machine interactions generated by computer technology and wearables.
The notion of immersion in the context of XR applications/services refers to the sense of being surrounded by the virtual environment, as well as the feeling of being physically and spatially located in the virtual environment. The levels of virtuality may range from partial sensory inputs to fully immersive multi-sensory inputs leading to a virtual reality practically indiscernible from actual reality. Enabling immersive experiences involves the creation/definition of an experience space and accurate spatial mapping/sensing (e.g., using visual sensors, RF sensors), which may not be feasible for individual devices using existing mechanisms. From an application perspective, leveraging multiple devices allows augmenting and/or widening a WTRU's FoV, and accounts for blockages, occlusions, and blind spots. From a connectivity perspective, leveraging multiple devices alleviates the load on the Uu links and other interfaces (e.g., sidelink (SL) interfaces) of one or more WTRUs involved in performing similar spatial mapping/sensing, by taking into account redundancy in the data content on Uu links and other interfaces.
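As a rough illustration of the FoV-widening point above, the combined coverage of several devices can be modeled as the union of angular sectors, one per device. The sketch below is an assumption-laden simplification (the function name and the flat 2-D sector model are not from this disclosure); it merges the sectors and totals the angle they jointly cover.

```python
def merged_fov_coverage(sectors):
    """Total angular coverage (degrees) of the union of FoV sectors.

    Each sector is (center_deg, width_deg); angles wrap at 360.
    Illustrative 2-D model only, not the claimed FoV mechanism.
    """
    intervals = []
    for center, width in sectors:
        start = (center - width / 2) % 360
        end = start + width
        if end <= 360:
            intervals.append((start, end))
        else:  # sector wraps past 360 degrees: split into two pieces
            intervals.append((start, 360.0))
            intervals.append((0.0, end - 360))
    if not intervals:
        return 0.0
    # Standard interval-union sweep: merge overlapping sorted intervals.
    intervals.sort()
    covered = 0.0
    cur_start, cur_end = intervals[0]
    for s, e in intervals[1:]:
        if s <= cur_end:
            cur_end = max(cur_end, e)
        else:
            covered += cur_end - cur_start
            cur_start, cur_end = s, e
    covered += cur_end - cur_start
    return covered
```

For example, a single device with a 90° FoV covers 90°, while two devices whose 90° sectors are centered 60° apart jointly cover 150°: overlap between collaborating devices is counted once, which mirrors the redundancy consideration noted above for Uu and SL links.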
Collaborative groups of devices can be used for fast and efficient discovery of devices with XR capabilities (e.g., visual sensing) over multiple interfaces (e.g., Uu, SL) and enable fast connectivity establishment. However, dynamically updating the collaborative group (e.g., by selecting and including new devices and releasing existing devices) to ensure continuity of the immersive experience, based on user movement and changes in the user's FoV or extended FoV, can be challenging. In this regard, the challenge to be addressed is how a WTRU may dynamically coordinate collaborative group formation/modification with multiple devices having similar or different capabilities, taking WTRU movement into account, to support the XR experience and ensure XR experience continuity.

BRIEF DESCRIPTION OF THE DRAWINGS

A more detailed understanding may be had from the detailed description below, given by way of example in conjunction with the drawings appended hereto. Figures in such drawings, like the detailed description, are exemplary. As such, the Figures and the detailed description