EP-4740483-A1 - SHARED EVENT-BASED UPDATE IN SCENE DESCRIPTION

Abstract

Some embodiments of a method may include: obtaining scene description data for a three-dimensional (3D) scene, wherein the scene description data comprises behavior information comprising: trigger information describing at least one trigger condition, action information describing an action to perform on a scene element in the 3D scene, and a parameter indicating whether a behavior is to be shared with at least one other device; and responsive to determining that: (i) the parameter indicates that the behavior is to be shared with at least one other user device, (ii) a trigger condition of the at least one trigger condition has occurred, performing a trigger process comprising: performing the action locally on the scene element; and communicating, to the at least one other device, the behavior information describing the behavior. Some embodiments may perform additional changes according to scene updates.

Inventors

  • LELIEVRE, SYLVAIN
  • JOUET, PIERRICK
  • HIRTZLIN, PATRICE
  • FAIVRE D'ARCIER, Etienne
  • FONTAINE, Loic

Assignees

  • InterDigital CE Patent Holdings, SAS

Dates

Publication Date
2026-05-13
Application Date
2024-06-27

Claims (20)

  1. A method comprising: obtaining scene description data for a three-dimensional (3D) scene, wherein the scene description data comprises behavior information comprising: trigger information describing at least one trigger condition, action information describing an action to perform on a scene element in the 3D scene, and a parameter indicating whether a behavior is to be shared with at least one other device; and responsive to determining that: (i) the parameter indicates that the behavior is to be shared with at least one other user device, (ii) a trigger condition of the at least one trigger condition has occurred, performing a trigger process comprising: performing the action locally on the scene element; and communicating, to the at least one other device, the behavior information describing the behavior.
  2. The method of claim 1, wherein performing the trigger process is further responsive to determining that the action information indicates that the action is to be performed.
  3. The method of any one of claims 1-2, wherein the trigger process further comprises communicating, to a server, behavior information.
  4. The method of claim 3, wherein the behavior information comprises additional information to perform the action.
  5. The method of claim 4, wherein the additional information comprises information indicating where a gesture was detected.
  6. The method of any one of claims 1-5, wherein the action information comprises: a list of at least a set action, and a set action information.
  7. The method of any one of claims 1-6, wherein the action information comprises an update action, and wherein performing the action comprises updating the scene element based on the update action.
  8. The method of claim 7, further comprising receiving, from a server, a scene update message.
  9. The method of any one of claims 1-8, wherein performing the action comprises performing a temporary update to the scene element.
  10. The method of claim 9, further comprising: receiving, from a server, a scene update message; undoing the temporary update to the scene element; and performing a secondary action on the scene element, wherein the secondary action is indicated in the scene update message.
  11. The method of claim 9, further comprising: receiving, from a server, a scene update message; merging the temporary update with a secondary action to generate a tertiary action; and performing the tertiary action on the scene element, wherein the secondary action is indicated in the scene update message.
  12. The method of any one of claims 1-11, wherein determining that the trigger condition has occurred comprises determining that the trigger condition has produced a true result.
  13. The method of any one of claims 1-12, wherein determining that the parameter indicates that the behavior is to be shared with at least one other user device comprises determining that the parameter indicates a true state.
  14. The method of any one of claims 1-13, wherein performing the action locally on the scene element is performed before communicating, to the at least one other device, the action information describing the action.
  15. The method of any one of claims 1-14, wherein the parameter comprises a shared parameter identified as "shared" in accordance with the MPEG-I Scene Description framework.
  16. The method of any one of claims 1-14, wherein the parameter comprises a shared parameter identified as "shared" in accordance with the glTF scene description format.
  17. An apparatus comprising: a processor; and a non-transitory computer-readable medium storing instructions operative, when executed by the processor, to cause the apparatus to perform any one of the methods of claims 1 to 16.
  18. A method comprising: performing an update to a 3D scene, wherein the 3D scene is shared with at least a local first device and a second device, each connected to the 3D scene, wherein the update is performed first on the local first device and performed later on the second device, and wherein the local first device is distinct from the second device.
  19. The method of claim 18, wherein sharing the 3D scene with at least the local first device and the second device is responsive to a parameter indicating whether a behavior is to be shared with at least one other device.
  20. An apparatus comprising: a processor; and a non-transitory computer-readable medium storing instructions operative, when executed by the processor, to cause the apparatus to perform any one of the methods of claims 18 to 19.
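
For illustration only, the shared-behavior trigger process of claims 1 and 14 can be sketched in Python. The class, field, and function names below are hypothetical stand-ins, not part of the claims or of the MPEG-I Scene Description specification; "sending" to another device is modelled as appending to a per-device outbox list.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Behavior:
    # Hypothetical representation of one behavior entry in scene description data
    triggers: List[Callable[[dict], bool]]  # trigger information: trigger conditions
    action: Callable[[dict], None]          # action information: action on a scene element
    shared: bool = False                    # parameter: share this behavior with other devices?

def process_behavior(behavior: Behavior, scene_element: dict, other_devices: list) -> bool:
    """Trigger process sketch: act locally first, then communicate (claims 1 and 14)."""
    # (i) the parameter indicates the behavior is to be shared
    if not behavior.shared:
        return False
    # (ii) a trigger condition of the at least one trigger condition has occurred
    if not any(trigger(scene_element) for trigger in behavior.triggers):
        return False
    # Perform the action locally on the scene element first ...
    behavior.action(scene_element)
    # ... then communicate the behavior information to the at least one other device.
    for outbox in other_devices:
        outbox.append(behavior)  # stand-in for sending over the network
    return True
```

The ordering matters: the local device applies the action immediately (low latency for the interacting user), then propagates the behavior so the other devices replay the same update, matching the first-local-then-remote scheme of claims 14 and 18.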

Description

SHARED EVENT-BASED UPDATE IN SCENE DESCRIPTION

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] The present application claims benefit of European Patent Application No. EP23306138, entitled "SHARED EVENT-BASED UPDATE IN SCENE DESCRIPTION" and filed July 6, 2023, which is hereby incorporated by reference in its entirety.

BACKGROUND

[0002] In extended reality (XR) applications, a scene description is used to combine an explicit and easy-to-parse description of a scene structure with binary representations of media content.

[0003] In time-based media streaming, the scene description itself may be time-evolving to provide the relevant virtual content for each sequence of a media stream. For instance, for advertising purposes, a virtual bottle may be displayed on a table during a video sequence in which people sit around the table.

[0004] This kind of behavior may be achieved by relying on the framework defined in the document, Information Technology - Coded Representation of Immersive Media - Part 14: Scene Description for MPEG Media, ISO/IEC DIS 23090-14:2021(E) ("MPEG Scene Description").

[0005] Although the MPEG-I Scene Description framework ensures that the timed media and the corresponding relevant virtual content are available at any time, there is no description of how a scene may be updated based on runtime interactivity.

SUMMARY

[0006] Embodiments described herein include methods that are used in video encoding and decoding (collectively "coding").
[0007] An example method in accordance with some embodiments may include: obtaining scene description data for a three-dimensional (3D) scene, wherein the scene description data includes behavior information including: trigger information describing at least one trigger condition, action information describing an action to perform on a scene element in the 3D scene, and a parameter indicating whether a behavior is to be shared with at least one other device; and responsive to determining that: (i) the parameter indicates that the behavior is to be shared with at least one other user device, (ii) a trigger condition of the at least one trigger condition has occurred, performing a trigger process including: performing the action locally on the scene element; and communicating, to the at least one other device, the behavior information describing the behavior.

[0008] For some embodiments of the example method, performing the trigger process is further responsive to determining that the action information indicates that the action is to be performed.

[0009] For some embodiments of the example method, the trigger process may further include communicating, to a server, behavior information.

[0010] For some embodiments of the example method, the behavior information may include additional information to perform the action.

[0011] For some embodiments of the example method, the additional information may include information indicating where a gesture was detected.

[0012] For some embodiments of the example method, the action information may include: a list of at least a set action, and a set action information.

[0013] For some embodiments of the example method, the action information may include an update action, and performing the action may include updating the scene element based on the update action.

[0014] Some embodiments of the example method may further include receiving, from a server, a scene update message.
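
Paragraph [0014]'s server-issued scene update message must be reconciled with any update the device already applied locally; the summary describes two options (undo-and-replace, or merge into a tertiary action). A minimal sketch, assuming updates are modelled as dicts of scene-element property changes (the helper name and the "server wins on conflicts" merge rule are illustrative assumptions, not from the document):

```python
def reconcile(element_state: dict, temporary_update: dict,
              server_update: dict, merge: bool = False) -> dict:
    """Reconcile a temporary local update with a server scene update message.

    - undo-and-replace: undo the temporary update, then perform the
      secondary action indicated in the scene update message;
    - merge: combine the temporary update with the secondary action into a
      tertiary action and perform that instead.
    """
    # Undo the temporary update to the scene element.
    for key in temporary_update:
        element_state.pop(key, None)
    if merge:
        # Tertiary action = temporary update merged with the secondary action
        # (server values win on conflicting properties, by assumption).
        tertiary = {**temporary_update, **server_update}
        element_state.update(tertiary)
    else:
        # Perform only the server-indicated secondary action.
        element_state.update(server_update)
    return element_state
```

The undo path keeps all devices bit-identical to the server's authoritative state; the merge path preserves locally visible changes that the server update did not contradict.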
[0015] For some embodiments of the example method, performing the action may include performing a temporary update to the scene element.

[0016] Some embodiments of the example method may further include: receiving, from a server, a scene update message; undoing the temporary update to the scene element; and performing a secondary action on the scene element, wherein the secondary action is indicated in the scene update message.

[0017] Some embodiments of the example method may further include: receiving, from a server, a scene update message; merging the temporary update with a secondary action to generate a tertiary action; and performing the tertiary action on the scene element, wherein the secondary action is indicated in the scene update message.

[0018] For some embodiments of the example method, determining that the trigger condition has occurred may include determining that the trigger condition has produced a true result.

[0019] For some embodiments of the example method, determining that the parameter indicates that the behavior is to be shared with at least one other user device may include determining that the parameter indicates a true state.

[0020] For some embodiments of the example method, performing the action locally on the scene element is performed before communicating, to the at least one other device, the action information describing the action.

[0021] For some embodiments of the example method, the parameter may include a shared parameter identified as "shared" in accordance with the MPEG-I Scene Description framework.

[0022