
EP-4736461-A1 - AVATAR MESH MASKING


Abstract

Some embodiments of a method may include: obtaining scene description data for a 3D scene, wherein the scene description data comprises: scene element information describing each of a plurality of scene elements in the scene, a node associated with a mesh object, wherein one of the scene elements is an avatar associated with the node, and wherein at least two portions of the avatar are associated with the mesh object; a mesh mapping associated with the mesh object, wherein the mesh mapping indicates how to partition the mesh object for the at least two portions of the avatar; and partitioning the mesh object using the mesh mapping for the at least two portions of the avatar. For some embodiments, the method is compatible with an extension of the Moving Pictures Expert Group-I Scene Description (MPEG-I SD) and/or Graphics Language Transmission Format (glTF) scene description format.
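
As a rough illustration of the scene description data described above, the following sketch builds a minimal glTF-style document in Python in which a single node carries an avatar mesh together with a hypothetical mesh-mapping extension. The extension name EXAMPLE_avatar_mesh_mapping and its field names are illustrative assumptions only; they are not defined by this publication or by any registered glTF extension.

```python
# Minimal glTF-style scene description, expressed as a Python dict.
# The "EXAMPLE_avatar_mesh_mapping" extension name and its fields are
# hypothetical placeholders for the mesh mapping described in the abstract.
scene_description = {
    "scenes": [{"nodes": [0]}],
    "nodes": [
        {
            "name": "avatar",
            "mesh": 0,  # single mesh object shared by several avatar portions
            "extensions": {
                "EXAMPLE_avatar_mesh_mapping": {
                    # One entry per avatar portion; each entry lists the
                    # vertex indices of the shared mesh that belong to it.
                    "portions": [
                        {"name": "head",  "vertexIndices": [0, 1, 2, 3]},
                        {"name": "torso", "vertexIndices": [3, 4, 5, 6, 7]},
                    ]
                }
            },
        }
    ],
    "meshes": [
        {"name": "avatar_mesh", "primitives": [{"attributes": {"POSITION": 0}}]}
    ],
}
```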

Inventors

  • COVA REGATEIRO, João Pedro
  • GOSSELIN, Philippe Henri
  • LE CLERC, François
  • AVRIL, Quentin

Assignees

  • InterDigital CE Patent Holdings, SAS

Dates

Publication Date
2026-05-06
Application Date
2024-06-26

Claims (20)

  1. A method comprising: obtaining scene description data for a 3D scene, wherein the scene description data comprises: scene element information describing each of a plurality of scene elements in the scene, a node associated with a mesh object, wherein one of the scene elements is an avatar associated with the node, and wherein at least two portions of the avatar are associated with the mesh object; a mesh mapping associated with the mesh object, wherein the mesh mapping indicates how to partition the mesh object for the at least two portions of the avatar; and partitioning the mesh object using the mesh mapping for the at least two portions of the avatar.
  2. The method of claim 1, further comprising processing the scene description data associated with the avatar.
  3. The method of claim 2, wherein processing the scene description data associated with the avatar comprises rendering the avatar as part of the scene using the scene description data.
  4. The method of claim 1, wherein partitioning the mesh object comprises: repeating a process for each of the at least two portions of the avatar associated with the mesh object, wherein the process comprises: obtaining a current portion selected from the at least two portions of the avatar; obtaining a current mesh map associated with the current portion; and masking the current mesh map with the mesh object to obtain current position information for the current portion.
  5. The method of claim 4, wherein obtaining the current mesh map is performed using the mesh mapping.
  6. The method of claim 1, wherein the method is compatible with an extension of Moving Pictures Expert Group-I Scene Description (MPEG-I SD) format.
  7. The method of claim 1, wherein the method is compatible with an extension of Graphics Language Transmission Format (glTF) scene description format.
  8. The method of claim 1, wherein rendering the avatar retains the mesh object as a unitary mesh object for each of the at least two portions of the avatar.
  9. The method of claim 1, wherein the mesh mapping comprises an integer index referencing a data structure.
  10. The method of claim 1, wherein the mesh mapping comprises: information indicating the node; and information indicating a path to information related to the node.
  11. The method of claim 1, wherein the mesh mapping comprises a pointer-indexed data structure.
  12. The method of claim 1, wherein the at least two portions of the avatar comprise non-overlapping portions of the avatar.
  13. The method of claim 1, wherein the scene description data further comprises: a second mesh object associated with the node, wherein the at least two portions of the avatar are associated with the second mesh object; and a second mesh mapping associated with the second mesh object, wherein the second mesh mapping indicates how to partition the second mesh object for the at least two portions of the avatar, and wherein the mesh object is different from the second mesh object.
  14. The method of claim 1, wherein the scene description data further comprises: a third mesh object associated with the node, wherein at least two further portions of the avatar are associated with the third mesh object, and wherein the at least two further portions of the avatar are different than the at least two portions of the avatar; and a third mesh mapping associated with the third mesh object, wherein the third mesh mapping indicates how to partition the third mesh object for the at least two further portions of the avatar.
  15. The method of claim 1, wherein the at least two portions of the avatar comprise a first portion and second portion of the avatar, wherein the method further comprises: obtaining a first mesh map associated with the first portion of the avatar; and obtaining a second mesh map associated with the second portion of the avatar, wherein the first mesh map shares a common element with the second mesh map.
  16. The method of claim 1, wherein the at least two portions of the avatar comprise a first portion and second portion of the avatar, wherein the method further comprises: obtaining a first mesh map associated with the first portion of the avatar; and obtaining a second mesh map associated with the second portion of the avatar, wherein the first mesh map is unique compared to the second mesh map.
  17. An apparatus comprising: a processor; and a non-transitory computer-readable medium storing instructions operative, when executed by the processor, to cause the apparatus to perform the method of any one of claims 1 through 16.
  18. A method comprising: performing a mesh masking process to facilitate mesh segmentation of a single, three-dimensional mesh object corresponding to a node hierarchy delineated in scene description data.
  19. The method of claim 18, wherein the mesh object is associated with a non-avatar environment.
  20. The method of claim 18, wherein the mesh object is associated with an avatar environment.
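
Claims 4, 5, and 18 describe partitioning a single shared mesh object by masking it with a per-portion mesh map. The following is a minimal Python sketch of that loop under the assumption that a mesh map is simply a list of vertex indices into the shared mesh; the function name and data layout are illustrative, not taken from the publication.

```python
from typing import Dict, List

def partition_mesh(positions: List[List[float]],
                   mesh_mapping: Dict[str, List[int]]) -> Dict[str, List[List[float]]]:
    """Mask the shared mesh with each portion's mesh map to obtain
    per-portion position information (hypothetical illustration)."""
    portions: Dict[str, List[List[float]]] = {}
    for portion_name, mesh_map in mesh_mapping.items():
        # The mesh map acts as a mask: keep only the vertices it references.
        portions[portion_name] = [positions[i] for i in mesh_map]
    return portions

# Usage: a toy mesh of 5 vertices split into two portions that share vertex 2.
verts = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
         [0.0, 0.0, 1.0], [1.0, 1.0, 0.0]]
print(partition_mesh(verts, {"head": [0, 1, 2], "torso": [2, 3, 4]}))
```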

Description

AVATAR MESH MASKING

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] The present application claims benefit of European Patent Application No. EP23306091, entitled “AVATAR MESH MASKING” and filed June 30, 2023, which is hereby incorporated by reference in its entirety.

BACKGROUND

[0002] This application relates to 3D scenes and the representation of users and their interactions within immersive environments. Representations of users such as avatars are in common usage in 3D immersive environments. MPEG-I Scene Description (SD) (23090-14) provides an interactivity framework in support of the use of user representations (e.g., avatars) in these environments, and in extended reality (XR) such as virtual reality (VR), augmented reality (AR), and/or mixed reality (MR).

SUMMARY

[0003] Embodiments described herein include methods that are used in video encoding and decoding (collectively “coding”).

[0004] An example method in accordance with some embodiments may include: obtaining scene description data for a 3D scene, wherein the scene description data may include: scene element information describing each of a plurality of scene elements in the scene, a node associated with a mesh object, wherein one of the scene elements is an avatar associated with the node, and wherein at least two portions of the avatar are associated with the mesh object; a mesh mapping associated with the mesh object, wherein the mesh mapping indicates how to partition the mesh object for the at least two portions of the avatar; and partitioning the mesh object using the mesh mapping for the at least two portions of the avatar.

[0005] Some embodiments of the example method may further include processing the scene description data associated with the avatar.

[0006] For some embodiments of the example method, processing the scene description data associated with the avatar may include rendering the avatar as part of the scene using the scene description data.

[0007] For some embodiments of the example method, partitioning the mesh object may include: repeating a process for each of the at least two portions of the avatar associated with the mesh object, wherein the process may include: obtaining a current portion selected from the at least two portions of the avatar; obtaining a current mesh map associated with the current portion; and masking the current mesh map with the mesh object to obtain current position information for the current portion.

[0008] For some embodiments of the example method, obtaining the current mesh map may be performed using the mesh mapping.

[0009] For some embodiments of the example method, the method may be compatible with an extension of the Moving Pictures Expert Group-I Scene Description (MPEG-I SD) scene description format.

[0010] For some embodiments of the example method, the method may be compatible with an extension of the Graphics Language Transmission Format (glTF) scene description format.

[0011] For some embodiments of the example method, rendering the avatar may retain the mesh object as a unitary mesh object for each of the at least two portions of the avatar.

[0012] For some embodiments of the example method, the mesh mapping may include an integer index referencing a data structure.

[0013] For some embodiments of the example method, the mesh mapping may include: information indicating the node; and information indicating a path to information related to the node.

[0014] For some embodiments of the example method, the mesh mapping may include a pointer-indexed data structure.
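
Paragraphs [0012] through [0014] allow the mesh mapping to be expressed either as an integer index referencing a data structure or as a node reference plus a path (a pointer-indexed reference). The short Python sketch below contrasts the two addressing styles; the field names and the path syntax are illustrative assumptions, not defined by the publication.

```python
# Two hypothetical ways a mesh mapping might be referenced in scene
# description data, per paragraphs [0012]-[0014].

# (a) Integer index referencing a shared table of mesh maps.
mesh_maps = [
    {"name": "head",  "vertexIndices": [0, 1, 2]},
    {"name": "torso", "vertexIndices": [2, 3, 4]},
]
mapping_by_index = {"meshMap": 1}  # refers to mesh_maps[1]

# (b) Pointer-style reference: the node plus a path to data related to it.
mapping_by_pointer = {
    "node": 0,
    "path": "/nodes/0/extensions/EXAMPLE_avatar_mesh_mapping/portions/1",
}

selected = mesh_maps[mapping_by_index["meshMap"]]
print(selected["name"])  # -> "torso"
```
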
[0015] For some embodiments of the example method, the at least two portions of the avatar may include non-overlapping portions of the avatar.

[0016] For some embodiments of the example method, the scene description data may further include: a second mesh object associated with the node, wherein the at least two portions of the avatar are associated with the second mesh object; and a second mesh mapping associated with the second mesh object, wherein the second mesh mapping indicates how to partition the second mesh object for the at least two portions of the avatar, and wherein the mesh object is different from the second mesh object.

[0017] For some embodiments of the example method, the scene description data may further include: a third mesh object associated with the node, wherein at least two further portions of the avatar are associated with the third mesh object, and wherein the at least two further portions of the avatar are different than the at least two portions of the avatar; and a third mesh mapping associated with the third mesh object, wherein the third mesh mapping indicates how to partition the third mesh object for the at least two further portions of the avatar.

[0018] For some embodiments of the example method, the at least two portions of the avatar may include a first portion and a second portion of the avatar, and the method may further include: obtaining a first mesh map associated with the first portion of the avatar; and obtaining a second mesh map associated with the second portion of the avatar, wherein the first mesh map may share a common element with the second mesh map.
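
Paragraph [0018], like claims 15 and 16, distinguishes mesh maps that share a common element from mesh maps that do not. A small illustrative sketch, assuming mesh maps are lists of vertex indices into the shared avatar mesh:

```python
# Illustrative (assumed) mesh maps for two avatar portions. In the first pair
# the maps share a common element (vertex 2, e.g. on the seam between head
# and torso); in the second pair the maps are disjoint.
head_map_shared, torso_map_shared = [0, 1, 2], [2, 3, 4]
head_map_unique, torso_map_unique = [0, 1],    [2, 3, 4]

def share_common_element(a, b):
    """Return True if the two mesh maps reference at least one common vertex."""
    return bool(set(a) & set(b))

print(share_common_element(head_map_shared, torso_map_shared))  # True
print(share_common_element(head_map_unique, torso_map_unique))  # False
```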