
JP-2022533225-A5


Dates

Publication Date
2023-05-22
Application Date
2020-05-21

Description

[0130] While features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor. Examples of computer-readable media include electronic signals (transmitted over wired or wireless connections) and computer-readable storage media. Examples of computer-readable storage media include, but are not limited to, read-only memory (ROM), random access memory (RAM), registers, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks and digital versatile discs (DVDs). A processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.

[Appendix 1] A video decoding device configured to process video data associated with a three-dimensional (3D) space, the device comprising a processor configured to: receive a media container file; parse the media container file to determine a region identifier (ID) of a 3D region in the 3D space and respective track group IDs of one or more track groups associated with the 3D space; determine, based on a determination that the track group ID of each of the one or more track groups is linked to the region ID of the 3D region, that the one or more track groups are associated with the 3D region; and decode video tracks belonging to the one or more track groups to render a visual representation of the 3D region of the 3D space.
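The association logic of Appendix 1 can be sketched informally as follows. This is a minimal illustration, not an implementation of the claimed device: the class and function names (`TrackGroup`, `Region3D`, `tracks_for_region`) and field layout are hypothetical, standing in for whatever boxes a real media container parser would expose.

```python
# Hypothetical sketch of the track-group-to-region association of Appendix 1.
# All type and field names are illustrative, not from any real container parser.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class TrackGroup:
    track_group_id: int
    track_ids: List[int]               # video tracks belonging to this group

@dataclass
class Region3D:
    region_id: int
    linked_track_group_ids: List[int]  # track group IDs linked to this region ID

def tracks_for_region(region: Region3D,
                      track_groups: Dict[int, TrackGroup]) -> List[int]:
    """Collect the video tracks of every track group linked to the region ID."""
    tracks: List[int] = []
    for tg_id in region.linked_track_group_ids:
        group = track_groups.get(tg_id)
        if group is not None:          # group is associated with the 3D region
            tracks.extend(group.track_ids)
    return tracks

# Example: region 7 is linked to track groups 1 and 2, so their tracks
# (101, 102, 103) would be the ones decoded to render the 3D region.
groups = {
    1: TrackGroup(track_group_id=1, track_ids=[101, 102]),
    2: TrackGroup(track_group_id=2, track_ids=[103]),
}
region = Region3D(region_id=7, linked_track_group_ids=[1, 2])
print(tracks_for_region(region, groups))  # [101, 102, 103]
```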
[Appendix 2] The video decoding device according to Appendix 1, wherein the one or more track groups share a common track group type, and the one or more track groups are determined to be associated with the 3D region further based on a determination that the one or more track groups share the common track group type.
[Appendix 3] The video decoding device according to Appendix 1, wherein the media container file includes a structure that defines the number of regions associated with the 3D space and the number of track groups associated with each of the regions, and the processor is configured to determine, based on information contained in the structure, that the track group ID of each of the one or more track groups is linked to the region ID of the 3D region.
[Appendix 4] The video decoding device according to Appendix 1, wherein the media container file includes timed metadata indicating an update to at least one characteristic of the 3D region.
[Appendix 5] The video decoding device according to Appendix 4, wherein the processor is configured to determine, based on the timed metadata, that the track group ID of each of the one or more track groups is linked to the region ID of the 3D region.
[Appendix 6] The video decoding device according to Appendix 4, wherein the 3D space includes multiple regions, and the timed metadata includes information associated with an updated subset of the regions.
[Appendix 7] The video decoding device according to Appendix 1, wherein the processor is further configured to determine, based on the media container file, a reference point associated with the 3D region and dimensions of the 3D region.
[Appendix 8] The video decoding device according to Appendix 1, wherein the video tracks belonging to the one or more track groups correspond to one or more tiles in a two-dimensional (2D) frame.
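The structure of Appendix 3 (a region count, and per region the track group IDs linked to it) could be laid out as a packed buffer along the following lines. The wire format here is an assumption for illustration only; the actual syntax of the structure is defined by the container specification, not by this sketch.

```python
# Hypothetical binary layout for the Appendix 3 structure (illustrative only):
#   u16 num_regions
#   for each region: u32 region_id, u16 num_track_groups, then u32 IDs
import struct
from typing import Dict, List

def parse_region_mapping(payload: bytes) -> Dict[int, List[int]]:
    """Parse {region_id: [track_group_id, ...]} from a big-endian buffer."""
    mapping: Dict[int, List[int]] = {}
    offset = 0
    (num_regions,) = struct.unpack_from(">H", payload, offset)
    offset += 2
    for _ in range(num_regions):
        region_id, num_groups = struct.unpack_from(">IH", payload, offset)
        offset += 6
        ids = list(struct.unpack_from(f">{num_groups}I", payload, offset))
        offset += 4 * num_groups
        mapping[region_id] = ids
    return mapping

# Round-trip example: one region (ID 7) linked to track groups 1 and 2.
buf = struct.pack(">HIH2I", 1, 7, 2, 1, 2)
print(parse_region_mapping(buf))  # {7: [1, 2]}
```

A decoder holding such a mapping can check whether a given track group ID appears in the list for a region ID, which is the determination recited in Appendices 1 and 3.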
[Appendix 9] The video decoding device according to Appendix 1, wherein the video tracks include one or more sample entries, each of which includes an indication of the length of a data field indicating a network abstraction layer (NAL) unit size.
[Appendix 10] The video decoding device according to Appendix 9, wherein each of the one or more sample entries further includes an indication of the number of V-PCC parameter sets associated with the sample entry or the number of arrays of atlas NAL units associated with the sample entry.
[Appendix 11] A method for decoding video data associated with a three-dimensional (3D) space, the method comprising: receiving a media container file; parsing the media container file to determine a region identifier (ID) of a 3D region in the 3D space and respective track group IDs of one or more track groups associated with the 3D space; determining, based on a determination that the track group ID of each of the one or more track groups is linked to the region ID of the 3D region, that the one or more track groups are associated with the 3D region; and decoding video tracks belonging to the one or more track groups to render a visual representation of the 3D region of the 3D space.
[Appendix 12] The method according to Ap