US-12627840-B2 - Systems and methods for signaling sublayer non-reference information in video coding
Abstract
A device may be configured to signal sublayer non-reference information for coded video according to one or more of the techniques described herein.
Inventors
- Sachin G. Deshpande
Assignees
- SHARP KABUSHIKI KAISHA
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2023-03-02
Claims (5)
- 1. A method of signaling parameters for video data, the method comprising: signaling a sublayer non-reference supplemental enhancement information (SEI) message that includes a syntax element, wherein: the syntax element having a value of 1 specifies that a picture is a sublayer non-reference picture, and the syntax element having a value of 0 specifies that a picture is a sublayer reference picture.
- 2. A method of decoding video data, the method comprising: receiving a sublayer non-reference supplemental enhancement information (SEI) message; and parsing a syntax element in the sublayer non-reference SEI message, wherein: the syntax element having a value of 1 specifies that a picture is a sublayer non-reference picture, and the syntax element having a value of 0 specifies that a picture is a sublayer reference picture.
- 3. A device comprising: one or more processors configured to: receive a sublayer non-reference supplemental enhancement information (SEI) message, and parse a syntax element in the sublayer non-reference SEI message, wherein: the syntax element having a value of 1 specifies that a picture is a sublayer non-reference picture, and the syntax element having a value of 0 specifies that a picture is a sublayer reference picture.
- 4. The device of claim 3, wherein the device includes a video decoder.
- 5. A non-transitory computer-readable storage medium coupled to one or more processors of a device and storing one or more computer-executable instructions that, when executed by the one or more processors, cause the device to: receive a sublayer non-reference supplemental enhancement information (SEI) message; and parse a syntax element in the sublayer non-reference SEI message, wherein: the syntax element having a value of 1 specifies that a picture is a sublayer non-reference picture, and the syntax element having a value of 0 specifies that the picture is a sublayer reference picture.
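The decoding behavior recited in the claims above can be illustrated with a minimal sketch: reading a one-bit syntax element from the start of a sublayer non-reference SEI payload. The payload layout, the flag name `slnr_flag`, and the `BitReader` helper are illustrative assumptions for this sketch, not the normative SEI syntax.

```python
class BitReader:
    """Minimal MSB-first bit reader over a bytes payload (illustrative)."""

    def __init__(self, data: bytes):
        self.data = data
        self.pos = 0  # current bit position

    def read_bit(self) -> int:
        byte = self.data[self.pos // 8]
        bit = (byte >> (7 - (self.pos % 8))) & 1
        self.pos += 1
        return bit


def parse_sublayer_non_reference_sei(payload: bytes) -> bool:
    """Return True if the (assumed) one-bit syntax element equals 1,
    i.e., the associated picture is a sublayer non-reference picture;
    a value of 0 indicates a sublayer reference picture."""
    reader = BitReader(payload)
    slnr_flag = reader.read_bit()  # hypothetical syntax element name
    return slnr_flag == 1


# A payload whose first bit is 1 signals a sublayer non-reference picture.
print(parse_sublayer_non_reference_sei(b"\x80"))  # True
print(parse_sublayer_non_reference_sei(b"\x00"))  # False
```

In a real decoder the SEI payload would first be extracted from its NAL unit and identified by payload type before any payload-specific parsing of this kind.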
Description
CROSS REFERENCE
This Nonprovisional application claims priority under 35 U.S.C. § 119 on provisional Application No. 63/317,830, filed on Mar. 8, 2022, the entire contents of which are hereby incorporated by reference.
TECHNICAL FIELD
This disclosure relates to video coding and more particularly to techniques for signaling sublayer non-reference information for coded video.
BACKGROUND ART
Digital video capabilities can be incorporated into a wide range of devices, including digital televisions, laptop or desktop computers, tablet computers, digital recording devices, digital media players, video gaming devices, cellular telephones (including so-called smartphones), medical imaging devices, and the like. Digital video may be coded according to a video coding standard. Video coding standards define the format of a compliant bitstream encapsulating coded video data. A compliant bitstream is a data structure that may be received and decoded by a video decoding device to generate reconstructed video data. Video coding standards may incorporate video compression techniques. Examples of video coding standards include ISO/IEC MPEG-4 Visual, ITU-T H.264 (also known as ISO/IEC MPEG-4 AVC), and High-Efficiency Video Coding (HEVC). HEVC is described in High Efficiency Video Coding (HEVC), Rec. ITU-T H.265, December 2016, which is incorporated by reference and referred to herein as ITU-T H.265. Extensions and improvements to ITU-T H.265 are currently being considered for the development of next-generation video coding standards. For example, the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) (collectively referred to as the Joint Video Exploration Team (JVET)) are working to standardize video coding technology with a compression capability that significantly exceeds that of the current HEVC standard.
The Joint Exploration Model 7 (JEM 7), Algorithm Description of Joint Exploration Test Model 7 (JEM 7), ISO/IEC JTC1/SC29/WG11 Document: JVET-G1001, July 2017, Torino, IT, which is incorporated by reference herein, describes the coding features that were under coordinated test model study by the JVET as potentially enhancing video coding technology beyond the capabilities of ITU-T H.265. It should be noted that the coding features of JEM 7 are implemented in JEM reference software. As used herein, the term JEM may collectively refer to algorithms included in JEM 7 and implementations of JEM reference software. Further, in response to a "Joint Call for Proposals on Video Compression with Capabilities beyond HEVC," jointly issued by VCEG and MPEG, multiple descriptions of video coding tools were proposed by various groups at the 10th Meeting of ISO/IEC JTC1/SC29/WG11, 16-20 Apr. 2018, San Diego, CA. From the multiple descriptions of video coding tools, a resulting initial draft text of a video coding specification is described in "Versatile Video Coding (Draft 1)," 10th Meeting of ISO/IEC JTC1/SC29/WG11, 16-20 Apr. 2018, San Diego, CA, document JVET-J1001-v2, which is incorporated by reference herein and referred to as JVET-J1001. The current development of a next-generation video coding standard by VCEG and MPEG is referred to as the Versatile Video Coding (VVC) project. "Versatile Video Coding (Draft 10)," 20th Meeting of ISO/IEC JTC1/SC29/WG11, 7-16 Oct. 2020, Teleconference, document JVET-T2001-v2, which is incorporated by reference herein and referred to as JVET-T2001, represents the current iteration of the draft text of a video coding specification corresponding to the VVC project.
Video compression techniques enable data requirements for storing and transmitting video data to be reduced. Video compression techniques may reduce data requirements by exploiting the inherent redundancies in a video sequence.
Video compression techniques may sub-divide a video sequence into successively smaller portions (i.e., groups of pictures within a video sequence, a picture within a group of pictures, regions within a picture, sub-regions within regions, etc.). Intra prediction coding techniques (e.g., spatial prediction techniques within a picture) and inter prediction techniques (i.e., temporal, inter-picture prediction techniques) may be used to generate difference values between a unit of video data to be coded and a reference unit of video data. The difference values may be referred to as residual data. Residual data may be coded as quantized transform coefficients. Syntax elements may relate residual data and a reference coding unit (e.g., intra-prediction mode indices and motion information). Residual data and syntax elements may be entropy coded. Entropy-encoded residual data and syntax elements may be included in data structures forming a compliant bitstream.
SUMMARY OF INVENTION
In one example, a method of signaling parameters for video data comprises signaling a sublayer non-reference picture message and setting a value of a syntax element in the sublayer non-reference picture message indicating whether a picture is a sublayer non-reference picture.
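The residual-coding steps described above (prediction, computing difference values, and quantizing transform coefficients) can be sketched as follows. The 4x4 block size, the sample values, the flat DC prediction, the naive DCT helper, and the quantization step size are all assumptions chosen for illustration; they do not reproduce any standard's normative process.

```python
import numpy as np

def dct2(block: np.ndarray) -> np.ndarray:
    """Naive orthonormal 2-D DCT-II via the separable matrix form."""
    n = block.shape[0]
    k = np.arange(n)
    # Orthonormal DCT-II basis matrix.
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c @ block @ c.T

# A 4x4 unit of video data to be coded, and a hypothetical prediction for it.
current = np.array([[52, 55, 61, 66],
                    [63, 59, 55, 90],
                    [62, 59, 68, 113],
                    [63, 58, 71, 122]], dtype=float)
prediction = np.full((4, 4), 64.0)  # e.g., a flat intra DC prediction

residual = current - prediction       # difference values (residual data)
coeffs = dct2(residual)               # transform of the residual data
qstep = 8.0                           # assumed scalar quantization step
quantized = np.round(coeffs / qstep)  # quantized transform coefficients

print(quantized.astype(int))
```

In an actual codec the quantized coefficients, together with syntax elements such as the prediction mode, would then be entropy coded into the bitstream; the sketch stops before that stage.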