US-12627833-B2 - Method of deriving motion vector information for a coding block, and device for deriving motion vector information for a coding block
Abstract
Systems and methods for performing motion vector prediction for video coding are disclosed. A motion vector predictor is determined based at least in part on motion information associated with a selected motion vector predictor origin and offset values corresponding to a selected sampling point. The sampling point is specified according to a direction and a distance on a sampling map associated with the motion vector predictor origin.
Inventors
- Byeongdoo CHOI
- Kiran Mukesh Misra
- Jie Zhao
- Philip Cowan
- Weijia Zhu
- Sachin G. Deshpande
- Frank Bossen
- Christopher Andrew Segall
Assignees
- SHARP KABUSHIKI KAISHA
Dates
- Publication Date: 2026-05-12
- Application Date: 2024-10-21
Claims (4)
- 1 . A method of deriving motion vector information for a coding block, the method including: receiving a flag syntax element in a sequence parameter set, wherein the flag syntax element indicates (i) whether a motion distance syntax element is present and (ii) whether a mode flag is present in a coding unit level; receiving the motion distance syntax element, wherein the motion distance syntax element is used to derive a motion distance; receiving the mode flag in the coding unit level, wherein the mode flag indicates whether an index syntax element, a distance index syntax element, and a direction syntax element are present in the coding unit level; receiving the index syntax element, wherein the index syntax element is used for indicating a predetermined motion vector predictor candidate in a candidate set and used for deriving the motion vector information; receiving the distance index syntax element, wherein the distance index syntax element is used to derive a distance in a set of distances; and receiving the direction syntax element, wherein the direction syntax element is used to derive directions including a negative X direction, a positive X direction, a positive Y direction, or a negative Y direction, wherein the distance specified based on the distance index syntax element equal to a first value results in different values of the motion distance according to a value of the motion distance syntax element.
- 2 . A device for deriving motion vector information for a coding block, the device comprising: a processor, and a memory associated with the processor; wherein the processor is configured to: receive a flag syntax element in a sequence parameter set, wherein the flag syntax element indicates (i) whether a motion distance syntax element is present, and (ii) whether a mode flag is present in a coding unit level; receive the motion distance syntax element, wherein the motion distance syntax element is used to derive a motion distance; receive the mode flag in the coding unit level, wherein the mode flag indicates whether an index syntax element, a distance index syntax element, and a direction syntax element are present in the coding unit level; receive the index syntax element, wherein the index syntax element is used for indicating a predetermined motion vector predictor candidate in a candidate set and used for deriving the motion vector information; receive the distance index syntax element, wherein the distance index syntax element is used to derive a distance in a set of distances; and receive the direction syntax element, wherein the direction syntax element is used to derive directions including a negative X direction, a positive X direction, a positive Y direction, or a negative Y direction, wherein the distance specified based on the distance index syntax element equal to a first value results in different values of the motion distance according to a value of the motion distance syntax element.
- 3 . The device of claim 2 , wherein the processor is configured to: derive the candidate set for a merge mode, wherein the candidate set includes a first motion vector predictor candidate at a first position and a second motion vector predictor candidate at a second position; derive an offset value by using the direction syntax element and the motion distance; and derive the motion vector information by modifying a motion vector of the indicated motion vector predictor candidate by using the offset value.
- 4 . A device for deriving motion vector information for a coding block, the device comprising: a processor, and a memory associated with the processor; wherein the processor is configured to: signal a flag syntax element in a sequence parameter set, wherein the flag syntax element indicates (i) whether a motion distance syntax element is present, and (ii) whether a mode flag is present in a coding unit level; signal the motion distance syntax element, wherein the motion distance syntax element is used to derive a motion distance; signal the mode flag in the coding unit level, wherein the mode flag indicates whether an index syntax element, a distance index syntax element, and a direction syntax element are present in the coding unit level; signal the index syntax element, wherein the index syntax element is used for indicating a predetermined motion vector predictor candidate in a candidate set and used for deriving the motion vector information; signal the distance index syntax element, wherein the distance index syntax element is used to derive a distance in a set of distances; and signal the direction syntax element, wherein the direction syntax element is used to derive directions including a negative X direction, a positive X direction, a positive Y direction, or a negative Y direction, wherein the distance specified based on the distance index syntax element equal to a first value results in different values of the motion distance according to a value of the motion distance syntax element.
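The claims describe a derivation in which a direction syntax element selects an axis and sign, a distance index syntax element selects an entry from a set of distances, and a sequence-level motion distance syntax element changes which set of distances applies, so the same distance index value can yield different motion distances. The Python sketch below illustrates that relationship; the table values, names, and two distance sets are illustrative assumptions (modeled on MMVD-style signaling), not the patent's normative syntax.

```python
# Illustrative sketch only: names and distance values are assumptions,
# not the normative syntax of the claims.

# Direction index -> unit offset: +X, -X, +Y, -Y (the four directions
# enumerated in the claims).
DIRECTIONS = {0: (1, 0), 1: (-1, 0), 2: (0, 1), 3: (0, -1)}

# Candidate distance sets. Which set applies depends on the value of the
# sequence-level motion distance syntax element, so a distance index equal
# to a first value (e.g. 0) maps to different motion distances under
# different sequence-level settings, as the claims require.
DISTANCE_SETS = {
    0: [1, 2, 4, 8, 16, 32, 64, 128],      # e.g. quarter-sample steps (assumed)
    1: [4, 8, 16, 32, 64, 128, 256, 512],  # e.g. full-sample steps (assumed)
}

def derive_motion_vector(candidate_mv, direction_idx, distance_idx,
                         motion_distance_value):
    """Modify a candidate predictor MV by a signaled (direction, distance) offset."""
    dx, dy = DIRECTIONS[direction_idx]
    distance = DISTANCE_SETS[motion_distance_value][distance_idx]
    return (candidate_mv[0] + dx * distance, candidate_mv[1] + dy * distance)

# Same candidate MV and same distance index, different motion distance values:
mv = (10, -3)
print(derive_motion_vector(mv, 0, 0, 0))  # (11, -3): offset of 1 in +X
print(derive_motion_vector(mv, 0, 0, 1))  # (14, -3): offset of 4 in +X
```

Keeping the direction and distance in separate, independently coded syntax elements lets the sequence-level element rescale all distances without changing the per-block index coding.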
Description
CROSS REFERENCE

This Nonprovisional application claims priority under 35 U.S.C. § 119 to provisional Application No. 62/624,005, filed on Jan. 30, 2018, and provisional Application No. 62/625,825, filed on Feb. 2, 2018, the entire contents of which are hereby incorporated by reference.

TECHNICAL FIELD

This disclosure relates to video coding and more particularly to techniques for performing motion vector prediction.

BACKGROUND ART

Digital video capabilities can be incorporated into a wide range of devices, including digital televisions, laptop or desktop computers, tablet computers, digital recording devices, digital media players, video gaming devices, cellular telephones (including so-called smartphones), medical imaging devices, and the like. Digital video may be coded according to a video coding standard. Video coding standards may incorporate video compression techniques. Examples of video coding standards include ISO/IEC MPEG-4 Visual, ITU-T H.264 (also known as ISO/IEC MPEG-4 AVC), and High Efficiency Video Coding (HEVC). HEVC is described in High Efficiency Video Coding (HEVC), Rec. ITU-T H.265, December 2016, which is incorporated by reference and referred to herein as ITU-T H.265. Extensions and improvements to ITU-T H.265 are currently being considered for the development of next generation video coding standards. For example, the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) (collectively referred to as the Joint Video Exploration Team (JVET)) are studying the potential need for standardization of future video coding technology with a compression capability that significantly exceeds that of the current HEVC standard.
The Joint Exploration Model 7 (JEM 7), Algorithm Description of Joint Exploration Test Model 7 (JEM 7), ISO/IEC JTC1/SC29/WG11 Document: JVET-G1001, July 2017, Torino, IT, which is incorporated by reference herein, describes the coding features under coordinated test model study by the JVET as potentially enhancing video coding technology beyond the capabilities of ITU-T H.265. It should be noted that the coding features of JEM 7 are implemented in the JEM reference software. As used herein, the term JEM may collectively refer to the algorithms included in JEM 7 and implementations of the JEM reference software.

Video compression techniques enable the data requirements for storing and transmitting video data to be reduced. Video compression techniques may reduce data requirements by exploiting the inherent redundancies in a video sequence. Video compression techniques may sub-divide a video sequence into successively smaller portions (i.e., groups of frames within a video sequence, a frame within a group of frames, slices within a frame, coding tree units (e.g., macroblocks) within a slice, coding blocks within a coding tree unit, etc.). Intra prediction coding techniques (i.e., intra-picture (spatial)) and inter prediction techniques (i.e., inter-picture (temporal)) may be used to generate difference values between a unit of video data to be coded and a reference unit of video data. The difference values may be referred to as residual data. Residual data may be coded as quantized transform coefficients. Syntax elements may relate residual data and a reference coding unit (e.g., intra-prediction mode indices, motion vectors, and block vectors). Residual data and syntax elements may be entropy coded. Entropy encoded residual data and syntax elements may be included in a compliant bitstream.
SUMMARY OF INVENTION

In one example, a method of reconstructing video data comprises determining a selected motion vector predictor origin for a current video block, determining a sampling map for the motion vector predictor origin, deriving offset values corresponding to sampling points on the sampling map, determining a selected sampling point, determining a motion vector predictor based at least in part on motion information associated with the selected motion vector predictor origin and the offset values corresponding to the selected sampling point, and generating a prediction for the current video block using the determined motion vector predictor.

In one example, a method of encoding video data comprises selecting a motion vector predictor origin for a current video block, selecting a sampling map for the motion vector predictor origin, deriving offset values corresponding to sampling points on the sampling map, selecting a sampling point, and signaling the selected motion vector predictor origin, the selected sampling map, and the selected sampling point.

BRIEF DESCRIPTION OF DRAWINGS

FIG. 1 is a conceptual diagram illustrating an example of a group of pictures coded according to a quad tree binary tree partitioning in accordance with one or more techniques of this disclosure.

FIG. 2 is a conceptual diagram illustrating an example of a video component sampling format in accordance with one or more techniques of this disclosure.

FIG. 3 is a conceptual diagram illustr