KR-102962167-B1 - Image encoding/decoding method and apparatus for determining motion information based on whether inter-layer prediction is available, and method for transmitting a bitstream

KR 102962167 B1

Abstract

An image encoding/decoding method and apparatus are provided. An image decoding method performed by an image decoding apparatus according to the present disclosure may include determining a motion vector; and decoding a current block based on the motion vector. Here, the motion vector may be determined based on at least one of a first reference picture type of a first reference picture corresponding to the current block and a second reference picture type of a second reference picture corresponding to an inter-layer corresponding block.

Inventors

  • Naeri Park (박내리)
  • Junghak Nam (남정학)
  • Hyeongmoon Jang (장형문)
  • Jaehyun Lim (임재현)
  • Seunghwan Kim (김승환)

Assignees

  • LG Electronics Inc. (엘지전자 주식회사)

Dates

Publication Date
2026-05-08
Application Date
2021-05-21
Priority Date
2020-05-21

Claims (15)

  1. An image decoding method performed by an image decoding apparatus, the method comprising: determining a motion vector; and decoding a current block based on the motion vector, wherein the motion vector is determined based on at least one of a first reference picture type of a first reference picture corresponding to the current block and a second reference picture type of a second reference picture corresponding to an inter-layer corresponding block, and wherein a temporal motion candidate is applied when the first reference picture type and the second reference picture type are short-term reference pictures.
  2. The method of claim 1, wherein the first reference picture is restricted to belonging to the same layer as a current picture to which the current block belongs, and the second reference picture is restricted to belonging to the same layer as a picture to which the inter-layer corresponding block belongs.
  3. The method of claim 1, wherein the motion vector is determined based on a temporal motion candidate, wherein a reference picture type indicates one of a short-term reference picture type, a long-term reference picture type, and an inter-layer reference picture type, and wherein the temporal motion candidate is set to a value indicating that the temporal motion candidate is not used, based on at least one of the first reference picture type and the second reference picture type being the inter-layer reference picture type.
  4. The method of claim 1, wherein the first reference picture is restricted to belonging to a different layer from a current picture to which the current block belongs, and the second reference picture is restricted to belonging to a different layer from a picture to which the inter-layer corresponding block belongs.
  5. The method of claim 1, wherein the motion vector is determined based on a temporal motion candidate, wherein a reference picture type indicates one of a short-term reference picture type, a long-term reference picture type, and an inter-layer reference picture type, and wherein the temporal motion candidate is set to a value indicating that the temporal motion candidate is not used, based on the first reference picture type and the second reference picture type having different values.
  6. The method of claim 1, wherein the motion vector is determined based on a temporal motion candidate, and wherein the temporal motion candidate is determined based on whether the second reference picture type is a reference picture type that refers to a long-term reference picture of the same layer.
  7. The method of claim 6, wherein the motion vector is determined based on a temporal motion candidate, wherein the temporal motion candidate is derived based on neither the first reference picture type nor the second reference picture type being a reference picture type that refers to a long-term reference picture of the same layer, and wherein the temporal motion candidate is derived by applying scaling based on the inter-layer corresponding block and the second reference picture belonging to the same layer.
  8. The method of claim 1, wherein the motion vector is determined based on a temporal motion candidate, wherein a reference picture type indicates one of a short-term reference picture type, a long-term reference picture type, and an inter-layer reference picture type, and wherein the temporal motion candidate is set to a value indicating that the temporal motion candidate is not used, based on the first reference picture type and the second reference picture type having different values and neither the first reference picture type nor the second reference picture type being the inter-layer reference picture type.
  9. The method of claim 1, wherein the motion vector is determined based on a motion vector offset, and wherein the motion vector offset is determined based on whether the first reference picture type is an inter-layer reference picture type.
  10. The method of claim 9, wherein whether the first reference picture type is the inter-layer reference picture type is identified based on a difference in picture order count (POC) between a current picture to which the current block belongs and the first reference picture being 0.
  11. The method of claim 10, wherein a value of the motion vector offset for the first reference picture is determined to be a positive value based on the difference in POC between the current picture to which the current block belongs and the first reference picture being 0.
  12. The method of claim 10, wherein a value of the motion vector offset for the first reference picture is determined to be 0 based on the difference in POC between the current picture to which the current block belongs and the first reference picture being 0.
  13. An image decoding apparatus comprising a memory and at least one processor, wherein the at least one processor is configured to: determine a motion vector; and decode a current block based on the motion vector, wherein the motion vector is determined based on at least one of a first reference picture type of a first reference picture corresponding to the current block and a second reference picture type of a second reference picture corresponding to an inter-layer corresponding block, and wherein a temporal motion candidate is applied when the first reference picture type and the second reference picture type are short-term reference pictures.
  14. An image encoding method performed by an image encoding apparatus, the method comprising: determining a motion vector; and encoding a current block based on the motion vector, wherein the motion vector is determined based on at least one of a first reference picture type of a first reference picture corresponding to the current block and a second reference picture type of a second reference picture corresponding to an inter-layer corresponding block, and wherein a temporal motion candidate is applied when the first reference picture type and the second reference picture type are short-term reference pictures.
  15. A method of transmitting a bitstream generated by the image encoding method of claim 14.
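The reference-picture-type checks recited in the claims can be sketched in code. The sketch below is illustrative only, under stated assumptions: the names (`RefPicType`, `tmvp_available`, `mmvd_offset`) are hypothetical and do not come from the patent or any codec specification, and the MMVD function follows the claim 12 variant (offset set to 0 when the POC difference is 0, i.e., when the first reference is treated as an inter-layer reference picture); claim 11 recites an alternative in which a positive value is used.

```python
from enum import Enum, auto


class RefPicType(Enum):
    """Reference picture types recited in claims 3, 5, and 8 (illustrative names)."""
    SHORT_TERM = auto()   # short-term reference picture in the same layer
    LONG_TERM = auto()    # long-term reference picture in the same layer
    INTER_LAYER = auto()  # reference picture belonging to a different layer


def tmvp_available(first_type: RefPicType, second_type: RefPicType) -> bool:
    """Claims 1, 3, 5, 8: the temporal motion candidate is marked unused when
    either reference picture type is inter-layer or the two types differ; it
    is applied only when both types are short-term reference pictures."""
    if RefPicType.INTER_LAYER in (first_type, second_type):
        return False  # claim 3: at least one type is inter-layer
    if first_type != second_type:
        return False  # claim 5: the two types have different values
    return first_type == RefPicType.SHORT_TERM  # claim 1: both short-term


def mmvd_offset(poc_current: int, poc_first_ref: int, base_offset: int) -> int:
    """Claims 9, 10, 12: a zero POC difference between the current picture and
    the first reference picture identifies an inter-layer reference picture,
    in which case the motion vector offset is determined to be 0."""
    if poc_current - poc_first_ref == 0:  # claim 10: inter-layer detection
        return 0                          # claim 12 variant
    return base_offset


# Usage: TMVP applies only when both references are same-layer short-term.
print(tmvp_available(RefPicType.SHORT_TERM, RefPicType.SHORT_TERM))   # True
print(tmvp_available(RefPicType.SHORT_TERM, RefPicType.INTER_LAYER))  # False
```

Note that the long-term/long-term case also yields `False`, consistent with claim 1 requiring both types to be short-term reference pictures.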

Description

The present disclosure relates to an image encoding/decoding method and apparatus and, more specifically, to an image encoding/decoding method and apparatus for determining motion information based on whether a reference picture is an inter-layer reference picture, and to a method of transmitting a bitstream generated by the image encoding method/apparatus of the present disclosure.

Recently, demand for high-resolution, high-quality video, such as HD (High Definition) and UHD (Ultra High Definition) video, has been increasing in various fields. As video data becomes higher in resolution and quality, the amount of information or bits to be transmitted increases relative to conventional video data, which raises transmission and storage costs. Accordingly, high-efficiency video compression technology is required to effectively transmit, store, and reproduce high-resolution, high-quality video information.

  • FIG. 1 is a schematic diagram illustrating a video coding system to which an embodiment of the present disclosure can be applied.
  • FIG. 2 is a schematic diagram illustrating an image encoding device to which an embodiment of the present disclosure can be applied.
  • FIG. 3 is a schematic diagram illustrating an image decoding device to which an embodiment of the present disclosure can be applied.
  • FIGS. 4 and 5 illustrate examples of picture decoding and encoding procedures according to one embodiment.
  • FIG. 6 illustrates the hierarchical structure of a coded image according to one embodiment.
  • FIGS. 7 and 8 illustrate multi-layer based encoding and decoding.
  • FIGS. 9 to 15 illustrate a method of deriving motion information according to one embodiment.
  • FIGS. 16 to 32 illustrate embodiments of deriving a TMVP (temporal motion vector predictor).
  • FIGS. 33 to 42 illustrate embodiments of deriving an MMVD (merge mode with motion vector difference) offset.
  • FIGS. 43 and 44 illustrate a decoding method and an encoding method according to one embodiment.
  • FIG. 45 illustrates a content streaming system to which an embodiment of the present disclosure can be applied.

Hereinafter, embodiments of the present disclosure are described in detail with reference to the attached drawings so that those skilled in the art can easily implement them. However, the present disclosure may be embodied in various different forms and is not limited to the embodiments described herein. In describing the embodiments of the present disclosure, detailed descriptions of known configurations or functions are omitted where they could obscure the essence of the present disclosure. Additionally, parts of the drawings unrelated to the description of the present disclosure have been omitted, and similar parts are denoted by similar reference numerals.

In the present disclosure, when a component is described as being "connected," "combined," or "joined" with another component, this may include not only a direct connection but also an indirect connection in which another component exists in between. Furthermore, when a component is described as "comprising" or "having" another component, this means that, unless specifically stated otherwise, it does not exclude other components but may include additional components. In the present disclosure, terms such as "first" and "second" are used solely to distinguish one component from another and do not limit the order or importance of the components unless specifically stated otherwise. Accordingly, within the scope of the present disclosure, a first component in one embodiment may be referred to as a second component in another embodiment, and likewise, a second component in one embodiment may be referred to as a first component in another embodiment.

In the present disclosure, components that are distinguished from one another are intended to clearly describe their respective features, and this does not mean that the components are necessarily separate. That is, multiple components may be integrated into a single hardware or software unit, or a single component may be distributed into multiple hardware or software units. Accordingly, unless otherwise noted, such integrated or distributed embodiments are included within the scope of the present disclosure. In the present disclosure, the components described in the various embodiments are not necessarily essential components, and some may be optional components. Accordingly, embodiments consisting of a subset of the components described in one embodiment are also included within the scope of the present disclosure. Furthermore, embodiments including other components in addition to the components described in the various embodiments are also included within the scope of the present disclosure.