EP-4742661-A2 - METHOD FOR ENCODING/DECODING IMAGE SIGNAL AND DEVICE THEREFOR

EP 4742661 A2

Abstract

An image decoding method according to the present invention may comprise the steps of: dividing a coding block into a first prediction unit and a second prediction unit; deriving a merge candidate list for the coding block; deriving first motion information for the first prediction unit and second motion information for the second prediction unit by means of the merge candidate list; and on the basis of the first motion information and the second motion information, acquiring a prediction sample within the coding block.

Inventors

  • LEE, BAE KEUN

Assignees

  • GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.

Dates

Publication Date
2026-05-13
Application Date
2019-11-08

Claims (17)

  1. A video decoding method comprising: determining a first prediction unit and a second prediction unit of a coding block; deriving a merge candidate list for the coding block; deriving first motion information for the first prediction unit from a first merge candidate in the merge candidate list, and deriving second motion information for the second prediction unit from a second merge candidate in the merge candidate list; and determining a prediction sample in the coding block based on a weighted sum of a first prediction sample derived based on the first motion information and a second prediction sample derived based on the second motion information; wherein first index information for determining the first merge candidate and second index information for determining the second merge candidate are obtained by decoding from a bitstream, and when a value of the second index information is equal to or greater than a value of the first index information, the second merge candidate is determined as a merge candidate indicated by an index obtained by adding 1 to the value of the second index information.
  2. The method according to claim 1, wherein when the value of the second index information is less than the value of the first index information, the second merge candidate is determined as a merge candidate indicated by the second index information.
  3. The method according to claim 1, wherein a first weighting value applied to the first prediction sample is determined based on an x-axis coordinate and a y-axis coordinate of the prediction sample.
  4. The method according to claim 3, wherein a second weighting value applied to the second prediction sample is derived by subtracting the first weighting value from a constant value.
  5. The method according to claim 1, wherein the determining the first prediction unit and the second prediction unit of the coding block comprises: applying partitioning to the coding block to determine the first prediction unit and the second prediction unit.
  6. The method according to claim 5, wherein the position for partitioning the coding block is determined by information decoded from the bitstream.
  7. The method according to claim 5, wherein whether or not to apply partitioning to the coding block to determine the first prediction unit and the second prediction unit is determined based on conditions at least comprising a slice type, the size of the coding block, the shape of the coding block and the prediction mode of the coding block.
  8. A video encoding method comprising: determining a first prediction unit and a second prediction unit of a coding block; deriving a merge candidate list for the coding block; deriving first motion information for the first prediction unit from a first merge candidate in the merge candidate list, and deriving second motion information for the second prediction unit from a second merge candidate in the merge candidate list; determining a prediction sample in the coding block based on a weighted sum of a first prediction sample derived based on the first motion information and a second prediction sample derived based on the second motion information; and encoding first index information for determining the first merge candidate and second index information for determining the second merge candidate into a bitstream, and when an index of the second merge candidate is equal to or greater than an index of the first merge candidate, the second index information is encoded using a value obtained by subtracting 1 from the index of the second merge candidate.
  9. The method according to claim 8, wherein when the index of the second merge candidate is less than the index of the first merge candidate, the second index information is encoded using a value equal to the index of the second merge candidate.
  10. The method according to claim 8, wherein a first weighting value applied to the first prediction sample is determined based on an x-axis coordinate and a y-axis coordinate of the prediction sample.
  11. The method according to claim 10, wherein a second weighting value applied to the second prediction sample is derived by subtracting the first weighting value from a constant value.
  12. The method according to claim 8, wherein the determining the first prediction unit and the second prediction unit of the coding block comprises: applying partitioning to the coding block to determine the first prediction unit and the second prediction unit.
  13. The method according to claim 12, wherein information indicating the position for partitioning the coding block is encoded into the bitstream.
  14. The method according to claim 12, wherein whether or not to apply partitioning to the coding block to determine the first prediction unit and the second prediction unit is determined based on conditions at least comprising a slice type, the size of the coding block, the shape of the coding block and the prediction mode of the coding block.
  15. A video decoding apparatus comprising an inter prediction part configured to execute the method of any of claims 1 to 7.
  16. A video encoding apparatus comprising an inter prediction part configured to execute the method of any of claims 8 to 14.
  17. A computer readable storage medium, storing a computer program and a bitstream, wherein the computer program, when executed by a processor, causes the processor to implement the method according to any of claims 8 to 14 to generate the bitstream.
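The merge-index signalling of claims 1 and 8 and the coordinate-based blending of claims 3 and 4 can be sketched as follows. This is a minimal illustration, not the standard-conformant implementation: the index remapping follows the claim text directly, while the weight formula (a clipped distance to the block diagonal, normalized by the constant 8) is a hypothetical choice, since the claims only state that the first weight depends on the sample's x- and y-coordinates and the second weight is a constant minus the first.

```python
def encode_second_index(first_idx, second_idx):
    # Claim 8: the two merge candidates must differ; if the second index is
    # equal to or greater than the first, signal it minus 1.
    assert first_idx != second_idx
    return second_idx - 1 if second_idx >= first_idx else second_idx

def decode_second_index(first_idx, second_info):
    # Claim 1: if the signalled value is equal to or greater than the first
    # index information, the real candidate index is the value plus 1.
    return second_info + 1 if second_info >= first_idx else second_info

def blend_sample(p1, p2, x, y):
    # Claims 3-4: the first weight depends on (x, y); the second weight is a
    # constant minus the first. The diagonal-distance formula and the
    # constant 8 here are illustrative assumptions.
    d = x - y                        # signed distance proxy to the diagonal
    w1 = max(0, min(8, 4 + d))       # first weight, clipped to [0, 8]
    w2 = 8 - w1                      # constant minus first weight
    return (w1 * p1 + w2 * p2 + 4) >> 3  # rounded normalization by 8
```

Round-tripping the index signalling shows why it saves one codeword: with first_idx = 3, a second candidate index of 5 is encoded as 4 and decoded back to 5, while an index of 2 passes through unchanged.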

Description

This is a divisional application of EP Patent Application No. 23205634.1, filed on November 8, 2019 and entitled "METHOD FOR ENCODING/DECODING IMAGE SIGNAL AND DEVICE THEREFOR".

TECHNICAL FIELD

The present disclosure relates to a video signal encoding and decoding method and an apparatus therefor.

BACKGROUND

As display panels grow larger, ever higher-quality video services are required. The biggest problem of high-definition video services is the significant increase in data volume, and to solve this problem, studies on improving the video compression rate are actively conducted. As a representative example, the Moving Picture Experts Group (MPEG) and the Video Coding Experts Group (VCEG) under the International Telecommunication Union-Telecommunication (ITU-T) formed the Joint Collaborative Team on Video Coding (JCT-VC) in 2009. The JCT-VC proposed High Efficiency Video Coding (HEVC), a video compression standard with about twice the compression performance of H.264/AVC, and it was approved as a standard on January 25, 2013. With the rapid advancement of high-definition video services, the performance of HEVC has gradually revealed its limitations.

US2013202038A1 discloses a video coding device that generates a motion vector (MV) candidate list for a prediction unit (PU) of a coding unit (CU) that is partitioned into four equally-sized PUs. The video coding device converts a bi-directional MV candidate in the MV candidate list into a uni-directional MV candidate. In addition, the video coding device determines a selected MV candidate in the merge candidate list and generates a predictive video block for the PU based at least in part on one or more reference blocks indicated by motion information specified by the selected MV candidate.
US2013114717A1 discloses that, in generating a candidate list for inter prediction video coding, a video coder can perform pruning operations when adding spatial candidates and temporal candidates to a candidate list while not performing pruning operations when adding an artificially generated candidate to the candidate list. The artificially generated candidate can have motion information that is the same as motion information of a spatial candidate or temporal candidate already in the candidate list.

SUMMARY

The invention is defined in the independent claims. Further aspects of the invention are defined in the dependent claims, the drawings and the following description. An object of the present disclosure is to provide a method of applying partitioning to a coding block to obtain a plurality of prediction blocks in encoding/decoding a video signal, and an apparatus for performing the method. Another object of the present disclosure is to provide a method of deriving motion information of each of a plurality of prediction blocks in encoding/decoding a video signal. Another object of the present disclosure is to provide a method of deriving a merge candidate using an inter-region motion information list in encoding/decoding a video signal. The technical problems to be achieved in the present disclosure are not limited to the technical problems mentioned above, and other unmentioned problems may be clearly understood by those skilled in the art from the following description.
A method of decoding/encoding a video signal according to the present disclosure may include the steps of: applying partitioning to a coding block to obtain a first prediction unit and a second prediction unit; deriving a merge candidate list for the coding block; deriving first motion information for the first prediction unit and second motion information for the second prediction unit using the merge candidate list; and obtaining a prediction sample in the coding block based on the first motion information and the second motion information. Here, whether or not to apply partitioning to the coding block is determined based on a size of the coding block; the first motion information for the first prediction unit is derived from a first merge candidate in the merge candidate list, and the second motion information for the second prediction unit is derived from a second merge candidate different from the first merge candidate.

In the video signal encoding and decoding method according to the present disclosure, when at least one among a width and a height of the coding block is greater than a threshold value, partitioning of the coding block may not be allowed.

In the video signal encoding and decoding method according to the present disclosure, the method may further include the step of decoding first index information for specifying the first merge candidate and second index information for specifying the second merge candidate from a bitstream, and when a value of the second index information is equal to or greater than a value of the first index information, the value of the second index information