US-12621480-B2 - Encoder, decoder, encoding method, and decoding method

Abstract

A decoder that decodes a current block using a motion vector includes: a processor; and memory. Using the memory, the processor: derives a first candidate vector from one or more candidate vectors of one or more neighboring blocks that neighbor the current block; determines, in a first reference picture for the current block, a first adjacent region that includes a position indicated by the first candidate vector; calculates evaluation values of a plurality of candidate regions included in the first adjacent region; and determines a first motion vector of the current block, based on a first candidate region having a smallest evaluation value among the evaluation values. The first adjacent region is included in a first motion estimation region determined based on the position indicated by the first candidate vector.
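The abstract describes, in effect, a decoder-side motion search: derive a candidate vector from neighboring blocks, evaluate candidate regions in a region around the position that vector indicates, and keep the region with the smallest evaluation value. The sketch below is an illustrative reconstruction, not the patent's implementation: the SAD-based bilateral cost, the symmetric-trajectory assumption, the block size, the search range, and the names `bilateral_cost` and `refine_mv` are all assumptions introduced here.

```python
import numpy as np

def bilateral_cost(ref0, ref1, bx, by, mvx, mvy, bs):
    """Evaluation value of one candidate region: SAD between the region the
    candidate vector points to in ref0 and the region lying on an (assumed
    symmetric) motion trajectory in ref1 (bilateral matching)."""
    h, w = ref0.shape
    x0, y0 = bx + mvx, by + mvy          # candidate region in the first reference
    x1, y1 = bx - mvx, by - mvy          # mirrored region in the second reference
    if not (0 <= x0 <= w - bs and 0 <= y0 <= h - bs and
            0 <= x1 <= w - bs and 0 <= y1 <= h - bs):
        return None                      # candidate falls outside a reference picture
    p0 = ref0[y0:y0 + bs, x0:x0 + bs].astype(np.int32)
    p1 = ref1[y1:y1 + bs, x1:x1 + bs].astype(np.int32)
    return int(np.abs(p0 - p1).sum())

def refine_mv(ref0, ref1, bx, by, cand_mv, bs=4, search_range=1):
    """Evaluate every candidate region in the adjacent region around the
    position indicated by the candidate vector; return (smallest evaluation
    value, motion vector of the winning candidate region)."""
    best_cost, best_mv = float("inf"), cand_mv
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            mvx, mvy = cand_mv[0] + dx, cand_mv[1] + dy
            c = bilateral_cost(ref0, ref1, bx, by, mvx, mvy, bs)
            if c is not None and c < best_cost:
                best_cost, best_mv = c, (mvx, mvy)
    return best_cost, best_mv
```

Note that real bilateral-matching schemes (e.g. FRUC/DMVR in HEVC-era and VVC-era codecs) scale the mirrored vector by the temporal distance of each reference picture; this sketch assumes equal distances for brevity.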

Inventors

  • Takashi Hashimoto
  • Takahiro Nishi
  • Tadamasa Toma
  • Kiyofumi Abe
  • Ryuichi Kanoh

Assignees

  • PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA

Dates

Publication Date
2026-05-05
Application Date
2024-10-30
Priority Date
2017-04-28

Claims (4)

  1. An encoder that encodes a current block using a motion vector, the encoder comprising: a processor; and memory, wherein, using the memory, the processor:
     derives a representative motion vector indicating a representative position based on motion vector candidates included in a merge candidate list, the merge candidate list having candidates derived from motion vectors of blocks that spatially or temporally neighbor the current block;
     determines, in a first reference picture for the current block, a first motion estimation region that includes the representative position indicated by the representative motion vector;
     calculates first evaluation values of a plurality of candidate regions included in the first motion estimation region, the first evaluation values being differences between (i) the plurality of candidate regions and (ii) regions lying along a motion trajectory of the current block in a second reference picture which is different from the first reference picture;
     determines a first adjacent region included in the first motion estimation region, the first adjacent region including a first candidate region and a vicinity of the first candidate region, the first candidate region being the one of the plurality of candidate regions having a smallest first evaluation value among the plurality of candidate regions included in the first motion estimation region;
     determines the motion vector of the current block based on a region having a smallest second evaluation value among a plurality of regions included in the first adjacent region;
     generates a reference image of the current block based on the motion vector; and
     generates a bitstream which includes information indicating that a mode for motion estimation is applied.
  2. A decoder that decodes a current block using a motion vector, the decoder comprising: a processor; and memory, wherein, using the memory, the processor:
     derives a representative motion vector indicating a representative position based on motion vector candidates included in a merge candidate list, the merge candidate list having candidates derived from motion vectors of blocks that spatially or temporally neighbor the current block;
     determines, in a first reference picture for the current block, a first motion estimation region that includes the representative position indicated by the representative motion vector;
     calculates first evaluation values of a plurality of candidate regions included in the first motion estimation region, the first evaluation values being differences between (i) the plurality of candidate regions and (ii) regions lying along a motion trajectory of the current block in a second reference picture which is different from the first reference picture;
     determines a first adjacent region included in the first motion estimation region, the first adjacent region including a first candidate region and a vicinity of the first candidate region, the first candidate region being the one of the plurality of candidate regions having a smallest first evaluation value among the plurality of candidate regions included in the first motion estimation region;
     determines the motion vector of the current block based on a region having a smallest second evaluation value among a plurality of regions included in the first adjacent region; and
     generates a reference image of the current block based on the motion vector.
  3. An encoding method for encoding a current block using a motion vector, the encoding method comprising:
     deriving a representative motion vector indicating a representative position based on motion vector candidates included in a merge candidate list, the merge candidate list having candidates derived from motion vectors of blocks that spatially or temporally neighbor the current block;
     determining, in a first reference picture for the current block, a first motion estimation region that includes the representative position indicated by the representative motion vector;
     calculating first evaluation values of a plurality of candidate regions included in the first motion estimation region, the first evaluation values being differences between (i) the plurality of candidate regions and (ii) regions lying along a motion trajectory of the current block in a second reference picture which is different from the first reference picture;
     determining a first adjacent region included in the first motion estimation region, the first adjacent region including a first candidate region and a vicinity of the first candidate region, the first candidate region being the one of the plurality of candidate regions having a smallest first evaluation value among the plurality of candidate regions included in the first motion estimation region;
     determining the motion vector of the current block based on a region having a smallest second evaluation value among a plurality of regions included in the first adjacent region;
     generating a reference image of the current block based on the motion vector; and
     generating a bitstream which includes information indicating that a mode for motion estimation is applied.
  4. A decoding method for decoding a current block using a motion vector, the decoding method comprising:
     deriving a representative motion vector indicating a representative position based on motion vector candidates included in a merge candidate list, the merge candidate list having candidates derived from motion vectors of blocks that spatially or temporally neighbor the current block;
     determining, in a first reference picture for the current block, a first motion estimation region that includes the representative position indicated by the representative motion vector;
     calculating first evaluation values of a plurality of candidate regions included in the first motion estimation region, the first evaluation values being differences between (i) the plurality of candidate regions and (ii) regions lying along a motion trajectory of the current block in a second reference picture which is different from the first reference picture;
     determining a first adjacent region included in the first motion estimation region, the first adjacent region including a first candidate region and a vicinity of the first candidate region, the first candidate region being the one of the plurality of candidate regions having a smallest first evaluation value among the plurality of candidate regions included in the first motion estimation region;
     determining the motion vector of the current block based on a region having a smallest second evaluation value among a plurality of regions included in the first adjacent region; and
     generating a reference image of the current block based on the motion vector.
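
All four claims share a two-stage structure: a first pass over candidate regions in the motion estimation region selects the first candidate region, and a second pass over the adjacent region (that candidate plus its vicinity) yields the final motion vector. Below is a minimal sketch of that control flow only, with the evaluation value abstracted into a caller-supplied `cost` function; the function name `two_stage_search` and the coarse/fine search ranges are illustrative assumptions, not values from the patent.

```python
def two_stage_search(cost, center, coarse_range=4, coarse_step=2, fine_range=1):
    """Stage 1: coarse scan of candidate regions inside the motion estimation
    region around `center` (the representative position).  Stage 2: full-pel
    re-scan of the adjacent region, i.e. the best coarse candidate and its
    vicinity.  Returns (smallest second evaluation value, final motion vector)."""
    def argmin_over(offsets, origin):
        best_cost, best_mv = float("inf"), origin
        for dx, dy in offsets:
            mv = (origin[0] + dx, origin[1] + dy)
            c = cost(mv)
            if c < best_cost:
                best_cost, best_mv = c, mv
        return best_cost, best_mv

    coarse = [(dx, dy)
              for dy in range(-coarse_range, coarse_range + 1, coarse_step)
              for dx in range(-coarse_range, coarse_range + 1, coarse_step)]
    _, first_candidate = argmin_over(coarse, center)   # first candidate region
    fine = [(dx, dy)
            for dy in range(-fine_range, fine_range + 1)
            for dx in range(-fine_range, fine_range + 1)]
    return argmin_over(fine, first_candidate)          # search the adjacent region
```

The coarse step trades fewer cost evaluations against the risk of stepping over a narrow minimum, which is why the claims follow the coarse pass with a dense search of the vicinity of the winner.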

Description

CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 18/390,148 filed on Dec. 20, 2023, which is a continuation of U.S. application Ser. No. 18/125,816, now U.S. Pat. No. 11,895,316, filed on Mar. 24, 2023, which is a continuation of U.S. application Ser. No. 17/865,659, now U.S. Pat. No. 11,653,018, filed on Jul. 15, 2022, which is a continuation of U.S. application Ser. No. 17/130,298, now U.S. Pat. No. 11,425,409, filed on Dec. 22, 2020, which is a continuation of U.S. application Ser. No. 16/597,356, now U.S. Pat. No. 10,911,770, filed on Oct. 9, 2019, which is a continuation of PCT International Patent Application Number PCT/JP2018/014363 filed on Apr. 4, 2018, claiming the benefit of priority of U.S. Provisional Application No. 62/485,072 filed on Apr. 13, 2017 and Japanese Patent Application No. 2017-090685 filed on Apr. 28, 2017. The entire disclosures of the above-identified applications, including the specifications, drawings, and claims, are incorporated herein by reference in their entirety.

BACKGROUND

1. Technical Field

The present disclosure relates to an encoder, a decoder, an encoding method, and a decoding method.

2. Description of the Related Art

The video coding standard called High-Efficiency Video Coding (HEVC) is standardized by the Joint Collaborative Team on Video Coding (JCT-VC).

SUMMARY

A decoder according to an aspect of the present disclosure is a decoder that decodes a current block using a motion vector, the decoder including: a processor; and memory.
Using the memory, the processor: derives a first candidate vector from one or more candidate vectors of one or more neighboring blocks that neighbor the current block; determines, in a first reference picture for the current block, a first adjacent region that includes a position indicated by the first candidate vector; calculates evaluation values of a plurality of candidate regions included in the first adjacent region; and determines a first motion vector of the current block, based on a first candidate region having a smallest evaluation value among the evaluation values. The first adjacent region is included in a first motion estimation region determined based on the position indicated by the first candidate vector.

Note that these general and specific aspects may be implemented using a system, a method, an integrated circuit, a computer program, a computer-readable recording medium such as a CD-ROM, or any combination of systems, methods, integrated circuits, computer programs, or recording media.

BRIEF DESCRIPTION OF DRAWINGS

These and other objects, advantages and features of the disclosure will become apparent from the following description thereof taken in conjunction with the accompanying drawings that illustrate a specific embodiment of the present disclosure.

FIG. 1 is a block diagram illustrating a functional configuration of an encoder according to Embodiment 1;
FIG. 2 illustrates one example of block splitting according to Embodiment 1;
FIG. 3 is a chart indicating transform basis functions for each transform type;
FIG. 4A illustrates one example of a filter shape used in ALF;
FIG. 4B illustrates another example of a filter shape used in ALF;
FIG. 4C illustrates another example of a filter shape used in ALF;
FIG. 5A illustrates 67 intra prediction modes used in intra prediction;
FIG. 5B is a flow chart for illustrating an outline of a prediction image correction process performed via OBMC processing;
FIG. 5C is a conceptual diagram for illustrating an outline of a prediction image correction process performed via OBMC processing;
FIG. 5D illustrates one example of FRUC;
FIG. 6 is for illustrating pattern matching (bilateral matching) between two blocks along a motion trajectory;
FIG. 7 is for illustrating pattern matching (template matching) between a template in the current picture and a block in a reference picture;
FIG. 8 is for illustrating a model assuming uniform linear motion;
FIG. 9A is for illustrating deriving a motion vector of each sub-block based on motion vectors of neighboring blocks;
FIG. 9B is for illustrating an outline of a process for deriving a motion vector via merge mode;
FIG. 9C is a conceptual diagram for illustrating an outline of DMVR processing;
FIG. 9D is for illustrating an outline of a prediction image generation method using a luminance correction process performed via LIC processing;
FIG. 10 is a block diagram illustrating a functional configuration of a decoder according to Embodiment 1;
FIG. 11 is a block diagram illustrating an internal configuration of an inter predictor of the encoder according to Embodiment 1;
FIG. 12 illustrates examples of positions of motion estimation region information in bitstreams in Embodiment 1;
FIG. 13 is a flowchart illustrating processing performed by inter predictors of the encoder and the decoder according to Embodiment 1;
FIG. 14 illustrates an example of a candidate list in Embodiment 1;
FIG. 15 ill