WO-2026092573-A1 - METHODS AND APPARATUS OF OVERLAPPED BLOCK REFINEMENT FOR INTRA PREDICTION IN VIDEO CODING
Abstract
A method and apparatus of video coding using refined boundary prediction for intra coded blocks are disclosed. According to this method, input data comprising a current block, a current subblock, a neighbouring block, or a neighbouring subblock is received. A current intra predictor is generated for the current block or the current subblock. A refined intra predictor is generated for the current block or the current subblock in a boundary area of the current block or the current subblock by blending the current intra predictor and a target neighbouring predictor derived from the neighbouring block or the neighbouring subblock. The current block or the current subblock is encoded or decoded using the refined intra predictor.
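The boundary blending described in the abstract can be sketched as follows. This is a minimal illustration, not the patented method itself: the blending weights, the boundary depth of two rows, and the assumption that the neighbouring predictor comes from the block above are all illustrative choices, not values taken from the claims.

```python
import numpy as np

def refine_boundary(cur_pred, nbr_pred, depth=2, weights=(0.75, 0.875)):
    """Blend the first `depth` rows of the current intra predictor with a
    predictor derived from the above neighbour (illustrative weights).

    The weight on the current predictor grows with distance from the
    shared boundary, so the refinement fades out inside the block.
    """
    out = cur_pred.astype(np.float64).copy()
    for i in range(depth):
        w = weights[i]  # weight applied to the current predictor at row i
        out[i, :] = w * cur_pred[i, :] + (1.0 - w) * nbr_pred[i, :]
    return np.rint(out).astype(cur_pred.dtype)
```

For a 4x4 block whose current predictor is flat at 100 and whose neighbouring predictor is flat at 60, the first row blends to 90, the second to 95, and the remaining rows keep the current predictor unchanged.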
Inventors
- LIN, YU-CHENG
- TSAI, CHIA-MING
- CHUANG, TZU-DER
- HSU, CHIH-WEI
- CHEN, CHING-YEH
- CHEN, YI-WEN
- HUANG, YU-WEN
Assignees
- MEDIATEK INC.
Dates
- Publication Date: 2026-05-07
- Application Date: 2025-10-30
- Priority Date: 2024-10-30
Claims (17)
- A method of video coding, the method comprising: receiving input data comprising a current block, a current subblock, a neighbouring block, or a neighbouring subblock; generating a current intra predictor for the current block or the current subblock; generating a refined intra predictor for the current block or the current subblock in a boundary area of the current block or the current subblock by blending the current intra predictor and a target neighbouring predictor derived from the neighbouring block or the neighbouring subblock; and encoding or decoding the current block or the current subblock using the refined intra predictor.
- The method of Claim 1, wherein the current block or the current subblock is coded in regular intra prediction, DIMD (Decoder-side Intra Mode Derivation), TIMD (Template-based Intra Mode Derivation), MIP (Matrix-based Intra Prediction), or matrix-based intra prediction replacing conventional intra modes.
- The method of Claim 1, wherein the refined intra predictor is used for encoding or decoding the current block or the current subblock when the neighbouring block or the neighbouring subblock is coded in IBC (Intra Block Copy) mode or IntraTMP (Intra Template Matching Prediction) mode.
- The method of Claim 1, wherein the refined intra predictor is used for encoding or decoding the current block or the current subblock when the neighbouring block or the neighbouring subblock is coded in inter prediction mode.
- The method of Claim 1, wherein the refined intra predictor is used for encoding or decoding the current block or the current subblock when the neighbouring block or the neighbouring subblock has a motion vector, a block vector, or a motion shift from a template-related prediction mode.
- The method of Claim 1, wherein whether the refined intra predictor is used for encoding or decoding the current block is according to a smallest neighbouring prediction mode unit.
- The method of Claim 6, wherein the smallest neighbouring prediction mode unit corresponds to an intra 4x4 unit, inter 4x4 unit, IBC 4x4 unit or IntraTMP 4x4 unit.
- The method of Claim 1, wherein whether the refined intra predictor is used for encoding or decoding is according to a smallest current prediction mode unit.
- The method of Claim 8, wherein the smallest current prediction mode unit corresponds to an intra 4x4 unit, inter 4x4 unit, IBC 4x4 unit or IntraTMP 4x4 unit.
- The method of Claim 1, wherein when the current block or the current subblock is coded in intra prediction mode, and the neighbouring block or the neighbouring subblock is coded in an intra prediction mode, a decoder-side intra mode derivation method is used to derive the intra prediction mode for the neighbouring block or the neighbouring subblock and to generate the target neighbouring predictor.
- The method of Claim 1, wherein when the current block or the current subblock is coded in an intra prediction mode, and the neighbouring block or the neighbouring subblock is also coded in the intra prediction mode, a Matrix-based Intra Prediction Replacing Conventional Intra Modes method is used to derive the target neighbouring predictor.
- The method of Claim 1, wherein when the current block or the current subblock is coded in an intra prediction mode, and the neighbouring block or the neighbouring subblock is also coded in the intra prediction mode, a TIMD (Template-based Intra Mode Derivation) method is used to derive the target neighbouring predictor.
- The method of Claim 1, wherein when the current block or the current subblock is coded in an intra prediction mode, and the neighbouring block or the neighbouring subblock is also coded in the intra prediction mode, an intra fusion method is used to derive the target neighbouring predictor.
- The method of Claim 1, wherein when the current block or the current subblock is coded in intra prediction mode, and an intra-coded block, an inter coded block, an IBC coded block, or an IntraTMP coded block exists in neighbouring blocks, the refined intra predictor is used for encoding or decoding the current block or the current subblock.
- The method of Claim 1, wherein when the current block or the current subblock is coded in intra prediction mode, more neighbouring block positions or more neighbouring subblock positions along the current block are checked for neighbouring intra coded blocks, neighbouring IBC coded blocks, or neighbouring IntraTMP coded blocks to determine whether the refined intra predictor is used for encoding or decoding the current block.
- The method of Claim 1, wherein when luma mapping with chroma scaling is used, a neighbouring intra predictor is generated for the neighbouring block or the neighbouring subblock, and the generated neighbouring intra predictor is converted from a reshaped domain and then blended with the current intra predictor in an original domain.
- An apparatus for video coding, the apparatus comprising one or more electronics or processors arranged to: receive input data comprising a current block, a current subblock, a neighbouring block, or a neighbouring subblock; generate a current intra predictor for the current block or the current subblock; generate a refined intra predictor for the current block or the current subblock in a boundary area of the current block or the current subblock by blending the current intra predictor and a target neighbouring predictor derived from the neighbouring block or the neighbouring subblock; and encode or decode the current block or the current subblock using the refined intra predictor.
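The luma-mapping claim above can be illustrated with a small sketch. The one-dimensional signals, the identity-style inverse lookup table, and the fixed blending weight are assumptions for illustration only; an actual LMCS inverse reshaper is derived from signalled piecewise-linear model parameters.

```python
def blend_with_lmcs(cur_pred_orig, nbr_pred_reshaped, inv_lut, w=0.75):
    """Sketch of the LMCS-aware blending: the neighbouring predictor is
    first mapped from the reshaped domain back to the original domain
    through an inverse reshaping LUT, then blended with the current
    predictor, which is already in the original domain.
    """
    # Map each reshaped-domain sample back to the original domain.
    nbr_orig = [inv_lut[s] for s in nbr_pred_reshaped]
    # Blend in the original domain with an illustrative fixed weight.
    return [round(w * c + (1 - w) * n) for c, n in zip(cur_pred_orig, nbr_orig)]
```

With an identity LUT, blending a flat current predictor of 100 against a flat neighbouring predictor of 60 at weight 0.75 yields 90, matching a plain weighted average.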
Description
METHODS AND APPARATUS OF OVERLAPPED BLOCK REFINEMENT FOR INTRA PREDICTION IN VIDEO CODING

CROSS REFERENCE TO RELATED APPLICATIONS
The present invention is a non-Provisional Application of and claims priority to U.S. Provisional Patent Application No. 63/713,611, filed on October 30, 2024. The U.S. Provisional Patent Application is hereby incorporated by reference in its entirety.

FIELD OF THE INVENTION
The present invention relates to video coding systems using Overlapped Block Motion Compensation (OBMC). In particular, the present invention relates to applying overlapped block prediction to intra coded blocks.

BACKGROUND AND RELATED ART
Versatile Video Coding (VVC) is the latest international video coding standard developed by the Joint Video Experts Team (JVET) of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). The standard has been published as an ISO standard: ISO/IEC 23090-3:2021, Information technology - Coded representation of immersive media - Part 3: Versatile video coding, published Feb. 2021. VVC was developed based on its predecessor HEVC (High Efficiency Video Coding) by adding more coding tools to improve coding efficiency and also to handle various types of video sources, including 3-dimensional (3D) video signals.

Fig. 1A illustrates an exemplary adaptive Inter/Intra video encoding system incorporating loop processing. For Intra Prediction 110, the prediction data is derived based on previously coded video data in the current picture. For Inter Prediction 112, Motion Estimation (ME) is performed at the encoder side and Motion Compensation (MC) is performed based on the result of ME to provide prediction data derived from other picture(s) and motion data. Switch 114 selects Intra Prediction 110 or Inter Prediction 112, and the selected prediction data is supplied to Adder 116 to form prediction errors, also called residues.
The prediction error is then processed by Transform (T) 118 followed by Quantization (Q) 120. The transformed and quantized residues are then coded by Entropy Encoder 122 to be included in a video bitstream corresponding to the compressed video data. The bitstream associated with the transform coefficients is then packed with side information, such as motion and coding modes associated with Intra Prediction and Inter Prediction, and other information such as parameters associated with loop filters applied to the underlying image area. The side information associated with Intra Prediction 110, Inter Prediction 112 and In-loop Filter 130 is provided to Entropy Encoder 122, as shown in Fig. 1A. When an Inter-prediction mode is used, a reference picture or pictures have to be reconstructed at the encoder end as well. Consequently, the transformed and quantized residues are processed by Inverse Quantization (IQ) 124 and Inverse Transformation (IT) 126 to recover the residues. The residues are then added back to prediction data 136 at Reconstruction (REC) 128 to reconstruct the video data. The reconstructed video data may be stored in Reference Picture Buffer 134 and used for prediction of other frames.

As shown in Fig. 1A, incoming video data undergoes a series of processing steps in the encoding system. The reconstructed video data from REC 128 may be subject to various impairments due to this series of processing steps. Accordingly, In-loop Filter 130 is often applied to the reconstructed video data before the reconstructed video data are stored in Reference Picture Buffer 134, in order to improve video quality. For example, a deblocking filter (DF), Sample Adaptive Offset (SAO) and Adaptive Loop Filter (ALF) may be used. The loop filter information may need to be incorporated into the bitstream so that a decoder can properly recover the required information. Therefore, loop filter information is also provided to Entropy Encoder 122 for incorporation into the bitstream. In Fig. 1A, Loop Filter 130 is applied to the reconstructed video before the reconstructed samples are stored in Reference Picture Buffer 134.

The system in Fig. 1A is intended to illustrate an exemplary structure of a typical video encoder. It may correspond to the High Efficiency Video Coding (HEVC) system, VP8, VP9, H.264 or VVC. The decoder, as shown in Fig. 1B, can use similar functional blocks, or a portion of the same functional blocks, as the encoder, except for Transform 118 and Quantization 120, since the decoder only needs Inverse Quantization 124 and Inverse Transform 126. Instead of Entropy Encoder 122, the decoder uses an Entropy Decoder 140 to decode the video bitstream into quantized transform coefficients and needed coding information (e.g. ILPF information, Intra prediction information and Inter prediction information). The Intra Prediction 150 at the decoder side does not need to perform the mode search. Instead, the decoder only needs to generate Intra prediction according to the Intra prediction information received from Entropy Decoder 140. Furthermore, for Inter prediction, the decode
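The T/Q followed by IQ/IT and REC path described above can be sketched with a toy one-dimensional model. The transform step is omitted for brevity and the quantization step size is an illustrative assumption, not a value from Fig. 1A; real codecs apply 2-D transforms before scalar quantization.

```python
def encode_block(block, pred, q_step=4):
    """Toy scalar version of the encoder-side reconstruction loop in
    Fig. 1A (transform omitted; q_step is an illustrative value)."""
    # Adder 116: form the residues (prediction errors).
    residues = [s - p for s, p in zip(block, pred)]
    # Q 120: quantize the residues to integer levels.
    levels = [round(r / q_step) for r in residues]
    # IQ 124 / IT 126: recover approximate residues from the levels.
    recon_res = [lvl * q_step for lvl in levels]
    # REC 128: add recovered residues back to the prediction data.
    recon = [p + r for p, r in zip(pred, recon_res)]
    return levels, recon
```

This also shows why the encoder must run IQ/IT itself: the reconstructed samples (not the originals) are what a decoder will hold in its reference picture buffer, so the encoder must predict from the same reconstruction.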