CN-122029806-A - Image or video processing method, encoder, decoder and encoding/decoding system

CN 122029806 A

Abstract

The invention discloses an image or video processing method, an encoder, a decoder and a codec system. The method comprises: obtaining a first candidate parameter list, wherein the first candidate parameter list is a candidate parameter list of a cross-component prediction fusion mode CCPMERGE, the first candidate parameter list comprises a plurality of candidate parameters, and at least one of the plurality of candidate parameters carries parameters of a prediction enhancement filter (PBF); and predicting chroma information of a current block in an image according to a selected candidate parameter to generate a chroma prediction block. The invention can simultaneously retain the advantages of CCPMERGE and the prediction enhancement filter PBF, and improve coding and decoding performance.

Inventors

  • QIN HONGDONG
  • DING KEQIN
  • XU ZHUOWEI

Assignees

  • Shenzhen TCL New Technology Co., Ltd. (深圳TCL新技术有限公司)

Dates

Publication Date
2026-05-12
Application Date
2023-10-06

Claims (20)

  1. An image or video processing method, wherein the method comprises: obtaining a first candidate parameter list, wherein the first candidate parameter list is a candidate parameter list of a cross-component prediction fusion mode CCPMERGE, the first candidate parameter list comprises a plurality of candidate parameters, and at least one of the plurality of candidate parameters carries parameters of a prediction enhancement filter (PBF); and predicting chroma information of a current block in an image according to a selected candidate parameter to generate a chroma prediction block.
  2. The method of claim 1, wherein the method further comprises: reordering the candidate parameters in the first candidate parameter list according to a certain rule to obtain a second candidate parameter list; wherein the step of generating a chroma prediction block comprises: generating the chroma prediction block according to a candidate parameter selected from the second candidate parameter list.
  3. The method according to claim 1 or 2, wherein the method further comprises: truncating the first candidate parameter list or the second candidate parameter list to obtain a third candidate parameter list; wherein the step of generating a chroma prediction block comprises: generating the chroma prediction block according to a candidate parameter selected from the third candidate parameter list.
  4. The method of claim 1, wherein the method further comprises: reordering the candidate parameters in the first candidate parameter list according to a certain rule and truncating the reordered candidate parameter list; or truncating the first candidate parameter list and reordering the truncated candidate parameter list according to a certain rule.
  5. The method of claim 1, wherein the first candidate parameter list includes identification information of prediction enhancement filtering, the identification information being used to identify whether one or more candidate parameters in the first candidate parameter list carry parameters of or support the prediction enhancement filtering.
  6. The method of claim 1, wherein the method further comprises: determining whether a candidate parameter in the first candidate parameter list carries parameters of or supports the prediction enhancement filtering, wherein the determining is based on at least one of: the coding mode of the candidate parameter; the coding cost of the candidate parameter after the cross-component prediction fusion mode is combined with the prediction enhancement filtering; and statistical characteristics of the prediction block.
  7. The method of claim 6, wherein when the coding mode of the candidate parameter is a multi-model cross-component linear model (MM-CCLM) or a multi-model convolutional cross-component model (MM-CCCM), it is required to determine whether the candidate parameter supports the prediction enhancement filtering; otherwise, it is not required to determine whether the candidate parameter supports the prediction enhancement filtering.
  8. The method of claim 6, wherein the candidate parameter does not use the prediction enhancement filtering when the coding cost of the candidate parameter is less than one or more thresholds, and/or different prediction enhancement filtering is introduced when the coding cost of the candidate parameter falls in different threshold intervals.
  9. The method of claim 8, wherein the coding cost is calculated as a sum of absolute differences between the actual reconstructed chroma reference region and the result of applying the prediction enhancement filtering to the tentative chroma reference region derived using the candidate parameter.
  10. The method of claim 6, wherein the candidate parameter does not use the prediction enhancement filtering when the statistical characteristics of the prediction block are less than one or more thresholds, and/or different prediction enhancement filtering is introduced when the statistical characteristics of the prediction block fall in different threshold intervals.
  11. The method of claim 2, wherein in reordering candidate parameters in the first candidate parameter list, the candidate parameters are uniformly ordered according to coding costs corresponding to candidate parameters that do not use the prediction enhancement filtering and to candidate parameters that use the prediction enhancement filtering.
  12. The method according to claim 2, wherein in reordering candidate parameters in the first candidate parameter list, the candidate parameters are divided into different candidate parameter sets, different ordering rules are selected for the different candidate parameter sets, and finally the candidate parameters of the candidate parameter sets are combined and ordered.
  13. A method according to claim 3, wherein in truncating the first candidate parameter list or the second candidate parameter list, it is determined whether to put a candidate parameter in the first candidate parameter list or the second candidate parameter list into the third candidate parameter list based on a coding cost threshold.
  14. A method according to claim 3, wherein in truncating the first candidate parameter list or the second candidate parameter list, it is determined whether to put a candidate parameter in the first candidate parameter list or the second candidate parameter list into the third candidate parameter list based on a parameter list length.
  15. The method of claim 1, wherein when the selected candidate parameter carries parameters of the prediction enhancement filter PBF, a chroma prediction result is generated based on the selected candidate parameter, and the prediction enhancement filter is applied to the chroma prediction result to generate the chroma prediction block.
  16. The method of claim 1, wherein when the selected candidate parameter carries parameters of the prediction enhancement filter PBF, the prediction enhancement filter is applied to the luma block, and calculations are performed based on the filtered luma block and the selected candidate parameter to generate the chroma prediction block.
  17. An encoder comprising a processor for executing instructions to implement the method of any of claims 1 to 16.
  18. A decoder comprising a processor for executing instructions to implement the method of any of claims 1 to 16.
  19. A codec system comprising encoding means comprising a first processor for executing instructions to implement the method of any one of claims 1 to 16 and decoding means comprising a second processor for executing instructions to implement the method of any one of claims 1 to 16.
  20. An image or video processing method, wherein the method comprises: acquiring statistical information of a current prediction block; updating a filter threshold based on the statistical information and a previously determined filter threshold; selecting a prediction enhancement filter based on the updated filter threshold; and reconstructing the image using the selected prediction enhancement filter.
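The candidate-list handling in claims 2 to 4 and 11 to 14 can be illustrated with a minimal Python sketch. The `Candidate` container, the uniform cost-based ordering, and the two truncation criteria (cost threshold and list length) are illustrative assumptions; the claims do not fix a concrete data layout or cost metric.

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    # Hypothetical container for one CCPMERGE candidate parameter set.
    params: dict
    uses_pbf: bool        # whether this candidate carries PBF parameters
    coding_cost: float    # e.g. SAD-based cost measured on the reference region


def reorder_and_truncate(first_list, max_len=None, cost_threshold=None):
    """Reorder candidates by coding cost, then truncate the list.

    Reordering follows the uniform rule of claim 11: PBF and non-PBF
    candidates are sorted together by their coding costs.  Truncation
    applies a cost threshold (claim 13) and/or a length limit (claim 14).
    """
    # Second candidate parameter list: uniformly ordered by coding cost.
    second_list = sorted(first_list, key=lambda c: c.coding_cost)
    # Third candidate parameter list: truncated by threshold and/or length.
    third_list = second_list
    if cost_threshold is not None:
        third_list = [c for c in third_list if c.coding_cost <= cost_threshold]
    if max_len is not None:
        third_list = third_list[:max_len]
    return third_list
```

For example, with candidates of cost 30, 12 and 55 and a threshold of 50, `reorder_and_truncate(cands, max_len=2, cost_threshold=50.0)` keeps the cost-12 and cost-30 candidates, in that order.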
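The threshold-interval filter selection of claims 8 and 10, and the adaptive threshold update of claim 20, can be sketched as follows. The exponential-moving-average update rule and the filter labels are assumptions for illustration only; the claims specify neither the update formula nor the filter set.

```python
def update_filter_threshold(prev_threshold, block_stats, alpha=0.9):
    # Claim 20 updates the threshold from the block statistics and the
    # previously determined threshold; an exponential moving average is
    # one plausible (assumed) realisation of that update.
    return alpha * prev_threshold + (1 - alpha) * block_stats


def select_pbf(thresholds, filters, block_stats):
    """Pick a filter by threshold interval (claims 8/10 semantics).

    Below the lowest threshold no filtering is used; each higher interval
    maps to a different (hypothetical) prediction enhancement filter.
    Expects len(filters) == len(thresholds) + 1.
    """
    for t, f in zip(thresholds, filters):
        if block_stats < t:
            return f
    return filters[-1]
```

With thresholds `[5, 15]` and filters `[None, "weak", "strong"]`, a statistic of 3 selects no filtering, 8 selects the weak filter, and anything above 15 selects the strong filter.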

Description

Image or video processing method, encoder, decoder and encoding/decoding system

Technical Field

The present invention relates to the field of image processing, and in particular to an image or video processing method, an encoder, a decoder, and a codec system.

Background

In image and video compression, an image or a frame of video is typically composed of three color components: a luminance component (Y) and two chrominance components (Cb and Cr). Each component is represented as a data matrix, and the data matrix for each component is partitioned into blocks associated with particular coding parameters. A block on one component may be a square or rectangle whose side lengths are powers of 2, and it has a unique co-located block in each of the other two components at the same spatial position. The encoder operates in a specific coding order, processing the luma component and then the chroma components, starting from the upper-left corner and proceeding left to right, top to bottom.

In a video coding standard such as Versatile Video Coding (VVC), intra prediction refers to predicting a current block using blocks already coded within the same picture. When intra-predicting a current block, the encoder may try a plurality of preset intra-prediction modes one by one, generate prediction blocks from reconstructed values in previously encoded blocks, and compare the prediction blocks to select an optimal prediction mode. The difference between the current block and the best prediction block (the residual) is also encoded. Given the signaling of the best prediction mode and the residual, the decoder can reconstruct the same image or video as the encoder.
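The mode search described above can be sketched in a few lines of Python. The SAD cost and the `predictions` dictionary are simplifying assumptions; a real encoder would use rate-distortion cost and generate each prediction from the reconstructed neighbours itself.

```python
import numpy as np


def sad(block, prediction):
    # Sum of absolute differences between a block and its prediction,
    # computed in a wide integer type to avoid uint8 wrap-around.
    return int(np.abs(block.astype(np.int64) - prediction.astype(np.int64)).sum())


def best_intra_mode(current_block, predictions):
    """Pick the mode whose prediction minimises SAD and return its residual.

    `predictions` maps a mode name to the prediction block that mode would
    generate from previously reconstructed neighbours (assumed precomputed).
    """
    mode = min(predictions, key=lambda m: sad(current_block, predictions[m]))
    residual = current_block.astype(np.int64) - predictions[mode].astype(np.int64)
    return mode, residual
```

The decoder, receiving the chosen mode and the residual, regenerates the same prediction block and adds the residual back to obtain the identical reconstruction.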
When the chrominance components are encoded, the luminance component of the same image has already been encoded, and the reconstructed luminance block may be used as a reference sample for encoding the chrominance components. Accordingly, when a current block of a chrominance component is encoded, it may be predicted by analyzing the relationship between the luminance and chrominance components in nearby encoded blocks and applying that relationship to the co-located luminance block. This method is commonly referred to as cross-component prediction (CCP). However, existing CCP methods still suffer from a number of drawbacks, and improvements are needed.

Disclosure of Invention

The invention provides an image or video processing method, which comprises the steps of: obtaining a first candidate parameter list, wherein the first candidate parameter list is a candidate parameter list of the cross-component prediction fusion mode CCPMERGE, the first candidate parameter list comprises a plurality of candidate parameters, and at least one of the plurality of candidate parameters carries parameters of a prediction enhancement filter (PBF); and predicting chroma information of a current block in an image according to a selected candidate parameter to generate a chroma prediction block. Optionally, the method further comprises: reordering the candidate parameters in the first candidate parameter list according to a certain rule to obtain a second candidate parameter list, wherein the step of generating a chroma prediction block comprises: generating the chroma prediction block according to a candidate parameter selected from the second candidate parameter list.
Optionally, the method further comprises: truncating the first candidate parameter list or the second candidate parameter list to obtain a third candidate parameter list, wherein the step of generating a chroma prediction block comprises: generating the chroma prediction block according to a candidate parameter selected from the third candidate parameter list. Optionally, the method further comprises: reordering the candidate parameters in the first candidate parameter list according to a certain rule and truncating the reordered candidate parameter list; or truncating the first candidate parameter list and reordering the truncated candidate parameter list according to a certain rule. Optionally, the first candidate parameter list includes identification information of prediction enhancement filtering, where the identification information is used to identify whether one or more candidate parameters in the first candidate parameter list carry parameters of the prediction enhancement filtering or support the prediction enhancement filtering. Optionally, the method further comprises: determining whether a candidate parameter in the first candidate parameter list carries parameters of or supports the prediction enhancement filtering, wherein the determining is based on at least one of: the coding mode of the candidate parameter; the coding cost of the candidate parameter after the cross-component prediction fusion mode is combined with the prediction enhancement filtering; and statistical characteristics of the prediction block.