EP-4742673-A1 - VIDEO ENCODING METHOD AND APPARATUS, VIDEO DECODING METHOD AND APPARATUS, DEVICE, SYSTEM, AND STORAGE MEDIUM
Abstract
The present application provides a video encoding method and apparatus, a video decoding method and apparatus, a device, a system, and a storage medium. When a current chroma block is predicted by using a cross-component prediction mode, cross-component prediction model parameters and a filtering identifier of the current chroma block are determined, the filtering identifier of the current chroma block being determined on the basis of a filtering identifier of an encoded/decoded image block around the current chroma block; on the basis of the cross-component prediction model parameters, cross-component prediction is performed on the current chroma block to obtain a first prediction value of the current chroma block; and on the basis of the filtering identifier and the first prediction value of the current chroma block, a second prediction value of the current chroma block is determined. That is, according to the present application, by filtering a chroma prediction value obtained by means of the cross-component prediction mode, the accuracy of cross-component prediction is improved, and the encoding/decoding performance is improved. In addition, the filtering identifier of the current chroma block inherits the filtering identifier of the encoded/decoded block, and no additional bit consumption needs to be introduced to indicate the filtering identifier, thereby saving codewords and improving the encoding efficiency.
Inventors
- HUANG, Hang
Assignees
- GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
Dates
- Publication Date
- 20260513
- Application Date
- 20230703
Claims (20)
- A video decoding method, comprising: in response to determining that a prediction mode for a current chroma block is a cross-component prediction mode, determining a cross-component prediction model parameter and a filter flag of the current chroma block, wherein the cross-component prediction model parameter is a model parameter used for predicting the current chroma block using the cross-component prediction mode, the filter flag of the current chroma block is determined based on a filter flag of a decoded picture block around the current chroma block, and a filter flag is used to indicate whether to perform filtering on a prediction value obtained based on the cross-component prediction mode; performing cross-component prediction on the current chroma block based on the cross-component prediction model parameter, to obtain a first prediction value of the current chroma block; and determining a second prediction value of the current chroma block based on the first prediction value and the filter flag of the current chroma block.
- The method according to claim 1, wherein determining the cross-component prediction model parameter and the filter flag of the current chroma block comprises: determining the cross-component prediction model parameter and the filter flag of the current chroma block based on cross-component prediction information of the decoded picture block around the current chroma block, wherein the cross-component prediction information comprises a cross-component prediction model parameter and the filter flag of the decoded picture block.
- The method according to claim 2, wherein determining the cross-component prediction model parameter and the filter flag of the current chroma block based on the cross-component prediction information of the decoded picture block around the current chroma block comprises: constructing a first candidate list based on the cross-component prediction information of the decoded picture block around the current chroma block, wherein the first candidate list comprises a plurality of candidate cross-component prediction information sets, and each cross-component prediction information set comprises a candidate cross-component prediction model parameter set and a candidate filter flag; selecting first cross-component prediction information from the plurality of candidate cross-component prediction information sets comprised in the first candidate list; and determining the cross-component prediction model parameter and the filter flag of the current chroma block based on the first cross-component prediction information.
- The method according to claim 3, wherein the decoded picture block comprises a neighboring block of the current chroma block in spatial domain, and constructing the first candidate list based on the cross-component prediction information of the decoded picture block comprises: adding cross-component prediction information of the neighboring block of the current chroma block in spatial domain to the first candidate list according to a first preset order.
- The method according to claim 4, wherein in response to that a number of pieces of cross-component prediction information comprised in the first candidate list is less than a first preset number after adding the cross-component prediction information of the neighboring block of the current chroma block in spatial domain to the first candidate list, the method further comprises: adding cross-component prediction information of a non-neighboring block of the current chroma block in spatial domain to the first candidate list according to a second preset order.
- The method according to claim 5, wherein in response to that a number of pieces of cross-component prediction information comprised in the first candidate list is less than the first preset number after adding the cross-component prediction information of the neighboring block and the non-neighboring block of the current chroma block in spatial domain to the first candidate list, the method further comprises: adding most recently used cross-component prediction information to the first candidate list.
- The method according to claim 6, wherein in response to that a number of pieces of cross-component prediction information comprised in the first candidate list is less than the first preset number after adding the most recently used cross-component prediction information to the first candidate list, the method further comprises: acquiring at least one default cross-component prediction model parameter set; for an i-th cross-component prediction model parameter set in the at least one cross-component prediction model parameter set, determining a filter flag corresponding to the i-th cross-component prediction model parameter set, and combining the i-th cross-component prediction model parameter set and the filter flag corresponding to the i-th cross-component prediction model parameter set into an i-th cross-component prediction information set, wherein i is a positive integer; and adding at least one determined cross-component prediction information set to the first candidate list.
- The method according to claim 7, wherein determining the filter flag corresponding to the i-th cross-component prediction model parameter set comprises: determining a default value as a value of the filter flag corresponding to the i-th cross-component prediction model parameter set, the default value being a first value or a second value, wherein the first value indicates to perform filtering on the prediction value obtained from the cross-component prediction mode, and the second value indicates to skip filtering on the prediction value obtained from the cross-component prediction mode.
- The method according to claim 3, wherein selecting the first cross-component prediction information from the first candidate list comprises: decoding a bitstream to obtain an index of the first cross-component prediction information; and selecting the first cross-component prediction information from the first candidate list based on the index of the first cross-component prediction information.
- The method according to claim 3, wherein determining the cross-component prediction model parameter and the filter flag of the current chroma block based on the first cross-component prediction information comprises: determining a cross-component prediction model parameter comprised in the first cross-component prediction information as the cross-component prediction model parameter of the current chroma block; and determining a filter flag comprised in the first cross-component prediction information as the filter flag of the current chroma block.
- The method according to claim 3, wherein the cross-component prediction information of the decoded picture block is stored in a first storage space of the decoded picture block, and constructing the first candidate list based on the cross-component prediction information of the decoded picture block around the current chroma block comprises: acquiring the cross-component prediction information of the decoded picture block from the first storage space of the decoded picture block; and constructing the first candidate list based on the cross-component prediction information of the decoded picture block.
- The method according to claim 3, wherein after determining the cross-component prediction model parameter and the filter flag of the current chroma block, the method further comprises: storing the cross-component prediction model parameter and the filter flag of the current chroma block in a first storage space of the current chroma block.
- The method according to claim 1, wherein determining the cross-component prediction model parameter and the filter flag of the current chroma block comprises: determining the cross-component prediction model parameter of the current chroma block; and determining the filter flag of the current chroma block based on the filter flag of the decoded picture block around the current chroma block.
- The method according to claim 13, wherein determining the filter flag of the current chroma block based on the filter flag of the decoded picture block around the current chroma block comprises: constructing a second candidate list based on the filter flag of the decoded picture block around the current chroma block, the second candidate list comprising a plurality of candidate filter flags; and determining a candidate filter flag selected from the second candidate list as the filter flag of the current chroma block.
- The method according to claim 14, wherein the decoded picture block comprises a neighboring block of the current chroma block in spatial domain, and constructing the second candidate list based on the filter flag of the decoded picture block around the current chroma block comprises: adding a filter flag of the neighboring block of the current chroma block in spatial domain to the second candidate list according to a third preset order.
- The method according to claim 15, wherein in response to that a number of filter flags comprised in the second candidate list is less than a second preset number upon adding the filter flag of the neighboring block of the current chroma block in spatial domain to the second candidate list, the method further comprises: adding a filter flag of a non-neighboring block of the current chroma block in spatial domain to the second candidate list according to a fourth preset order.
- The method according to claim 16, wherein in response to that the number of filter flags comprised in the second candidate list is less than the second preset number after adding the filter flags of the neighboring block and the non-neighboring block of the current chroma block in spatial domain to the second candidate list, the method further comprises: adding a most recently used filter flag to the second candidate list.
- The method according to claim 17, wherein in response to that the number of filter flags comprised in the second candidate list is less than the second preset number after adding the most recently used filter flag to the second candidate list, the method further comprises: adding a default filter flag to the second candidate list.
- The method according to claim 14, wherein determining the candidate filter flag selected from the second candidate list as the filter flag of the current chroma block comprises: decoding a bitstream to obtain a filter flag index; and determining the filter flag of the current chroma block from the plurality of candidate filter flags comprised in the second candidate list based on the filter flag index.
- The method according to claim 1, wherein the cross-component prediction mode comprises a cross-component prediction merge mode or a chroma fusion derivation mode.
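The candidate-list construction recited in claims 4 to 8 can be illustrated with a short sketch. The following is a hypothetical Python rendering, not the normative design: candidates are drawn, in order, from spatial neighboring blocks, non-neighboring blocks, most-recently-used entries, and finally default model parameter sets paired with a default filter flag, until the list reaches a preset size. The capacity value, the function name, and the data layout are all illustrative assumptions.

```python
# Illustrative sketch of first-candidate-list construction (claims 4-8).
# Each candidate is a (model_params, filter_flag) pair; the capacity of 6
# is an assumed "first preset number", not a value from this disclosure.

FIRST_PRESET_NUMBER = 6  # assumed list capacity

def build_first_candidate_list(neighbors, non_neighbors, recent, defaults,
                               default_filter_flag=0):
    """Build the list of (model_params, filter_flag) candidates in order."""
    cand = []
    for info in neighbors:        # neighboring blocks, first preset order
        if len(cand) < FIRST_PRESET_NUMBER:
            cand.append(info)
    for info in non_neighbors:    # non-neighboring blocks, second preset order
        if len(cand) < FIRST_PRESET_NUMBER:
            cand.append(info)
    for info in recent:           # most recently used prediction information
        if len(cand) < FIRST_PRESET_NUMBER:
            cand.append(info)
    for params in defaults:       # default parameter sets + default filter flag
        if len(cand) < FIRST_PRESET_NUMBER:
            cand.append((params, default_filter_flag))
    return cand
```

Per claims 9 and 19, the decoder would then parse an index from the bitstream and select one entry of such a list as the prediction information of the current chroma block.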
Description
TECHNICAL FIELD
The present disclosure relates to the technical field of video encoding and decoding, and in particular, to a video encoding method and apparatus, a video decoding method and apparatus, a device, a system, and a storage medium.
BACKGROUND
Digital video technology can be incorporated into a variety of video apparatuses, such as digital televisions, smartphones, computers, e-readers, and video players. As video technology develops, the volume of video data grows large. To facilitate transmission, video apparatuses implement video compression technology so that video data can be transmitted or stored more efficiently. Since video contains temporal and spatial redundancy, this redundancy may be eliminated or reduced through prediction, thereby improving compression efficiency. Cross-component prediction can predict a chroma component based on a reconstructed value of a luma component. However, the prediction effect of chroma prediction values obtained from certain cross-component prediction modes is not ideal.
SUMMARY
Embodiments of the present disclosure provide a video encoding method and apparatus, a video decoding method and apparatus, a device, a system, and a storage medium. Through filtering a chroma prediction value obtained from cross-component prediction, the cross-component prediction effect for chroma is improved. In addition, the filter flag of the current chroma block inherits the filter flag of a reconstructed block, without introducing additional bit consumption to indicate the filter flag, thereby improving encoding efficiency.
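The background's statement that cross-component prediction derives chroma from reconstructed luma can be illustrated with a minimal linear-model sketch, in the spirit of CCLM-style prediction (predC = (α·recL) >> shift + β). The fixed-point representation, bit depth, and parameter values below are illustrative assumptions, not taken from this disclosure.

```python
# Minimal illustration of linear cross-component prediction: each chroma
# sample is predicted from the co-located reconstructed (downsampled) luma
# sample with a fixed-point linear model, then clipped to the sample range.

def clip(v, lo, hi):
    return max(lo, min(hi, v))

def predict_chroma(rec_luma, alpha_q, beta, shift=6, bitdepth=8):
    """Apply predC = ((alpha_q * recL) >> shift) + beta, clipped to range."""
    hi = (1 << bitdepth) - 1
    return [clip(((alpha_q * y) >> shift) + beta, 0, hi) for y in rec_luma]
```

For example, with alpha_q = 32 and shift = 6 (i.e., a slope of 0.5) and beta = 10, a luma sample of 100 yields a chroma prediction of 60.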
In a first aspect, the present disclosure provides a video decoding method, applied to a decoder and including: in response to determining that a prediction mode for a current chroma block is a cross-component prediction mode, determining a cross-component prediction model parameter and a filter flag of the current chroma block, where the cross-component prediction model parameter is a model parameter used for predicting the current chroma block using the cross-component prediction mode, the filter flag of the current chroma block is determined based on a filter flag of a decoded picture block around the current chroma block, and the filter flag is used to indicate whether to perform filtering on a prediction value obtained based on the cross-component prediction mode; performing cross-component prediction on the current chroma block based on the cross-component prediction model parameter, to obtain a first prediction value of the current chroma block; and determining a second prediction value of the current chroma block based on the first prediction value and the filter flag of the current chroma block.
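The first-aspect decoding flow can be sketched end to end: derive the model parameters and the inherited filter flag, form the first prediction value, then produce the second prediction value by filtering only when the flag requests it. The 3-tap [1 2 1]/4 smoothing filter below is purely illustrative; this disclosure does not specify a particular filter here.

```python
# Hedged sketch of the first-aspect decoding flow. model = (alpha_q, beta,
# shift) is a fixed-point linear model; filter_flag inherits from a decoded
# neighboring block. When the flag is unset, the second prediction value
# equals the first; otherwise an illustrative 1-D smoothing is applied.

def decode_chroma_block(rec_luma, model, filter_flag):
    alpha_q, beta, shift = model
    # First prediction value: cross-component prediction from luma.
    first_pred = [((alpha_q * y) >> shift) + beta for y in rec_luma]
    if not filter_flag:
        return first_pred
    # Second prediction value: illustrative [1 2 1]/4 smoothing with rounding.
    second_pred = first_pred[:]
    for i in range(1, len(first_pred) - 1):
        second_pred[i] = (first_pred[i - 1] + 2 * first_pred[i]
                          + first_pred[i + 1] + 2) >> 2
    return second_pred
```

Because the filter flag is inherited from a decoded picture block rather than signaled, no additional syntax element is needed in the bitstream to control this final filtering step.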
In a second aspect, the embodiments of the present disclosure provide a video encoding method, applied to an encoder and including: in response to determining that a prediction mode for a current chroma block is a cross-component prediction mode, determining a cross-component prediction model parameter and a filter flag of the current chroma block, where the cross-component prediction model parameter is a model parameter used for predicting the current chroma block using the cross-component prediction mode, the filter flag of the current chroma block is determined based on a filter flag of an encoded picture block around the current chroma block, and the filter flag is used to indicate whether to perform filtering on a prediction value obtained based on the cross-component prediction mode; performing cross-component prediction on the current chroma block based on the cross-component prediction model parameter, to obtain a first prediction value of the current chroma block; and determining a second prediction value of the current chroma block based on the first prediction value and the filter flag of the current chroma block. In a third aspect, the present disclosure provides a video decoding apparatus, which is configured to perform the method in the above-mentioned first aspect or various implementations thereof. Specifically, the apparatus includes a functional unit configured to perform the method in the above-mentioned first aspect or various implementations thereof. In a fourth aspect, the present disclosure provides a video encoding apparatus, configured to perform the method in the above-mentioned second aspect or various implementations thereof. Specifically, the apparatus includes a functional unit configured to perform the method in the above-mentioned second aspect or various implementations thereof. In a fifth aspect, the present disclosure provides a video decoder, which includes a processor and a memory.
The memory is configured to store a computer program, and the processor is configured to invoke and execute the computer program stored in the memory to implement the method in the above-mentioned first aspect or various implementations thereof.