CN-122029809-A - Method and apparatus for inheriting a local illumination compensation model in video coding and decoding
Abstract
The invention discloses a video encoding and decoding method and apparatus. According to the method, a first flag is determined to indicate whether to apply a local illumination compensation (LIC) process to a candidate. A second flag is determined to indicate whether the first flag is correct. According to the first flag and the second flag, the current block is encoded or decoded using coding information that includes an LIC prediction generated by applying the LIC process to a target candidate. Further according to the method, an explicit LIC flag is signaled at the encoder side, or parsed at the decoder side, for a current block coded in a bi-directional matching AMVP merge mode. Based on the explicit LIC flag, the current block is encoded or decoded using coding information that includes LIC predictions generated by applying the LIC process to the selected merge candidate and/or the selected AMVP candidate.
Inventors
- LUO ZHIXUAN
- LAI ZHENYAN
- CAI JIAMING
- ZHUANG ZHENGYAN
- ZHUANG ZIDE
- CHEN QINGYE
- XU ZHIWEI
- CHEN QIWEN
Assignees
- MediaTek Inc. (联发科技股份有限公司)
Dates
- Publication Date
- 20260512
- Application Date
- 20241014
- Priority Date
- 20231016
Claims (14)
- 1. A video encoding and decoding method, the method comprising: receiving input data associated with a current block, wherein the input data comprises pixel data to be encoded at an encoder side or data associated with the current block to be decoded at a decoder side; determining a first flag to indicate whether to apply a local illumination compensation process to a candidate; determining a second flag to indicate whether the first flag is correct; and encoding or decoding the current block using coding information that includes a local illumination compensation prediction generated by applying the local illumination compensation process to a target candidate according to the first flag and the second flag.
- 2. The video codec method of claim 1, wherein when the second flag is true, the local illumination compensation process is applied if the first flag is true, and the local illumination compensation process is not applied if the first flag is false.
- 3. The video codec method of claim 1, wherein when the second flag is false, the local illumination compensation process is not applied if the first flag is true, and the local illumination compensation process is not applied if the first flag is false.
- 4. The video codec method of claim 1, wherein the second flag is coded using one or more context-coded bins.
- 5. The video codec method of claim 4, wherein the second flag is encoded using one or more context variables.
- 6. The video codec method of claim 5, wherein the selection of the one or more context variables depends on whether the local illumination compensation process is on or off for one or more neighboring blocks.
- 7. A video codec apparatus, the apparatus comprising one or more electronic devices or processors configured to: receive input data associated with a current block, wherein the input data comprises pixel data to be encoded at an encoder side or data associated with the current block to be decoded at a decoder side; determine a first flag to indicate whether to apply a local illumination compensation process to a candidate; determine a second flag to indicate whether the first flag is correct; and encode or decode the current block using codec information based on the first flag and the second flag, wherein the codec information includes a local illumination compensation prediction generated by applying the local illumination compensation process to a target candidate.
- 8. A video encoding and decoding method, the method comprising: receiving input data associated with a current block, wherein the input data comprises pixel data to be encoded at an encoder side or data associated with the current block to be decoded at a decoder side, and wherein the current block is coded in a bi-directionally matched advanced motion vector prediction merge mode; signaling an explicit local illumination compensation flag at the encoder side or parsing the explicit local illumination compensation flag at the decoder side; and encoding or decoding the current block using codec information that includes local illumination compensation predictions generated by applying local illumination compensation processing to a selected merge candidate and/or a selected bi-directionally matched advanced motion vector prediction candidate associated with the bi-directionally matched advanced motion vector prediction merge mode according to the explicit local illumination compensation flag.
- 9. The video codec method of claim 8, wherein when the local illumination compensation flag inherited from the selected merge candidate is true, the local illumination compensation process is applied if the explicit local illumination compensation flag is set to true, and the local illumination compensation process is not applied if the explicit local illumination compensation flag is set to false.
- 10. The video codec method of claim 8, wherein when the local illumination compensation flag inherited from the selected merge candidate is false, the local illumination compensation process is applied if the explicit local illumination compensation flag is set to false, and the local illumination compensation process is not applied if the explicit local illumination compensation flag is set to true.
- 11. The video codec method of claim 8, wherein the explicit local illumination compensation flag is coded using one or more context-coded bins.
- 12. The video codec method of claim 8, wherein the explicit local illumination compensation flag is encoded by using one or more context variables.
- 13. The video codec method of claim 12, wherein the selection of the one or more context variables depends on whether the local illumination compensation process is on or off for one or more neighboring blocks.
- 14. A video codec apparatus, the apparatus comprising one or more electronic devices or processors configured to: receive input data associated with a current block, wherein the input data comprises pixel data to be encoded at an encoder side or data associated with the current block to be decoded at a decoder side, and wherein the current block is coded in a bi-directionally matched advanced motion vector prediction merge mode; signal an explicit local illumination compensation flag at the encoder side or parse the explicit local illumination compensation flag at the decoder side; and encode or decode the current block using codec information that includes a local illumination compensation prediction generated by applying a local illumination compensation process to the selected merge candidate and/or the selected bi-directionally matched advanced motion vector prediction candidate according to the explicit local illumination compensation flag.
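The flag combinations recited in claims 2-3 and claims 9-10 can be summarized as two small decision functions. The sketch below is purely illustrative of the claimed truth tables; the function and variable names are ours and do not appear in the patent:

```python
def lic_applied_two_flag(first_flag: bool, second_flag: bool) -> bool:
    """Claims 2-3: LIC is applied only when the second flag is true
    AND the first flag is true; every other combination disables LIC."""
    return first_flag and second_flag

def lic_applied_explicit(inherited_flag: bool, explicit_flag: bool) -> bool:
    """Claims 9-10: when the flag inherited from the selected merge
    candidate is true, LIC is applied iff the explicit flag is true;
    when the inherited flag is false, LIC is applied iff the explicit
    flag is false. Equivalently, LIC is applied when the two match."""
    return explicit_flag == inherited_flag
```

Note the asymmetry: in the claim 1 scheme the second flag acts as a gate, while in the claim 8 scheme the explicit flag confirms or reverses the inherited state.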
Description
Method and device for inheriting video coding and decoding local illumination compensation model [ Cross-reference ] The present invention claims priority from U.S. provisional patent application No. 63/590,481, filed on October 16, 2023, and U.S. provisional patent application No. 63/590,789, filed on October 17, 2023. The above U.S. provisional patent applications are incorporated by reference herein in their entirety. [ Field of technology ] The present invention relates to video encoding and decoding systems. In particular, the present invention relates to signaling LIC flags in video codec systems that contain LIC coding tools. [ Background Art ] Versatile Video Coding (VVC) is the latest international video coding standard, developed by the Joint Video Experts Team (JVET) of the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) Video Coding Experts Group (VCEG) and the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Moving Picture Experts Group (MPEG). The standard was published as an ISO standard in February 2021: ISO/IEC 23090-3:2021, Information technology - Coded representation of immersive media - Part 3: Versatile video coding. VVC was developed on the basis of its predecessor, High Efficiency Video Coding (HEVC); it improves coding efficiency by adding more coding tools, and it handles various types of video sources, including three-dimensional (3D) video signals. Fig. 1A illustrates an exemplary adaptive inter/intra video coding system that includes loop processing.
For intra prediction 110, prediction data is derived from previously coded video data in the current picture. For inter prediction 112, motion estimation (ME) is performed at the encoder side, and motion compensation (MC) is performed based on the results of ME to provide prediction data derived from other pictures, along with motion data. The switch 114 selects either intra prediction 110 or inter prediction 112 and provides the selected prediction data to the adder 116 to form a prediction error, also referred to as a residual. The prediction error is then processed by transform 118 followed by quantization 120. The transformed and quantized residual is then encoded by the entropy encoder 122 for inclusion in a video bitstream corresponding to the compressed video data. The bitstream associated with the transform coefficients is then packed together with side information (such as the motion and coding modes associated with intra and inter prediction) and other information, such as parameters associated with loop filters applied to the underlying image region. As shown in Fig. 1A, the side information related to intra prediction 110, inter prediction 112, and loop filter 130 is provided to the entropy encoder 122. When an inter prediction mode is used, one or more reference pictures must also be reconstructed at the encoder side. Thus, the transformed and quantized residual is processed by inverse quantization (IQ) 124 and inverse transform (IT) 126 to recover the residual. The residual is then added back to the prediction data 136 at reconstruction (REC) 128 to reconstruct the video data. The reconstructed video data may be stored in the reference picture buffer 134 and used for prediction of other frames. As shown in Fig. 1A, the incoming video data undergoes a series of processes in the encoding system.
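The predict/residual/quantize/reconstruct loop described above can be sketched on a toy one-dimensional "block". This is only an illustration of the data flow among the adder 116, quantization 120, IQ 124, and REC 128; a real codec transforms the residual (e.g. with a DCT at transform 118) before quantization, and the quantization step value here is hypothetical:

```python
QSTEP = 2  # hypothetical flat quantization step standing in for stages 118/120

def encode_block(block, prediction):
    """Adder 116 + quantization 120: form the residual and quantize it."""
    residual = [s - p for s, p in zip(block, prediction)]
    levels = [round(r / QSTEP) for r in residual]   # quantized levels
    return levels                                   # entropy-coded (122) in a real codec

def reconstruct_block(levels, prediction):
    """IQ 124 + REC 128: dequantize and add the residual back to the prediction."""
    dequant = [lv * QSTEP for lv in levels]
    return [p + d for p, d in zip(prediction, dequant)]

block = [52, 55, 61, 66]   # source samples
pred  = [50, 53, 60, 64]   # prediction from 110 or 112
levels = encode_block(block, pred)
recon  = reconstruct_block(levels, pred)  # what both encoder and decoder see
```

Because quantization is lossy, `recon` generally differs from `block`; the encoder reconstructs from the quantized residual precisely so that its reference pictures match the decoder's.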
The reconstructed video data from REC 128 may suffer from various impairments due to this series of processing. Therefore, a loop filter 130 is typically applied to the reconstructed video data to improve video quality before the reconstructed video data is stored in the reference picture buffer 134. For example, a deblocking filter (DF), a sample adaptive offset (SAO), and an adaptive loop filter (ALF) may be used. Loop filter information may need to be incorporated into the bitstream so that the decoder can correctly recover the required information. Thus, loop filter information is also provided to the entropy encoder 122 for incorporation into the bitstream. In Fig. 1A, the loop filter 130 is applied to the reconstructed video, and the reconstructed samples are then stored in the reference picture buffer 134. The system in Fig. 1A is intended to illustrate an exemplary architecture of a typical video encoder. It may correspond to a High Efficiency Video Coding (HEVC) system, VP8, VP9, H.264, or VVC. As shown in Fig. 1B, the decoder may use the same or partially the same functional blocks.
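The ordering described above, that the loop filter runs on the reconstruction before samples enter the reference picture buffer 134, can be sketched as follows. The 3-tap smoothing here is only a stand-in for the actual DF/SAO/ALF cascade, and all names are illustrative:

```python
def loop_filter(samples):
    """Placeholder in-loop filter: a [1, 2, 1]/4 smoothing of interior
    samples, standing in for the DF/SAO/ALF cascade of block 130."""
    out = samples[:]
    for i in range(1, len(samples) - 1):
        out[i] = (samples[i - 1] + 2 * samples[i] + samples[i + 1]) // 4
    return out

reference_picture_buffer = []  # stands in for buffer 134

def finish_picture(reconstructed):
    """Filter the reconstruction (130) BEFORE storing it (134), so that
    inter prediction for later pictures uses the filtered samples."""
    filtered = loop_filter(reconstructed)
    reference_picture_buffer.append(filtered)
    return filtered
```

Because the filtered picture is what the decoder stores as well, applying the filter inside the loop (rather than as a display-only post-filter) keeps encoder and decoder references identical.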