CN-120186339-B - Processing method, processing apparatus, and storage medium
Abstract
The application provides a processing method, a processing apparatus, and a storage medium. The processing method can be applied to the processing apparatus and comprises the step of determining or obtaining a target prediction mode of a current block according to a candidate mode in at least one mode list. The technical scheme of the application can match a suitable target prediction mode to the current block and thereby supports improving the prediction accuracy of the current block.
Inventors
- LIU YUTIAN
- HUO YONGKAI
Assignees
- Shenzhen Transsion Holdings Co., Ltd. (深圳传音控股股份有限公司)
Dates
- Publication Date
- 20260505
- Application Date
- 20250319
Claims (10)
- 1. A processing method, comprising the step of: S10, determining or obtaining a target prediction mode of a current block according to a candidate mode in at least one mode list; wherein the candidate mode is determined or obtained according to mode matching information related to at least one reference area of the current block and/or of a sub-block of the current block; the mode matching information includes at least one of: fifth matching information related to at least one reference area of the current block, determined or obtained according to a prediction mode corresponding to a gradient histogram, an area histogram and/or a usage-frequency histogram of the encoded image block; sixth matching information related to at least one reference area of a sub-block of the current block, determined or obtained according to a prediction mode corresponding to a gradient histogram, an area histogram and/or a usage-frequency histogram of the sub-block of the current block; seventh matching information related to at least one reference area of the current block, determined or obtained according to a prediction mode corresponding to a gradient histogram, an area histogram and/or a usage-frequency histogram of an adjacent block of the current block; eighth matching information related to at least one reference area of the current block, determined or obtained according to a prediction mode corresponding to a gradient histogram, an area histogram and/or a usage-frequency histogram of the encoded image block of a sub-block of a neighboring block of the current block; and ninth matching information related to at least one reference area of a sub-block of the current block, determined or obtained according to a prediction mode corresponding to a gradient histogram, an area histogram and/or a usage-frequency histogram of the encoded image block of an adjacent block of the sub-block of the current block; wherein the gradient histogram describes the gradient-amplitude distribution, in the directions corresponding to different prediction modes, of pixels in the reference areas of the current block and/or the sub-blocks of the current block; the area histogram describes the area-amplitude distribution, in the directions corresponding to different prediction modes, of at least one adjacent coded block and/or non-adjacent coded block of the current block and/or of a sub-block of the current block; and the usage-frequency histogram of the encoded image block describes the frequency distribution with which different prediction modes are used in the encoded image block.
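The gradient histogram of claim 1 can be illustrated with a minimal sketch, similar in spirit to decoder-side intra mode derivation techniques: a Sobel gradient is computed for each pixel in the reference area, the gradient direction is mapped onto an angular prediction-mode index, and the gradient amplitude is accumulated in that mode's bin. The function name, the 67-mode count, and the uniform angle-to-mode mapping are illustrative assumptions, not the patent's exact method.

```python
import math

def gradient_histogram(ref, num_modes=67):
    """Accumulate per-mode gradient amplitudes over a reference area.

    ref: 2-D list of reference-area pixel values (hypothetical layout).
    Returns a list of num_modes bins; larger bins suggest stronger
    edge energy in that prediction mode's direction.
    """
    hist = [0.0] * num_modes
    h, w = len(ref), len(ref[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 3x3 Sobel gradients at (x, y)
            gx = (ref[y-1][x+1] + 2*ref[y][x+1] + ref[y+1][x+1]
                  - ref[y-1][x-1] - 2*ref[y][x-1] - ref[y+1][x-1])
            gy = (ref[y+1][x-1] + 2*ref[y+1][x] + ref[y+1][x+1]
                  - ref[y-1][x-1] - 2*ref[y-1][x] - ref[y-1][x+1])
            amp = abs(gx) + abs(gy)
            if amp == 0:
                continue  # flat pixel contributes nothing
            # Map gradient direction onto an angular-mode index.
            # Uniform quantisation here; real codecs use a finer table.
            angle = math.atan2(gy, gx) % math.pi
            mode = 2 + int(angle / math.pi * (num_modes - 2)) % (num_modes - 2)
            hist[mode] += amp
    return hist
```

A reference area containing a strong vertical edge would then concentrate amplitude in the bin of the corresponding horizontal-gradient direction, so the dominant bins can be read off as mode candidates.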
- 2. The processing method of claim 1, wherein the candidate mode is further determined or obtained according to at least one of: a prediction mode corresponding to a sub-block, an adjacent block, a non-adjacent block, a co-located block and/or an encoded image block at a preset position of the current block; a prediction mode corresponding to an adjacent block, a non-adjacent block, a co-located block and/or an encoded image block at a preset position of a sub-block of the current block; and a prediction mode corresponding to an adjacent block, a non-adjacent block, a co-located block and/or a sub-block of the encoded image block at a preset position of the current block.
- 3. The processing method of claim 2, wherein the mode matching information further comprises at least one of: first matching information related to at least one reference area of the current block, determined or obtained according to a prediction mode corresponding to a sub-block, an adjacent block, a non-adjacent block, a co-located block and/or an encoded image block at a preset position of the current block; second matching information related to at least one reference area of a sub-block of the current block, determined or obtained according to a prediction mode corresponding to an adjacent block, a non-adjacent block, a co-located block and/or an encoded image block at a preset position of the sub-block of the current block; third matching information related to at least one reference area of the current block, determined or obtained according to a prediction mode corresponding to an adjacent block, a non-adjacent block, a co-located block and/or a sub-block of the encoded image block at a preset position of the current block; and fourth matching information related to at least one reference area of a sub-block of the current block, determined or obtained according to a prediction mode corresponding to an adjacent block, a non-adjacent block, a co-located block and/or a sub-block of the encoded image block at a preset position of the sub-block of the current block.
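The neighbor-based candidate derivation of claims 2 and 3 can be sketched as collecting the prediction modes of already-coded blocks at a set of positions and deduplicating them in scan order, in the style of a most-probable-mode list. The position labels and dictionary interface are assumptions made purely for illustration.

```python
def gather_candidate_modes(coded_blocks):
    """Collect candidate prediction modes from neighbouring coded blocks.

    coded_blocks: dict mapping hypothetical position labels to the
    prediction-mode index of the coded block at that position; a
    missing key means the position is unavailable.
    """
    positions = ("sub_block", "left", "above", "above_left",
                 "non_adjacent", "co_located")
    candidates = []
    for pos in positions:
        mode = coded_blocks.get(pos)      # None if unavailable
        if mode is not None and mode not in candidates:
            candidates.append(mode)       # keep first-seen order, no duplicates
    return candidates
```

For example, if the left and above neighbours share one mode and the co-located block supplies another, the duplicate is dropped and two distinct candidates remain.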
- 4. The processing method of claim 2, wherein determining or obtaining the candidate mode according to mode matching information related to at least one reference area of the current block and/or mode matching information related to at least one reference area of a sub-block of the current block comprises at least one of: determining or obtaining the candidate mode according to a mode ordering result associated with at least one piece of mode matching information; determining or obtaining the candidate mode according to a value range corresponding to at least one piece of mode matching information; and determining or obtaining the candidate mode according to a comparison result between at least two values corresponding to the at least one piece of mode matching information.
- 5. The processing method of claim 4, wherein the mode list comprises a first list and/or a second list, and/or determining or obtaining the candidate mode according to a mode ordering result associated with at least one piece of mode matching information includes at least one of: determining or obtaining a candidate mode in the first list according to at least one first prediction mode located at a preset ranking position and/or within a preset ranking range in the mode ordering result; determining or obtaining a candidate mode in the second list according to at least one second prediction mode ranked after the first prediction mode in the mode ordering result; determining or obtaining a candidate mode in the second list according to at least one third prediction mode located outside the preset ranking position and/or the preset ranking range in the mode ordering result; and determining or obtaining a candidate mode in the second list according to at least one fourth prediction mode determined or obtained from the first prediction mode.
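The two-list construction in claim 5 can be sketched as: rank all modes by some matching cost, fill the first list from a preset ranking range at the top, and fill the second list with the modes ranked after it. The value of the cut-off `k` and the source of the costs are illustrative assumptions.

```python
def build_mode_lists(mode_costs, k=3):
    """Split ranked modes into a first and a second candidate list.

    mode_costs: dict {mode index: matching cost}; a lower cost ranks
    the mode earlier in the ordering result.
    """
    ranked = sorted(mode_costs, key=mode_costs.get)  # mode ordering result
    first_list = ranked[:k]    # preset ranking range -> first list
    second_list = ranked[k:]   # modes ranked after it -> second list
    return first_list, second_list
```

With four ranked modes and `k=2`, the two best-matching modes become the first list and the remaining two the second list.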
- 6. The processing method of claim 2, wherein determining or obtaining the reference area includes at least one of: determining or obtaining at least one reference area according to at least one of an upper adjacent pixel, an upper non-adjacent pixel, a left non-adjacent pixel, an upper-left adjacent pixel and an upper-left non-adjacent pixel of the current block; determining or obtaining at least one reference area according to at least one of an adjacent block, a non-adjacent block, a co-located block, a temporal block and a default block corresponding to the current block; determining or obtaining at least one reference area according to at least one of the width, the height, the block size and the block area of the current block; determining or obtaining at least one reference area according to a candidate block determined or obtained from a candidate motion vector or a candidate block vector of the current block; if first information of the current block meets a first condition, the reference area is a first reference area; if the first information of the current block does not meet the first condition, the reference area is a second reference area; the first information includes at least one of flag information, a syntax element, indication information, an index relating to a list, an aspect ratio of the current block, a width/height value range, and an area value range of the current block; the first reference area and the second reference area are different; and the first reference area includes at least one of: at least one reference area determined or obtained according to at least one of an upper adjacent pixel, an upper non-adjacent pixel, a left non-adjacent pixel, an upper-left adjacent pixel and an upper-left non-adjacent pixel of the current block; at least one reference area determined or obtained according to at least one of an adjacent block, a non-adjacent block, a co-located block, a temporal block and a default block corresponding to the current block; at least one reference area determined or obtained according to at least one of the width, the height, the block size and the block area of the current block; and at least one reference area determined or obtained according to a candidate block determined or obtained from a candidate motion vector or a candidate block vector of the current block.
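The conditional reference-area choice in claim 6 can be sketched as a simple branch on the block's "first information"; here the aspect ratio is used as that information and compared against a threshold, which is one of the options the claim lists. The threshold value and the two area labels are assumptions made for illustration only.

```python
def select_reference_area(width, height, ratio_threshold=2.0):
    """Pick a reference area based on the block's aspect ratio.

    Returns a label for the chosen (hypothetical) reference area:
    the first reference area if the first condition is met,
    otherwise the second reference area.
    """
    aspect_ratio = width / height
    if aspect_ratio >= ratio_threshold:
        # First condition met (ratio >= threshold): wide block,
        # favour the above-adjacent reference row.
        return "above_row"
    # Condition not met: fall back to a combined left-column
    # and above-row reference area.
    return "left_and_above"
```

A 16x4 block (ratio 4.0) would satisfy the condition and use the first area, while an 8x8 block would not and would use the second.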
- 7. The processing method of claim 6, wherein the first information of the current block satisfying the first condition comprises at least one of: the value of the first information is a first numerical value; the value of the first information lies within a first numerical interval; the value of the first information is greater than or equal to a first threshold; and the value of the first information is smaller than or equal to a second threshold.
- 8. The processing method of claim 2 or 3, wherein the prediction mode comprises at least one of: an angular prediction mode; a non-angular prediction mode; and a prediction derivation mode.
- 9. A processing apparatus comprising a memory and a processor, the memory having stored thereon a processing program which, when executed by the processor, implements the steps of the processing method according to any one of claims 1 to 8.
- 10. A storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the processing method according to any one of claims 1 to 8.
Description
Processing method, processing apparatus, and storage medium

Technical Field

The present application relates to the field of image processing technologies, and in particular to a processing method, a processing apparatus, and a storage medium.

Background

The existing video coding standard (H.266/VVC) defines video frame coding techniques: when video frames are encoded and decoded, the protocol divides each frame into different blocks and performs prediction processing and encoding/decoding processing on them. In the course of conceiving and implementing the present application, the inventors found at least the following problem: it is difficult to match a suitable prediction mode to an image block during intra prediction and/or inter prediction, resulting in an unsatisfactory prediction effect for the image block. The foregoing description is provided as general background information and does not necessarily constitute prior art.

Disclosure of Invention

In view of the above technical problems, the present application provides a processing method, a processing apparatus, and a storage medium that can match a suitable target prediction mode to a current block, so as to support improving the prediction accuracy of the current block. The application provides a processing method, which can be applied to a processing apparatus and comprises the following step: S10, determining or obtaining a target prediction mode of a current block according to a candidate mode in at least one mode list.
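Step S10 can be sketched as a final selection pass: given one or more candidate-mode lists, pick the candidate whose cost is lowest under some cost function (for instance a rate-distortion or template-matching cost). The function names and the cost interface are illustrative assumptions, not the patent's specified selection rule.

```python
def select_target_mode(mode_lists, cost_fn):
    """Pick the target prediction mode from candidate-mode lists.

    mode_lists: iterable of candidate-mode lists (e.g. a first and a
    second list); cost_fn: callable mapping a mode index to a cost,
    where a lower cost means a better match for the current block.
    """
    best_mode, best_cost = None, float("inf")
    for mode_list in mode_lists:
        for mode in mode_list:
            cost = cost_fn(mode)
            if cost < best_cost:           # keep the cheapest mode seen so far
                best_mode, best_cost = mode, cost
    return best_mode
```

With a toy cost function that prefers modes near the horizontal mode index 18, the candidate 18 would be selected from the lists regardless of which list contains it.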
Optionally, the candidate mode is determined or obtained according to at least one of: mode matching information related to at least one reference area of the current block; mode matching information related to at least one reference area of a sub-block of the current block; a prediction mode corresponding to a sub-block, an adjacent block, a non-adjacent block, a co-located block and/or an encoded image block at a preset position of the current block; a prediction mode corresponding to an adjacent block, a non-adjacent block, a co-located block and/or an encoded image block at a preset position of a sub-block of the current block; and a prediction mode corresponding to an adjacent block, a non-adjacent block, a co-located block and/or a sub-block of the encoded image block at a preset position of the current block.

Optionally, the mode matching information includes at least one of: first matching information related to at least one reference area of the current block, determined or obtained according to a prediction mode corresponding to a sub-block, an adjacent block, a non-adjacent block, a co-located block and/or an encoded image block at a preset position of the current block; second matching information related to at least one reference area of a sub-block of the current block, determined or obtained according to a prediction mode corresponding to an adjacent block, a non-adjacent block, a co-located block and/or an encoded image block at a preset position of the sub-block of the current block; third matching information related to at least one reference area of the current block, determined or obtained according to a prediction mode corresponding to an adjacent block, a non-adjacent block, a co-located block and/or a sub-block of the encoded image block at a preset position of the current block; fourth matching information related to at least one reference area of a sub-block of the current block, determined or obtained according to a prediction mode corresponding to an adjacent block, a non-adjacent block, a co-located block and/or a sub-block of the encoded image block at a preset position of the sub-block of the current block; fifth matching information related to at least one reference area of the current block, determined or obtained according to a prediction mode corresponding to a gradient histogram, an area histogram and/or a usage-frequency histogram of the encoded image block; sixth matching information related to at least one reference area of a sub-block of the current block, determined or obtained according to a prediction mode corresponding to a gradient histogram, an area histogram and/or a usage-frequency histogram of the sub-block of the current block; seventh matching information related to at least one reference area of the current block, determined or obtained according to a prediction mode corresponding to a gradient histogram, an area histogram and/or a usage-frequency histogram of an adjacent block of the current block; eighth matching information related to at least one reference area of the current block, determined or obtained according to a prediction mode corresponding to a gradient histogram, an area histogram and/or a usage-frequency histogram of the encoded image block of a sub-block of a neighboring block of the current block; and ninth matching information related to at least one reference area of a sub-block of the current block, determined or obtained according to a prediction mode corresponding to a gradient histogram, an area histogram and/or a usage-frequency histogram of the encoded image block of an adjacent block of the sub-block of the current block.