KR-102962820-B1 - Simplified parameter derivation for intra-prediction
Abstract
A video processing method is provided. The method comprises: determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model that are fully determinable by two chroma samples and two corresponding luminance samples; and performing the conversion based on the determining.
Inventors
- Zhang, Kai
- Zhang, Li
- Liu, Hongbin
- Xu, Jizheng
- Wang, Yue
Assignees
- Douyin Vision Co., Ltd.
- Bytedance Inc.
Dates
- Publication Date: 2026-05-11
- Application Date: 2019-11-06
- Priority Date: 2018-11-06
Claims (16)
- A method of processing video data, comprising: determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model that are determinable by two chroma values and two luminance values; and performing the conversion based on the determining, wherein the two chroma values are denoted C0 and C1, the two luminance values are denoted L0 and L1, and the parameters of the cross-component linear model are derived based on a difference between C1 and C0 and a difference between L1 and L0, wherein a prediction sample of the current video block is derived based on reconstructed samples of a luminance block corresponding to the current video block and the parameters of the cross-component linear model, and wherein, if L1 is equal to L0, the prediction sample of the current video block is based on C0 and not based on L0, L1, or C1.
- The method of claim 1, wherein, if L1 is equal to L0, the prediction sample of the current video block is set to C0.
- The method of claim 1, wherein, if L1 is equal to L0, an intra-prediction mode other than the cross-component linear model mode is used.
- The method of claim 1 or 2, wherein the parameters of the cross-component linear model are derived without a division operation.
- The method of claim 1 or 2, wherein the parameters of the cross-component linear model are derived using operations that require no lookup table, the operations excluding division operations.
- The method of claim 1 or 2, wherein the parameters of the cross-component linear model are derived using algebraic operations.
- The method of claim 1 or 2, wherein the parameters of the cross-component linear model are derived based on the value of Floor(Log2(L1 - L0)), where Floor(x) is a floor function that outputs the integer part of x.
- The method of claim 1, wherein the conversion includes encoding the current video block into a bitstream.
- The method of claim 1, wherein the conversion includes decoding the current video block from a bitstream.
- A video data processing apparatus comprising a processor and a non-transitory memory storing instructions that, when executed by the processor, cause the processor to: determine, for a conversion between a current video block of a video that is a chroma block and a bitstream of the video, parameters of a cross-component linear model that are determinable by two chroma values and two luminance values; and perform the conversion based on the determining, wherein the two chroma values are denoted C0 and C1, the two luminance values are denoted L0 and L1, and the parameters of the cross-component linear model are derived based on a difference between C1 and C0 and a difference between L1 and L0, wherein a prediction sample of the current video block is derived based on reconstructed samples of a luminance block corresponding to the current video block and the parameters of the cross-component linear model, and wherein, if L1 is equal to L0, the prediction sample of the current video block is based on C0 and not based on L0, L1, or C1.
- A non-transitory computer-readable storage medium storing instructions that cause a processor to: determine, for a conversion between a current video block of a video that is a chroma block and a bitstream of the video, parameters of a cross-component linear model that are determinable by two chroma values and two luminance values; and perform the conversion based on the determining, wherein the two chroma values are denoted C0 and C1, the two luminance values are denoted L0 and L1, and the parameters of the cross-component linear model are derived based on a difference between C1 and C0 and a difference between L1 and L0, wherein a prediction sample of the current video block is derived based on reconstructed samples of a luminance block corresponding to the current video block and the parameters of the cross-component linear model, and wherein, if L1 is equal to L0, the prediction sample of the current video block is based on C0 and not based on L0, L1, or C1.
- A non-transitory computer-readable recording medium storing a bitstream of a video generated by a method performed by a video processing apparatus, wherein the method comprises: determining parameters of a cross-component linear model that are determinable by two chroma values and two luminance values; and generating the bitstream based on the determining, wherein the two chroma values are denoted C0 and C1, the two luminance values are denoted L0 and L1, and the parameters of the cross-component linear model are derived based on a difference between C1 and C0 and a difference between L1 and L0, wherein a prediction sample of the current video block is derived based on reconstructed samples of a luminance block corresponding to the current video block and the parameters of the cross-component linear model, and wherein, if L1 is equal to L0, the prediction sample of the current video block is based on C0 and not based on L0, L1, or C1.
- (Deleted)
- (Deleted)
- (Deleted)
- (Deleted)
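Claims 1, 4, 5, and 7 together describe deriving the model parameters from the differences C1 - C0 and L1 - L0, without division and without a lookup table, based on Floor(Log2(L1 - L0)), and falling back to C0 when L1 equals L0. The sketch below is a minimal, hypothetical illustration of one such derivation, not the claimed implementation: the fixed-point precision PREC, the function names, and the use of a plain right shift (with no correction term) to approximate the division are assumptions made here for clarity.

```python
def floor_log2(x: int) -> int:
    """Floor(Log2(x)) for x >= 1: the position of the highest set bit."""
    return x.bit_length() - 1

PREC = 16  # hypothetical fixed-point precision for the slope alpha

def derive_cclm_params(L0: int, C0: int, L1: int, C1: int):
    """Derive (alpha, beta) of pred_C = ((alpha * rec_L) >> PREC) + beta
    from two luminance/chroma pairs, using only shifts: no division,
    no lookup table."""
    if L1 == L0:
        # Degenerate case of claim 1: the prediction depends only on C0.
        return 0, C0
    if L1 < L0:
        # Order the points so that L1 - L0 is positive.
        L0, L1, C0, C1 = L1, L0, C1, C0
    shift = floor_log2(L1 - L0)            # claim 7: Floor(Log2(L1 - L0))
    alpha = ((C1 - C0) << PREC) >> shift   # shift approximates / (L1 - L0)
    beta = C0 - ((alpha * L0) >> PREC)
    return alpha, beta

def predict_chroma(rec_luma: int, alpha: int, beta: int, bit_depth: int = 10) -> int:
    """Apply the linear model to one reconstructed luminance sample."""
    pred = ((alpha * rec_luma) >> PREC) + beta
    return max(0, min((1 << bit_depth) - 1, pred))  # clip to sample range
```

For example, with L0 = 100, C0 = 60, L1 = 228, C1 = 92, the luminance difference 128 gives Floor(Log2(128)) = 7, so alpha = ((92 - 60) << 16) >> 7 = 16384 and beta = 35; a reconstructed luminance of 100 then predicts chroma 60 and a luminance of 228 predicts chroma 92, recovering both anchor points exactly because 128 is a power of two.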
Description
Simplified parameter derivation for intra-prediction

This patent document relates to video processing technologies, devices, and systems.

This application is the national phase of international application PCT/CN2019/116028, filed on November 6, 2019, which claims priority to and the benefit of international patent applications PCT/CN2018/114158 filed on November 6, 2018, PCT/CN2018/118799 filed on December 1, 2018, PCT/CN2018/119709 filed on December 7, 2018, PCT/CN2018/125412 filed on December 29, 2018, PCT/CN2019/070002 filed on January 1, 2019, PCT/CN2019/075874, PCT/CN2019/075993 filed on February 24, 2019, PCT/CN2019/076195 filed on February 26, 2019, PCT/CN2019/079396 filed on March 24, 2019, PCT/CN2019/079431 filed on March 25, 2019, and PCT/CN2019/079769 filed on March 26, 2019. The entire disclosures of the aforementioned applications are incorporated by reference as part of the disclosure of this application.

Despite advancements in video compression, digital video still accounts for the largest share of bandwidth usage on the Internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, the demand for bandwidth for digital video usage is expected to continue rising.

Figure 1 shows an example of sample locations used to derive the weights of the linear model used for cross-component prediction.
Figure 2 shows an example of classifying neighboring samples into two groups.
Figure 3a shows an example of a chroma sample and its corresponding luminance sample.
Figure 3b shows an example of down-filtering for the cross-component linear model (CCLM) in the Joint Exploration Model (JEM).
Figures 4a and 4b show, respectively, only the above neighboring samples and only the left neighboring samples used for prediction based on a linear model.
Figure 5 shows an example of a straight line between the minimum and maximum luminance values and the corresponding chroma samples.
Figure 6 shows an example of a current chroma block and its neighboring samples.
Figure 7 shows examples of the different parts of a chroma block predicted by a linear model using only left neighboring samples (LM-L) and a linear model using only above neighboring samples (LM-A).
Figure 8 shows an example of a top-left neighboring block.
Figure 9 shows an example of samples used to derive a linear model.
Figure 10 shows an example of the left and below-left columns and the above and above-right rows of the current block.
Figure 11 shows an example of a current block and its reference samples.
Figure 12 shows an example of two neighboring samples when both left and above neighboring reference samples are available.
Figure 13 shows an example of two neighboring samples when only the above neighboring reference samples are available.
Figure 14 shows an example of two neighboring samples when only the left neighboring reference samples are available.
Figure 15 shows an example of four neighboring samples when both left and above neighboring reference samples are available.
Figure 16 shows an example of a lookup table used for LM derivation.
Figure 17 shows an example of an LM parameter derivation process with 64 entries.
Figure 18 shows a flowchart of an exemplary method for video processing based on some implementations of the disclosed technology.
Figures 19a and 19b show flowcharts of exemplary methods for video processing based on some implementations of the disclosed technology.
Figures 20a and 20b show flowcharts of other exemplary methods for video processing based on some implementations of the disclosed technology.
Figure 21 shows a flowchart of another exemplary method for video processing based on some implementations of the disclosed technology.
Figure 22 shows a flowchart of an exemplary method for video processing based on some implementations of the disclosed technology.
Figures 23a and 23b show flowcharts of an exemplary method for video processing based on some implementations of the disclosed technology.
Figures 24a through 24e show flowcharts of an exemplary method for video processing based on some implementations of the disclosed technology.
Figures 25a and 25b show flowcharts of an exemplary method for video processing based on some implementations of the disclosed technology.
Figures 26a and 26b show flowcharts of an exemplary method for video processing based on some implementations of the disclosed technology.
Figures 27a and 27b show flowcharts of an exemplary method for video processing based on some implementations of the disclosed technology.
Figures 28a through 28c show flowcharts of an exemplary method for video processing based on some implementations of the disclosed technology.
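The straight-line fit of Figure 5 suggests how the two sample pairs used by the derivation could be chosen in practice: take the neighboring positions with the minimum and maximum reconstructed luminance values, along with their co-located chroma values. The helper below is a hypothetical sketch of that selection step (the function name and the parallel-list layout are assumptions made here), and its output feeds directly into a two-point derivation like the one sketched after the claims.

```python
def select_two_points(neigh_luma, neigh_chroma):
    """Pick the (luminance, chroma) pairs at the minimum and maximum
    neighboring luminance values, as in the straight-line fit of Figure 5.
    neigh_luma and neigh_chroma are parallel lists of co-located
    reconstructed neighboring samples."""
    i_min = min(range(len(neigh_luma)), key=lambda i: neigh_luma[i])
    i_max = max(range(len(neigh_luma)), key=lambda i: neigh_luma[i])
    return (neigh_luma[i_min], neigh_chroma[i_min],
            neigh_luma[i_max], neigh_chroma[i_max])
```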