US-20260129189-A1 - ENCODING METHOD, DECODING METHOD, ENCODERS, DECODERS, BITSTREAM AND STORAGE MEDIUM
Abstract
Disclosed is a decoding method. The decoding method is applied to a decoder and includes: searching for a first collocated node of a current node in a first reference picture and searching for a second collocated node of the current node in a second reference picture according to geometry information of the current node in a current picture; and performing inter attribute prediction on the current node according to the first collocated node and the second collocated node, to obtain an attribute prediction value of the current node.
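The collocated-node search described in the abstract can be sketched as follows. This is an illustrative assumption, not the patent's actual implementation: the names (`find_collocated`, the dict-based reference-picture layout, the `"attr"` field) are hypothetical, and the patent only specifies that the search is performed according to the geometry information of the current node.

```python
# Hypothetical sketch: locate a collocated node in a reference picture by
# the current node's geometry position. The dict-of-positions layout is an
# assumption; the patent only says the search uses geometry information.

def find_collocated(ref_nodes, position):
    """Return the node at the same geometry position in a reference
    picture, or None if no collocated node is present there.

    ref_nodes: dict mapping (x, y, z) node positions to node records.
    position:  (x, y, z) geometry position of the current node.
    """
    return ref_nodes.get(position)

# The decoder probes both reference pictures with the same geometry key.
ref1 = {(0, 0, 0): {"attr": 10}, (1, 0, 0): {"attr": 12}}
ref2 = {(0, 0, 0): {"attr": 14}}
current_pos = (0, 0, 0)
node1 = find_collocated(ref1, current_pos)  # first collocated node
node2 = find_collocated(ref2, current_pos)  # second collocated node
```

When a position is absent from a reference picture, the lookup returns `None`, which corresponds to the "collocated node being not present" branches in the claims.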
Inventors
- Zexing SUN
Assignees
- GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
Dates
- Publication Date
- 20260507
- Application Date
- 20260105
Claims (20)
- 1 . A decoding method, applied to a decoder, comprising: searching for a first collocated node of a current node in a first reference picture and searching for a second collocated node of the current node in a second reference picture according to geometry information of the current node in a current picture; and performing inter attribute prediction on the current node according to the first collocated node and the second collocated node, to obtain an attribute prediction value of the current node.
- 2 . The method according to claim 1 , wherein performing inter attribute prediction on the current node according to the first collocated node and the second collocated node, to obtain the attribute prediction value of the current node comprises: in response to the first collocated node being present in the first reference picture and the second collocated node being present in the second reference picture, determining the attribute prediction value of the current node according to an attribute reconstructed value of the first collocated node and an attribute reconstructed value of the second collocated node.
- 3 . The method according to claim 1 , wherein performing inter attribute prediction on the current node according to the first collocated node and the second collocated node, to obtain the attribute prediction value of the current node comprises: in response to the first collocated node being present in the first reference picture and the second collocated node being present in the second reference picture, determining a first difference number between a number of occupied child nodes of the first collocated node and a number of occupied child nodes of the current node according to occupancy information of the first collocated node and occupancy information of the current node; determining a second difference number between a number of occupied child nodes of the second collocated node and the number of occupied child nodes of the current node according to occupancy information of the second collocated node and the occupancy information of the current node; and determining the attribute prediction value of the current node according to a relationship between the first difference number and the second difference number.
- 4 . The method according to claim 3 , wherein determining the attribute prediction value of the current node according to the relationship between the first difference number and the second difference number comprises: in response to the first difference number being equal to the second difference number, determining the attribute prediction value of the current node according to an attribute reconstructed value of the first collocated node and an attribute reconstructed value of the second collocated node.
- 5 . The method according to claim 3 , wherein determining the attribute prediction value of the current node according to the relationship between the first difference number and the second difference number comprises: in response to the first difference number being smaller than the second difference number, determining the attribute prediction value of the current node according to an attribute reconstructed value of the first collocated node.
- 6 . The method according to claim 5 , wherein in response to the first difference number being smaller than the second difference number, the attribute prediction value of the current node is equal to the attribute reconstructed value of the first collocated node.
- 7 . The method according to claim 3 , wherein determining the attribute prediction value of the current node according to the relationship between the first difference number and the second difference number comprises: in response to the first difference number being greater than the second difference number, determining the attribute prediction value of the current node according to an attribute reconstructed value of the second collocated node.
- 8 . The method according to claim 7 , wherein in response to the first difference number being greater than the second difference number, the attribute prediction value of the current node is equal to the attribute reconstructed value of the second collocated node.
- 9 . The method according to claim 1 , wherein performing inter attribute prediction on the current node according to the first collocated node and the second collocated node, to obtain the attribute prediction value of the current node comprises: in response to the first collocated node being not present in the first reference picture and the second collocated node being present in the second reference picture, determining the attribute prediction value of the current node according to an attribute reconstructed value of the second collocated node.
- 10 . The method according to claim 9 , wherein in response to the first collocated node being not present in the first reference picture and the second collocated node being present in the second reference picture, the attribute prediction value of the current node is equal to the attribute reconstructed value of the second collocated node.
- 11 . The method according to claim 1 , wherein performing inter attribute prediction on the current node according to the first collocated node and the second collocated node, to obtain the attribute prediction value of the current node comprises: in response to the first collocated node being present in the first reference picture and the second collocated node being not present in the second reference picture, determining the attribute prediction value of the current node according to an attribute reconstructed value of the first collocated node.
- 12 . The method according to claim 11 , wherein in response to the first collocated node being present in the first reference picture and the second collocated node being not present in the second reference picture, the attribute prediction value of the current node is equal to the attribute reconstructed value of the first collocated node.
- 13 . The method according to claim 1 , wherein performing inter attribute prediction on the current node according to the first collocated node and the second collocated node, to obtain the attribute prediction value of the current node comprises: in response to the first collocated node being not present in the first reference picture and the second collocated node being not present in the second reference picture, determining the attribute prediction value of the current node according to an attribute reconstructed value of at least one neighborhood node of the current node in the current picture.
- 14 . The method according to claim 2 , wherein determining the attribute prediction value of the current node according to the attribute reconstructed value of the first collocated node and the attribute reconstructed value of the second collocated node comprises: performing a weighting operation on the attribute reconstructed value of the first collocated node and the attribute reconstructed value of the second collocated node according to a first weighting coefficient of the attribute reconstructed value of the first collocated node and a second weighting coefficient of the attribute reconstructed value of the second collocated node, to obtain the attribute prediction value of the current node.
- 15 . The method according to claim 14 , further comprising: determining the first weighting coefficient according to an interval between an acquisition time of the current picture and an acquisition time of the first reference picture.
- 16 . The method according to claim 14 , further comprising: determining the second weighting coefficient according to an interval between an acquisition time of the current picture and an acquisition time of the second reference picture.
- 17 . The method according to claim 14 , further comprising: parsing a bitstream to obtain the first weighting coefficient and the second weighting coefficient.
- 18 . The method according to claim 1 , wherein the attribute prediction value of the current node is a prediction value of an alternating current (AC) coefficient of the current node; and the method further comprises: parsing a bitstream to obtain a residual value of the AC coefficient of the current node; determining a reconstructed value of the AC coefficient of the current node according to the residual value of the AC coefficient of the current node and the prediction value of the AC coefficient of the current node; and performing an inverse region adaptive hierarchical transform (RAHT) on the reconstructed value of the AC coefficient of the current node, to obtain the attribute reconstructed value of the current node.
- 19 . An encoding method, applied to an encoder, comprising: searching for a first collocated node of a current node in a first reference picture and searching for a second collocated node of the current node in a second reference picture according to geometry information of the current node in a current picture; and performing inter attribute prediction on the current node according to the first collocated node and the second collocated node, to obtain an attribute prediction value of the current node.
- 20 . A non-transitory computer-readable storage medium, having a computer program and a bitstream stored thereon, wherein the computer program, when executed by a processor, causes the processor to perform following steps of the encoding method to generate the bitstream: searching for a first collocated node of a current node in a first reference picture and searching for a second collocated node of the current node in a second reference picture according to geometry information of the current node in a current picture; and performing inter attribute prediction on the current node according to the first collocated node and the second collocated node, to obtain an attribute prediction value of the current node.
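The prediction-selection rules of claims 2 through 13 form a small decision tree, sketched below. The node records, field names, and the use of an absolute difference for the "difference number" are assumptions for illustration; the claims specify only the decision rules, not a data layout.

```python
# Hypothetical sketch of the inter attribute prediction decision tree in
# claims 2-13. Field names and the abs() difference are assumptions.

def predict_attribute(node1, node2, current_occupied,
                      neighbor_value=None, w1=0.5, w2=0.5):
    """Return the attribute prediction value of the current node.

    node1, node2: collocated nodes from the first and second reference
        pictures, each {'attr': ..., 'occupied': ...} or None if absent.
    current_occupied: number of occupied child nodes of the current node.
    neighbor_value: fallback value from neighborhood nodes in the current
        picture (claim 13), used when both collocated nodes are absent.
    w1, w2: weighting coefficients for the two references (claims 14-17).
    """
    if node1 is not None and node2 is not None:
        # Claims 3-8: compare how far each collocated node's occupancy
        # deviates from the current node's occupancy.
        d1 = abs(node1["occupied"] - current_occupied)
        d2 = abs(node2["occupied"] - current_occupied)
        if d1 == d2:
            # Claims 4 and 14: weighted combination of both references.
            return w1 * node1["attr"] + w2 * node2["attr"]
        if d1 < d2:
            return node1["attr"]   # claims 5-6: first reference closer
        return node2["attr"]       # claims 7-8: second reference closer
    if node2 is not None:
        return node2["attr"]       # claims 9-10: only second present
    if node1 is not None:
        return node1["attr"]       # claims 11-12: only first present
    return neighbor_value          # claim 13: neither present
```

With `w1 = w2 = 0.5` the equal-difference branch reduces to a plain average of the two attribute reconstructed values; claims 15 to 17 replace these defaults with coefficients derived from acquisition-time intervals or parsed from the bitstream.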
Description
CROSS-REFERENCE TO RELATED APPLICATION
This application is a Continuation application of International Application No. PCT/CN2023/106650 filed on Jul. 10, 2023, which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
Embodiments of the present application relate to the technical field of point cloud compression, and in particular, to an encoding method, a decoding method, an encoder, a decoder, a bitstream and a storage medium.
BACKGROUND
In a Geometry-based Point Cloud Compression (G-PCC) encoding and decoding framework or a Video-based Point Cloud Compression (V-PCC) encoding and decoding framework provided by the Moving Picture Experts Group (MPEG), geometry information and attribute information of a point cloud are coded separately. At present, attribute information coding mainly focuses on color information coding. In the process of color information coding, there are two main transform approaches: a distance-based lifting transform that relies on Level of Detail (LOD) partitioning, and a directly performed Region Adaptive Hierarchical Transform (RAHT). However, in related schemes of attribute RAHT inter predictive coding, the accuracy of inter attribute prediction still needs to be further improved.
SUMMARY
The embodiments of the present application provide an encoding method, a decoding method, an encoder, a decoder, a bitstream and a storage medium. The technical solutions of the embodiments of the present application may be implemented as follows.
In a first aspect, the embodiments of the present application provide a decoding method, which is applied to a decoder.
The method includes: searching for a first collocated node of a current node in a first reference picture and searching for a second collocated node of the current node in a second reference picture according to geometry information of the current node in a current picture; and performing inter attribute prediction on the current node according to the first collocated node and the second collocated node, to obtain an attribute prediction value of the current node.
In a second aspect, the embodiments of the present application provide an encoding method, which is applied to an encoder. The method includes: searching for a first collocated node of a current node in a first reference picture and searching for a second collocated node of the current node in a second reference picture according to geometry information of the current node in a current picture; and performing inter attribute prediction on the current node according to the first collocated node and the second collocated node, to obtain an attribute prediction value of the current node.
In a third aspect, the embodiments of the present application provide a decoder. The decoder includes: a first searching module, configured to search for a first collocated node of a current node in a first reference picture and search for a second collocated node of the current node in a second reference picture according to geometry information of the current node in a current picture; and a first predicting module, configured to perform inter attribute prediction on the current node according to the first collocated node and the second collocated node, to obtain an attribute prediction value of the current node.
In a fourth aspect, the embodiments of the present application provide a decoder. The decoder includes a first memory and a first processor; where the first memory is configured to store a computer program executable on the first processor; and the first processor is configured to, when running the computer program, perform the decoding method as described in the embodiments of the present application.
In a fifth aspect, the embodiments of the present application provide an encoder. The encoder includes: a second searching module, configured to search for a first collocated node of a current node in a first reference picture and search for a second collocated node of the current node in a second reference picture according to geometry information of the current node in a current picture; and a second predicting module, configured to perform inter attribute prediction on the current node according to the first collocated node and the second collocated node, to obtain an attribute prediction value of the current node.
In a sixth aspect, the embodiments of the present application provide an encoder. The encoder includes a second memory and a second processor; where the second memory is configured to store a computer program executable on the second processor; and the second processor is configured to, when running the computer program, perform the encoding method as described in the embodiments of the present application.
In a seventh aspect, the embodiments of the present application provide a bitstream, where the bitstream is obtained by adopting the encoding method described in the embodiments of the present application.
In an eighth aspect, the e
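Two further pieces of the scheme can be sketched briefly. Claims 15 and 16 say each weighting coefficient is determined from the interval between the acquisition times of the current picture and the corresponding reference picture; the inverse-distance normalization below is an assumption, since the claims do not fix the exact formula. Claim 18's AC-coefficient reconstruction (residual plus prediction, performed before the inverse RAHT) is also shown; the inverse RAHT itself is omitted here.

```python
# Hypothetical sketches. The inverse-distance weighting rule is an
# assumption; the claims only say the interval between acquisition times
# determines each weighting coefficient.

def temporal_weights(t_cur, t_ref1, t_ref2):
    """Derive (w1, w2) from acquisition-time intervals (claims 15-16).

    A reference picture closer in time to the current picture receives a
    larger weight; weights are normalized so that w1 + w2 == 1.
    """
    d1 = abs(t_cur - t_ref1)
    d2 = abs(t_cur - t_ref2)
    if d1 + d2 == 0:
        return 0.5, 0.5  # both references at the same acquisition time
    return d2 / (d1 + d2), d1 / (d1 + d2)

def reconstruct_ac(residual, prediction):
    """Claim 18: the reconstructed AC coefficient is determined from the
    parsed residual and the prediction value (here, their sum), before
    the inverse RAHT recovers the attribute reconstructed value."""
    return residual + prediction
```

Claim 17 offers the alternative of parsing both coefficients directly from the bitstream instead of deriving them from acquisition times.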