US-12627820-B2 - Decoding method and electronic device
Abstract
Embodiments of this application provide a decoding method and an electronic device. The method includes: obtaining a bitstream that includes intermediate data encoded by a second device; decoding the bitstream to obtain the intermediate data; and performing data form conversion, including domain conversion, on the intermediate data to obtain probe data. The probe data corresponds to one or more probes in a three-dimensional scene and is for determining a shading effect of an object in the three-dimensional scene in a rendering process.
Inventors
- Zehui Lin
- Kangying Cai
- Rong Wei
- Yunneng Mo
Assignees
- HUAWEI TECHNOLOGIES CO., LTD.
Dates
- Publication Date: 2026-05-12
- Application Date: 2024-09-10
- Priority Date: 2022-03-15
Claims (18)
- 1. A decoding method applied to a first device, wherein the method comprises: obtaining a bitstream comprising intermediate data; decoding the bitstream to obtain the intermediate data; and performing data form conversion comprising domain conversion on the intermediate data to obtain probe data that corresponds to one or more probes in a three-dimensional scene, wherein the probe data is for determining a shading effect of an object in the three-dimensional scene in a rendering process, wherein the performing data form conversion further comprises: performing first processing on the intermediate data to obtain converted data; and performing second processing on the converted data to obtain the probe data, wherein, in association with the first processing being the domain conversion, the second processing comprises at least one of a dequantization or a first manner rearrangement; and in association with the second processing being the domain conversion, the first processing comprises at least one of the dequantization or the first manner rearrangement.
- 2. The method of claim 1, wherein the performing data form conversion on the intermediate data to obtain probe data further comprises: after the second processing is performed on the converted data and before the probe data is obtained, performing third processing on the converted data obtained through second processing, wherein the third processing comprises at least one of the following: the domain conversion; the dequantization; or the first manner rearrangement.
- 3. The method of claim 1, wherein the intermediate data is data on a YUV plane, and before the performing first processing on the intermediate data, the method further comprises: performing a second manner rearrangement on the intermediate data, wherein the second manner rearrangement comprises extracting the data from the YUV plane.
- 4. The method of claim 3, wherein the second manner rearrangement further comprises arranging the data extracted from the YUV plane into a two-dimensional picture.
- 5. The method of claim 1, wherein when the intermediate data is intermediate data corresponding to illumination data, the first manner rearrangement comprises at least one of the following: adding a channel to the intermediate data corresponding to the illumination data; converting the intermediate data corresponding to the illumination data into a data storage format of the first device; or performing a dimension conversion on the intermediate data corresponding to the illumination data.
- 6. The method of claim 1, wherein, when the intermediate data is intermediate data corresponding to visibility data comprising a plurality of groups of channels, the first manner rearrangement comprises at least one of the following: combining the plurality of groups of channels; converting the intermediate data corresponding to the visibility data into a data storage format of the first device; or performing dimension conversion on the intermediate data corresponding to the visibility data.
- 7. The method of claim 1, wherein the performing data form conversion on the intermediate data to obtain probe data further comprises: performing the data form conversion on the intermediate data based on first attribute data obtained by decoding the bitstream, to obtain the probe data.
- 8. The method of claim 1, wherein the domain conversion comprises at least one of the following: conversion from a normalized domain to a non-normalized domain; conversion from a non-linear domain to a linear domain; conversion from a YUV domain to an RGB domain; conversion from an XYZ domain to an RGB domain; or conversion from a Lab domain to an RGB domain.
- 9. The method of claim 1, wherein the bitstream further comprises attribute data of the probe, wherein the attribute data comprises at least one of first attribute data for the data form conversion or second attribute data used in the rendering process.
- 10. The method of claim 1, wherein when the intermediate data comprises the intermediate data corresponding to illumination data and the intermediate data corresponding to visibility data, the bitstream further comprises bitstream structure information comprising at least one of a location of the intermediate data corresponding to the illumination data or a location of the intermediate data corresponding to the visibility data.
- 11. A first device, comprising: at least one processor; and a memory storing instructions that, when executed by the at least one processor, cause the first device to: obtain a bitstream comprising intermediate data; decode the bitstream to obtain the intermediate data; and perform data form conversion comprising domain conversion on the intermediate data to obtain probe data that corresponds to one or more probes in a three-dimensional scene, wherein the probe data is for determining a shading effect of an object in the three-dimensional scene in a rendering process, and wherein the instructions further cause the first device to: perform first processing on the intermediate data to obtain converted data; and perform second processing on the converted data to obtain the probe data, wherein, in association with the first processing being the domain conversion, the second processing comprises at least one of a dequantization or a first manner rearrangement; and in association with the second processing being the domain conversion, the first processing comprises at least one of the dequantization or the first manner rearrangement.
- 12. The first device of claim 11, wherein the instructions, when executed by the at least one processor, cause the first device to: perform third processing on the converted data obtained through second processing after the second processing is performed on the converted data and before the probe data is obtained, wherein the third processing comprises at least one of the following: the domain conversion; the dequantization; or the first manner rearrangement.
- 13. The first device of claim 11, wherein the intermediate data is data on a YUV plane, and the instructions, when executed by the at least one processor, cause the first device to: perform a second manner rearrangement on the intermediate data before first processing is performed on the intermediate data, wherein the second manner rearrangement comprises extracting the data from the YUV plane.
- 14. The first device of claim 11, wherein when the intermediate data is intermediate data corresponding to illumination data, the first manner rearrangement comprises at least one of the following: adding a channel to the intermediate data corresponding to the illumination data; converting the intermediate data corresponding to the illumination data into a data storage format of the first device; or performing a dimension conversion on the intermediate data corresponding to the illumination data.
- 15. The first device of claim 11, wherein when the intermediate data is intermediate data corresponding to visibility data comprising a plurality of groups of channels, the first manner rearrangement comprises at least one of the following: combining the plurality of groups of channels; converting the intermediate data corresponding to the visibility data into a data storage format of the first device; or performing dimension conversion on the intermediate data corresponding to the visibility data.
- 16. The first device of claim 11, wherein the instructions, when executed by the at least one processor, cause the first device to: perform the data form conversion on the intermediate data based on first attribute data obtained by decoding the bitstream to obtain the probe data.
- 17. The first device of claim 11, wherein the bitstream further comprises attribute data of the probe, wherein the attribute data comprises at least one of first attribute data for the data form conversion or second attribute data used in the rendering process.
- 18. A non-transitory computer-readable storage medium storing a computer program that, when executed by at least one processor, causes the at least one processor to: obtain a bitstream comprising intermediate data; decode the bitstream to obtain the intermediate data; and perform data form conversion on the intermediate data to obtain probe data, wherein the probe data corresponds to one or more probes in a three-dimensional scene, and the probe data is for determining a shading effect of an object in the three-dimensional scene in a rendering process, wherein the data form conversion comprises domain conversion, and wherein the computer program further causes the at least one processor to: perform first processing on the intermediate data to obtain converted data; and perform second processing on the converted data to obtain the probe data, wherein, in association with the first processing being the domain conversion, the second processing comprises at least one of a dequantization or a first manner rearrangement; and in association with the second processing being the domain conversion, the first processing comprises at least one of the dequantization or the first manner rearrangement.
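The two orderings of first and second processing recited in the claims above can be sketched as follows. This is purely an illustrative sketch, not the patented implementation: the function names, the 8-bit quantization scale, the non-normalized-domain rescaling factor, and the added channel are all assumptions introduced here for clarity.

```python
import numpy as np

def dequantize(data, scale=1.0 / 255.0):
    # Map 8-bit quantized samples back to floating point (hypothetical scale).
    return data.astype(np.float32) * scale

def domain_conversion(data):
    # Illustrative conversion from a normalized domain to a non-normalized
    # domain, e.g. rescaling normalized radiance to assumed scene units.
    return data * 100.0

def first_manner_rearrangement(data):
    # One option from claims 5/14: add a channel to illumination data
    # (here, a constant extra channel appended as the last component).
    extra = np.ones(data.shape[:-1] + (1,), dtype=np.float32)
    return np.concatenate([data.astype(np.float32), extra], axis=-1)

def data_form_conversion(intermediate, domain_conversion_first=True):
    # Claim 1 allows either ordering: domain conversion as the first
    # processing followed by dequantization/rearrangement, or the reverse.
    if domain_conversion_first:
        converted = domain_conversion(intermediate)                 # first processing
        probe = first_manner_rearrangement(dequantize(converted))   # second processing
    else:
        converted = first_manner_rearrangement(dequantize(intermediate))
        probe = domain_conversion(converted)
    return probe
```

Either branch yields probe data with one channel more than the intermediate data; which ordering an encoder/decoder pair actually uses would be fixed by the bitstream convention.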
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation of International Application No. PCT/CN2023/080096, filed on Mar. 7, 2023, which claims priority to Chinese Patent Application No. 202210255747.6, filed on Mar. 15, 2022. The disclosures of the aforementioned applications are hereby incorporated by reference in their entireties.
TECHNICAL FIELD
Embodiments of this application relate to the encoding/decoding field, and in particular, to a decoding method and an electronic device.
BACKGROUND
As people impose increasingly high requirements on the quality of rendered images, methods of simulating shading effects in a rendering process have gradually transitioned from simulating the shading effect of direct illumination (that is, simulating the shading effect of a light ray reflected once) to simulating the shading effect of indirect illumination (that is, simulating the shading effect of a light ray reflected a plurality of times), to make images more vivid. A probe is one manner of simulating the shading effect of indirect illumination. Currently, in a device-cloud synergy scene, a cloud generates probe data, compresses the probe data, and sends the compressed probe data to a device side. After receiving the bitstream, the device side decompresses it to obtain the probe data, and then computes, in a rendering process based on the decoded probe data, the indirect shading effect generated by light rays reflected by objects in a 3D (three-dimensional) scene.
SUMMARY
This application provides a decoding method and an electronic device. Compared with the conventional technology, the method, in cooperation with a corresponding encoding method, can reduce the bit rate under the same rendering effect, or improve the rendering effect under the same bit rate. According to a first aspect, this application provides a decoding method.
The decoding method includes: obtaining a bitstream, where the bitstream includes intermediate data encoded by a second device; decoding the bitstream to obtain the intermediate data; and performing data form conversion on the intermediate data to obtain probe data. The probe data corresponds to one or more probes in a three-dimensional scene, and the probe data is for determining a shading effect of an object in the three-dimensional scene in a rendering process. The object in the three-dimensional scene corresponds to a three-dimensional model in the three-dimensional scene, where the three-dimensional model may be a model of an object or a model of a person, and the data form conversion includes domain conversion. The second device performs data form conversion on the probe data so that, compared with the conventional technology, a lower bit rate is achieved under the same rendering effect, or better rendering effect is achieved under the same bit rate. To this end, the probe data is converted into a more compact representation form before compression, or the quantity of bits in the bitstream occupied by data of higher importance in the rendering process is increased. A decoder of a first device can then restore the probe data from the intermediate data through data form conversion, and subsequently determine the shading effect of the object in the three-dimensional scene in the rendering process based on the probe data. Therefore, compared with the conventional technology, when the rendering effect at the decoder side is the same, the rendering delay at the decoder side in the method of this application is shorter; or when the rendering delays at the decoder side are the same, the rendering effect at the decoder side in the method of this application is better. Better rendering effect may mean that a rendered image has more accurate illumination colors, more vivid brightness, less light leakage, and the like.
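As a concrete, purely illustrative realization of such a decoder-side data form conversion, the sketch below dequantizes 8-bit samples into the normalized domain and then performs domain conversion from the YUV domain to the RGB domain. The BT.601 full-range coefficient matrix and the 0.5 chroma offset are conventional values assumed here for illustration; this application does not mandate any particular color matrix, bit depth, or quantization scale.

```python
import numpy as np

# Conventional BT.601 full-range YUV -> RGB coefficients (an assumption for
# this sketch; the actual coefficients would be fixed by the codec in use).
YUV_TO_RGB = np.array([
    [1.0,  0.0,       1.402],
    [1.0, -0.344136, -0.714136],
    [1.0,  1.772,     0.0],
])

def yuv_to_rgb(yuv):
    # Domain conversion from the YUV domain to the RGB domain.
    y = yuv[..., 0]
    u = yuv[..., 1] - 0.5  # re-center chroma around zero
    v = yuv[..., 2] - 0.5
    centered = np.stack([y, u, v], axis=-1)
    return centered @ YUV_TO_RGB.T

def decode_probe_samples(quantized_yuv):
    # Dequantize 8-bit samples into the normalized [0, 1] domain,
    # then convert domains to recover RGB probe samples.
    normalized = quantized_yuv.astype(np.float32) / 255.0
    return yuv_to_rgb(normalized)
```

A neutral-gray pixel (Y = U = V = 0.5 in the normalized domain) maps to equal RGB components, which is a quick sanity check for any such conversion matrix.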
For example, the decoding method in this application may be applied to an N-end synergy rendering scene (where N is an integer greater than 1), for example, scenes such as cloud gaming, cloud exhibitions, interior decoration, clothing design, and architecture design. This is not limited in this application. The second device may be a server or a terminal, and the first device may be a terminal. For example, when the decoding method in this application is applied to a device-cloud synergy rendering scene, the second device is a server, and the first device is a terminal, for example, a terminal device such as a personal computer, a mobile phone, or a VR (virtual reality) wearable device. For example, the domain conversion may be converting a representation form of data from one domain to another domain. Domains may be classified from different perspectives based on requirements. For example, from the perspective of whether normalization is performed, domains may be classified into a normalized domain and a non-normalized domain. From the perspective of color space, domains may be classified into an RGB domain, a YUV domain, an XYZ domain, and a Lab domain. From a perspective