KR-102963747-B1 - 3D model rendering method and device, electronic device, and storage medium

KR 102963747 B1

Abstract

The present application discloses a 3D (three-dimensional) model rendering method and apparatus, an electronic device, and a storage medium, and belongs to the field of image processing technology. The 3D model rendering method comprises: acquiring a cross-section dataset, wherein the cross-section dataset comprises cross-section data constructed based on the submodels of the 3D model, each cross-section data indicating a cross-section of an envelope box of at least one submodel; acquiring texture maps corresponding to the cross-section data in the cross-section dataset, wherein the texture map corresponding to a cross-section data is determined according to the texture data of the submodel corresponding to that cross-section data; and rendering the 3D model based on the cross-section data in the cross-section dataset and the corresponding texture maps. Since a texture map represents texture data obtained by projecting the texture data of the corresponding submodel onto the cross-section indicated by the cross-section data, the texture map can reflect the texture and shape of the submodel, thereby improving the rendering effect when rendering based on the cross-section data and the corresponding texture maps.

Inventors

  • Xiao Weiwei
  • Hu Yixin
  • Zhan Jinzhao

Assignees

  • Tencent Technology (Shenzhen) Company Limited

Dates

Publication Date
2026-05-11
Application Date
2023-06-01
Priority Date
2022-08-19

Claims (20)

  1. A 3D (three-dimensional) model rendering method performed by a terminal device, the method comprising: acquiring a cross-section dataset, wherein the cross-section dataset is constructed based on a plurality of submodels included in a 3D model of an object, each of the plurality of submodels represents a component of the object, texture data of the plurality of submodels represents texture information of the components, the cross-section dataset includes a plurality of cross-section data, and each of the plurality of cross-section data indicates a cross-section of an envelope box of one or more submodels; acquiring texture maps corresponding to the plurality of cross-section data in the cross-section dataset, wherein any one of the texture maps represents texture data obtained by projecting texture data of the submodel corresponding to that texture map's cross-section data onto the cross-section indicated by that cross-section data; and rendering the 3D model based on the plurality of cross-section data in the cross-section dataset and the corresponding texture maps;
wherein the cross-section dataset includes two or more levels of datasets, any one level of dataset includes one or more cross-section data, and one cross-section data of a previous level corresponds to one or more cross-section data of a next level; and wherein, before acquiring the cross-section dataset, the method further comprises: determining the previous-level dataset based on the next-level dataset, wherein the next-level dataset is determined based on the cross-section data corresponding to the plurality of submodels in the 3D model in response to the next level being the lowest of the two or more levels; and in response to a set condition being satisfied, obtaining the cross-section dataset based on the two or more levels of datasets.
  2. The 3D model rendering method according to claim 1, further comprising: determining a basic submodel in the 3D model, wherein the basic submodel is any one of the plurality of submodels in the 3D model; determining transformation information for a transformation from the basic submodel to another submodel, wherein the other submodel is any one of the plurality of submodels in the 3D model other than the basic submodel; determining cross-section data corresponding to the basic submodel; and determining cross-section data corresponding to the other submodel based on the cross-section data corresponding to the basic submodel and the transformation information.
  3. The 3D model rendering method according to claim 2, wherein determining the basic submodel in the 3D model comprises: classifying the plurality of submodels in the 3D model into one or more categories according to the texture data of the plurality of submodels; and, for any one of the one or more categories, selecting any submodel from among the submodels of that category as the basic submodel of that category; and wherein determining the transformation information for the transformation from the basic submodel to the other submodel comprises: determining transformation information for a transformation from the basic submodel of the category to another submodel of the category, wherein the other submodel of the category is a submodel of the category other than the basic submodel of the category.
  4. The 3D model rendering method according to claim 3, wherein determining the transformation information for the transformation from the basic submodel of the category to the other submodel of the category comprises: determining a first envelope box of the basic submodel of the category, wherein the first envelope box is a 3D geometry surrounding the basic submodel; determining a second envelope box of the other submodel of the category, wherein the second envelope box is a 3D geometry surrounding the other submodel; and determining the transformation information for the transformation from the basic submodel of the category to the other submodel of the category based on the first envelope box and the second envelope box.
  5. The 3D model rendering method according to any one of claims 1 to 4, wherein determining the previous-level dataset based on the next-level dataset comprises: for any two cross-section data in the next-level dataset, in response to the two cross-section data satisfying an aggregation condition, determining cross-section data corresponding to the two cross-section data based on the submodels corresponding to the two cross-section data; and, in response to the amount of cross-section data in a candidate dataset being less than a reference amount, taking the candidate dataset as the previous-level dataset, wherein the candidate dataset includes the cross-section data corresponding to the two cross-section data that satisfy the aggregation condition and the cross-section data that do not satisfy the aggregation condition.
  6. The 3D model rendering method according to claim 5, wherein the aggregation condition satisfied by the two cross-section data comprises at least one of: the distance between the cross-sections indicated by the two cross-section data being smaller than a distance threshold; and the angle between the normal vectors of the cross-sections indicated by the two cross-section data being smaller than an angle threshold.
  7. The 3D model rendering method according to claim 5, further comprising: in response to the amount of cross-section data in the candidate dataset being greater than or equal to the reference amount, taking the candidate dataset as the next-level dataset; repeatedly performing, in response to the two cross-section data satisfying the aggregation condition, the determination of cross-section data corresponding to the two cross-section data based on the submodels corresponding to the two cross-section data, until the amount of cross-section data in the candidate dataset is less than the reference amount; and taking the candidate dataset as the previous-level dataset.
  8. The 3D model rendering method according to any one of claims 1 to 4, further comprising: in response to the set condition not being satisfied, taking the previous-level dataset as the next-level dataset, repeatedly performing the determination of the previous-level dataset based on the next-level dataset until the set condition is satisfied, and obtaining the cross-section dataset based on the two or more levels of datasets.
  9. The 3D model rendering method according to any one of claims 1 to 4, further comprising, before acquiring the texture maps corresponding to the plurality of cross-section data in the cross-section dataset: for any submodel corresponding to any cross-section data, obtaining a texture map of the submodel by projecting the texture data of the submodel onto the cross-section indicated by the cross-section data; and fusing the texture maps of the submodels corresponding to the cross-section data to obtain the texture map corresponding to the cross-section data.
  10. The 3D model rendering method according to any one of claims 1 to 4, wherein rendering the 3D model based on the plurality of cross-section data in the cross-section dataset and the corresponding texture maps comprises: for any cross-section data of the previous level, in response to the cross-section data satisfying a first rendering condition and a second rendering condition, rendering the cross-section data according to the texture map corresponding to the cross-section data to obtain a rendering result of the cross-section data; and, in response to the plurality of cross-section data of the previous level that satisfy the first rendering condition also satisfying the second rendering condition, obtaining a rendering result of the 3D model based on the rendering results of the plurality of cross-section data.
  11. The 3D model rendering method according to claim 10, further comprising: in response to the previous level not being the lowest of the two or more levels and the previous level including target cross-section data that satisfies the first rendering condition but does not satisfy the second rendering condition, repeatedly performing the following until the plurality of cross-section data of the previous level that satisfy the first rendering condition also satisfy the second rendering condition: determining the plurality of cross-section data of the next level corresponding to the target cross-section data; taking the next level as the previous level; and, for any cross-section data of the previous level, in response to the cross-section data satisfying the first rendering condition and the second rendering condition, rendering the cross-section data according to the texture map corresponding to the cross-section data to obtain a rendering result of the cross-section data; and obtaining a rendering result of the 3D model based on the rendering results of the plurality of cross-section data.
  12. The 3D model rendering method according to claim 10, further comprising: in response to the previous level being the lowest of the two or more levels and the previous level including target cross-section data that satisfies the first rendering condition but does not satisfy the second rendering condition, acquiring a target submodel corresponding to the target cross-section data; rendering the target submodel to obtain a rendering result of the target submodel; and obtaining a rendering result of the 3D model based on the rendering results of the plurality of cross-section data and the rendering result of the target submodel.
  13. The 3D model rendering method according to claim 12, wherein acquiring the target submodel corresponding to the target cross-section data comprises: in response to the target submodel corresponding to the target cross-section data being a basic submodel, acquiring the basic submodel; and, in response to the target submodel corresponding to the target cross-section data being another submodel, acquiring the basic submodel and transformation information for a transformation from the basic submodel to the other submodel, and acquiring the other submodel based on the basic submodel and the transformation information.
  14. The 3D model rendering method according to claim 13, wherein the transformation information includes at least one of translation information, scaling information, and rotation information.
  15. The 3D model rendering method according to claim 10, wherein the cross-section data satisfying the first rendering condition includes the cross-section indicated by the cross-section data being within the view frustum, and the cross-section data satisfying the second rendering condition includes the size of the cross-section data on the screen being no larger than a size threshold.
  16. A 3D model rendering device, comprising an acquisition module and a rendering module, wherein: the acquisition module is configured to acquire a cross-section dataset, wherein the cross-section dataset is constructed based on a plurality of submodels included in a 3D model of an object, each of the plurality of submodels represents a component of the object, texture data of the plurality of submodels represents texture information of the components, the cross-section dataset includes a plurality of cross-section data, and each of the plurality of cross-section data indicates a cross-section of an envelope box of one or more submodels; the acquisition module is further configured to acquire texture maps corresponding to the plurality of cross-section data in the cross-section dataset, wherein any one of the texture maps represents texture data obtained by projecting texture data of the submodel corresponding to that texture map's cross-section data onto the cross-section indicated by that cross-section data;
the rendering module is configured to render the 3D model based on the plurality of cross-section data in the cross-section dataset and the corresponding texture maps; the cross-section dataset includes two or more levels of datasets, any one level of dataset includes one or more cross-section data, and one cross-section data of a previous level corresponds to one or more cross-section data of a next level; and the 3D model rendering device is further configured to: determine the previous-level dataset based on the next-level dataset, wherein the next-level dataset is determined based on the cross-section data corresponding to the plurality of submodels in the 3D model in response to the next level being the lowest of the two or more levels; and, in response to a set condition being satisfied, obtain the cross-section dataset based on the two or more levels of datasets.
  17. An electronic device, comprising a processor and a memory, wherein the memory stores one or more computer programs, and the one or more computer programs are loaded and executed by the processor to enable the electronic device to implement the 3D model rendering method according to any one of claims 1 to 4.
  18. A computer-readable storage medium storing one or more computer programs, wherein the one or more computer programs are loaded and executed by a processor to enable an electronic device to implement the 3D model rendering method according to any one of claims 1 to 4.
  19. A computer program stored on a computer-readable storage medium, wherein the computer program is loaded and executed by a processor to enable an electronic device to implement the 3D model rendering method according to any one of claims 1 to 4.
  20. (Deleted)
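For illustration only (the patent prescribes no source code), the envelope-box transformation of claims 2 to 4 can be sketched as follows. The sketch assumes the envelope boxes are axis-aligned and given as (min corner, max corner) pairs, and derives only the per-axis scaling and translation; the rotation component that claim 14 also permits is omitted, and all function and variable names are hypothetical.

```python
def envelope_box_transform(box_a, box_b):
    """Derive (scale, translation) mapping envelope box A onto envelope box B.

    Boxes are (min_corner, max_corner) pairs of 3D points. With this
    transformation, a basic submodel's cross-section data can be reused
    for another submodel of the same category (claims 2-4).
    """
    (a_min, a_max), (b_min, b_max) = box_a, box_b
    # Per-axis scale: ratio of the two boxes' extents on each axis.
    scale = tuple((bx - bn) / (ax - an)
                  for an, ax, bn, bx in zip(a_min, a_max, b_min, b_max))
    # Translation chosen so that A's min corner lands on B's min corner.
    translation = tuple(bn - s * an
                        for an, s, bn in zip(a_min, scale, b_min))
    return scale, translation

def apply_transform(point, scale, translation):
    """Map one point of the basic submodel into the other submodel's box."""
    return tuple(s * p + t for s, p, t in zip(scale, point, translation))
```

A corner of box A then maps exactly onto the corresponding corner of box B, which is the property the claimed transformation needs.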
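The aggregation test of claims 5 and 6 can likewise be sketched in a few lines, assuming each cross-section is summarized by a center point and a unit normal vector; approximating the inter-section distance by the distance between centers, and all names, are assumptions of this sketch.

```python
import math

def satisfies_aggregation(section_a, section_b, dist_threshold, angle_threshold):
    """Check whether two cross-sections may be merged into one parent
    cross-section when building the previous level (claims 5-6)."""
    (center_a, normal_a), (center_b, normal_b) = section_a, section_b
    # Distance between the two cross-sections, approximated by their centers.
    distance = math.dist(center_a, center_b)
    # Angle between the two cross-section normals (assumed unit length).
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(normal_a, normal_b))))
    angle = math.acos(dot)
    # Claim 6 requires at least one of the two conditions to hold.
    return distance < dist_threshold or angle < angle_threshold
```

Pairs that pass this test are replaced by a merged cross-section in the candidate dataset; once the candidate dataset shrinks below the reference amount, it becomes the previous-level dataset.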
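The two rendering conditions of claims 10 and 15 amount to a culling test. In this sketch each cross-section is conservatively bounded by a sphere, the view frustum is given as a list of inward-facing planes, and on-screen size is estimated with a simple pinhole model; these representations and all names are assumptions, not the patent's.

```python
import math

def passes_render_conditions(center, radius, frustum_planes,
                             camera_pos, fov_y, screen_height_px,
                             size_threshold_px):
    """Decide whether one cross-section may be rendered from its texture map.

    First rendering condition (claim 15): the cross-section lies within
    the view frustum. Each plane is (inward unit normal, offset d); the
    bounding sphere is rejected once it lies entirely behind any plane.
    """
    for normal, d in frustum_planes:
        if sum(n * c for n, c in zip(normal, center)) + d < -radius:
            return False
    # Second rendering condition: the projected size on screen must not
    # exceed the size threshold (pinhole estimate of apparent height).
    distance = math.dist(center, camera_pos)
    projected_px = (radius / (distance * math.tan(fov_y / 2))) * screen_height_px
    return projected_px <= size_threshold_px
```

When a visible cross-section is too large on screen, claims 11 and 12 descend to the next level (or to the submodels themselves) instead.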

Description

3D model rendering method and device, electronic device, and storage medium

The present application claims priority to Chinese Patent Application No. 202210996952.8, titled "3D MODEL RENDERING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM," filed on August 19, 2022, the entire contents of which are incorporated herein by reference.

The embodiments of the present application relate to the field of image processing technology, and in particular to a 3D (three-dimensional) model rendering method and apparatus, an electronic device, and a storage medium.

As computer performance improves and image processing technology advances, it has become common to represent objects as 3D models; for example, plants are represented as 3D plant models. Electronic devices can render 3D models to display the represented objects in scenes such as movies, games, and engineering designs.

In the related art, the 3D model of an object must first be built. The 3D model includes multiple submodels, each of which is a triangular mesh containing texture data, and each triangular mesh contains multiple triangles. The large data volume of the triangular meshes significantly limits rendering efficiency. The meshes can therefore be simplified by merging at least two triangles into a single triangle, which simplifies the submodels and improves rendering efficiency; the rendering result of the 3D model is then obtained by rendering the simplified submodels. However, simplifying the triangular mesh may deform the submodels, resulting in a poor rendering effect for the 3D model.

The present application provides a 3D model rendering method and apparatus, an electronic device, and a storage medium to solve the problem of poor rendering effects of 3D models in the related art. The technical solution is as follows.
According to one aspect, a 3D model rendering method is provided, comprising: acquiring a cross-section dataset, wherein the cross-section dataset is constructed based on a plurality of submodels included in a 3D model of an object, each of the plurality of submodels represents a component of the object, texture data of the plurality of submodels represents texture information of the components, the cross-section dataset includes a plurality of cross-section data, and each of the plurality of cross-section data indicates a cross-section of an envelope box of one or more submodels; acquiring texture maps corresponding to the plurality of cross-section data in the cross-section dataset, wherein any one of the texture maps represents texture data obtained by projecting texture data of the submodel corresponding to that texture map's cross-section data onto the cross-section indicated by that cross-section data; and rendering the 3D model based on the plurality of cross-section data in the cross-section dataset and the corresponding texture maps.
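The texture-map acquisition described above (elaborated in claim 9) can be sketched as an orthographic projection followed by fusion. The sketch assumes each submodel's texture data is sampled as colored points and each cross-section carries an origin plus orthonormal in-plane axes spanning a unit square; this point-splatting formulation and all names are assumptions of the sketch.

```python
def project_onto_cross_section(samples, origin, u_axis, v_axis, resolution):
    """Project a submodel's textured samples onto a cross-section plane,
    producing a small texture map (a resolution x resolution grid)."""
    image = [[None] * resolution for _ in range(resolution)]
    for point, color in samples:
        rel = tuple(p - o for p, o in zip(point, origin))
        # In-plane coordinates: dot products with the plane's u/v axes.
        u = sum(a * b for a, b in zip(rel, u_axis))
        v = sum(a * b for a, b in zip(rel, v_axis))
        if 0.0 <= u < 1.0 and 0.0 <= v < 1.0:
            image[int(v * resolution)][int(u * resolution)] = color
    return image

def fuse_texture_maps(maps):
    """Fuse per-submodel texture maps into the single texture map of the
    cross-section (claim 9); later non-empty texels overwrite earlier ones."""
    fused = [row[:] for row in maps[0]]
    for texture in maps[1:]:
        for y, row in enumerate(texture):
            for x, texel in enumerate(row):
                if texel is not None:
                    fused[y][x] = texel
    return fused
```

Because each texture map is built by projection rather than mesh simplification, it preserves the texture and shape of the submodels, which is the advantage the abstract claims over the related art.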
According to another aspect, a 3D model rendering device is provided, the device comprising an acquisition module and a rendering module, wherein: the acquisition module is configured to acquire a cross-section dataset, wherein the cross-section dataset is constructed based on a plurality of submodels included in a 3D model of an object, each of the plurality of submodels represents a component of the object, texture data of the plurality of submodels represents texture information of the components, the cross-section dataset includes a plurality of cross-section data, and each of the plurality of cross-section data indicates a cross-section of an envelope box of one or more submodels; the acquisition module is further configured to acquire texture maps corresponding to the plurality of cross-section data in the cross-section dataset, wherein any one of the texture maps represents texture data obtained by projecting texture data of the submodel corresponding to that texture map's cross-section data onto the cross-section indicated by that cross-section data; and the rendering module is configured to render the 3D model based on the plurality of cross-section data in the cross-section dataset and the corresponding texture maps. According to another aspect, an electronic device is provided, the electronic device comprising a processor and a memory, wherein the memory stores one or more computer programs, and the one or more computer programs are loaded and executed by the processor to enable the electronic device to implement