
CN-121353507-B - Image rendering method, device, equipment and storage medium


Abstract

The present disclosure provides an image rendering method, apparatus, device, and storage medium, relating to the field of computer technology and in particular to the technical field of image rendering. The method comprises: obtaining model data in a target three-dimensional scene, wherein the model data comprises triangular patches and two-dimensional Gaussian points; constructing, for a virtual camera of the target three-dimensional scene, a plurality of emission lines under the current view angle; detecting, based on each emission line, a point to be rendered that satisfies a target condition in the target three-dimensional scene; in a case where the point to be rendered belongs to a target two-dimensional Gaussian point, rendering a target pixel corresponding to the emission line onto a target canvas based on a color value of the target two-dimensional Gaussian point; in a case where the point to be rendered belongs to a target triangular patch, rendering the target pixel onto the target canvas using a color value of the target triangular patch; and outputting the target pixels respectively corresponding to the emission lines on the target canvas to obtain a target image formed by hybrid rendering of the two-dimensional Gaussian points and the triangular patches.
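As an illustrative, non-limiting sketch, the per-ray flow summarized above can be expressed in Python. All names (`render`, `nearest_hit`, the scene layout) are hypothetical and not part of the disclosure; ray/primitive intersection is abstracted behind a lookup:

```python
def render(rays, nearest_hit):
    """Render one pixel per emission line (ray): the nearest hit along the
    ray supplies the pixel color, whether it is a 2D Gaussian point (hit
    via its auxiliary virtual patch) or an ordinary triangular patch."""
    canvas = {}
    for pixel, ray in rays:
        hit = nearest_hit(ray)        # nearest intersection to the camera
        if hit is not None:
            kind, color = hit         # kind: "gaussian" | "triangle"
            canvas[pixel] = color     # same write path for both kinds
    return canvas

# Toy scene: ray 0 hits a Gaussian point, ray 1 hits a triangle, ray 2 misses.
scene = {0: ("gaussian", (1.0, 0.0, 0.0)),
         1: ("triangle", (0.0, 1.0, 0.0))}
canvas = render([(p, p) for p in range(3)], scene.get)
# canvas == {0: (1.0, 0.0, 0.0), 1: (0.0, 1.0, 0.0)}
```

The point of the sketch is that once the nearest-hit search has classified the point to be rendered, both primitive types feed the same per-pixel write onto the target canvas.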

Inventors

  • ZHU YUE
  • HUANG XIAOHUANG
  • ZHU HAO

Assignees

  • Hangzhou Qunhe Information Technology Co., Ltd. (杭州群核信息技术有限公司)

Dates

Publication Date
2026-05-08
Application Date
2025-12-16

Claims (11)

  1. An image rendering method, comprising: obtaining model data in a target three-dimensional scene, wherein the model data comprises triangular patches and two-dimensional Gaussian points, and the triangular patches are basic renderable units of the three-dimensional model data; constructing, for a virtual camera of the target three-dimensional scene, a plurality of emission lines under the current view angle; and performing, for each emission line, respectively: detecting, based on the emission line, a point to be rendered that satisfies a target condition in the target three-dimensional scene, wherein the target condition comprises that the point to be rendered is an intersection point of the emission line and a three-dimensional model expressed by the model data in the target three-dimensional scene, and that, among all intersection points of the emission line and the three-dimensional model, the point to be rendered is nearest to the virtual camera; wherein, for a two-dimensional Gaussian point in the model data, the intersection point is an intersection point of the emission line and a virtual patch, the virtual patch being a triangular patch created in advance for the two-dimensional Gaussian point; the two-dimensional Gaussian point is a graphics primitive defined by a two-dimensional elliptical Gaussian distribution, and the virtual patch is not a triangular patch actually used in the target three-dimensional scene but an imaginary patch used only for auxiliary calculation; in a case where the point to be rendered belongs to a target two-dimensional Gaussian point, rendering a target pixel corresponding to the emission line onto a target canvas based on a color value of the target two-dimensional Gaussian point; in a case where the point to be rendered belongs to a target triangular patch, rendering the target pixel onto the target canvas using a color value of the target triangular patch; and outputting the target pixels respectively corresponding to the plurality of emission lines on the target canvas to obtain a target image formed by hybrid rendering of the two-dimensional Gaussian points and the triangular patches.
  2. The method of claim 1, wherein detecting the point to be rendered that satisfies the target condition in the target three-dimensional scene comprises: acquiring a first set comprising a plurality of triangular patches in the target three-dimensional scene; determining, in the first set, the triangular patch that intersects the emission line and is closest to the virtual camera, to obtain a target triangular patch; searching a second set, within a target distance range, for a target two-dimensional Gaussian point that intersects the emission line and is closest to the virtual camera, wherein the target distance range is the distance between the virtual camera and the target triangular patch; in a case where the target two-dimensional Gaussian point is found, taking the target two-dimensional Gaussian point as the point to be rendered; and in a case where the target two-dimensional Gaussian point is not found, taking the target triangular patch as the point to be rendered.
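The two-phase search of this claim can be sketched as follows. This is an illustrative reading, not the claimed implementation: `tri_hits` and `gauss_hits` are hypothetical `(distance_to_camera, primitive)` pairs for primitives already known to intersect the current emission line, so the actual ray/patch intersection test is abstracted away:

```python
def detect_point_to_render(tri_hits, gauss_hits):
    """Phase 1: nearest triangular patch intersecting the ray.
    Phase 2: within the target distance range [camera, nearest triangle),
    look for a Gaussian point that is nearer still."""
    if not tri_hits:
        target_tri = None
        limit = float("inf")              # no triangle bounds the search
    else:
        limit, target_tri = min(tri_hits)  # nearest triangular patch
    in_range = [h for h in gauss_hits if h[0] < limit]
    if in_range:
        return ("gaussian", min(in_range)[1])   # Gaussian found: it wins
    return ("triangle", target_tri) if target_tri else None

# A Gaussian at distance 2.0 occludes the triangle at distance 3.5 ...
assert detect_point_to_render([(3.5, "T1")], [(2.0, "G1"), (4.0, "G2")]) \
       == ("gaussian", "G1")
# ... but with no Gaussian in range, the triangle is the point to render.
assert detect_point_to_render([(3.5, "T1")], [(4.0, "G2")]) \
       == ("triangle", "T1")
```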
  3. The method of claim 2, wherein searching the second set, within the target distance range, for the target two-dimensional Gaussian point that intersects the emission line and is closest to the virtual camera comprises: searching, within the target distance range, for the virtual patch that intersects the emission line and is closest to the virtual camera, wherein a virtual patch is a triangular patch created in advance for each two-dimensional Gaussian point in the target three-dimensional scene, each two-dimensional Gaussian point in the second set has corresponding virtual patches created for it, and each two-dimensional Gaussian point is expressed by two corresponding virtual patches; and in a case where a virtual patch is found and the intersection point of the emission line and the virtual patch lies within the elliptical range of the candidate two-dimensional Gaussian point corresponding to that virtual patch, determining the candidate two-dimensional Gaussian point as the found target two-dimensional Gaussian point.
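The "within the elliptical range" test of this claim can be sketched as below. This is an illustrative sketch, assuming the Gaussian's two axes are represented as orthogonal 3D semi-axis vectors (direction times semi-axis length); all names are hypothetical:

```python
def inside_ellipse(p, center, axis_u, axis_v):
    """Project the ray/virtual-patch intersection point `p` onto the
    Gaussian's two axis vectors and test whether its normalized
    coordinates fall inside the unit ellipse."""
    d = [p[i] - center[i] for i in range(3)]
    lu2 = sum(a * a for a in axis_u)           # squared semi-axis lengths
    lv2 = sum(a * a for a in axis_v)
    u = sum(d[i] * axis_u[i] for i in range(3)) / lu2   # normalized coords
    v = sum(d[i] * axis_v[i] for i in range(3)) / lv2
    return u * u + v * v <= 1.0

# A point halfway along the major axis lies inside the ellipse ...
assert inside_ellipse((1.0, 0, 0), (0, 0, 0), (2.0, 0, 0), (0, 1.0, 0))
# ... while the corner of the bounding rectangle lies outside it,
# which is exactly the case the claim filters out.
assert not inside_ellipse((2.0, 1.0, 0), (0, 0, 0), (2.0, 0, 0), (0, 1.0, 0))
```

This is why hitting a virtual patch alone is not enough: the patch covers the rectangle, and the corners outside the ellipse must be rejected (claim 4's condition 2).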
  4. The method according to claim 3, further comprising: determining that the target two-dimensional Gaussian point is not found within the target distance range between the virtual camera and the target triangular patch if any one of the following set of conditions is satisfied, the set of conditions comprising: condition 1) no virtual patch is found; condition 2) a virtual patch is found, but the intersection point of the emission line and the virtual patch does not lie within the elliptical range of the candidate two-dimensional Gaussian point corresponding to that virtual patch.
  5. The method according to claim 3, wherein, for any two-dimensional Gaussian point, the virtual patches created for the two-dimensional Gaussian point satisfy the following requirements: the two virtual patches of the two-dimensional Gaussian point together form the minimal circumscribed rectangle of the ellipse of the two-dimensional Gaussian point; the vertices of the virtual patches of the two-dimensional Gaussian point are expressed using the two axis parameters of the two-dimensional Gaussian point; and the ellipse of the two-dimensional Gaussian point is expressed using the barycentric coordinates of the virtual patch, the barycentric coordinates of the virtual patch being provided by Vulkan built-in parameters.
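A host-side sketch of the virtual-patch construction in this claim follows (in a shader, the hit point would instead be recovered from Vulkan's built-in barycentric coordinates, which this CPU sketch does not model). Names are illustrative; the axes are assumed to be semi-axis vectors (direction times semi-axis length):

```python
def virtual_patches(center, axis_u, axis_v):
    """Build the two virtual triangular patches for one 2D Gaussian point:
    together they form the minimal circumscribed rectangle of its ellipse,
    with vertices expressed via the Gaussian's two axis vectors."""
    def corner(su, sv):
        return tuple(center[i] + su * axis_u[i] + sv * axis_v[i]
                     for i in range(3))
    c00, c10 = corner(-1, -1), corner(1, -1)
    c11, c01 = corner(1, 1), corner(-1, 1)
    # Two triangles sharing the rectangle's diagonal.
    return [(c00, c10, c11), (c00, c11, c01)]

# Ellipse with semi-axes 2 and 1: corners span [-2, 2] x [-1, 1].
tris = virtual_patches((0, 0, 0), (2.0, 0, 0), (0, 1.0, 0))
```

Because the rectangle is minimal, every ray that can hit the ellipse must hit one of the two patches, so ray/triangle intersection hardware can be reused for Gaussian points.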
  6. The method of any one of claims 2-5, further comprising: in a case where the transparency of the target two-dimensional Gaussian point is higher than a preset threshold, determining the distance from the target two-dimensional Gaussian point to the target triangular patch along the direction of the emission line as a new target distance range, determining the position of the target two-dimensional Gaussian point as a new position of the virtual camera, and returning to the step of searching the second set, within the target distance range, for the target two-dimensional Gaussian point that intersects the emission line and is closest to the virtual camera, until any one of the following termination conditions is satisfied: the loop operation has been performed n times, where n is a preset positive integer; or no triangular patch and no new target two-dimensional Gaussian point is found within the new target distance range.
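The loop in this claim can be sketched as follows; it is an illustrative reading in which a transparency above the threshold means the Gaussian is see-through enough that the search continues past it. `gauss_hits`, `alpha_of`, and the argument names are hypothetical:

```python
def collect_gaussians(gauss_hits, tri_dist, alpha_of, threshold, n):
    """Step past sufficiently transparent Gaussian points toward the
    nearest triangular patch, collecting each one, until n iterations,
    an insufficiently transparent Gaussian, or no further hit in range.
    `gauss_hits`: (distance, gaussian) pairs along the emission line."""
    collected, start = [], 0.0
    for _ in range(n):                     # termination: n loop operations
        in_range = [h for h in gauss_hits if start < h[0] < tri_dist]
        if not in_range:                   # termination: nothing left in range
            break
        dist, g = min(in_range)            # nearest to current "camera"
        collected.append(g)
        if alpha_of(g) <= threshold:       # not transparent enough: stop
            break
        start = dist                       # new virtual-camera position
    return collected

# Three transparent Gaussians, but only two lie before the triangle at 2.5.
hits = [(1.0, "G1"), (2.0, "G2"), (3.0, "G3")]
found = collect_gaussians(hits, 2.5, lambda g: 0.9, 0.5, 10)
# found == ["G1", "G2"]
```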
  7. The method of claim 6, wherein rendering the target pixel corresponding to the emission line onto the target canvas based on the color value of the target two-dimensional Gaussian point comprises: in a case where a plurality of target two-dimensional Gaussian points are found through the loop operation, determining rendering values of the respective target two-dimensional Gaussian points based on their respective transparency and color values; accumulating the rendering values of the respective target two-dimensional Gaussian points to obtain a color value of the target pixel; and filling the color value into the target pixel position of the target canvas.
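The accumulation step of this claim can be sketched with standard front-to-back alpha compositing, which the claim does not spell out and is therefore an assumption here; `gaussians` is a hypothetical front-to-back list of `(opacity, color)` pairs:

```python
def composite(gaussians):
    """Accumulate each Gaussian point's rendering value, derived from its
    own opacity and color, into a single pixel color (front to back)."""
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0                    # light not yet absorbed
    for alpha, rgb in gaussians:
        w = transmittance * alpha          # this point's rendering weight
        for i in range(3):
            color[i] += w * rgb[i]
        transmittance *= (1.0 - alpha)
    return tuple(color)

# Half-opaque red in front of half-opaque green:
# red contributes 0.5, green contributes 0.5 * 0.5 = 0.25.
px = composite([(0.5, (1.0, 0.0, 0.0)), (0.5, (0.0, 1.0, 0.0))])
# px == (0.5, 0.25, 0.0)
```

The resulting color value is what would then be filled into the target pixel position of the target canvas.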
  8. An image rendering apparatus, comprising: an acquisition module configured to obtain model data in a target three-dimensional scene, wherein the model data comprises triangular patches and two-dimensional Gaussian points; a construction module configured to construct, for a virtual camera of the target three-dimensional scene, a plurality of emission lines under the current view angle, the construction module being specifically configured to perform, for each emission line, respectively: detecting, based on the emission line, a point to be rendered that satisfies a target condition in the target three-dimensional scene, wherein the target condition comprises that the point to be rendered is an intersection point of the emission line and a three-dimensional model expressed by the model data in the target three-dimensional scene, and that, among all intersection points of the emission line and the three-dimensional model, the point to be rendered is nearest to the virtual camera; wherein, for a two-dimensional Gaussian point in the model data, the intersection point is an intersection point of the emission line and a virtual patch, the virtual patch being a triangular patch created in advance for the two-dimensional Gaussian point; the two-dimensional Gaussian point is a graphics primitive defined by a two-dimensional elliptical Gaussian distribution, and the virtual patch is not a triangular patch actually used in the target three-dimensional scene but an imaginary patch used only for auxiliary calculation; in a case where the point to be rendered belongs to a target two-dimensional Gaussian point, rendering a target pixel corresponding to the emission line onto a target canvas based on a color value of the target two-dimensional Gaussian point; in a case where the point to be rendered belongs to a target triangular patch, rendering the target pixel onto the target canvas using a color value of the target triangular patch; and a rendering module configured to output the target pixels respectively corresponding to the plurality of emission lines on the target canvas, to obtain a target image formed by hybrid rendering of the two-dimensional Gaussian points and the triangular patches.
  9. An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
  10. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-7.
  11. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-7.

Description

Image rendering method, device, equipment and storage medium

Technical Field

The present disclosure relates to the field of computer technology, and in particular to the technical fields of image rendering, ray tracing, and the like.

Background

As image rendering technology continues to mature, it has been widely used in a number of industries. Image rendering converts a three-dimensional geometric model in a three-dimensional scene into a two-dimensional image with visual effects that can be shown by a display device.

Disclosure of Invention

The present disclosure provides an image rendering method, apparatus, device, and storage medium to solve or alleviate one or more technical problems in the prior art.

According to an aspect of the present disclosure, there is provided an image rendering method including: obtaining model data in a target three-dimensional scene, wherein the model data comprises triangular patches and two-dimensional Gaussian points; constructing, for a virtual camera of the target three-dimensional scene, a plurality of emission lines under the current view angle; and performing, for each emission line, respectively: detecting, based on the emission line, a point to be rendered that satisfies a target condition in the target three-dimensional scene, wherein the target condition comprises that the point to be rendered is an intersection point of the emission line and a three-dimensional model expressed by the model data in the target three-dimensional scene, and that, among all intersection points of the emission line and the three-dimensional model, the point to be rendered is nearest to the virtual camera; in a case where the point to be rendered belongs to a target two-dimensional Gaussian point, rendering a target pixel corresponding to the emission line onto a target canvas based on a color value of the target two-dimensional Gaussian point; in a case where the point to be rendered belongs to a target triangular patch, rendering the target pixel onto the target canvas using a color value of the target triangular patch; and outputting the target pixels respectively corresponding to the plurality of emission lines on the target canvas to obtain a target image formed by hybrid rendering of the two-dimensional Gaussian points and the triangular patches.

According to another aspect of the present disclosure, there is provided an image rendering apparatus including: an acquisition module configured to obtain model data in a target three-dimensional scene, wherein the model data comprises triangular patches and two-dimensional Gaussian points; a construction module configured to construct, for a virtual camera of the target three-dimensional scene, a plurality of emission lines under the current view angle, the construction module being specifically configured to perform, for each emission line, respectively: detecting, based on the emission line, a point to be rendered that satisfies a target condition in the target three-dimensional scene, wherein the target condition comprises that the point to be rendered is an intersection point of the emission line and a three-dimensional model expressed by the model data in the target three-dimensional scene, and that, among all intersection points of the emission line and the three-dimensional model, the point to be rendered is nearest to the virtual camera; in a case where the point to be rendered belongs to a target two-dimensional Gaussian point, rendering a target pixel corresponding to the emission line onto a target canvas based on a color value of the target two-dimensional Gaussian point; in a case where the point to be rendered belongs to a target triangular patch, rendering the target pixel onto the target canvas using a color value of the target triangular patch; and a rendering module configured to output the target pixels respectively corresponding to the plurality of emission lines on the target canvas, to obtain a target image formed by hybrid rendering of the two-dimensional Gaussian points and the triangular patches.

It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.

Drawings

In the drawings, the same reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily drawn to scale. It is appreciated that these drawings depict only some embodiments provided according to the disclosure and are not to be considered limiting of its scope.

Fig. 1 is a flow diagram of an image rendering method according to a first embodiment of the present disclosure;
Fig. 2 is a flow diagram of detecting points to be rendered that meet a target condition according