CN-122023623-A - Image rendering method and device, storage medium and electronic equipment

CN122023623A

Abstract

The application discloses an image rendering method and apparatus, a storage medium, and an electronic device. The method comprises: obtaining depth information of each vertex in a three-dimensional model corresponding to an object to be rendered; inversely mapping pixel points captured in real time from screen space to view space based on the depth information; performing projective transformation on the three-dimensional coordinate points in view space based on an original projection matrix to obtain a global distortion amount; correcting the original projection matrix to obtain a corrected projection matrix when the global distortion amount is greater than a first threshold; and determining a target color value for each pixel point based on the corrected projection matrix and the screen curvature, and rendering the object to be rendered based on the target color values, wherein the screen curvature represents the rate of depth change of a pixel point within a local image region. The application solves the technical problem of low visual quality of the rendered image in real-time rendering scenes.

Inventors

  • CHEN CONG
  • SU XIAO

Assignees

  • 湖南快乐阳光互动娱乐传媒有限公司 (Hunan Happy Sunshine Interactive Entertainment Media Co., Ltd.)

Dates

Publication Date
2026-05-12
Application Date
2026-01-23

Claims (12)

  1. An image rendering method, comprising: obtaining depth information of each vertex in a three-dimensional model corresponding to an object to be rendered; inversely mapping pixel points captured in real time from a screen space to a view space based on the depth information, and performing projective transformation on three-dimensional coordinate points in the view space based on an original projection matrix to obtain a global distortion amount, wherein the global distortion amount represents the geometric distortion generated, after the projective transformation, by an image area formed by the pixel points and their adjacent pixel points; correcting the original projection matrix to obtain a corrected projection matrix under the condition that the global distortion amount is greater than a first threshold value; and determining a target color value of each pixel point based on the corrected projection matrix and a screen curvature, and rendering the object to be rendered based on the target color values, wherein the screen curvature represents the depth change rate of a pixel point in a local image area.
  2. The method according to claim 1, wherein performing the projective transformation on the three-dimensional coordinate points in the view space based on the original projection matrix to obtain the global distortion amount comprises: projecting the three-dimensional coordinate points into the screen space based on the original projection matrix to obtain a neighborhood distance parameter representing the distance between a pixel point and its adjacent pixel points; determining a screen gradient index of the pixel point based on the neighborhood distance parameter, wherein the screen gradient index represents the local distortion amount generated by the local image area formed by the pixel point and its adjacent pixel points after projection into the screen space; and obtaining the global distortion amount by weighted summation of the local distortion amounts.
  3. The method of claim 2, wherein determining the screen gradient index of the pixel point based on the neighborhood distance parameter comprises: constructing a Jacobian matrix based on the neighborhood distance parameter, the screen coordinates of the pixel point, and a gradient sampling radius; and obtaining the partial derivatives of each element in the Jacobian matrix, and determining the sum of the Euclidean norms of the partial derivatives of each element as the screen gradient index of the pixel point.
  4. The method of claim 2, wherein obtaining the global distortion amount by weighted summation of the local distortion amounts comprises: dynamically acquiring a target gaze point from a visual attention area that changes in real time; acquiring the gaze point coordinates of the target gaze point, wherein the gaze point coordinates are coordinate data in a screen coordinate system; constructing a target weight map based on the gaze point coordinates, wherein the target weight map comprises weight values for the local distortion amounts of all pixel points determined with the target gaze point as the center; and performing weighted summation of the local distortion amounts based on the weight values to obtain the global distortion amount.
  5. The method of claim 1, wherein correcting the original projection matrix when the global distortion amount is greater than the first threshold comprises: constructing a correction matrix based on adjustment amounts for incrementally adjusting key parameters of the original projection matrix, wherein the key parameters comprise the displacement, scaling, and rotation angle of the pixel points; constructing a target optimization function for compensating the global distortion amount based on the correction matrix, a correction intensity, and the screen coordinates of the pixel points, wherein the correction intensity is positively correlated with the global distortion amount; traversing each pixel point in the screen space based on the target optimization function to determine an optimal parameter vector that minimizes the global distortion amount; and determining the product of the optimal parameter vector and the original projection matrix as the corrected projection matrix.
  6. The method of claim 5, wherein traversing each pixel point in the screen space based on the target optimization function to determine the optimal parameter vector that minimizes the global distortion amount comprises: determining a set of parameter vectors that minimize the target optimization function by traversing the screen coordinates of the pixel points; and sorting the set of parameter vectors by their element values, and taking the parameter vector with the minimum value in the sorted result as the optimal parameter vector.
  7. The method of claim 1, wherein determining the target color value of each pixel point based on the corrected projection matrix and the screen curvature comprises: projecting the three-dimensional coordinate points into the screen space based on the corrected projection matrix to obtain the original color values of the pixel points; mapping the screen curvature to a blend weight, and, when the blend weight is greater than a second threshold, identifying a first local image region with a higher depth change rate and a second local image region with a lower depth change rate based on the screen curvature; performing differentiated sampling of the sub-pixels of the pixel points in the first local image region and of the pixel points in the second local image region to obtain multi-sample color values; and performing weighted summation of the original color values and the multi-sample color values based on the blend weight to obtain the target color values.
  8. The method of claim 1, wherein determining the target color value of each pixel point based on the corrected projection matrix and the screen curvature further comprises: in the process of performing the projective transformation on the three-dimensional coordinate points based on the original projection matrix, looking up the initial color values of the pixel points based on the texture coordinates of the three-dimensional model surface; mapping the screen curvature to a blend weight; when the blend weight is greater than a second threshold, performing differentiated sampling of a plurality of sub-pixels of each pixel point based on the screen curvature to obtain multi-sample color values, wherein the higher the screen curvature, the more sub-pixels are sampled; and performing weighted summation of the initial color values and the multi-sample color values based on the blend weight to obtain the target color values.
  9. The method of claim 1, wherein determining the target color value of each pixel point based on the corrected projection matrix and the screen curvature further comprises: mapping the screen curvature to a blend weight; determining the pixel sampling positions of the pixel points based on the corrected projection matrix when the blend weight is greater than a second threshold; determining the sub-pixel sampling positions of the pixel points based on the pixel sampling positions and the screen curvature; performing differentiated sampling of the sub-pixels of each pixel point based on the sub-pixel sampling positions to obtain multi-sample color values; and performing weighted summation of the pre-cached initial color values and the multi-sample color values based on the blend weight to obtain the target color values.
  10. An image rendering apparatus, comprising: a first acquisition unit, configured to acquire depth information of each vertex in a three-dimensional model corresponding to an object to be rendered; a first processing unit, configured to inversely map pixel points captured in real time from a screen space to a view space based on the depth information, and to perform projective transformation on three-dimensional coordinate points in the view space based on an original projection matrix to obtain a global distortion amount, wherein the global distortion amount represents the geometric distortion generated, after the projective transformation, by an image area formed by the pixel points and their adjacent pixel points; a correction unit, configured to correct the original projection matrix to obtain a corrected projection matrix when the global distortion amount is greater than a first threshold value; and a second processing unit, configured to determine the target color values of the pixel points based on the corrected projection matrix and a screen curvature, and to render the object to be rendered based on the target color values, wherein the screen curvature represents the depth change rate of a pixel point in a local image area.
  11. A computer-readable storage medium, comprising a stored program, wherein the program, when executed by a terminal device or a computer, performs the method according to any one of claims 1 to 9.
  12. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program and the processor is arranged to execute the method according to any one of claims 1 to 9 by means of the computer program.
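As an illustration of the color-blending steps in claims 7 to 9, the following Python sketch maps a screen curvature value to a blend weight, chooses a sub-pixel sample count that grows with curvature, and blends the original color with the multi-sample average. The exponential curvature-to-weight mapping and the sample-count rule are assumptions chosen for illustration; the claims only require that the weight is derived from the curvature and that higher curvature means more sub-pixel samples.

```python
import numpy as np

def blend_weight(curvature, k=4.0):
    """Map screen curvature (local depth change rate) to a blend
    weight in [0, 1). The exponential form is an assumed monotone
    mapping, not specified by the claims."""
    return 1.0 - np.exp(-k * curvature)

def subsample_count(curvature, base=1, max_extra=7):
    """Claim 8: 'the higher the screen curvature, the more
    sub-pixels are sampled', expressed as a simple monotone rule."""
    return base + int(round(max_extra * (1.0 - np.exp(-curvature))))

def target_color(original, subsample_colors, curvature):
    """Claims 7-9: weighted summation of the original (or pre-cached
    initial) color and the multi-sample color by the blend weight."""
    w = blend_weight(curvature)
    multi = np.mean(np.asarray(subsample_colors, dtype=float), axis=0)
    return (1.0 - w) * np.asarray(original, dtype=float) + w * multi
```

With zero curvature the blend weight is zero and the original color passes through unchanged; on high-curvature silhouettes the result approaches the multi-sample average, which is the antialiasing effect the claims describe.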

Description

Image rendering method and device, storage medium and electronic equipment

Technical Field

The present application relates to the field of image processing technologies, and in particular to an image rendering method and apparatus, a storage medium, and an electronic device.

Background

In real-time rendering, three-dimensional coordinate points in view space are typically projected into screen space using fixed projection matrix parameters, which tends to create significant geometric distortion in wide-angle edge regions or on mobile devices. To address this, the related art generally adopts hardware-level antialiasing techniques or image-space post-processing antialiasing techniques. The former samples only polygon edges, smoothing object contours with little extra shading computation, but has limited effect on transparent materials and dynamic objects and incurs high memory and computational overhead. The latter locates jagged regions mainly by edge detection and pattern recognition; such a simple scheme is prone to ghosting artifacts when dealing with complex, high-frequency geometric detail. Traditional distortion processing therefore relies mostly on fixed projection parameters and full-screen post-processing filtering, and cannot fundamentally solve the geometric distortion and aliasing at screen edges in complex motion scenes, resulting in the technical problem of low visual quality of the rendered image. No effective solution to these problems has been proposed so far.

Disclosure of Invention

The embodiments of the application provide an image rendering method and device, a storage medium, and electronic equipment, so as to at least solve the technical problem of low visual quality of rendered images in real-time rendering scenes.
According to one aspect of the embodiments of the application, an image rendering method is provided, comprising: obtaining depth information of each vertex in a three-dimensional model corresponding to an object to be rendered; inversely mapping pixel points captured in real time from a screen space to a view space based on the depth information; performing projective transformation on three-dimensional coordinate points in the view space based on an original projection matrix to obtain a global distortion amount, wherein the global distortion amount represents the geometric deformation generated, after the projective transformation, by an image area formed by the pixel points and their adjacent pixel points; correcting the original projection matrix to obtain a corrected projection matrix when the global distortion amount is greater than a first threshold value; and determining a target color value of each pixel point based on the corrected projection matrix and a screen curvature, and rendering the object to be rendered based on the target color values, wherein the screen curvature represents the depth change rate of a pixel point in a local image area. Optionally, obtaining the global distortion amount comprises: projecting the three-dimensional coordinate points into the screen space based on the original projection matrix to obtain a neighborhood distance parameter representing the distance between a pixel point and its adjacent pixel points; determining a screen gradient index of the pixel point based on the neighborhood distance parameter, wherein the screen gradient index represents the local distortion amount generated by the local image area formed by the pixel point and its adjacent pixel points after projection into the screen space; and obtaining the global distortion amount by weighted summation of the local distortion amounts.
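The inverse mapping step above (lifting a screen-space pixel back into view space using its sampled depth) can be sketched as follows. The NDC conventions used here (screen y growing downward, depth buffer values in [0, 1]) and all function names are assumptions for illustration, not details taken from the patent.

```python
import numpy as np

def screen_to_view(u, v, depth, P_inv, width, height):
    """Lift a screen-space pixel (u, v) with sampled depth back
    into view space via the inverse projection matrix P_inv."""
    # screen pixel -> normalised device coordinates in [-1, 1]
    ndc = np.array([2.0 * u / width - 1.0,
                    1.0 - 2.0 * v / height,   # flip y: screen y grows downward
                    2.0 * depth - 1.0,        # depth buffer value in [0, 1]
                    1.0])
    view = P_inv @ ndc
    return view[:3] / view[3]                 # undo the perspective divide
```

A round trip through the forward projection and this inverse recovers the original view-space point, which is the property the method relies on when it re-projects the recovered points to measure distortion.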
Optionally, determining the screen gradient index of the pixel point based on the neighborhood distance parameter includes constructing a Jacobian matrix based on the neighborhood distance parameter, the screen coordinates of the pixel point, and a gradient sampling radius; obtaining the partial derivatives of each element in the Jacobian matrix; and determining the sum of the Euclidean norms of the partial derivatives of each element as the screen gradient index of the pixel point. Optionally, obtaining the global distortion amount by weighted summation of the local distortion amounts includes dynamically acquiring a target gaze point from a visual attention area that changes in real time; acquiring the gaze point coordinates of the target gaze point, wherein the gaze point coordinates are coordinate data in a screen coordinate system; constructing a target weight map based on the gaze point coordinates, wherein the target weight map comprises weight values for the local distortion amounts of each pixel point determined with the target gaze point as the center; and performing weighted summation of the local distortion amounts based on the weight values to obtain the global distortion amount.
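A minimal numerical sketch of the screen gradient index and the gaze-weighted global distortion described above, assuming a finite-difference Jacobian of the view-to-screen mapping and a Gaussian weight map centred on the gaze point (both concrete choices are mine, not the patent's):

```python
import numpy as np

def project(P, p_view):
    """Project a view-space point to screen coordinates with a 4x4
    projection matrix P, including the perspective divide."""
    v = P @ np.append(p_view, 1.0)
    return v[:2] / v[3]

def screen_gradient_index(P, p_view, radius=1e-3):
    """Finite-difference Jacobian of the view->screen mapping around
    p_view using the gradient sampling radius; the sum of the
    Euclidean norms of its columns serves as the screen gradient
    index (local distortion amount)."""
    cols = []
    for axis in range(3):
        step = np.zeros(3)
        step[axis] = radius
        cols.append((project(P, p_view + step) - project(P, p_view - step)) / (2.0 * radius))
    J = np.stack(cols, axis=1)  # 2x3 Jacobian
    return float(sum(np.linalg.norm(J[:, k]) for k in range(3)))

def global_distortion(local_amounts, pixel_xy, gaze_xy, sigma=50.0):
    """Gaze-centred weight map (assumed Gaussian falloff) followed by
    the weighted summation of the local distortion amounts."""
    d2 = np.sum((np.asarray(pixel_xy, float) - np.asarray(gaze_xy, float)) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return float(np.sum(w * np.asarray(local_amounts, float)))
```

The Gaussian falloff simply makes distortion near the gaze point dominate the sum, matching the intent that the weight map is centred on the target gaze point; any monotonically decreasing falloff would serve the same role.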