EP-4395308-B1 - COMPUTER SYSTEM AND METHOD FOR 3D SCENE GENERATION
Inventors
- FODA, MOHAMMAD
Dates
- Publication Date
- 2026-05-06
- Application Date
- 2023-01-05
Claims (6)
- A computer system (600) for three-dimensional (3D) scene generation, comprising: a processing unit (601), running an application program (301, 608); and a graphics processing unit (GPU) (303, 602), coupled to the processing unit (601), and configured to generate an image pair according to rendering instructions sent from the processing unit (601); wherein the processing unit (601) is configured to execute the following operations: obtaining (S401) a first convergence parameter from a configuration file that corresponds to the application program (301, 608); extracting (S402) depth information from function calls sent from the application program (301, 608) to a graphics driver (302, 607), and determining a minimum depth based on the depth information; modifying (S404) the function calls to apply a second convergence parameter; and converting (S405) the modified function calls into rendering instructions and sending the rendering instructions to the GPU (303, 602), by running the graphics driver (302, 607); characterized in that the processing unit (601) is further configured to execute the following operations: determining (S501) whether the minimum depth is smaller than the first convergence parameter; in response to the minimum depth being smaller than the first convergence parameter, calculating (S502) the second convergence parameter by adding a fraction of the minimum depth to the minimum depth; in response to the minimum depth not being smaller than the first convergence parameter, calculating (S503) the second convergence parameter by adding the fraction of the minimum depth to the first convergence parameter; wherein the processing unit (601) is further configured to calculate the fraction of the minimum depth based on a formula of: α × min_depth = min_depth × popout_bias × screen_width / IPD, wherein min_depth is the minimum depth, α × min_depth is the fraction of the minimum depth, popout_bias is a predefined constant value, screen_width is a screen width of a real screen of the computer system, and IPD is an interpupillary distance.
- The computer system (600) as claimed in claim 1, wherein the GPU (303, 602) is further configured to generate a stereoscopic image based on the image pair, and to send the stereoscopic image to an autostereoscopic display device (304) to display the stereoscopic image.
- The computer system (600) as claimed in claim 1, wherein the GPU (303, 602) is further configured to send the image pair to a wearable device that includes a pair of display panels for displaying the image pair.
- A method (400) for three-dimensional (3D) scene generation, carried out by a computer system (600) running an application program (301, 608), the method comprising: obtaining (S401) a first convergence parameter from a configuration file that corresponds to the application program (301, 608); extracting (S402) depth information from function calls sent from the application program (301, 608) to a graphics driver (302, 607), and determining a minimum depth based on the depth information; modifying (S404) the function calls to apply a second convergence parameter; converting (S405) the modified function calls into rendering instructions and sending the rendering instructions to the GPU (303, 602), by running the graphics driver (302, 607); and generating (S406) an image pair according to the rendering instructions; characterized in that the method further comprises: determining (S501) whether the minimum depth is smaller than the first convergence parameter; in response to the minimum depth being smaller than the first convergence parameter, calculating (S502) the second convergence parameter by adding a fraction of the minimum depth to the minimum depth; in response to the minimum depth not being smaller than the first convergence parameter, calculating (S503) the second convergence parameter by adding the fraction of the minimum depth to the first convergence parameter; wherein the step of calculating the second convergence parameter based on the first convergence parameter and the minimum depth comprises: calculating the fraction of the minimum depth based on a formula of: α × min_depth = min_depth × popout_bias × screen_width / IPD, wherein min_depth is the minimum depth, α × min_depth is the fraction of the minimum depth, popout_bias is a predefined constant value, screen_width is a screen width of a real screen of the computer system, and IPD is an interpupillary distance.
- The method (400) as claimed in claim 4, further comprising: generating a stereoscopic image based on the image pair, and sending the stereoscopic image to an autostereoscopic display device (304) to display the stereoscopic image.
- The method (400) as claimed in claim 4, further comprising: sending the image pair to a wearable device including a pair of display panels for displaying the image pair.
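For illustration, the convergence-adaptation operations S501–S503 and the fraction formula recited in claims 1 and 4 can be sketched as follows. This is a minimal, non-authoritative Python sketch; the function and parameter names are illustrative and do not appear in the patent itself.

```python
def compute_second_convergence(first_convergence, min_depth,
                               popout_bias, screen_width, ipd):
    """Adapt the convergence parameter to the minimum vertex depth.

    The fraction of the minimum depth follows the claimed formula:
        alpha * min_depth = min_depth * popout_bias * screen_width / IPD
    """
    fraction = min_depth * popout_bias * screen_width / ipd

    if min_depth < first_convergence:
        # S501/S502: an object is closer than the configured convergence,
        # so base the new convergence on the minimum depth itself.
        return min_depth + fraction
    else:
        # S503: no object is closer than the configured convergence,
        # so keep the first convergence parameter as the base.
        return first_convergence + fraction
```

In this sketch, when the nearest vertex moves in front of the configured convergence distance, the convergence value is pulled in toward that vertex, which is how the claims keep parallax bounded for close objects.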
Description
BACKGROUND OF THE INVENTION Field of the Invention The present disclosure relates in general to image processing techniques, and it relates in particular to three-dimensional (3D) scene generation. Description of the Related Art As the eyes are in different positions on the head, they present different views simultaneously. The principle of stereoscopic 3D imaging is to make the right and left eyes see slightly different images, so that the brain exploits the parallax due to the different views from the eyes to gain depth perception and estimate distance. Parallax brings depth perception, yet too much parallax may lead to the effect of vergence accommodation conflict (VAC), which can cause discomfort such as visual fatigue and eye strain, or even disorient some viewers who are not used to 3D visualization effects. Therefore, it is crucial to control the amount of parallax in stereoscopic 3D imaging for 3D visual applications. US 10 241 329 B2 discloses a method of operation in a near-eye display system that includes determining, using an eye tracking component of the near-eye display system, a pose of a user's eye. A shift vector is determined for a magnifier lens of the near-eye display system based on the pose of the user's eye, and the shift vector is communicated to an actuator of the near-eye display system to instruct translation of the magnifier lens relative to the user's eye. After translation of the magnifier lens, an array of elemental images is rendered at a position within a near-eye lightfield frame and communicated for display at a display panel of the near-eye display system. T. SHIBATA ET AL: "The zone of comfort: Predicting visual discomfort with stereo displays", JOURNAL OF VISION, vol. 11, no. 8, 21 July 2011 (2011-07-21), investigates whether vergence-accommodation conflicts are a cause of discomfort in stereo displays and provides analysis of comfort metrics and practical implications. 
It discusses quantitative formulations used by practitioners, including expressing disparity as a percentage of screen width and relationships tying vergence measures (e.g. prism diopters) to interocular distance and stimulus distance. SAMMY ROGMANS ET AL: "Biological-aware stereoscopic rendering in free viewpoint technology using GPU computing", 3DTV-CONFERENCE: THE TRUE VISION - CAPTURE, TRANSMISSION AND DISPLAY OF 3D VIDEO (3DTV-CON), 2010, IEEE, PISCATAWAY, NJ, USA, 7 June 2010 (2010-06-07), pages 1-4, describes a biological-aware stereoscopic renderer for close-range video communication using free-viewpoint technology, where depth information is extracted (e.g. via plane sweep / depth-map histogram) to estimate accommodation distance. It then adapts vergence/convergence (including an IPD-dependent convergence computation) using a control loop to keep stereopsis within Percival's zone of comfort, and is implemented for real-time GPU operation. In 3D scenes, the convergence value and the vertex depth are two key factors that affect the amount of parallax. The vertex depth is given by the application program, depending on the image content that the application program is designed to present to a viewer's eyes, so it varies from application program to application program, and from scene to scene. Thus, a fixed convergence value, as well as a static range of convergence values, fails to accommodate the variable vertex depth for various application programs and various scenes. When an object comes too close to the virtual eyes (or virtual cameras) in a virtual 3D space, excessive parallax will be produced. Excessive parallax may cause the viewer's eyes to see two excessively separated images instead of one stereoscopic object popping out, and to have difficulty converging them. 
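The relationship between convergence, vertex depth, and parallax described above can be made concrete with a common driver-level stereo model. This model is illustrative only and is not taken from the patent; it is one conventional way parallax is computed from a convergence distance.

```python
def parallax(separation, convergence, depth):
    """Per-vertex screen parallax in a common driver-level stereo model.

    Vertices at depth == convergence land on the screen plane (zero
    parallax); the magnitude of the negative ("pop-out") parallax grows
    without bound as depth approaches zero, which is the excessive-parallax
    problem the disclosure addresses.
    """
    return separation * (1.0 - convergence / depth)
```

For example, with a fixed convergence of 10 units, a vertex at depth 10 has zero parallax, while a vertex at depth 1 has five times the parallax magnitude of one at depth 2, illustrating why a fixed convergence value cannot accommodate objects that come very close to the virtual cameras.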
Therefore, it is desirable to have a solution for 3D scene generation that is capable of controlling the parallax and ensuring it remains at an appropriate level regardless of how close objects come from scene to scene. BRIEF SUMMARY OF THE INVENTION The abovementioned problem is solved by a computer system according to appended claim 1, and a method for three-dimensional (3D) scene generation according to appended claim 4. Advantageous embodiments are the subject of the dependent claims. By adapting the convergence value to the vertex depth of the object in the virtual 3D space, the computer system and method for 3D scene generation provided by embodiments of the present disclosure are capable of controlling the parallax so that it remains at an appropriate level regardless of how close objects come from scene to scene. Thus, the effect of vergence accommodation conflict (VAC) caused by excessive parallax can be avoided. BRIEF DESCRIPTION OF THE DRAWINGS The present disclosure can be more fully understood by reading the subsequent detailed description and examples with references made to the accompanying drawings, wherein: FIG. 1A illustrates the side view of an exemplary virtual 3D space; FIG. 1B illustrates the side view of another exemplary virtual 3D space; FIG. 2