CN-121982217-A - Lightweight virtual scene self-adaptive generation and rendering method for broadcast television production

CN 121982217 A

Abstract

The invention discloses a lightweight virtual scene self-adaptive generation and rendering method for broadcast television production, relating to the technical fields of broadcast television production, computer graphics and virtual studios. The invention significantly lowers the production threshold and shortens the production cycle of virtual scenes, achieves frame-rate stability and broadcast safety on mid- and low-end hardware, optimizes the allocation of computing resources, preserves the core visual experience when performance is limited, improves the asset reuse rate, and reduces the operating cost of broadcasting institutions.

Inventors

  • Wang Zhenqiang
  • Hu Yalong
  • Sui Xiancai
  • Chu Xiao
  • Zhu Dengming
  • Wang Wei
  • Feng Bing
  • Gu Binzhang

Assignees

  • 青岛广播电视综合信息中心有限公司
  • 太仓中科信息技术研究院

Dates

Publication Date
2026-05-05
Application Date
2026-02-06

Claims (9)

  1. A lightweight virtual scene self-adaptive generation and rendering method for broadcast television production, characterized by comprising the following steps: carrying out topology loading and asset instantiation according to a scene configuration file selected or input by a user, wherein the scene configuration file comprises a scene topology ID, an asset index list and initial attribute parameters; customizing the appearance of the instantiated scene using the initial attribute parameters in the scene configuration file; during the rendering cycle, starting a performance monitoring thread in parallel, collecting the running-state data of the current computing device in real time, and calculating a load index; and, according to the calculated load index, matching the optimal rendering strategy level in a preset hierarchical rendering configuration tree and performing differentiated rendering using visual focus region weights, thereby completing the self-adaptive generation and rendering of the virtual scene.
  2. The lightweight virtual scene self-adaptive generation and rendering method for broadcast television production according to claim 1, characterized in that: when topology loading is executed, a corresponding basic scene skeleton is retrieved from a locally preset lightweight asset library according to the topology ID, wherein the basic scene skeleton defines the spatial coordinate relations of the virtual camera position, the presenter position and the background large screen, but contains no high-precision texture or illumination information.
  3. The lightweight virtual scene self-adaptive generation and rendering method for broadcast television production according to claim 2, characterized in that: when asset instantiation is executed, the asset index list is parsed and the specific prop models are instantiated onto preset anchor points of the scene skeleton, wherein the assets are managed by an object-pool technique to avoid the overhead of repeated creation and destruction.
  4. The lightweight virtual scene self-adaptive generation and rendering method for broadcast television production according to claim 3, characterized in that: when the appearance of the instantiated scene is customized, all model meshes in the scene are traversed, material-instance parameters in the rendering pipeline are updated dynamically according to the material parameters in the configuration file, and the intensity, color and attenuation radius of the virtual light sources are adjusted according to the illumination parameters in the configuration file.
  5. The lightweight virtual scene self-adaptive generation and rendering method for broadcast television production according to claim 4, characterized in that: when the load index is calculated, the load index L is expressed as: L = w1 · (T_avg / T_target) + w2 · U_GPU; wherein T_avg is the average frame time of the last 30 frames, T_target is the target frame time, U_GPU is the GPU utilization, and w1, w2 are preset weight coefficients.
  6. The lightweight virtual scene self-adaptive generation and rendering method for broadcast television production according to claim 5, characterized in that: when the optimal rendering strategy level is matched, a state determination is made from the load index L, comprising: if L > θ_high (the high-load threshold), triggering a degradation mechanism; if L < θ_low (the low-load threshold), triggering an upgrade mechanism; otherwise, maintaining the current strategy; and generating a strategy from the state determination result, the strategy being chosen from the following three rendering strategy levels: high-quality mode (Level 0), in which real-time dynamic shadows, full-resolution textures, high-level anti-aliasing and screen-space reflection (SSR) are enabled; balanced mode (Level 1), in which shadow-map resolution is reduced, SSR is disabled and medium anti-aliasing is enabled; and performance mode (Level 2), in which dynamic shadows are disabled, models are switched to low-poly versions and the texture resolution of non-core areas is reduced.
  7. The lightweight virtual scene self-adaptive generation and rendering method for broadcast television production according to claim 6, characterized in that: when differentiated rendering is performed with the visual focus region weights, a visual attention weighting mechanism is introduced into the execution of the rendering strategy.
  8. The lightweight virtual scene self-adaptive generation and rendering method for broadcast television production according to claim 7, characterized in that: when differentiated rendering is executed using the visual focus region weights, the rendered frame is divided into a core focus region and an edge background region, wherein the core focus region is the screen-center and presenter-matting composite region and occupies 40%-60% of the frame area; in the rendering pipeline, the core focus region always keeps rendering quality at or above the current strategy level, while the edge background region is forced to the rendering configuration one level below the current strategy; and the differentially rendered virtual background layer is chroma-key composited with the captured real-time camera video stream to output the final program signal.
  9. A lightweight virtual scene self-adaptive generation and rendering system based on parameterized templates, used for realizing the lightweight virtual scene self-adaptive generation and rendering method for broadcast television production as claimed in claim 1, characterized by comprising: a data acquisition and processing module for loading the topology and executing asset instantiation according to a scene configuration file selected or input by a user, wherein the scene configuration file comprises a scene topology ID, an asset index list and initial attribute parameters; an appearance customization module for customizing the appearance of the instantiated scene using the initial attribute parameters in the scene configuration file; a state acquisition module for starting a performance monitoring thread in parallel during the rendering cycle, collecting the running-state data of the current computing device in real time, and calculating a load index; and a rendering module for matching the optimal rendering strategy level in a preset hierarchical rendering configuration tree according to the calculated load index and performing differentiated rendering using the visual focus region weights, thereby completing the self-adaptive generation and rendering of the virtual scene.
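As an illustration of the configuration-driven loading and object-pool instantiation described in claims 1-3, the following is a minimal Python sketch. All names here (`ObjectPool`, `load_scene`, the skeleton-library layout and the configuration fields) are hypothetical stand-ins chosen for the example, not the patent's actual implementation.

```python
import json
from collections import defaultdict

class ObjectPool:
    """Reuses prop-model instances to avoid the overhead of repeatedly
    creating and destroying them (the object-pool management of claim 3)."""
    def __init__(self):
        self._free = defaultdict(list)  # asset_id -> idle instances

    def acquire(self, asset_id):
        if self._free[asset_id]:
            return self._free[asset_id].pop()
        # Stand-in for instantiating a real model asset.
        return {"asset": asset_id, "anchor": None}

    def release(self, instance):
        instance["anchor"] = None
        self._free[instance["asset"]].append(instance)

def load_scene(config_json, skeleton_library, pool):
    """Builds a scene from a configuration file holding a topology ID,
    an asset index list and initial attribute parameters (claim 1)."""
    cfg = json.loads(config_json)
    # Basic skeleton: camera / presenter / background-screen anchors only,
    # no high-precision textures or lighting (claim 2).
    skeleton = skeleton_library[cfg["topology_id"]]
    placed = []
    for entry in cfg["asset_index"]:
        inst = pool.acquire(entry["asset_id"])
        inst["anchor"] = skeleton["anchors"][entry["anchor"]]
        placed.append(inst)
    return {"skeleton": skeleton, "props": placed,
            "attrs": cfg["initial_attributes"]}
```

A released prop goes back into the pool and is handed out again on the next `acquire`, so switching between daily program variants never re-creates assets.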
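The load index of claim 5 and the threshold-driven level switching of claim 6 can be sketched as below. The weights (`w1 = 0.6`, `w2 = 0.4`) and thresholds (`high = 1.0`, `low = 0.7`) are illustrative assumptions only; the patent leaves them as preset parameters.

```python
def load_index(avg_frame_ms, target_frame_ms, gpu_util, w1=0.6, w2=0.4):
    """L = w1 * (T_avg / T_target) + w2 * U_GPU, per claim 5.
    gpu_util is a 0..1 fraction; w1, w2 are the preset weight coefficients."""
    return w1 * (avg_frame_ms / target_frame_ms) + w2 * gpu_util

LEVELS = {
    0: "high quality: dynamic shadows, full-res textures, high AA, SSR on",
    1: "balanced: reduced shadow-map resolution, SSR off, medium AA",
    2: "performance: shadows off, low-poly models, reduced non-core textures",
}

def match_level(current_level, L, high=1.0, low=0.7):
    """Degrade one level when L exceeds the high-load threshold, upgrade one
    level when it falls below the low-load threshold, otherwise keep the
    current strategy; the gap between the thresholds damps oscillation."""
    if L > high:
        return min(current_level + 1, 2)  # degradation mechanism
    if L < low:
        return max(current_level - 1, 0)  # upgrade mechanism
    return current_level
```

For example, at a 30 fps target (T_target ≈ 33.3 ms), an average frame time of 40 ms with 90% GPU utilization gives L ≈ 1.08, which pushes Level 0 down to Level 1 on the next cycle.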
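The region-differentiated rendering of claim 8 amounts to mapping one global strategy level to two per-region levels. A minimal sketch under the same hypothetical Level 0-2 numbering (a larger number meaning lower quality):

```python
def region_levels(global_level, core_fraction=0.5):
    """Split the frame into a core focus region (screen center plus the
    presenter-matting composite area, 40%-60% of the frame) and an edge
    background region, per claim 8: the core keeps at least the current
    strategy level's quality, the edge is forced one level lower in
    quality, clamped at the lowest tier (Level 2)."""
    assert 0.4 <= core_fraction <= 0.6, "core region should be 40%-60% of frame"
    core = global_level               # render core at the current level
    edge = min(global_level + 1, 2)   # one level lower quality for the edge
    return {"core": core, "edge": edge}
```

The two levels would then drive the render passes for each region before the background layer is chroma-key composited with the live camera feed.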

Description

Lightweight virtual scene self-adaptive generation and rendering method for broadcast television production

Technical Field

The invention relates to the technical fields of broadcast television production, computer graphics and virtual studios, and in particular to a lightweight virtual scene self-adaptive generation and rendering method for broadcast television production.

Background

With the popularization of digital media technology, virtual studios have become standard equipment for the production of many kinds of broadcast television programs. The current virtual production workflow is generally based on a professional three-dimensional engine such as Unreal Engine or Unity, and the program is output by compositing the camera signal with the three-dimensional scene in real time. Although this technology is mature, small and medium-sized broadcasting institutions still face the following significant technical problems in practical application:

First, scene construction relies on expert skills and has poor reusability. Existing virtual scene production typically requires specialized 3D artists to perform modeling, material editing and light placement from scratch. For daily high-frequency news or interview programs, once the main color tone, the position of the background large screen or the decoration style of the scene needs to be modified, the underlying assets often have to be re-adjusted and re-packaged; a lightweight tool that can quickly recompose the scene through simple parameter configuration (such as dropdown selection and slider adjustment) is lacking, so production efficiency is low.

Second, the rendering configuration is static and cannot adapt to fluctuating loads. Mainstream rendering engines typically employ a fixed image-quality profile (e.g., a preset "high/medium/low" quality) that cannot be altered at run time.
In actual live broadcasts, however, the computer's hardware resource occupancy changes dynamically (e.g., when screen-recording software is suddenly started or multiple streams are pushed at once). Because a dynamic load-sensing mechanism is lacking, a fixed high-quality setting can cause a sudden frame-rate drop or picture stutter when the load on a mid- or low-end graphics card rises, resulting in broadcast accidents, while long-term use of a low-quality setting for safety wastes hardware performance and yields a poor picture. The industry therefore needs a lightweight technique that lowers the production threshold through parameterized templates and automatically balances image quality against smoothness according to the real-time state of the hardware.

Disclosure of the Invention

Aiming at the problems of the existing virtual production technology that scene customization is difficult to modify, a parameterized reuse mechanism is lacking, the rendering pipeline is rigid, and a stable frame rate is hard to maintain on non-professional hardware, the invention provides a lightweight virtual scene self-adaptive generation and rendering technique for broadcast television production. By constructing a parameterizable, editable scene template library, diversified virtual scenes can be generated quickly through numerical configuration; combined with real-time hardware performance monitoring and dynamic switching among multi-level rendering strategies, this reduces the content production threshold and ensures that the system automatically balances image-quality precision against running smoothness under limited computing resources.
To achieve the above technical purpose, the application provides a lightweight virtual scene self-adaptive generation and rendering method for broadcast television production, comprising the following steps: carrying out topology loading and asset instantiation according to a scene configuration file selected or input by a user, wherein the scene configuration file comprises a scene topology ID, an asset index list and initial attribute parameters; customizing the appearance of the instantiated scene using the initial attribute parameters in the scene configuration file; during the rendering cycle, starting a performance monitoring thread in parallel, collecting the running-state data of the current computing device in real time, and calculating a load index; and, according to the calculated load index, matching the optimal rendering strategy level in a preset hierarchical rendering configuration tree and performing differentiated rendering using visual focus region weights, thereby completing the self-adaptive generation and rendering of the virtual scene. Preferably, when topology loading is executed, a corresponding basic scene skeleton is called from a locally preset lightweight asset library accord