CN-122029555-A - System and method for efficiently rendering one or more scenes in a computer simulated environment
Abstract
A system, apparatus, and method for efficiently rendering one or more scenes in a computer simulated environment are disclosed. The method includes receiving visual data from one or more data collection devices configured to collect data related to one or more entities interacting in an industrial environment. The method further includes identifying real world scenes from the industrial environment in real time, wherein the real world scenes are identified using one or more machine learning models based on the received visual data; retrieving pre-configured animation scenes from a knowledge base based on a comparison with the identified real world scenes; rendering the pre-configured animation scenes retrieved from the knowledge base in a computer simulation environment; detecting a deviation in the real world scenes when compared with the pre-configured animation scenes rendered in the computer simulation environment; and rendering the real world scenes in the computer simulation environment in response to the detected deviation.
Inventors
- P. K. Deb
- V. Sharma
- S. N. Singh
Assignees
- Siemens Aktiengesellschaft
Dates
- Publication Date
- 20260512
- Application Date
- 20240812
- Priority Date
- 20230814
Claims (16)
- 1. A method for efficiently rendering one or more scenes in a computer simulation environment (102), the method comprising: receiving, by a processing unit (302), visual data from one or more data acquisition devices (106), the one or more data acquisition devices (106) configured to acquire data related to one or more entities (104-1 to 104-N) interacting in an industrial environment; identifying, by the processing unit (302), a real world scene from the industrial environment in real time, wherein the real world scene is identified from the received visual data using one or more machine learning models; retrieving, by the processing unit (302), a preconfigured animation scene from a knowledge base (112) based on a comparison with the identified real world scene, wherein the knowledge base (112) comprises a plurality of preconfigured animation scenes derived from a knowledge-graph repository (114), the knowledge-graph repository (114) comprising information about one or more real world scenes in the industrial environment; rendering, by the processing unit (302), the preconfigured animation scene retrieved from the knowledge base (112) in the computer simulation environment (102), wherein the rendered preconfigured animation scene is synchronized with the identified real world scene in real time; detecting, by the processing unit (302), a deviation in the real world scene when compared to the preconfigured animation scene rendered in the computer simulation environment (102); and rendering, by the processing unit (302), the real world scene in the computer simulation environment (102) in response to the detected deviation.
- 2. The method of claim 1, wherein rendering the real world scene in response to the detected deviation further comprises: determining, by the processing unit (302), the one or more entities (104-1 to 104-N) affected by the deviation in the real world scene; and rendering, by the processing unit (302), the real world scene in the computer simulation environment (102) related to the determined one or more entities (104-1 to 104-N) affected by the deviation.
- 3. The method of claim 1 or 2, wherein rendering the real world scene in response to the detected deviation further comprises: rendering, by the processing unit (302), the preconfigured animation scene in the computer simulation environment (102) for one or more entities that are not affected by the deviation.
- 4. The method of any of the preceding claims, wherein retrieving a preconfigured animation scene from the knowledge base (112) based on a comparison with the identified real world scene further comprises: identifying, by the processing unit (302), activities performed in the real world scene acquired from the industrial environment; determining, by the processing unit (302), one or more tasks to be performed to complete the identified activity; arranging, by the processing unit (302), the one or more tasks of the identified activity in a sequence of events according to their probability of occurrence; and fetching, by the processing unit (302), one or more preconfigured animation scenes from the knowledge base (112) based on a comparison of the one or more tasks in the identified activity with the plurality of preconfigured animation scenes stored in the knowledge base (112) using one or more machine learning models.
- 5. The method of claim 4, wherein retrieving the preconfigured animation scenes from the knowledge base (112) further comprises queuing the one or more preconfigured animation scenes for rendering in the computer simulation environment (102) in synchronization with the real-world scene based on the determined arrangement of the one or more tasks in the identified activity.
- 6. The method of claim 5, wherein queuing the preconfigured animation scenes for rendering in the computer simulation environment (102) further comprises: processing each of the preconfigured animation scenes in parallel for rendering in the computer simulation environment (102) based on the determined arrangement of the one or more tasks in the identified activity.
- 7. The method of claim 5, wherein detecting the deviation in the real world scene when compared to the preconfigured animation scene further comprises: identifying, by the processing unit (302), one or more tasks in the real world scene that are not in the determined arrangement of the one or more tasks in the identified activity; and detecting, by the processing unit (302), a deviation in the real world scene based on the one or more identified tasks that depart from the determined arrangement.
- 8. The method of any of the preceding claims, wherein the one or more entities (104-1 to 104-N) comprise at least one of one or more objects, one or more persons, or a combination thereof.
- 9. The method of any of the preceding claims, further comprising generating a knowledge-graph from the visual data, wherein the knowledge-graph is a first knowledge-graph comprising scene information related to the one or more entities (104-1 to 104-N) interacting in the industrial environment, and wherein generating the knowledge-graph comprises: detecting, by the processing unit (302), one or more entities (104-1 to 104-N) from the real world scene of the industrial environment; determining, by the processing unit (302), a state of each of the one or more detected entities (104-1 to 104-N), represented by nodes in the first knowledge-graph, and an interaction of each of the one or more entities (104-1 to 104-N) with each other, represented by edges in the first knowledge-graph; generating, by the processing unit (302), the first knowledge-graph with the determined nodes and edges for the one or more detected entities; and storing, by the processing unit (302), the first knowledge-graph in a first knowledge-graph repository (114), wherein the first knowledge-graph repository (114) is a public repository.
- 10. The method of any of the preceding claims, further comprising generating a knowledge-graph from the visual data, wherein the knowledge-graph is a second knowledge-graph comprising scene information related to one or more people interacting in the industrial environment, and wherein generating the knowledge-graph comprises: detecting, by the processing unit (302), one or more persons from the real world scene of the industrial environment; determining, by the processing unit (302), a behavioral attribute of each of the one or more detected persons, represented by one or more nodes in the second knowledge-graph, and a relationship of the behavioral attribute of each of the one or more persons to one or more other entities, represented by edges in the second knowledge-graph; generating, by the processing unit (302), the second knowledge-graph with the determined nodes and edges for the one or more detected persons; and storing, by the processing unit (302), the second knowledge-graph in a second knowledge-graph repository (114), wherein the second knowledge-graph repository (114) is a public repository.
- 11. The method of claim 10, further comprising generating an animation scene using the first knowledge-graph and the second knowledge-graph, wherein generating a preconfigured animation scene comprises: detecting, by the processing unit (302), one or more entities in the real world scene acquired from the industrial environment using one or more machine learning models; determining, by the processing unit (302), one or more nodes in the first knowledge-graph corresponding to the one or more detected entities (104-1 to 104-N), wherein the one or more entities (104-1 to 104-N) are objects, people, or a combination thereof; detecting, by the processing unit (302), one or more actions performed in the real world scene acquired from the industrial environment using one or more machine learning models; determining, by the processing unit (302), one or more edges in the first knowledge-graph, the one or more edges corresponding to the one or more actions performed in the real world scene involving the one or more nodes; determining, by the processing unit (302), one or more nodes and edges in the second knowledge-graph corresponding to behaviors of people detected in the real world scene; combining, by the processing unit (302), the determined objects in the one or more nodes and the determined actions in the one or more edges of the first knowledge-graph with the detected behavior of the person from the second knowledge-graph to define an animation scene; and generating, by the processing unit (302), the animation scene based on the definition using metadata associated with each of the one or more nodes in the first knowledge-graph and the second knowledge-graph.
- 12. The method of claim 10 or 11, wherein the first and second knowledge-graphs are stored in a distributed ledger (200).
- 13. An apparatus (110) for efficiently rendering one or more scenes in a computer simulation environment (102), the apparatus comprising: one or more processing units (302); and a memory (304) communicatively coupled to the one or more processing units (302), the memory (304) comprising a module (306) stored in the form of machine-readable instructions executable by the one or more processing units (302), wherein the module (306) is configured to perform the method steps of any of claims 1 to 12.
- 14. A system (100) for efficiently rendering one or more scenes in a computer simulation environment (102), the system (100) comprising: a computer simulation collaboration environment (102) rendering one or more scenes corresponding to real world assets in an industrial environment; a distributed network communicatively coupled to the computer simulation environment (102), wherein the distributed network includes one or more nodes for storing information in a knowledge-graph; and the apparatus (110) of claim 13, communicatively coupled to the distributed network and the computer simulation collaboration environment (102), wherein the apparatus (110) is configured to efficiently render one or more scenes in the computer simulation environment (102) according to any of method claims 1 to 12.
- 15. A computer program product having stored therein computer readable instructions which, when executed by a processing unit (302), cause the processing unit (302) to carry out the method steps according to any one of claims 1 to 12.
- 16. A computer readable medium, having stored thereon program code sections of a computer program, the program code sections being loadable into a system (100) and/or executable in the system (100) such that when the program code sections are executed in the system (100), the system (100) performs the method steps of any of claims 1 to 12.
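The claims do not prescribe an implementation, but the control flow of claim 1 (with the task-comparison deviation test of claim 7) can be sketched as below. All function names, data shapes, and field names (`Scene`, `render_frame`, the `tasks` lists) are hypothetical illustrations, not part of the disclosure; a real system would replace the placeholder scene identifier with trained machine learning models and the dictionary knowledge base with the knowledge-graph-derived store of claim 1.

```python
from dataclasses import dataclass


@dataclass
class Scene:
    """Minimal stand-in for an identified real world scene (hypothetical)."""
    activity: str
    tasks: tuple  # ordered task labels observed in the scene


def identify_scene(visual_data):
    """Placeholder for the ML-based scene identification of claim 1.
    A real system would run one or more trained models on the visual data."""
    return Scene(activity=visual_data["activity"],
                 tasks=tuple(visual_data["tasks"]))


def retrieve_animation(knowledge_base, scene):
    """Look up a preconfigured animation scene matching the identified activity."""
    return knowledge_base.get(scene.activity)


def detect_deviation(scene, animation):
    """Deviation test in the spirit of claim 7: the real world task sequence
    departs from the animation's expected task arrangement (or no match exists)."""
    if animation is None:
        return True
    return scene.tasks != tuple(animation["tasks"])


def render_frame(knowledge_base, visual_data):
    """One pass of the claim-1 loop: prefer the cheap preconfigured animation,
    and fall back to rendering the real world scene when a deviation is detected."""
    scene = identify_scene(visual_data)
    animation = retrieve_animation(knowledge_base, scene)
    if detect_deviation(scene, animation):
        return ("real_world", scene)
    return ("preconfigured", animation)
```

The design point the claims turn on is visible here: as long as the observed task sequence matches a stored animation, the environment replays precomputed content and avoids transmitting and rendering live data, which is what makes the approach bandwidth- and compute-efficient.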
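Claims 9 and 10 describe building knowledge-graphs in which detected entities (or persons) become nodes carrying a state (or behavioral attribute) and their interactions become edges. A minimal sketch of that node/edge construction, assuming a hypothetical `detections` structure produced by an upstream entity-detection step (the field names `entity`, `state`, and `interactions` are illustrative only):

```python
def build_scene_knowledge_graph(detections):
    """Sketch of claims 9-10: each detected entity becomes a node annotated
    with its state; each pairwise interaction becomes a labeled edge."""
    nodes = {d["entity"]: {"state": d["state"]} for d in detections}
    edges = []
    for d in detections:
        # interactions maps a target entity name to a relation label
        for target, relation in d.get("interactions", {}).items():
            edges.append((d["entity"], target, relation))
    return {"nodes": nodes, "edges": edges}
```

In the disclosed system such graphs would be persisted in the knowledge-graph repository (114), from which the preconfigured animation scenes of claim 11 are derived by combining the first graph's entities and actions with the second graph's behavioral attributes.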
Description
System and method for efficiently rendering one or more scenes in a computer simulated environment
Technical Field
The present invention relates generally to computer simulation environments and, more particularly, to a method and system for efficiently rendering one or more scenes in a computer simulation environment.
Background
An industrial environment includes multiple machines or assets in an automation plant, or IoT devices that interact with each other. Accordingly, an industrial environment typically includes a plurality of interconnected components in signal communication with each other, either directly or through a network. An emerging concept complementary to rapid industrial development is the "industrial metaverse". The industrial metaverse is a next-generation, fully immersive, three-dimensional collaborative space integrating multiple technologies such as digital twins, the internet of things, the industrial internet, augmented reality, virtual reality, and mixed reality. A metaverse is a virtual universe with shared 3D virtual spaces in which virtual assets can be owned, placed, and interacted with. It also allows different users to interact with each other in a collaborative environment. These virtual assets can be simple entities such as chairs or tables, or complex entities such as industrial machines. Further prior art can be found in WO 2022/212622 A1, which discloses a method for a digital companion, the method comprising receiving information representing human knowledge and converting the information into a computer readable form.
Furthermore, general prior art can be found in the non-patent documents: (1) Cole Noah: "SIEMENS AND NVIDIA to Enable Industrial Metaverse", 29 June 2022, pages 1-2, XP093124026; (2) Jagatheesaperumal Senthil Kumar et al.: "Building Digital Twins of Cyber Physical Systems With Metaverse for Industry 5.0 and Beyond", IT Professional, IEEE Service Center, Los Alamitos, CA, US, vol. 24, no. 6, 1 November 2022, pages 34-40, XP011932506; (3) Sonnenburg Henrik et al.: "REMEMBER YOUR DIGITAL TWINS WHEN YOU ENTER THE METAVERSE", 11 June 2023, pages 1-9, XP093123977; and (4) Wenzheng Li: "Concept, Technology Features and System Architecture of Industrial Metaverse", 2023 IEEE 13th International Conference on Electronics Information and Emergency Communication (ICEIEC), IEEE, 14 July 2023, pages 13-16, XP034393372. A typical IIoT (industrial internet of things) solution in the metaverse involves capturing real-world data and then rendering that data in an industrial environment in a photorealistic manner to provide an immersive experience to the user. In an industrial environment, such as a manufacturing plant or plant floor, there are numerous machines and corresponding standard operating procedures that an operator/worker needs to follow to obtain optimal operation of the industrial environment. In such cases, workers/operators may be scheduled and trained using metaverse simulations, which use photorealistic renderings of real-world scenes and immerse them in the virtual world to create a real-world experience. For example, a manufacturing plant may host a digital workflow in VR space for repairing machines: employees log into the virtual space via VR, meet in the shared space via 3D avatars, communicate, and repair the machine together.
However, recreating real world objects and their movements in real time in a virtual environment requires resource-rich information and sufficient computing power to process the data and make inferences. Changes in the actual environment should be visible in the virtual environment in near real time to avoid missing critical decision windows. Currently, this problem is addressed using high-speed networking (5G/6G) and high-end GPU servers. It should be appreciated, however, that such high-speed networking and high-end GPU servers are resource and energy intensive, and thus increase the carbon footprint of such virtual environments. Furthermore, even with such state-of-the-art solutions, lag, delay, and inconsistencies are observed when rendering objects in the metaverse. Such delays and lags are even more pronounced when there are multiple participants in the scene to be rendered, and the problem worsens when there is a network disruption in the transmission. Another challenge arises when there are several collaborators in the same virtual environment. When the continuous movement of objects is impeded, the animation appears to break apart, significantly degrading the user's quality of experience in the virtual environment. In view of the above, there is a need to provide a system and method for efficiently rendering one or more scenes in a computer-simulated environment in an energy-efficient manner.