CN-122029823-A - System and method for efficiently rendering one or more scenes to one or more users interacting in a computer simulation environment

CN122029823A

Abstract

A system, apparatus, and method for efficiently rendering scenes to users interacting in a computer simulation environment are disclosed. The method includes receiving, by a processing unit, visual data from a data acquisition device configured to acquire data related to entities interacting in a facility, identifying a scene to be rendered for a particular user in the computer simulation environment and the entities therein, generating one or more clusters of each of the entities identified from the scene to be rendered, assigning each cluster of the scene to one of a plurality of computing devices installed in the facility based on a device assignment model, and synchronizing each cluster of scenes received from each of the plurality of computing devices to render the scene to the user interacting in the computer simulation environment.

Inventors

  • P. K. Deb
  • A. Raj
  • S. N. Singh

Assignees

  • Siemens Aktiengesellschaft

Dates

Publication Date
2026-05-12
Application Date
2024-08-12
Priority Date
2023-08-14

Claims (15)

  1. A computer-implemented method for efficiently rendering one or more scenes to one or more users interacting in a computer simulation environment (102), the computer simulation environment (102) being a three-dimensional (3D) representation of a physical world, the method comprising: receiving, by a processing unit (202), visual data from one or more data acquisition devices (105) configured to acquire data related to one or more entities (418A, 418B, 418C, 420A, 420B, 420C, 422A, 422B, 422C, 512, 514, 516) interacting in a facility (400B, 500B), wherein the visual data is acquired from the respective perspectives of the one or more users interacting in the computer simulation environment (102); identifying, by the processing unit (202), one or more scenes to be rendered for a particular user in the computer simulation environment (102) and the one or more entities (418A, 418B, 418C, 420A, 420B, 420C, 422A, 422B, 422C, 512, 514, 516) therein; generating, by the processing unit (202), one or more clusters (412, 414, 416, 524) of each of the one or more entities (418A, 418B, 418C, 420A, 420B, 420C, 422A, 422B, 422C, 512, 514, 516) identified from the one or more scenes to be rendered, wherein each cluster (412, 414, 416, 524) comprises a set of entities that are related to each other; assigning, by the processing unit (202), each cluster (412, 414, 416, 524) in the one or more scenes to one of a plurality of computing devices installed in the facility (400B, 500B) based on a device assignment model, wherein each of the plurality of computing devices is configured to process the one or more clusters (412, 414, 416, 524) using one or more machine learning models; and synchronizing, by the processing unit (202), each cluster (412, 414, 416, 524) of the one or more scenes received from each of the plurality of computing devices to render the one or more scenes to the one or more users interacting in the computer simulation environment (102).
  2. The method of claim 1, further comprising: generating an animation from synchronized visual data related to a scene to be rendered to the particular user; and rendering the generated animation to the particular user in the computer simulation environment (102).
  3. The method of claim 1 or 2, wherein synchronizing the clusters (412, 414, 416, 524) of the one or more scenes to be rendered comprises: determining, by the processing unit (202), a scene association for each cluster (412, 414, 416, 524) transmitted from each of the plurality of computing devices based on the timestamp of each scene; arranging, by the processing unit (202), each cluster (412, 414, 416, 524) in a respective scene based on the coordinates of the one or more entities (418A, 418B, 418C, 420A, 420B, 420C, 422A, 422B, 422C, 512, 514, 516) in each cluster (412, 414, 416, 524) and the determined scene association; and synchronizing, by the processing unit (202), the one or more scenes to be rendered based on the timestamp of each scene, wherein the one or more scenes contain one or more associated cluster arrangements.
  4. The method of any one of the preceding claims, wherein synchronizing each cluster (412, 414, 416, 524) of the one or more scenes received from each of the plurality of computing devices to render the one or more scenes to the one or more users interacting in the computer simulation environment (102) comprises: generating, by the processing unit (202), metadata for each cluster (412, 414, 416, 524) of the one or more scenes processed by the computing devices, wherein the metadata contains one or more parameters defining the visual data related to the one or more entities (418A, 418B, 418C, 420A, 420B, 420C, 422A, 422B, 422C, 512, 514, 516) in the one or more clusters (412, 414, 416, 524); and synchronizing, by the processing unit (202), the metadata received from each of the computing devices based on the arrival time and service rate of each item of metadata received from the computing devices.
  5. The method of any one of the preceding claims, wherein generating the clusters (412, 414, 416, 524) for each of the one or more entities (418A, 418B, 418C, 420A, 420B, 420C, 422A, 422B, 422C, 512, 514, 516) identified in the scene to be rendered comprises: generating, by the processing unit (202), a bounding box (518, 522) for a set of entities of the one or more entities (418A, 418B, 418C, 420A, 420B, 420C, 422A, 422B, 422C, 512, 514, 516), wherein the set of entities are interacting with each other; determining, by the processing unit (202), an overlap score of the generated bounding boxes (518, 522) over a plurality of frames, wherein the overlap score is determined based on a comparison of the overlap area between one or more bounding boxes (518, 522) and the union area between the one or more bounding boxes (518, 522); determining, by the processing unit (202), the frame associated with the set of entities having the highest overlap score; generating, by the processing unit (202), a first set of association graphs for the set of entities having the highest overlap score; and generating, by the processing unit (202), the one or more clusters (412, 414, 416, 524) based on the generated first set of association graphs.
  6. The method of any one of claims 1 to 5, wherein generating the clusters (412, 414, 416, 524) for each of the one or more entities (418A, 418B, 418C, 420A, 420B, 420C, 422A, 422B, 422C, 512, 514, 516) identified in the scene to be rendered comprises: generating, by the processing unit (202), a bounding box (518, 522) for a set of entities of the one or more entities (418A, 418B, 418C, 420A, 420B, 420C, 422A, 422B, 422C, 512, 514, 516), wherein the set of entities do not interact with each other; calculating, by the processing unit (202), a relative distance score between each of the one or more bounding boxes (518, 522) over a plurality of frames, wherein the relative distance score is the distance between the sets of entities in each bounding box (518, 522); determining, by the processing unit (202), the frame associated with the set of entities having the smallest relative distance score; generating, by the processing unit (202), a second set of association graphs for the set of entities having the smallest relative distance score; and generating, by the processing unit (202), the one or more clusters (412, 414, 416, 524) based on the generated second set of association graphs.
  7. The method of any one of the preceding claims, wherein assigning each cluster (412, 414, 416, 524) to one computing device from the plurality of computing devices based on a device assignment model comprises: identifying, by the processing unit (202), the plurality of computing devices installed in the facility (400B, 500B); determining, by the processing unit (202), a configuration of each identified computing device; listing, by the processing unit (202), each of the one or more clusters (412, 414, 416, 524) to be rendered according to a rendering order over a period of time; determining, by the processing unit (202), a complexity score for each cluster (412, 414, 416, 524) to be rendered, wherein the complexity score is a measure of the complexity of processing the one or more clusters (412, 414, 416, 524); determining, by the processing unit (202), a priority score for each of the clusters (412, 414, 416, 524) to be rendered, wherein the priority score is determined based on the complexity score and the priority of the rendering order; and assigning, by the processing unit (202), the clusters (412, 414, 416, 524) to be rendered to the computing device with the best configuration based on the highest priority score of the clusters (412, 414, 416, 524).
  8. The method of any one of the preceding claims, wherein assigning each cluster (412, 414, 416, 524) to one of the plurality of computing devices based on the device assignment model comprises: assigning each set of clusters (412, 414, 416, 524) in the first and second association graphs to a computing device based on the proximity of a user to the computing device.
  9. The method of any one of the preceding claims, wherein assigning each cluster (412, 414, 416, 524) to one of the plurality of computing devices based on the device assignment model comprises: when a user begins interacting with one or more new entities in the computer simulation environment (102), determining, by the processing unit (202), the one or more new entities in the scene previously rendered for the particular user in the computer simulation environment (102); and assigning, by the processing unit (202), the one or more new entities to the computing device assigned for the scene previously rendered for the user.
  10. The method of any one of the preceding claims, wherein assigning each cluster (412, 414, 416, 524) to one of the plurality of computing devices based on the assignment model comprises: determining, by the processing unit (202), one or more entities (418A, 418B, 418C, 420A, 420B, 420C, 422A, 422B, 422C, 512, 514, 516) that overlap in the respective perspectives of a plurality of users collaborating in the computer simulation environment (102); and assigning, by the processing unit (202), the determined overlapping entities to a particular computing device for processing the visual data for all of the plurality of users.
  11. The method of any one of the preceding claims, wherein assigning each cluster (412, 414, 416, 524) to one of the plurality of computing devices based on the assignment model comprises: detecting, by the processing unit (202), a failure of a computing device during processing of the clusters (412, 414, 416, 524) allocated to it; determining, by the processing unit (202), the progress of processing of the clusters (412, 414, 416, 524) assigned to it; and reassigning, by the processing unit (202), the clusters (412, 414, 416, 524) assigned to the failed device to another computing device available in the facility (400B, 500B) that is capable of processing the clusters (412, 414, 416, 524).
  12. An apparatus (110) for efficiently rendering one or more scenes to one or more users interacting in a computer simulation environment (102), the apparatus (110) comprising: one or more processing units (202); and a memory (204) communicatively coupled to the one or more processing units (202), the memory (204) comprising modules stored in the form of machine-readable instructions executable by the one or more processing units (202), wherein the apparatus (110) is configured to perform the method steps of any one of claims 1 to 11.
  13. A system (100A) for efficiently rendering one or more scenes to one or more users interacting in a computer simulation environment (102), the system (100A) comprising: a computer simulation collaboration environment (102) that renders one or more scenes corresponding to real-world entities in a facility (400B, 500B); a plurality of computing devices communicatively coupled to the computer simulation environment (102), wherein the plurality of computing devices are installed in the facility (400B, 500B); and the apparatus (110) of claim 12, communicatively coupled to the plurality of computing devices and the computer simulation collaboration environment (102), wherein the apparatus (110) is configured to efficiently render one or more scenes to one or more users interacting in the computer simulation environment (102) in accordance with any one of the preceding method claims 1 to 11.
  14. A computer program product having computer-readable instructions stored therein which, when executed by a processing unit (202), cause the processing unit (202) to perform the method steps of any one of claims 1 to 11.
  15. A computer-readable medium having stored thereon program code sections of a computer program, the program code sections being loadable into and/or executable in a system (100A), such that the system (100A) performs the method steps of any one of claims 1 to 11 when the program code sections are executed in the system (100A).
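
Claims 5 and 6 define cluster generation via a bounding-box overlap score (a comparison of overlap area to union area, i.e. intersection-over-union) and association graphs. As an illustration only, not the patented implementation, a minimal Python sketch of claim 5's interacting-entity case might look like the following; axis-aligned 2D boxes, the 0.1 threshold, and treating connected components of the association graph as the clusters are all assumptions:

```python
from itertools import combinations

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def cluster_entities(frames, threshold=0.1):
    """frames: list of {entity_id: box} dicts, one per frame.
    Picks the frame whose entity pairs have the highest summed overlap
    score (claim 5), builds an association graph from that frame, and
    returns its connected components as clusters."""
    def frame_score(boxes):
        return sum(iou(boxes[a], boxes[b]) for a, b in combinations(boxes, 2))
    best = max(frames, key=frame_score)
    # association graph: edge between entities whose IoU exceeds the threshold
    edges = {e: set() for e in best}
    for a, b in combinations(best, 2):
        if iou(best[a], best[b]) > threshold:
            edges[a].add(b)
            edges[b].add(a)
    # clusters = connected components of the association graph
    clusters, seen = [], set()
    for e in edges:
        if e in seen:
            continue
        stack, comp = [e], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(edges[n] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters
```

Claim 6's non-interacting case would swap the IoU edge test for a relative-distance test and select the frame with the smallest distance score, but the graph-to-cluster step stays the same.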
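Claim 7 scores each cluster for complexity and rendering-order priority, then assigns the highest-priority clusters to the best-configured devices. The sketch below is one plausible reading, not the claimed device assignment model: the complexity measure (entity count), the priority formula, and all field names are hypothetical, since the claim leaves them open:

```python
def assign_clusters(clusters, devices):
    """clusters: list of dicts with 'id', 'entity_count', and 'order'
    (position in the rendering order, 0 = renders first);
    devices: list of dicts with 'id' and 'capacity' (higher = better
    configuration). Returns a {cluster_id: device_id} mapping."""
    # priority score: complexity weighted by how early the cluster renders
    def priority(c):
        return c["entity_count"] / (1 + c["order"])
    ranked_clusters = sorted(clusters, key=priority, reverse=True)
    ranked_devices = sorted(devices, key=lambda d: d["capacity"], reverse=True)
    assignment = {}
    for i, c in enumerate(ranked_clusters):
        # highest-priority cluster goes to the best-configured device,
        # wrapping around when there are more clusters than devices
        d = ranked_devices[i % len(ranked_devices)]
        assignment[c["id"]] = d["id"]
    return assignment
```

The round-robin wrap-around is a simplification; claims 8 to 11 refine the model with user proximity, reuse of previously assigned devices, shared devices for overlapping entities, and failover on device failure.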
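Claims 3 and 4 synchronize the per-device results by timestamp and arrange the clusters within each scene by entity coordinates. The following sketch illustrates one possible interpretation; the data layout and the ordering rule (sorting clusters by their left/top-most entity) are assumptions, not taken from the patent:

```python
from collections import defaultdict

def synchronize(cluster_results):
    """cluster_results: list of dicts {'timestamp', 'cluster_id', 'entities'},
    where 'entities' maps entity id -> (x, y) coordinates, as returned by
    each computing device. Groups clusters into scenes by timestamp and
    arranges the clusters within each scene by their entity coordinates."""
    scenes = defaultdict(list)
    # scene association: clusters sharing a timestamp belong to one scene
    for result in cluster_results:
        scenes[result["timestamp"]].append(result)
    ordered = []
    for ts in sorted(scenes):  # emit scenes in time order
        # arrange clusters by the position of their left/top-most entity
        arranged = sorted(scenes[ts], key=lambda r: min(r["entities"].values()))
        ordered.append({"timestamp": ts,
                        "clusters": [r["cluster_id"] for r in arranged]})
    return ordered
```

Claim 4's refinement, weighting by the arrival time and service rate of each device's metadata, would replace the plain timestamp sort with a queueing-aware ordering.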

Description

System and method for efficiently rendering one or more scenes to one or more users interacting in a computer simulation environment

Technical Field

The present invention relates generally to computer simulation environments, and more particularly to a method and system for efficiently rendering one or more scenes to one or more users interacting in a computer simulation environment using decentralized computing devices.

Background

An industrial environment includes multiple machines or assets in an automated factory, or Internet of Things devices that interact with each other. Accordingly, an industrial environment typically comprises a plurality of interconnected components connected in signal communication with each other, either directly or across a network. One emerging concept that complements this rapid industrial development is the "industrial metaverse". The industrial metaverse is a next-generation, fully immersive, three-dimensional collaborative space that integrates several technologies, including digital twins, the Internet of Things, the industrial internet, augmented reality, virtual reality, and mixed reality. A metaverse is a virtual universe with a shared 3D virtual space in which virtual assets can be owned, placed, and interacted with. It also allows different users to interact with each other in a collaborative environment. These virtual assets may be simple entities, such as chairs or tables, or complex entities, such as industrial machinery. To this end, a typical industrial Internet of Things (IIoT) solution in the metaverse captures real-world data and then renders that data in the industrial environment with photorealistic quality to provide an immersive experience for the user. 
In an industrial environment (e.g., a manufacturing plant or factory), there are many machines and corresponding standard operating procedures that an operator or worker must follow to achieve optimal operation of the industrial environment. Here, metaverse simulation may be used to onboard and train workers and operators by rendering scenes photorealistically and immersing them in the virtual world to create a real-world experience. For example, a manufacturing plant may host a digital workflow for repairing machines in a VR space: employees log into the virtual space through VR, meet in the shared space through 3D avatars, and communicate to repair the machine together. However, reconstructing real-world objects and their movements in real time in a virtual environment requires rich data and sufficient computing power for processing and inference. Changes in the actual environment should be visible in the virtual environment in near real time to avoid missing critical decision windows. Currently, this problem is addressed by using high-speed networks (5G/6G) and high-end GPU servers. It should be appreciated, however, that such high-speed networks and high-end GPU servers are resource- and energy-intensive, increasing the carbon footprint of such virtual environments. An efficient, seamless, near-real-time metaverse should be able to replicate events that occur in the real world in the AR/VR space. Handling this on a single device with limited computing power (e.g., standalone VR hardware available on a shop floor, or Internet of Things devices) is challenging because the process of identifying, tracking, and analyzing objects does not scale. To overcome this, a dedicated station of computing devices (GPU servers) is required. 
Furthermore, such applications will eventually exceed the computing and storage (main memory) capacity of the device, especially as the number of entities in the scene increases. A centralized deployment presents single-point-of-failure and performance challenges, resulting in delays, lags, and inconsistencies in the animation of the environment and its objects. The continuous motion of objects is interrupted, and the animation begins to appear fragmented. This significantly reduces the quality of experience and is confusing to the viewer. Using cloud computing to address this challenge is a potential solution, but it brings its own set of challenges: even with such state-of-the-art solutions, delays, lags, and inconsistencies can still be observed when rendering objects in the metaverse. Such delays and lags are even more pronounced when there are multiple participants in the scene to be rendered, and the problem worsens when the transmission suffers a network disruption. Another challenge arises in the case of multiple collaborators in the same virtual environment. Thus, when the continuous motion of objects is impeded, the animation begins to appear fragmented, significantly reducing the quality of experience of the user in the virtual environment. Other prior