
CN-121999661-A - Real-scene interactive intelligent experience system and evaluation method for safety training

CN 121999661 A

Abstract

The application discloses a real-scene interactive intelligent experience system and an evaluation method for safety training. The system comprises a real-scene digital twin module, a digital twin base, an intelligent interaction hardware layer, a central processing and logic engine, and a management and monitoring end. The real-scene digital twin module constructs a refined three-dimensional model consistent with the real scene through multi-source data fusion; the model integrates geometric information, physical attributes, business rules, and real-time state data to form the training environment. The management and monitoring end provides real-time monitoring, data visualization, and manual intervention during the training process. By constructing a refined three-dimensional model consistent with the real scene through multi-source data fusion, and injecting physical attributes and business rules to form a dynamically responsive digital twin base, trainees confront virtual disaster conditions within the real environment, improving the immersion and practical realism of training.

Inventors

  • YI FULONG
  • ZHANG ZHONGHUA
  • ZHOU JIAN
  • JIANG YOU
  • LI YU
  • SHAO BO
  • SONG SHENGJIE

Assignees

  • 华能(大连)热电有限责任公司 (Huaneng (Dalian) Thermal Power Co., Ltd.)

Dates

Publication Date
2026-05-08
Application Date
2026-02-06

Claims (8)

  1. A real-scene interactive intelligent experience system for safety training, characterized by comprising: a real-scene digital twin module, used for constructing a refined three-dimensional model consistent with the real scene through multi-source data fusion, the model integrating geometric information, physical properties, business rules, and real-time state data to form a digital twin base of the training environment; an intelligent interaction hardware layer, comprising AR smart glasses/helmets worn by trainees and used for superimposing virtual disaster scenes, equipment state information, operation guidance, and danger warnings generated by the AR scene rendering module onto the real view; a multimodal sensor array, distributed across the real environment and on the trainees, comprising environment sensors, behavior-capture sensors, and physiological sensors; physical interaction props, comprising physical props for key equipment, used for capturing the actual operation behaviors of trainees; a central processing and logic engine, comprising: an AR scene rendering module, which drives the evolution of the virtual disaster in real time through a physics engine based on the real-scene digital twin model and seamlessly fuses the virtual scene with the real environment; a multimodal data fusion and analysis module, used for receiving and processing data from the intelligent interaction hardware layer, parsing voice instructions through natural language processing, parsing operation actions through a behavior understanding engine, and evaluating stress states through a physiological signal analyzer; and a training evaluation and adaptive decision module, which evaluates the trainee in real time based on a multidimensional performance assessment index system, calculates a comprehensive performance score using a two-level dynamic weight allocation algorithm, and dynamically adjusts the training path according to the evaluation result; and a management and monitoring end, used for real-time monitoring, data visualization, and manual intervention during the training process.
  2. The real-scene interactive intelligent experience system for safety training of claim 1, wherein the real-scene digital twin module comprises four layers: the first layer is a high-precision geometric scene reconstruction layer; the second layer is a semantic and functional logic enhancement layer; the third layer is a dynamic data driving and rule injection layer; and the fourth layer is a real-time data interface and synchronization layer; finally outputting a digital twin body that remains synchronized and interactive with the real world.
  3. The real-scene interactive intelligent experience system for safety training of claim 1, wherein the workflow of the real-scene digital twin module comprises the steps of: data acquisition, cooperatively using air-ground scanning technology to complete the acquisition of geometric and texture data for the entire target area; data fusion and modeling, performing accurate registration and fusion of the point cloud, the oblique-photography model, and the BIM model under a unified coordinate system to generate a lightweight yet high-fidelity three-dimensional model; semantics and logic, injecting semantic information into the model through AI recognition or manual labeling; and publishing and driving, publishing the finally generated digital twin body to the central processing and logic engine for the AR scene rendering module to call, receiving data from the interaction hardware layer in real time, and driving state changes of the twin body.
  4. The real-scene interactive intelligent experience system for safety training of claim 1, wherein the workflow of the AR scene rendering module comprises the steps of: scene driving, the module loading the scene with semantic information and physical properties from the real-scene digital twin module; event response, receiving instructions from the logic engine and invoking prefabricated disaster special effects and physics simulation scripts; and real-time rendering and pushing, computing and generating an AR image stream in real time according to the trainee's viewing angle and position, and pushing it to the AR glasses over a low-latency network.
  5. The real-scene interactive intelligent experience system for safety training of claim 1, wherein the core of the multimodal data fusion and analysis module is a unified spatio-temporal frame, in which all incoming sensor data are stamped with timestamps and spatial labels; the multimodal data fusion and analysis module is deployed with a behavior understanding engine, a natural language processing engine, and a physiological signal analyzer.
  6. The real-scene interactive intelligent experience system for safety training of claim 1, wherein the system comprises a multidimensional performance assessment index system covering at least operation normativity, decision correctness, knowledge key-point hit rate, team cooperation efficiency, and psychological stability; the training evaluation and adaptive decision module adopts a two-level dynamic weight allocation algorithm, in which the first level allocates base weights to each assessment dimension according to the training subject type, and the second level dynamically adjusts the weights according to the trainee's real-time performance.
  7. The real-scene interactive intelligent experience system for safety training of claim 1, wherein the specific workflow of adaptive training path planning is: dynamically selecting the next training unit from the training content resource library according to the comprehensive performance score calculated in real time and the weak points in each dimension; and, for adaptive decision-making, dynamically inserting a decision-reinforcement training sub-scene and requiring the trainee to perform repeated consecutive decision exercises until decision accuracy improves to a set threshold.
  8. An evaluation method for the real-scene interactive intelligent experience system for safety training, characterized by comprising the following steps: step one, scene initialization and role allocation, the system loading the real-scene digital twin scene of a specific safety training subject, allocating roles to the trainees participating in the training, and setting the initial training difficulty; step two, virtual-real interaction and data acquisition, starting AR scene rendering, generating a virtual disaster within the real environment, and synchronously acquiring in real time the spatial positions, operation actions, voice instructions, physiological data, and inter-team communication data of all trainees through the multimodal sensor array and the physical interaction props; step three, real-time calculation of multidimensional performance indices, cleaning the collected data streams, performing feature extraction, and calculating the scores of all dimensions in parallel based on the multidimensional performance assessment index system; step four, dynamic weight allocation and comprehensive scoring, invoking a preset first-level weight vector according to the current training subject type and, combined with the trainees' real-time performance data, calculating the dynamic weight of each dimension at the current moment using the two-level dynamic weight allocation algorithm, then weighting and fusing the dimension scores with the dynamic weights to generate each trainee's real-time comprehensive performance score; step five, adaptive feedback and path adjustment, the system providing instant visual, auditory, and tactile feedback to the trainee through the AR equipment according to the real-time evaluation result; and step six, generating a comprehensive evaluation report, whereby after training ends the system automatically generates a detailed evaluation report, the content including a dimension-score radar chart and time curves, key decision-point analysis, a summary of strengths and items to be improved, and data-based personalized training advice.
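Claim 5's unified spatio-temporal frame can be illustrated with a minimal sketch: every sensor sample is stamped with a timestamp and a spatial label at ingest, and downstream analyzers consume samples grouped into shared time windows. All names here (`Sample`, `align_by_window`, the window size) are illustrative assumptions, not details given in the patent.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class Sample:
    modality: str     # e.g. "voice", "motion", "heart_rate" (assumed labels)
    timestamp: float  # seconds since session start (the time stamp)
    location: str     # spatial label, e.g. a grid cell or room id
    value: object     # raw payload from the sensor

def align_by_window(samples, window=0.5):
    """Group samples from all modalities into shared time windows so
    downstream analyzers see one unified spatio-temporal frame."""
    frames = defaultdict(list)
    for s in samples:
        frames[round(s.timestamp / window)].append(s)
    return dict(frames)
```

In this sketch, a voice command and a motion capture event landing in the same half-second window would be handed to the analyzers together, which is what lets behavior, speech, and physiology be correlated.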
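The two-level dynamic weight allocation algorithm of claims 6 and 8 is not specified in detail. One plausible reading, with invented weight values and an assumed second-level rule (boost the weights of currently weak dimensions, then renormalize), can be sketched as:

```python
# Dimension scores are assumed normalized to [0, 1]; subject types,
# weight values, and the alpha adjustment rule are all illustrative.

def base_weights(subject_type):
    # First level: base weights per dimension by training subject type.
    table = {
        "fire_drill":     {"operation": 0.30, "decision": 0.25, "knowledge": 0.15,
                           "teamwork": 0.15, "stability": 0.15},
        "confined_space": {"operation": 0.25, "decision": 0.20, "knowledge": 0.20,
                           "teamwork": 0.15, "stability": 0.20},
    }
    return table[subject_type]

def dynamic_weights(base, realtime_scores, alpha=0.5):
    # Second level: shift weight toward dimensions where the trainee is
    # currently weak (low score), then renormalize so weights sum to 1.
    raw = {d: w * (1 + alpha * (1 - realtime_scores[d])) for d, w in base.items()}
    total = sum(raw.values())
    return {d: v / total for d, v in raw.items()}

def composite_score(weights, scores):
    # Weighted fusion of dimension scores into the comprehensive score.
    return sum(weights[d] * scores[d] for d in weights)
```

The renormalization keeps the composite score on the same 0-to-1 scale regardless of how far the second level has shifted the weights, so scores stay comparable across the session.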
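The adaptive path planning of claim 7 could be sketched as follows; the resource-library layout, the threshold value, and the function names are assumptions for illustration:

```python
def next_training_unit(dim_scores, library, decision_threshold=0.8):
    """Pick the next unit per claim 7: if decision correctness is below the
    set threshold, force a decision-reinforcement sub-scene; otherwise
    target the trainee's weakest dimension from the content library."""
    if dim_scores["decision"] < decision_threshold:
        return "decision_reinforcement_subscene"
    weakest = min(dim_scores, key=dim_scores.get)
    return library[weakest]
```

Calling this after every composite-score update yields the "repeated consecutive decision exercises" behavior the claim describes: the reinforcement sub-scene keeps being selected until the decision score clears the threshold, after which ordinary weakest-dimension selection resumes.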
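Step six of the evaluation method in claim 8 can be sketched as a report generator over per-dimension score histories. The report fields mirror what the claim names (radar-chart data, time curves, items to be improved, advice), while the aggregation rule and the number of highlighted weaknesses are assumptions:

```python
def evaluation_report(score_history, n_weakest=2):
    # score_history: {dimension: [scores sampled over the session]}
    radar = {d: sum(v) / len(v) for d, v in score_history.items()}
    to_improve = sorted(radar, key=radar.get)[:n_weakest]
    return {
        "radar": radar,                # data behind the dimension-score radar chart
        "time_curves": score_history,  # per-dimension score-vs-time curves
        "to_improve": to_improve,      # weakest dimensions, summarized for the trainee
        "advice": [f"Schedule extra drills targeting '{d}'" for d in to_improve],
    }
```

A real system would also attach the key decision-point analysis the claim mentions, which requires the event log from the session rather than the score histories alone.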

Description

Real-scene interactive intelligent experience system and evaluation method for safety training

Technical Field

The application belongs to the technical field of safety training, and in particular relates to a real-scene interactive intelligent experience system and an evaluation method for safety training.

Background

With the continuous development of technologies in fields such as industrial production safety and emergency management, enterprises' requirements for the practical effect and immersion of staff safety training are steadily rising. Traditional safety training relies on theoretical teaching, video instruction, or static simulation drills, and suffers from the following outstanding problems: insufficient realism of training scenes, in that the complex risks and changeable situations of real working environments are difficult to reproduce in the traditional mode, so trainees cannot gain on-scene practical experience and the training content is disconnected from reality; weak interactivity and immersion, in that even where Virtual Reality (VR) technology is used it is confined to a fully virtual environment that cannot be combined with the real physical scene, trainees cannot operate in the actual workplace, and the transfer effect of the training is limited; and subjective, single-channel evaluation, in that existing evaluation depends on manual observation or post-hoc written tests, lacks objective quantitative data support, and cannot capture in real time the trainee's multidimensional performance in operation normativity, decision logic, psychological state, and the like. There is therefore a need in the art for a real-scene interactive safety training system that can integrate real scenes with virtual elements, support real-time data acquisition and intelligent evaluation, and provide adaptive training capability, so as to improve the pertinence, effectiveness, and scientific rigor of training.

Disclosure of Invention

The application provides a real-scene interactive intelligent experience system and an evaluation method for safety training, aiming to solve the prior-art problems of insufficient realism of training scenes, weak interactivity and immersion, and subjective, single-channel evaluation. In a first aspect, a real-scene interactive intelligent experience system for safety training includes: a real-scene digital twin module, used for constructing a refined three-dimensional model consistent with the real scene through multi-source data fusion, the model integrating geometric information, physical properties, business rules, and real-time state data to form a digital twin base of the training environment; an intelligent interaction hardware layer, comprising AR smart glasses/helmets worn by trainees and used for superimposing virtual disaster scenes, equipment state information, operation guidance, and danger warnings generated by the AR scene rendering module onto the real view; a multimodal sensor array, distributed across the real environment and on the trainees, comprising environment sensors, behavior-capture sensors, and physiological sensors; physical interaction props, comprising physical props for key equipment, used for capturing the actual operation behaviors of trainees; a central processing and logic engine, comprising: an AR scene rendering module, which drives the evolution of the virtual disaster in real time through a physics engine based on the real-scene digital twin model and seamlessly fuses the virtual scene with the real environment; a multimodal data fusion and analysis module, used for receiving and processing data from the intelligent interaction hardware layer, parsing voice instructions through natural language processing, parsing operation actions through a behavior understanding engine, and evaluating stress states through a physiological signal analyzer; and a training evaluation and adaptive decision module, which evaluates the trainee in real time based on a multidimensional performance assessment index system, calculates a comprehensive performance score using a two-level dynamic weight allocation algorithm, and dynamically adjusts the training path according to the evaluation result; and a management and monitoring end, used for real-time monitoring, data visualization, and manual intervention during the training process. Optionally, the real-scene digital twin module comprises four layers: the first layer is a high-precision geometric scene reconstruction layer; the second layer is a semantic and functional logic enhancement layer; the third layer is a dynamic data driving and rule injection layer; and the fourth layer is a real-time data interface and synchronization layer; finally outputting a digital twin body