CN-121997767-A - Variable-controllable immersive evacuation illumination reaction data acquisition and modeling experiment method
Abstract
The invention provides a variable-controllable immersive evacuation illumination response data acquisition and modeling experiment method. The method comprises: constructing an immersive evacuation scene; adjusting illumination variables to form different illumination guidance schemes; synchronously acquiring a subject's eye-movement physiological data, behavior trajectory data, and regional attention data; repeating experiments according to a preset design to accumulate multi-scene, multi-sample data; processing the raw data to extract key features and constructing a multidimensional response feature vector; mapping the feature vectors into a common high-dimensional space and inputting them into a trained machine learning model to classify and predict evacuation response modes; and constructing a simulation environment, defining the state, action, and reward function of reinforcement learning, training an agent by deep reinforcement learning, simulating evacuation behavior, and iteratively optimizing the illumination design. The method realizes a complete technical closed loop of multidimensional real response data acquisition, intelligent analysis, and simulation verification under controllable conditions, and provides a scientific, quantitative basis for optimizing evacuation lighting systems.
Inventors
- ZHENG CE
- CHANG YU
- ZHANG YUYANG
- DU ZHUOYUAN
- ZHANG LITAO
Assignees
- Tianjin University of Commerce (天津商业大学)
Dates
- Publication Date: 2026-05-08
- Application Date: 2026-02-10
Claims (9)
- 1. A variable-controllable immersive evacuation illumination reaction data acquisition and modeling experiment method, characterized by comprising the following steps: S1, constructing an immersive evacuation scene corresponding to a large public building, deploying programmable evacuation lighting devices in the scene, and adjusting lighting variables through a control system to form different lighting guidance schemes; S2, synchronously acquiring eye-movement physiological data, behavior trajectory data, and regional attention data of a subject using sensing and tracking equipment, and integrating all the data through unified timestamps; S3, placing the subject in the immersive scene to simulate a real evacuation process, adjusting illumination variables according to a preset experimental design, and repeating the experiment to accumulate sample data across multiple scenes and multiple subjects; S4, processing the collected raw data, extracting eye-movement key indices, behavior-trajectory key features, and regional attention features, and standardizing or normalizing the extracted features; S5, combining the extracted features to construct a multidimensional response feature vector, and mapping the response data of different subjects at different moments into the same high-dimensional feature space; S6, inputting the multidimensional response feature vector into a trained machine learning model to classify and predict the subject's evacuation response mode; S7, constructing a simulation environment, defining a state space, an action space, and a reward function of reinforcement learning based on the immersive evacuation scene and illumination control, training an agent with a deep reinforcement learning algorithm, simulating evacuation behavior, and iteratively optimizing the illumination design.
- 2. The variable-controllable immersive evacuation lighting reaction data collection and modeling experiment method according to claim 1, wherein in step S1, the immersive evacuation scene is constructed using virtual reality, augmented reality, or mixed reality technology, and virtual emergency environment elements can be loaded to create an emergency atmosphere.
- 3. The variable-controllable immersive evacuation illumination reaction data collection and modeling experiment method according to claim 1, wherein in step S1, the illumination variables comprise at least one of brightness, flicker frequency, color, and on/off pattern.
- 4. The variable-controllable immersive evacuation illumination response data collection and modeling experiment method according to claim 1, wherein in step S2, the eye-movement physiological data includes at least one of gaze-point position, fixation duration, saccade count, saccade amplitude, and pupil-diameter variation; and the behavioral trajectory data includes at least one of walking path, gait characteristics, and head orientation.
- 5. The variable-controllable immersive evacuation illumination reaction data collection and modeling experiment method according to claim 1, wherein in step S2, the regional attention data is obtained by demarcating regions of interest in the scene, the regions of interest including at least one of safe-exit positions, evacuation indicators, and obstacle regions, and the regional attention data is collected as the subject's dwell time and dwell frequency in each region of interest.
- 6. The variable-controllable immersive evacuation illumination response data acquisition and modeling experiment method according to claim 1, wherein in step S4, the eye-movement key indices comprise total fixation count, average single-fixation duration, total saccade count, average saccade amplitude, maximum pupil-diameter change, and blink frequency; the behavior-trajectory key features comprise total walking distance, average travel speed, obstacle-avoidance count, pause count, head-orientation change amplitude, and step-length variation coefficient; and the regional attention features comprise the fixation count, fixation dwell time, and dwell-time proportion for each attention region.
- 7. The variable-controllable immersive evacuation illumination response data collection and modeling experiment method according to claim 1, wherein in step S5, the dimensions of the multidimensional response feature vector include evacuation time, total fixation duration on evacuation indicator signs, pupil-diameter-related indices, walking-path features, head-orientation change indices, and number of pauses; and multiple sub-vectors can be constructed, or weights introduced according to the comparison purpose, to highlight the importance of specific dimensions.
- 8. The variable-controllable immersive evacuation lighting reaction data acquisition and modeling experiment method according to claim 1, wherein step S6 specifically comprises: clustering the multidimensional response feature vectors with an unsupervised clustering algorithm to divide them into different response-mode categories, and/or constructing a classification prediction model with a supervised classification algorithm to identify or predict the subjects' response types; and training a model on the response feature vectors or time-series data with a deep learning method to predict the subjects' evacuation behavior trends.
- 9. The variable-controllable immersive evacuation lighting reaction data acquisition and modeling experiment method according to claim 1, wherein step S7 specifically comprises: constructing a reinforcement learning simulation environment, and mapping the immersive evacuation scene and the current lighting control scheme into a reinforcement learning state space and action space, wherein the state space comprises the agent's position and orientation and the states of the indicator lamps in the environment, and the action space comprises the selection of the next movement direction; setting a reward function associated with the evacuation target, and training the agent based on the reward function with a deep reinforcement learning algorithm, so that the agent gradually learns the optimal evacuation strategy under different illumination schemes; and simulating the evacuation behavior trajectories of a large number of virtual individuals under a given lighting scheme using the finally obtained agent policy, and evaluating and iteratively optimizing the lighting scheme.
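The feature-standardization and unsupervised-clustering pipeline of claims 6–8 can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the four feature columns, the z-score normalization, and the plain k-means with deterministic initialization are all choices made for the example.

```python
import numpy as np

def zscore(features):
    """Column-wise standardization (step S4): zero mean, unit variance."""
    mu = features.mean(axis=0)
    sigma = features.std(axis=0)
    sigma[sigma == 0] = 1.0  # guard against constant feature columns
    return (features - mu) / sigma

def kmeans(X, k, iters=100):
    """Minimal k-means over multidimensional response feature vectors
    (claim 8, unsupervised branch); the first k rows seed the centroids."""
    centroids = X[:k].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assign each trial's feature vector to its nearest centroid
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Hypothetical feature matrix: one row per trial, columns standing in for
# evacuation time [s], fixation duration on indicator signs [s],
# a pupil-diameter index, and pause count.
raw = np.array([
    [42.0, 8.1, 0.31, 2.0],   # fast, sign-guided response
    [88.0, 2.2, 0.55, 7.0],   # slow, hesitant response
    [40.5, 7.9, 0.29, 1.0],
    [91.0, 2.0, 0.60, 8.0],
])
labels, _ = kmeans(zscore(raw), k=2)  # two response-mode categories
```

In practice, library implementations such as scikit-learn's StandardScaler and KMeans, or a supervised classifier for the second branch of claim 8, would replace this hand-rolled sketch.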
Description
Variable-controllable immersive evacuation illumination reaction data acquisition and modeling experiment method
Technical Field
The invention relates to the technical field of emergency evacuation and public safety, and in particular to a variable-controllable immersive evacuation illumination reaction data acquisition and modeling experiment method.
Background
In emergencies in large public buildings, reasonable evacuation lighting and indicator signs are key to guiding people to evacuate safely and quickly. Traditional research on the effectiveness of evacuation signage usually analyzes human behavioral decisions through evacuation drills, questionnaires, or video observation, but these methods suffer from single-dimensional data and limited precision. Relying only on macroscopic indicators such as walking paths and evacuation time, it is difficult to gain insight into differences in microscopic cognitive responses during evacuation. In recent years, the development of technologies such as immersive virtual reality and augmented reality has provided a new approach for evacuation behavior research, making it possible to construct realistic virtual emergency scenes under safe, controllable experimental conditions. By constructing a realistic virtual emergency scene and combining it with the relevant acquisition devices, conditions are created for in-depth exploration of people's attention distribution and decision-making mechanisms during emergency evacuation. The technology can effectively reproduce the perceived environment of a real scene while avoiding the safety risks and uncontrollable conditions of traditional field experiments, providing a more advantageous platform for related research.
With the rise of intelligent technologies such as machine learning and deep learning, introducing them into evacuation behavior analysis has become an industry trend, and these technologies show good application potential in scenarios such as behavior prediction and safety early warning in emergencies. However, in the field of evacuation lighting and sign design, the prior art still has obvious shortcomings: no systematic solution has been formed, precise control of key variables and effective collection of multidimensional data cannot be achieved, and capabilities for in-depth data mining and intelligent analysis are lacking, making it difficult to provide comprehensive, scientific technical support for optimizing evacuation guidance.
Disclosure of Invention
The invention aims to provide a variable-controllable immersive evacuation illumination reaction data acquisition and modeling experiment method, which solves the technical problems that the prior art can hardly acquire multidimensional real reaction data of the evacuation process under controllable conditions and lacks intelligent in-depth mining and analysis of the data, and therefore cannot provide a scientific, quantitative optimization basis for evacuation illumination design.
In order to solve the above technical problems, the technical solution of the invention is as follows. The invention provides a variable-controllable immersive evacuation illumination reaction data acquisition and modeling experiment method, comprising the following steps: S1, constructing an immersive evacuation scene corresponding to a large public building, deploying programmable evacuation lighting devices in the scene, and adjusting lighting variables through a control system to form different lighting guidance schemes; S2, synchronously acquiring eye-movement physiological data, behavior trajectory data, and regional attention data of a subject using sensing and tracking equipment, and integrating all the data through unified timestamps; S3, placing the subject in the immersive scene to simulate a real evacuation process, adjusting illumination variables according to a preset experimental design, and repeating the experiment to accumulate sample data across multiple scenes and multiple subjects; S4, processing the collected raw data, extracting eye-movement key indices, behavior-trajectory key features, and regional attention features, and standardizing or normalizing the extracted features; S5, combining the extracted features to construct a multidimensional response feature vector, and mapping the response data of different subjects at different moments into the same high-dimensional feature space; S6, inputting the multidimensional response feature vector into a trained machine learning model to classify and predict the subject's evacuation response mode; S7, constructing a simulation environment, defining a state space, an action space, and a reward function of reinforcement learning based on the immersive evacuation scene and illumination control, training an agent with a deep reinforcement learning algorithm, simulating evacuation behavior, and iteratively optimizing the illumination design.
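The state–action–reward loop defined in step S7 can be illustrated with a deliberately simplified sketch: the immersive scene is replaced by a toy one-dimensional corridor, and the deep reinforcement learning algorithm by tabular Q-learning. The corridor length, reward values, and hyperparameters below are assumptions for the example only, but the training loop mirrors the structure of claim 9.

```python
import random

class EvacuationEnv:
    """Toy 1-D corridor: the agent starts at cell 0 and must reach the exit.
    State = agent position; actions = {0: step left, 1: step right};
    reward = -1 per step (time penalty) and +10 on reaching the exit,
    a simplified stand-in for the evacuation-target reward of step S7."""
    def __init__(self, length=6):
        self.length = length
        self.exit = length - 1
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):
        self.pos = max(0, min(self.exit, self.pos + (1 if action == 1 else -1)))
        done = self.pos == self.exit
        return self.pos, (10.0 if done else -1.0), done

def q_learning(env, episodes=500, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning; the patent trains a deep RL agent, but the
    state -> action -> reward -> update loop has the same shape."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(env.length)]
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection over the two movement directions
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: Q[s][x])
            s2, r, done = env.step(a)
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning(EvacuationEnv())
# greedy policy per state; the learned strategy moves toward the exit
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(len(Q))]
```

In the full method, the state would also encode the indicator-lamp states, and the trained policy would be rolled out for many virtual individuals under each candidate lighting scheme to evaluate and iteratively optimize it.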