CN-121980200-A - Networked heritage space experience data analysis method
Abstract
The invention discloses a method for analyzing space experience data of a networked heritage space, belonging to the technical field at the intersection of virtual reality applications and the digital protection of cultural heritage. The method comprises: S1, obtaining a virtual reality scene of a heritage space and constructing a gridded heritage space model; S2, collecting physiological data, behavior data and feedback data of visitors in the gridded heritage space model to obtain multi-mode data; and S3, processing the multi-mode data to obtain an experience data analysis result. The invention provides a complete flow from data acquisition and processing to result output, supports parallel analysis across multiple scenes and multiple users, has good engineering applicability and extensibility, and can be widely applied to the digital protection of cultural heritage and to experience design.
Inventors
- Dai Tianchen
- Jiang Mengqi
- Yang Yuting
- Wu Jiangyue
- Ye Zaiqiao
- Xie Xin
Assignees
- Harbin Institute of Technology (Shenzhen) (Harbin Institute of Technology Shenzhen Institute of Science and Technology Innovation) [哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院)]
Dates
- Publication Date: 2026-05-05
- Application Date: 2026-04-08
Claims (7)
- 1. A networked heritage space experience data analysis method, characterized by comprising the following steps: S1, acquiring a virtual reality scene of a heritage space and constructing a gridded heritage space model; S2, acquiring physiological data, behavior data and feedback data of visitors in the gridded heritage space model to obtain multi-mode data; S3, processing the multi-mode data to obtain an experience data analysis result. S1 comprises the following steps: S11, acquiring high-precision point cloud data of the physical heritage by laser scanning, and preprocessing, registering and segmenting the high-precision point cloud data to obtain three-dimensional point cloud data for constructing a geometric structure; S12, acquiring panoramic image data of the physical heritage with camera equipment, and reconstructing the panoramic image data to obtain a reconstructed image; S13, obtaining the gridded heritage space model based on the geometric structure and the reconstructed image.
- 2. The networked heritage space experience data analysis method according to claim 1, wherein in S11 the registration method comprises: inputting the source point cloud and the target point cloud into a registration network to obtain a set of pseudo-correspondences and confidence coefficients; performing confidence-weighted SVD decomposition on the pseudo-correspondences to obtain an initial rotation matrix and translation vector; transforming the source point cloud with this initial solution; feeding the updated source point cloud and the target point cloud into the registration network again; and iterating until the set number of iterations is reached, thereby completing the point cloud registration. The registration network comprises a first fusion module, a second fusion module and a matching module. The first fusion module extracts features from the source and target point clouds to obtain source point cloud fusion features and target point cloud fusion features. The second fusion module processes the source and target fusion features to obtain source and target local features; the local features are passed through a self-attention mechanism and a cross-attention mechanism to obtain second fusion (cross-attention) features for the source and target point clouds. The matching module computes Euclidean distances between the source and target cross-attention features to obtain point-pair matching values characterizing those features; computes neighborhood matching values from the point-pair matching values of neighboring points in the source and target point clouds; adjusts the Euclidean distances with the neighborhood matching values to obtain adjusted Euclidean distances; recomputes the point-pair matching values from the adjusted distances to obtain an adjusted target point cloud; derives the confidence coefficients and pseudo-correspondences from the source point cloud and the adjusted target point cloud; and performs confidence-weighted SVD decomposition on the pseudo-correspondences to obtain the rotation matrix and translation vector.
- 3. The method of claim 2, wherein the loss function of the registration network (formula omitted in the source text) is the sum of: a Huber-penalized L2 norm between each transformed source point and its corresponding target point, taken over the transformed source point cloud set; a local loss; and a spatial loss, the local and spatial losses being weighted by respective hyperparameters. The local and spatial losses (formulas likewise omitted) are defined over the sets of trusted points in the source and target point clouds and over the neighboring-point sets of those trusted points, in terms of the number of trusted points, an indicator function, the adjusted point-pair matching values, and an index over the points in each neighboring-point set.
- 4. The networked heritage space experience data analysis method according to claim 1, wherein in S12 the panoramic image data are reconstructed as follows (formulas omitted in the source text): the reconstructed image is produced by an upsampling operation consisting of a bilinear interpolation function followed by a 3 x 3 convolution; the input to the upsampling is a depth feature map derived from the panoramic image data through a 1 x 1 convolution and a cascade of filtering model operations and distillation model operations, indexed by their position in the cascade.
- 5. The networked heritage space experience data analysis method according to claim 1, wherein in step S2 the physiological data comprise eye-movement data, myoelectric (EMG) data and electroencephalogram (EEG) data; the behavior data comprise the visitors' real-time position coordinates, movement tracks and speeds within the gridded heritage space model, together with their interaction events with the model; and the feedback data comprise subjective evaluation values obtained through rating scales and qualitative interviews, collected after the experience or at key nodes.
- 6. The networked heritage space experience data analysis method according to claim 1, wherein features are extracted from the physiological data as follows (formulas omitted in the source text): the physiological data are filtered to obtain a filtering feature, defined as two-dimensional batch normalization applied to a two-dimensional convolution of the physiological data whose kernel size is half the sampling rate; the filtering feature is passed through a depthwise convolution with two-dimensional average pooling and an activation function to obtain a depth feature; and the depth feature is passed through a separable convolution followed by two-dimensional batch normalization to obtain a separation feature.
- 7. The networked heritage space experience data analysis method according to claim 1, wherein the behavior data are processed as follows (formulas omitted in the source text): for each visitor and each time window, a comprehensive feature vector is constructed by combining regional grid features, movement dynamics features, path geometry features and an interaction feature vector; sequence prediction is performed with a multi-layer LSTM network, and the attraction index of each grid cell at a future time is predicted from the final hidden state of the LSTM as a predicted attraction probability distribution computed from a weight matrix, the hidden state of the top LSTM layer at that time, and a bias vector.
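The formula images for the claim-3 loss are not reproduced in this text extraction. Under that caveat, a form consistent with the verbal description would be the following sketch, in which the symbols $\hat{X}$, $\hat{x}_i$, $y_i$, $\lambda_1$, $\lambda_2$ are assumed notation rather than the patent's own:

```latex
\mathcal{L} \;=\; \frac{1}{|\hat{X}|}\sum_{\hat{x}_i \in \hat{X}}
\mathcal{H}\!\bigl(\lVert \hat{x}_i - y_i \rVert_2\bigr)
\;+\; \lambda_1\,\mathcal{L}_{\mathrm{local}}
\;+\; \lambda_2\,\mathcal{L}_{\mathrm{spatial}}
```

Here $\hat{X}$ is the transformed source point cloud, $y_i$ the corresponding target point, $\mathcal{H}$ the Huber function, and $\lambda_1$, $\lambda_2$ the hyperparameters weighting the local and spatial losses; the exact definitions of $\mathcal{L}_{\mathrm{local}}$ and $\mathcal{L}_{\mathrm{spatial}}$ over the trusted-point sets cannot be recovered from the surviving text.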
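The confidence-weighted SVD step that claim 2 applies to the pseudo-correspondences can be illustrated by the standard weighted Procrustes/Kabsch solution. This is a minimal sketch, not the patent's implementation; function and variable names are assumptions.

```python
import numpy as np

def weighted_svd_transform(src, tgt, weights):
    """Estimate a rigid transform (R, t) aligning src to tgt.

    Solves the confidence-weighted Procrustes problem with an SVD:
    each pseudo-correspondence (src[i], tgt[i]) is weighted by its
    confidence weights[i], as in the registration step of claim 2.
    """
    w = weights / weights.sum()                # normalize confidences
    src_c = (w[:, None] * src).sum(axis=0)     # weighted centroids
    tgt_c = (w[:, None] * tgt).sum(axis=0)
    src0, tgt0 = src - src_c, tgt - tgt_c
    H = (src0 * w[:, None]).T @ tgt0           # weighted covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t
```

In the claimed iteration, the source point cloud would be updated as `src @ R.T + t` and fed back into the registration network until the iteration count is reached.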
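The bilinear interpolation inside the claim-4 upsampling operator can be sketched directly in numpy. Only the interpolation step is shown; the 3 x 3 and 1 x 1 convolutions and the filtering/distillation sub-models around it are not reproduced, and the function name is an assumption.

```python
import numpy as np

def bilinear_upsample(img, scale=2):
    """Bilinear upsampling of an H x W array by an integer factor.

    Illustrates the interpolation step of the claimed Up(.) operator
    (align-corners style sampling).
    """
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * scale)      # sample rows in source coords
    xs = np.linspace(0, w - 1, w * scale)      # sample cols in source coords
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    dy = (ys - y0)[:, None]                    # fractional offsets
    dx = (xs - x0)[None, :]
    tl = img[y0][:, x0]                        # four nearest neighbours
    tr = img[y0][:, x0 + 1]
    bl = img[y0 + 1][:, x0]
    br = img[y0 + 1][:, x0 + 1]
    top = tl * (1 - dx) + tr * dx              # blend horizontally, then
    bot = bl * (1 - dx) + br * dx              # vertically
    return top * (1 - dy) + bot * dy
```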
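The three data modalities enumerated in claim 5 could be held in a per-timestamp record such as the following sketch. Every field name here is an illustrative assumption, not an identifier from the patent.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class MultimodalSample:
    """One time-stamped record of the multi-mode data listed in claim 5."""
    timestamp: float
    # physiological data
    eye_movement: List[float] = field(default_factory=list)  # gaze samples
    emg: List[float] = field(default_factory=list)           # myoelectric signal
    eeg: List[float] = field(default_factory=list)           # EEG channels
    # behavior data
    position: Tuple[float, float, float] = (0.0, 0.0, 0.0)   # grid coordinates
    speed: float = 0.0
    interactions: List[str] = field(default_factory=list)    # interaction events
    # feedback data (filled after the experience or at key nodes)
    subjective_score: Optional[float] = None                 # scale/interview rating
```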
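The separation-convolution idea in claim 6 (a per-channel depthwise convolution followed by a 1 x 1 channel-mixing convolution) can be sketched in plain numpy. Batch normalization, average pooling and the activation function are omitted, and the signatures are assumptions.

```python
import numpy as np

def depthwise_separable_conv(x, depth_k, point_w):
    """Depthwise then pointwise (1x1) convolution, 'valid' padding.

    x:        (C, H, W) feature map
    depth_k:  (C, kh, kw) one spatial kernel per channel (depthwise step)
    point_w:  (C_out, C) 1x1 channel-mixing weights (separable step)
    """
    C, H, W = x.shape
    kh, kw = depth_k.shape[1:]
    oh, ow = H - kh + 1, W - kw + 1
    depth_out = np.zeros((C, oh, ow))
    for c in range(C):                       # depthwise: channel by channel
        for i in range(oh):
            for j in range(ow):
                depth_out[c, i, j] = np.sum(
                    x[c, i:i + kh, j:j + kw] * depth_k[c])
    # pointwise: mix channels at every spatial location
    return np.tensordot(point_w, depth_out, axes=([1], [0]))
```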
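The read-out in claim 7 (an attraction probability distribution computed from the LSTM's final hidden state via a weight matrix and bias) amounts to a softmax layer. The following sketch assumes the hidden state comes from any standard LSTM implementation; shapes and names are illustrative.

```python
import numpy as np

def predict_attraction(h_final, W, b):
    """Softmax read-out over grid cells from an LSTM final hidden state.

    h_final: (d,) final hidden state of the top LSTM layer
    W:       (n_grids, d) weight matrix
    b:       (n_grids,) bias vector
    Returns the predicted attraction probability distribution.
    """
    z = W @ h_final + b
    z -= z.max()                      # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()
```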
Description
Networked heritage space experience data analysis method

Technical Field

The invention belongs to the technical field at the intersection of virtual reality applications and the digital protection of cultural heritage, and particularly relates to a networked heritage space experience data analysis method.

Background

With the development of virtual reality (VR) and digital technology, providing an immersive experience for the public by constructing virtual heritage spaces has become an important means of protecting and exhibiting cultural heritage. However, existing virtual heritage experience systems focus on scene restoration and visual presentation; they lack systematic, quantitative means of analyzing the visitor's experience, making it difficult to understand visitors' cognitive, emotional and behavioral reactions in the virtual environment. Current related technologies mostly rely on subjective methods such as questionnaires and interviews to gather feedback; data acquisition is inefficient, coverage is limited, and real-time dynamic analysis is hard to achieve. A few studies have attempted to introduce multimodal data such as eye movement and physiological signals, but no complete analysis framework has been formed, and in particular no comprehensive method that combines spatial modeling, behavior tracking and physiological response with subjective feedback. A method that integrates multi-source data and enables systematic analysis of networked heritage space experience is therefore needed, to improve the design quality, user participation and cultural dissemination effect of virtual heritage experiences.

Disclosure of Invention

The invention aims to provide a networked heritage space experience data analysis method that overcomes the limitations of a single data source and improves the comprehensiveness and accuracy of the analysis.
To achieve this aim, the invention provides a networked heritage space experience data analysis method comprising the following steps: S1, acquiring a virtual reality scene of a heritage space and constructing a gridded heritage space model; S2, acquiring physiological data, behavior data and feedback data of visitors in the gridded heritage space model to obtain multi-mode data; S3, processing the multi-mode data to obtain an experience data analysis result. Further preferably, S1 comprises the steps of: S11, acquiring high-precision point cloud data of the physical heritage by laser scanning, and preprocessing, registering and segmenting the high-precision point cloud data to obtain three-dimensional point cloud data for constructing a geometric structure; S12, acquiring panoramic image data of the physical heritage with camera equipment, and reconstructing the panoramic image data to obtain a reconstructed image; S13, obtaining the gridded heritage space model based on the geometric structure and the reconstructed image.
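The S1-S3 flow described above can be sketched as an orchestration skeleton. Every helper below is a hypothetical stub standing in for a processing stage, not an API from the patent.

```python
def build_geometry(points):
    """Stub for S11: preprocess, register and segment point cloud data."""
    return {"points": points}

def reconstruct_panoramas(panoramas):
    """Stub for S12: panoramic image reconstruction."""
    return {"images": panoramas}

def build_grid_model(geometry, images):
    """Stub for S13: fuse geometry and imagery into a gridded model."""
    return {"geometry": geometry, "images": images}

def collect_multimodal(model, streams):
    """Stub for S2: physiological + behavior + feedback capture."""
    return {"model": model, "streams": streams}

def analyze(multimodal):
    """Stub for S3: turn multi-mode data into an analysis result."""
    return {"n_streams": len(multimodal["streams"])}

def analyze_heritage_experience(scan_points, panoramas, visitor_streams):
    """Skeleton of the claimed S1-S3 pipeline; all helpers are stubs."""
    geometry = build_geometry(scan_points)                    # S11
    images = reconstruct_panoramas(panoramas)                 # S12
    model = build_grid_model(geometry, images)                # S13
    multimodal = collect_multimodal(model, visitor_streams)   # S2
    return analyze(multimodal)                                # S3
```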
Further preferably, in S11 the registration method comprises: inputting the source point cloud and the target point cloud into a registration network to obtain a set of pseudo-correspondences and confidence coefficients; performing confidence-weighted SVD decomposition on the pseudo-correspondences to obtain an initial rotation matrix and translation vector; transforming the source point cloud with this initial solution; feeding the updated source point cloud and the target point cloud into the registration network again; and iterating until the set number of iterations is reached, thereby completing the point cloud registration. The registration network comprises a first fusion module, a second fusion module and a matching module. The first fusion module extracts features from the source and target point clouds to obtain source point cloud fusion features and target point cloud fusion features. The second fusion module processes the source and target fusion features to obtain source and target local features; the local features are passed through a self-attention mechanism and a cross-attention mechanism to obtain second fusion (cross-attention) features for the source and target point clouds. The matching module computes Euclidean distances between the source and target cross-attention features to obtain point-pair matching values characterizing those features, and computes neighborhood matching values through the point pair match