CN-121236821-B - Fall behavior identification method based on hypergraph depth feature fusion
Abstract
The invention belongs to the technical field of computer vision and artificial intelligence, and discloses a fall behavior identification method based on hypergraph depth feature fusion. The method comprises the following steps: S1, inputting an original feature map; S2, processing the input original feature map through an advanced feature dynamic aggregation system to generate a comprehensive feature matrix; S3, performing deep information propagation on the comprehensive feature matrix by utilizing a super information propagation and feature fusion engine to output an enhanced feature matrix. By introducing hypergraph techniques and a multi-level feature fusion strategy, the method effectively remedies the shortcomings of the prior art in fall detection accuracy and real-time performance, and improves the adaptability and overall performance of the system in complex environments.
Inventors
- WANG ZIKUN
- YANG JINGYU
Assignees
- 北京中科金财科技股份有限公司 (Beijing Sinodata Technology Co., Ltd.)
Dates
- Publication Date
- 20260505
- Application Date
- 20250923
Claims (3)
- 1. A fall behavior identification method based on hypergraph depth feature fusion, characterized by comprising the following steps:
  S1, inputting an original feature map, wherein the original feature map comprises a plurality of images containing fall behavior;
  S2, processing the input original feature map through an advanced feature dynamic aggregation system to generate a comprehensive feature matrix, wherein this step comprises:
  inputting the original feature map $X$ and training a set of weights $w$ for evaluating the contribution degree of each channel, adjusting each channel according to the learned weights, and outputting the adjusted original feature map $X' = w \odot X$, wherein $X$ represents the contribution of the channels in the original feature map, $w$ denotes the learned weights, and $\odot$ denotes the element-by-element product;
  extracting features from the adjusted original feature map through a series of convolution kernels of different sizes, combining the features by weighted summation, and outputting the combined feature map $F = \sum_k \alpha_k \, \mathrm{Conv}_k(X')$, wherein $\alpha_k$ is the weight of the $k$-th scale convolution and $\mathrm{Conv}_k$ denotes the convolution operation at the $k$-th scale;
  inputting feature maps $F_1, F_2, \ldots, F_n$ from different depth levels of the network, optimizing the expression of the final features with a fusion strategy $f$, and outputting the fused feature map $F_{\mathrm{fused}} = f(F_1, F_2, \ldots, F_n)$, wherein $F_1, \ldots, F_n$ are the feature maps of the different depth levels and $f$ is the fusion function;
  combining $X'$, $F$ and $F_{\mathrm{fused}}$ by weighted summation to generate the comprehensive feature matrix $M = \mathrm{Conv1x1}(a \odot X' + b \odot F + c \odot F_{\mathrm{fused}})$, wherein $M$ is the final output feature map, namely the comprehensive feature matrix; $a$, $b$ and $c$ are the weighting coefficients of the respective outputs $X'$, $F$ and $F_{\mathrm{fused}}$; $\odot$ denotes the element-by-element product of a feature map and its corresponding weight; $+$ denotes the weighted summation operation combining information from the different feature processing modules; and $\mathrm{Conv1x1}$ denotes a 1x1 convolution operation;
  S3, performing deep information propagation on the comprehensive feature matrix by utilizing a super information propagation and feature fusion engine to output an enhanced feature matrix, wherein this step comprises:
  mapping features from different network layers onto the vertices of a hypergraph structure using a hypergraph feature builder, and depicting the complex relationships between the vertices by defining hyperedges;
  performing deep information propagation on the hypergraph structure with a high-order hypergraph deep propagation network (HGDPN), wherein the adjacency matrix $H$ of the hypergraph and the comprehensive feature matrix $M$ are used, and the HGDPN performs a hypergraph convolution that transfers information between vertices, thereby updating the state of each vertex; the enhanced feature matrix is output as $M' = \mathrm{ReLU}(H \cdot \mathrm{HGDPN}(G, M) + M)$, wherein $M'$ represents the output enhanced feature matrix; $H$ is the adjacency matrix of the hypergraph; $\mathrm{HGDPN}(G, M)$ represents the result of processing the comprehensive feature matrix with the high-order hypergraph deep propagation network; $\cdot$ denotes the multiplication of the hypergraph adjacency matrix $H$ with the feature matrix generated by the HGDPN; $+$ denotes the residual connection between the propagated result $H \cdot \mathrm{HGDPN}(G, M)$ and the comprehensive feature matrix $M$; and $\mathrm{ReLU}$ is the activation function;
  S4, processing and optimizing the enhanced feature matrix through a comprehensive semantic and dynamic detection network, then further extracting features through a 3x3 convolution, and finally performing output classification through a Softmax function, wherein the processing and optimizing step comprises: integrating features from different layers by a hypergraph convolution technique, wherein the feature matrices output by each layer are processed through the hypergraph deep propagation network and undergo information transfer and updating through the hypergraph structure, finally forming an updated, enhanced feature matrix;
  S5, outputting a fall behavior detection result.
- 2. The method for identifying fall behavior based on hypergraph depth feature fusion according to claim 1, wherein the step of further extracting features by 3x3 convolution and finally performing output classification by the Softmax function comprises: adjusting and mapping the feature matrix, wherein the adjusted feature matrix undergoes further spatial feature extraction through a 3x3 convolution kernel to obtain the classification output by the Softmax function, according to the formula $Y = \mathrm{Softmax}(\mathrm{Conv3x3}(\mathrm{HGDPN}(M') + M'))$, wherein $Y$ represents the classification output by the Softmax function; $\mathrm{Conv3x3}$ denotes a convolution operation using a 3x3 convolution kernel; $\mathrm{HGDPN}(M')$ represents the result of processing the enhanced feature matrix with the high-order hypergraph deep propagation network; $M'$ represents the enhanced feature matrix; $+$ denotes the residual connection between $\mathrm{HGDPN}(M')$ and $M'$; and $\mathrm{Softmax}$ normalizes the convolved output to generate a probability distribution indicating the likelihood of each class.
- 3. The method for identifying fall behavior based on hypergraph depth feature fusion according to claim 1, wherein the step of outputting the fall behavior detection result comprises: after the Softmax function outputs the classification result, outputting that classification result as the fall behavior detection result.
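Read as a pipeline, claims 1-2 amount to hypergraph message passing with a residual connection, followed by a Softmax head. The following is a minimal NumPy sketch of that pipeline; the toy incidence matrix, the linear map standing in for HGDPN, the linear stand-in for the 3x3 convolution, and all weights are illustrative assumptions, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hypergraph_adjacency(B):
    """Row-normalised vertex adjacency H from the incidence matrix B
    (vertices x hyperedges): vertices sharing a hyperedge exchange info."""
    H = B @ B.T
    return H / H.sum(axis=1, keepdims=True)

def propagate(H, M, W):
    """One propagation round with residual connection:
    M' = ReLU(H @ HGDPN(M) + M), with M @ W standing in for HGDPN."""
    return relu(H @ (M @ W) + M)

# toy hypergraph: 4 vertices (feature groups), 2 hyperedges
B = np.array([[1., 0.],
              [1., 1.],
              [0., 1.],
              [1., 1.]])
H = hypergraph_adjacency(B)

M = rng.standard_normal((4, 8))        # comprehensive feature matrix
W = 0.1 * rng.standard_normal((8, 8))  # stand-in HGDPN weights
M_enh = propagate(H, M, W)             # enhanced feature matrix

# classification head: a linear map stands in for the 3x3 convolution,
# then Softmax over the two classes (fall / no fall)
W_cls = rng.standard_normal((8, 2))
probs = softmax(M_enh @ W_cls)
print(probs.shape)
```

The row normalisation of H is one simple choice; the patent does not specify how the hypergraph adjacency is normalised.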
Description
Fall behavior identification method based on hypergraph depth feature fusion

Technical Field

The invention relates to the technical field of computer vision and artificial intelligence, in particular to a fall behavior identification method based on hypergraph depth feature fusion.

Background

In the field of fall detection, the prior art mainly relies on traditional machine learning methods or elementary deep learning models for human behavior analysis and recognition. These techniques generally depend on simple feature extraction methods, such as optical flow, background subtraction, or threshold-based motion detection, and perform behavior recognition with single-layer image processing techniques. The limitations of these techniques lead to the following problems: 1) the accuracy of fall behavior identification in complex scenes is low, and misjudgments easily arise, particularly in multi-person environments or under poor lighting; 2) the feature extraction and fusion methods are rudimentary and cannot effectively capture the deep features and complex relationships of fall behavior, so detection performance is limited; 3) when processing real-time video streams, existing methods demand substantial computing resources, which affects the real-time responsiveness and stability of the system. In view of the above, the present invention proposes a fall behavior recognition method based on hypergraph depth feature fusion to solve the above-mentioned problems.
Disclosure of Invention

In order to overcome the defects in the prior art, the invention provides a fall behavior identification method based on hypergraph depth feature fusion, which comprises the following steps: S1, inputting an original feature map; S2, processing the input original feature map through an advanced feature dynamic aggregation system to generate a comprehensive feature matrix; S3, performing deep information propagation on the comprehensive feature matrix by utilizing a super information propagation and feature fusion engine to output an enhanced feature matrix; S4, processing and optimizing the enhanced feature matrix through a comprehensive semantic and dynamic detection network, then further extracting features through a 3x3 convolution, and finally performing output classification through a Softmax function; S5, outputting a fall behavior detection result.

Further, the original feature map comprises a number of images containing fall behavior.

Further, the step of processing the input original feature map through the advanced feature dynamic aggregation system to generate a comprehensive feature matrix includes: inputting the original feature map $X$ and training a set of weights $w$ for evaluating the contribution degree of each channel, adjusting each channel according to the learned weights, and outputting the adjusted original feature map $X' = w \odot X$, wherein $X$ represents the contribution of the channels in the original feature map, $w$ denotes the learned weights, and $\odot$ denotes the element-by-element product; extracting features from the adjusted original feature map through a series of convolution kernels of different sizes, combining the features by weighted summation, and outputting the combined feature map $F = \sum_k \alpha_k \, \mathrm{Conv}_k(X')$, wherein $\alpha_k$ is the weight of the $k$-th scale convolution and $\mathrm{Conv}_k$ denotes the convolution operation at the $k$-th scale; inputting feature maps $F_1, F_2, \ldots, F_n$ from different depth levels of the network, optimizing the expression of the final features with a fusion strategy $f$, and outputting the fused feature map $F_{\mathrm{fused}} = f(F_1, F_2, \ldots, F_n)$, wherein $F_1, \ldots, F_n$ are the feature maps of the different depth levels and $f$ is the fusion function; combining $X'$, $F$ and $F_{\mathrm{fused}}$ by weighted summation to generate the comprehensive feature matrix $M = \mathrm{Conv1x1}(a \odot X' + b \odot F + c \odot F_{\mathrm{fused}})$, wherein $M$ is the final output feature map, namely the comprehensive feature matrix; $a$, $b$ and $c$ are the weighting coefficients of the respective outputs $X'$, $F$ and $F_{\mathrm{fused}}$; $\odot$ denotes the element-by-element product of a feature map and its corresponding weight; $+$ denotes the weighted summation operation combining information from the different feature processing modules; and $\mathrm{Conv1x1}$ denotes a 1x1 convolution operation.
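The S2 aggregation steps above (channel recalibration, multi-scale combination, depth-level fusion, weighted summation) can be sketched in NumPy as follows. The channel weights, scale coefficients, the coefficients $a$, $b$, $c$, and the mean used as the fusion strategy $f$ are illustrative assumptions; real convolutions and the trailing 1x1 convolution are replaced by stand-in arrays.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_recalibrate(X, w):
    """X' = w ⊙ X: scale each channel of a C x H x W map by its
    learned contribution weight."""
    return X * w[:, None, None]

def multi_scale_combine(scale_outputs, alphas):
    """F = sum_k alpha_k * Conv_k(X'); each entry of `scale_outputs`
    stands in for one scale's convolution output."""
    return sum(a * f for a, f in zip(alphas, scale_outputs))

def fuse_levels(feature_maps):
    """F_fused = f(F1, ..., Fn); here f is a plain mean over depth
    levels, one simple choice among many."""
    return np.mean(feature_maps, axis=0)

X = rng.standard_normal((3, 5, 5))   # original feature map, C x H x W
w = np.array([0.9, 0.5, 0.1])        # learned channel weights (illustrative)
Xp = channel_recalibrate(X, w)

# stand-ins for two scales' convolution outputs (real code would convolve)
F = multi_scale_combine([Xp, 0.5 * Xp], alphas=[0.6, 0.4])

F_fused = fuse_levels(np.stack([Xp, F]))  # two "depth levels" for the sketch

# comprehensive feature matrix: weighted sum of the three branches
# (the trailing 1x1 convolution is omitted from this sketch)
a, b, c = 0.5, 0.3, 0.2
M = a * Xp + b * F + c * F_fused
print(M.shape)
```

Because every stand-in here is a scalar multiple of Xp, the whole pipeline reduces to a single scaling of Xp; with real convolutions at each scale the branches would of course differ.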
Further, the step of performing deep information propagation on the comprehensive feature matrix by using the super information propagation and feature fusion engine to output an enhanced feature matrix includes: mapping features from different network layers onto the vertices of the hypergraph structure using a hypergraph feature builder, and depicting the complex relationships between the vertices by defining hyperedges; performing deep information propagation on the hypergraph structure using the high-order hypergraph deep propagation network, wherein the adjacency matrix H and compreh