CN-122020282-A - ML-ELM-AE target motion mode identification method containing class information

CN122020282A

Abstract

A target motion pattern recognition method based on a multilayer extreme learning machine autoencoder (ML-ELM-AE) with class information, belonging to the technical field of situation cognition. The method builds an ELM-AE model, stacks ELM-AE layers to form an ML-ELM-AE, maps target motion features with the ML-ELM-AE, builds a CELM classification model, and optimizes the CELM classifier's input-to-hidden-layer weights and hidden-node bias terms from a set of difference vectors formed between samples of different classes, so that the mapping of samples from feature space to class space becomes regular, improving the accuracy and generalization capability of target motion pattern classification.

Inventors

  • Guo Fengjuan
  • Lu Yao
  • Han Chunlei
  • Qiao Dianfeng
  • Zhang Yang
  • Yang Di
  • Jin Zhongqian
  • Zhao Wang

Assignees

  • 中国电子科技集团公司第二十研究所 (The 20th Research Institute of China Electronics Technology Group Corporation)

Dates

Publication Date
2026-05-12
Application Date
2025-12-29

Claims (10)

  1. A ML-ELM-AE target motion pattern recognition method containing class information, characterized by comprising the following steps: Step one, acquiring multivariate time series data of a moving target through a data acquisition module to obtain target motion time series data; Step two, preprocessing the target motion time series data through a data preprocessing module to obtain a normalized multivariate time series data set; Step three, constructing and training an ML-ELM-AE feature extraction module; Step four, extracting features from the normalized multivariate time series data through the trained ML-ELM-AE feature extraction module to obtain a target motion abstract feature matrix; Step five, classifying the target motion pattern using a CELM target motion pattern classification optimization module; Step six, outputting the recognition result through a target motion pattern output module, the output comprising the semantic label of each sample's target motion pattern, the corresponding class score, and the recognition accuracy; if the recognition accuracy is below a preset threshold, returning to Step three; if the recognition accuracy is greater than or equal to the preset threshold, ending the task.
  2. The ML-ELM-AE target motion pattern recognition method containing class information according to claim 1, wherein in Step one the data acquisition process is as follows: the data acquisition module acquires a multivariate time series data set X = {x_1, x_2, ..., x_N} during target motion, where N is the number of time series and x_i denotes the i-th multivariate time series variable; each multivariate time series variable x_i comprises M dimensions, where M = 7, the 7 dimensions being longitude, latitude, altitude, speed, acceleration, Doppler speed of the target, and the distance between the target and an observation point; after acquisition is completed, the multivariate time series data set X is transmitted to the data preprocessing module, and X is dynamically updated by a sliding window whose length is set to a.
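The sliding-window segmentation described in claim 2 can be sketched as follows. The window length a and the 7-dimensional feature layout come from the claim; the function name, stride, and sample data are illustrative assumptions, not part of the patent.

```python
import numpy as np

def sliding_windows(series: np.ndarray, window: int, stride: int = 1) -> np.ndarray:
    """Segment a (T x M) multivariate time series into overlapping
    windows of shape (window x M), advancing by `stride` samples."""
    t, m = series.shape
    starts = range(0, t - window + 1, stride)
    return np.stack([series[s:s + window] for s in starts])

# 7 dimensions per claim 2: longitude, latitude, altitude, speed,
# acceleration, Doppler speed, and target-to-observer range.
T, M, a = 20, 7, 5                 # a = sliding-window length (symbol from the claim)
x = np.random.default_rng(0).random((T, M))
w = sliding_windows(x, a)          # shape (T - a + 1, a, M)
```

Each window then feeds the preprocessing and feature-extraction stages as one sample.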
  3. The ML-ELM-AE target motion pattern recognition method containing class information according to claim 1, wherein in Step two the data preprocessing process is as follows: the multivariate time series data set X transmitted by the data acquisition module undergoes Z-score normalization to avoid the influence of data fluctuation on subsequent feature extraction and classification precision; the Z-score normalization formula is: x̃_m = (x_m − μ_m) / (σ_m + ε), where x̃_m represents the normalized x_m; x_m represents the m-th dimension of a multivariate time series variable; μ_m represents the mean of all variables in the m-th dimension; σ_m represents the standard deviation (square root of the variance) of all variables in the m-th dimension; and ε is a small minimum value preventing division by zero. After normalizing each of the M dimensions of the multivariate time series variables, the normalized multivariate time series data set X̃ is obtained.
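The per-dimension Z-score normalization of claim 3 is a standard transform; a minimal sketch, with eps playing the role of the claim's small minimum value:

```python
import numpy as np

def zscore_normalize(x: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Z-score normalize each of the M columns of a (T x M) series:
    (x - mean) / (std + eps); eps guards against zero variance."""
    mu = x.mean(axis=0)
    sigma = x.std(axis=0)
    return (x - mu) / (sigma + eps)

# Two-dimensional toy series; after normalization each column has
# (near-)zero mean and unit standard deviation.
x = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
xn = zscore_normalize(x)
```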
  4. The ML-ELM-AE target motion pattern recognition method containing class information according to claim 3, wherein the multivariate time series data set is generated using a simulation tool and comprises 100 multivariate time series; after normalization and labeling, the training set under the linear motion mode is obtained, and the training sets under the climbing, diving, turning, and turning-around motion modes are likewise obtained. The linear motion mode denotes movement in which both the direction of motion and the track angle remain constant, with no track displacement in the horizontal or vertical direction; climbing denotes a continuously increasing altitude; diving denotes a continuously decreasing altitude; turning denotes motion in a horizontal or inclined plane that changes the flight direction; turning around denotes a 180-degree reversal of direction completed within a short period of time. The normalized and labeled training sets are transmitted to the ML-ELM-AE feature extraction module.
  5. The ML-ELM-AE target motion pattern recognition method containing class information according to claim 1, wherein in Step three the ML-ELM-AE feature extraction module has a multi-level structure formed by stacking ELM-AE models layer by layer; the module comprises an input layer, K hidden layers, and an output layer; the input layer receives the preprocessed target motion time series data; each hidden layer has an activation function for extracting target motion abstract features layer by layer, the output of each hidden layer serving as the input of the next, with adjacent hidden layers connected by a weight matrix; the output layer is the output of the K-th hidden layer and outputs the abstract feature matrix after multi-layer extraction.
  6. The ML-ELM-AE target motion pattern recognition method containing class information according to claim 5, wherein the 1st ELM-AE model of the ML-ELM-AE feature extraction module comprises a given number of input layer nodes and hidden layer nodes, and the 2nd to K-th ELM-AE models each comprise L input layer nodes, L hidden layer nodes, and L output layer nodes.
  7. The ML-ELM-AE target motion pattern recognition method containing class information according to claim 1, wherein in Step three the training process of the ML-ELM-AE feature extraction module is as follows: Step 3.1, training the 1st ELM-AE module: first, randomly initialize the orthogonal input weight matrix W_1 and orthogonal bias vector b_1 of the 1st ELM-AE model, and compute the output matrix H_1 of the 1st hidden layer with the Sigmoid hidden-layer activation function: H_1 = g(X W_1 + b_1), where g(·) denotes the Sigmoid activation function; then compute the output-layer weight matrix β_1 by solving the regularized least-squares cost function min_β ‖H_1 β − X‖² + ‖β‖²/C, where C is a parameter balancing empirical risk and structural risk; setting the partial derivative with respect to β to zero yields the output weight matrix β_1 = (H_1ᵀ H_1 + I/C)⁻¹ H_1ᵀ X; finally, taking β_1ᵀ (the transpose of β_1) as the weight matrix between the input layer and the hidden layer of the 1st ELM-AE model completes the training of the single-layer ELM-AE model; Step 3.2, training the 2nd ELM-AE module: take the output matrix H_1 of the 1st hidden layer as the input of the 2nd ELM-AE module and repeat Step 3.1: initialize the 2nd orthogonal bias vector b_2, take β_2ᵀ as the weight matrix between the 2nd input layer and hidden layer, and compute the output matrix H_2 of the 2nd hidden layer and the weight matrix β_2; Step 3.3, and so on: repeating Steps 3.1 to 3.2, the output matrix H_{K−1} of the (K−1)-th hidden layer is taken as the input of the K-th ELM-AE model, the K-th orthogonal bias vector b_K is initialized, β_Kᵀ is taken as the weight matrix between the K-th input layer and hidden layer, and the output matrix H_K of the K-th hidden layer is computed and transmitted as the abstract feature matrix to the CELM target motion pattern classification optimization module.
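The layer-by-layer ELM-AE training of claim 7 can be sketched as below. This is an illustrative implementation of the standard ELM-AE closed form (β = (HᵀH + I/C)⁻¹HᵀX), not the patent's exact code; function names, the QR-based orthogonal initialization, and the layer sizes are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rand_orthogonal(rows, cols, rng):
    """Random matrix with (approximately) orthonormal rows/columns via QR,
    standing in for the claim's orthogonal random weights and biases."""
    n = max(rows, cols)
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q[:rows, :cols]

def train_elm_ae(X, L, C=1e3, rng=None):
    """One ELM-AE layer: random orthogonal input weights W and bias b,
    hidden output H = g(XW + b), then the regularized least-squares
    output weights beta = (H'H + I/C)^{-1} H'X (claim 7, step 3.1)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, d = X.shape
    W = rand_orthogonal(d, L, rng)
    b = rand_orthogonal(1, L, rng)
    H = sigmoid(X @ W + b)
    beta = np.linalg.solve(H.T @ H + np.eye(L) / C, H.T @ X)
    return H, beta

def train_ml_elm_ae(X, layer_sizes, C=1e3):
    """Stack ELM-AE layers: each layer's beta^T becomes the deterministic
    mapping from the previous representation to the next (steps 3.2-3.3)."""
    rng = np.random.default_rng(0)
    feats, weights = X, []
    for L in layer_sizes:
        _, beta = train_elm_ae(feats, L, C, rng)
        weights.append(beta.T)           # beta^T maps d -> L
        feats = sigmoid(feats @ beta.T)  # abstract features for the next layer
    return feats, weights

# Toy run: 50 samples of the 7-dimensional motion features, two stacked layers.
X = np.random.default_rng(42).standard_normal((50, 7))
feats, weights = train_ml_elm_ae(X, [10, 6])
```

The final `feats` plays the role of the abstract feature matrix H_K handed to the CELM module.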
  8. The ML-ELM-AE target motion pattern recognition method containing class information according to claim 1, wherein in Step five the CELM target motion pattern classification optimization module is a CELM classifier comprising a number of input layer nodes, hidden layer nodes, and output layer nodes; the CELM classifier includes an input weight matrix W and a bias vector B.
  9. The ML-ELM-AE target motion pattern recognition method containing class information according to claim 8, wherein the weight matrix W and bias vector B are computed as follows: Step 5.1, computing the weight vectors and biases of the hidden layer nodes; Step 5.1.1, computing the weight vector and bias of the first hidden node: select a first sample from each of any two motion pattern classes; compute the features of the two samples through the ML-ELM-AE feature extraction model, and compute their difference feature; normalize the difference feature and compute the corresponding bias to obtain the first hidden node's weight vector and bias; Step 5.1.2, repeating Step 5.1.1 with further sample pairs drawn from any two motion pattern classes to obtain the weight vectors and biases of the 2nd through L-th hidden nodes; Step 5.2, constructing the CELM classifier's input weight matrix W and bias vector B from the hidden-node weight vectors and biases obtained in Step 5.1.
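The class-informed weight construction of claim 9 can be sketched as follows. The normalization (w = 2d/‖d‖², b from the pair's midpoint) follows the constrained-ELM construction from the CELM literature; the patent does not show its exact scaling, so that detail is an assumption.

```python
import numpy as np

def celm_hidden_params(X_a, X_b, L, rng=None):
    """Build L hidden-node (w, b) pairs from difference vectors of
    samples drawn from two distinct motion-pattern classes, so that each
    node's hyperplane w.x + b = 0 passes through the midpoint of the
    pair it was built from (claim 9, steps 5.1.1-5.1.2)."""
    if rng is None:
        rng = np.random.default_rng(0)
    W, b = [], []
    for _ in range(L):
        xa = X_a[rng.integers(len(X_a))]
        xb = X_b[rng.integers(len(X_b))]
        d = xa - xb                          # class-difference feature
        W.append(2.0 * d / np.dot(d, d))     # normalized weight vector
        b.append((np.dot(xb, xb) - np.dot(xa, xa)) / np.dot(d, d))
    return np.array(W), np.array(b)

# Degenerate toy case: one sample per class, so every node is identical
# and its hyperplane bisects the segment between the two samples.
X_a = np.array([[2.0, 0.0]])
X_b = np.array([[0.0, 0.0]])
W, b = celm_hidden_params(X_a, X_b, 3)
```

Stacking the resulting rows of W and entries of b gives the input weight matrix and bias vector of Step 5.2.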
  10. The ML-ELM-AE target motion pattern recognition method containing class information according to claim 1, wherein in Step five the target motion abstract feature matrix is classified as follows: Step 5-1, computing the output matrix H of the CELM classifier's hidden layer: input the abstract feature matrix into the CELM classifier and compute H from the weight matrix W and bias vector B using the hidden-layer activation function; Step 5-2, evaluating the training error through the objective function Q, whose minimum is the optimal solution, Q = ‖Hβ − T‖²; solving for the weight β connecting the hidden layer and the output layer by minimizing the approximate squared error gives β = H†T, where H† is the Moore-Penrose generalized inverse of H and T denotes the motion pattern label matrix; Step 5-3, motion pattern judgment: according to the CELM classifier's output layer weight matrix β, compute the class score of the target motion pattern for each sample, the scores lying in [0,1]; the semantic label with the highest class score is the target motion pattern of the sample, completing the inference of the target motion pattern.
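The fit/predict cycle of claim 10 can be sketched as below. The Moore-Penrose solution β = H†T is standard ELM; the softmax used to place scores in [0,1] is our assumption (the patent states the range but not the mapping), and the toy data and label names are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def celm_fit(X, W, b, T):
    """Hidden output H = g(XW^T + b); output weights beta = pinv(H) @ T,
    the Moore-Penrose least-squares solution of claim 10, step 5-2."""
    H = sigmoid(X @ W.T + b)
    return np.linalg.pinv(H) @ T

def celm_predict(X, W, b, beta, labels):
    """Class scores from H @ beta, softmaxed into [0,1]; the
    highest-scoring semantic label is the predicted motion mode (step 5-3)."""
    H = sigmoid(X @ W.T + b)
    raw = H @ beta
    e = np.exp(raw - raw.max(axis=1, keepdims=True))
    scores = e / e.sum(axis=1, keepdims=True)
    return [labels[i] for i in np.argmax(scores, axis=1)], scores

# Toy example: two well-separated "motion modes" in a 2-D feature space.
rng = np.random.default_rng(1)
Xf = np.vstack([rng.normal(0.0, 0.3, (20, 2)),    # e.g. "linear" features
                rng.normal(3.0, 0.3, (20, 2))])   # e.g. "turning" features
T = np.zeros((40, 2)); T[:20, 0] = 1.0; T[20:, 1] = 1.0   # one-hot labels
W = rng.normal(size=(8, 2)); b = rng.normal(size=8)       # random hidden nodes
beta = celm_fit(Xf, W, b, T)
preds, scores = celm_predict(Xf, W, b, beta, ["linear", "turning"])
```

In the patent's pipeline, W and b would come from the class-difference construction of claim 9 rather than being random as here.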

Description

ML-ELM-AE target motion mode identification method containing class information

Technical Field

The invention belongs to the technical field of situation cognition, and particularly relates to a ML-ELM-AE target movement pattern recognition method containing class information.

Background

Target movement pattern recognition is one of the core technologies in the field of situation cognition. It is generally based on target motion feature data acquired by sensors (kinematic parameters such as longitude, latitude, altitude, speed, and acceleration), and uses three stages — feature extraction, model training, and pattern matching — to achieve automatic classification of target motion patterns. It is widely applied in fields such as intelligent transportation and security monitoring. With the development of deep learning in this field, data-driven target motion pattern recognition based on target track sequences has gradually become mainstream. Such methods mainly adopt models such as the convolutional neural network (CNN), recurrent neural network (RNN), and long short-term memory network (LSTM), mining latent features in target time series data and applying classification to partition target motion patterns online. However, these methods train their models by gradient descent, which requires repeated iterative parameter adjustment to obtain the optimal weights; they therefore suffer from low training efficiency and long iteration cycles, and are prone to gradient explosion or gradient vanishing, resulting in poor model generalization that cannot meet the requirements of target motion pattern recognition in complex scenes.
In recent years, the Extreme Learning Machine (ELM) and the extreme learning machine autoencoder (ELM-AE) have addressed the training-efficiency problem of traditional deep learning models. As a single-hidden-layer neural network, the ELM randomly generates its input-to-hidden-layer weights and neuron bias terms during training, and with a reasonable choice of hidden-node count can quickly obtain the optimal output weights. It therefore offers higher learning efficiency and better generalization than models such as CNN and RNN, and is gradually being applied to target feature extraction and target motion pattern recognition. However, traditional ELM and ELM-AE models still have a clear technical shortcoming: the hidden-layer weights and biases are randomly generated and make no use of category information, making them difficult to adapt to complex and variable target motion pattern recognition scenarios. The closest existing technical scheme is a target maneuver classification method with a memory learning mechanism described in a doctoral dissertation on data- and knowledge-driven approaches. In its "motion semantic memory learning" stage of behavioral semantic perception of track segments, the semantic discrimination problem is converted into a track classification problem using the target's motion information: first the target's position and velocity information contained in the track segments is exploited, then a classifier assigns semantic categories (i.e., maneuver labels for the track segments) according to the motion mode, and finally target motion mode extraction is achieved.
The defects of this similar technical scheme are that it extracts only basic shallow features of target motion, performs no layer-by-layer mining of abstract features, and does not use data category information to optimize the classifier parameters; as a result, its classification accuracy and generalization capability are insufficient, and it cannot meet the requirements of target motion pattern recognition in complex scenes. The core requirement of target motion pattern recognition is to balance recognition efficiency and recognition accuracy. The prior art (both traditional deep learning models and traditional ELM-AE models) cannot satisfy this core requirement, and the technical defects and their causal reasoning are as follows: (1) Traditional deep learning models (CNN, RNN, LSTM) are trained by gradient descent and require repeated iterative parameter adjustment to obtain the optimal weights, so model training is slow and iteration cycles are long; gradient explosion or gradient vanishing easily occurs, model generalization is poor, and the target motion abstract characteristics cann