CN-122020547-A - Wearable movement intention recognition method based on bidirectional cross attention
Abstract
The invention relates to the technical field of human movement intention recognition and robotic exoskeletons, and discloses a wearable movement intention recognition method based on bidirectional cross attention, comprising the following steps: S1, multi-source data synchronization, in which a control module sends a synchronous trigger signal that triggers at least two types of signal acquisition modules to synchronously begin acquiring human-body-related signals; S2, data cleaning, in which the synchronized multi-source data are preprocessed. By constructing a cross-modal bidirectional cross-attention feature fusion module, the deep semantic association between bioelectric signals and kinematic signals is fully mined: the forward influence of bioelectric signals on the kinematic motion state is modeled, and the reverse modulation of bioelectric-signal changes by kinematic feedback is also considered. This overcomes the defect of the prior art, which relies on a single signal source or does not fully fuse the two types of signals, so that action-intention features can be extracted more comprehensively.
Inventors
- YU YANG
- DING FAN
- DONG LIN
- HUO BO
Assignees
- 首都体育学院 (Capital University of Physical Education and Sports)
Dates
- Publication Date: 2026-05-12
- Application Date: 2026-01-31
Claims (10)
- 1. A wearable movement intention recognition method based on bidirectional cross attention, characterized by comprising the following steps: S1, multi-source data synchronization: a control module sends a synchronous trigger signal, triggering at least two types of signal acquisition modules to synchronously begin acquiring human-body-related signals; S2, data cleaning: the synchronized multi-source data are preprocessed, the preprocessing comprising at least one of filtering, denoising, normalization, motion-intention category labeling, and window segmentation; S3, feature fusion: S31, extracting features from the preprocessed bioelectric signals and the preprocessed kinematic signals respectively, to obtain a bioelectric-signal feature sequence and a kinematic-signal feature sequence; S32, introducing temporal position codes into the bioelectric-signal feature sequence and the kinematic-signal feature sequence respectively, to obtain two position-coded feature sequences, wherein the temporal position codes are constructed from sine and cosine functions and represent the relative position relations among different time windows; S33, constructing a bidirectional cross-attention mechanism, comprising a forward cross-attention mechanism and a reverse cross-attention mechanism; S331, the forward cross-attention mechanism: taking the position-coded bioelectric-signal features as the query and the kinematic-signal features as the keys and values, and obtaining forward fusion features through projection, attention-weight calculation, and weighted summation; S332, the reverse cross-attention mechanism: taking the position-coded kinematic-signal features as the query and the bioelectric-signal features as the keys and values, and obtaining reverse fusion features through projection, attention-weight calculation, and weighted summation; S34, fusing the forward fusion features and the reverse fusion features to form a cross-modal joint feature; S4, motion intention recognition: inputting the cross-modal joint feature into an intention recognition module and outputting a motion-intention recognition result; and S5, model training: training the model with a total loss function comprising a classification loss and a modal cooperative loss, wherein the total loss function optimizes the cross-modal feature representation and the accuracy of motion-intention recognition, and the training process adopts a staged training strategy.
- 2. The wearable movement intention recognition method based on bidirectional cross attention as recited in claim 1, wherein the signal acquisition modules comprise a bioelectric-signal acquisition unit and a kinematic-signal acquisition unit, each acquisition unit continuously acquiring data at its own fixed sampling frequency, and the data synchronization stage unifying all signal-source data to the same sampling rate; the synchronous trigger signal is a TTL signal with a fixed voltage of 3 V to 5 V; the bioelectric-signal acquisition unit comprises a surface electromyography (sEMG) measurement instrument, and the kinematic-signal acquisition unit comprises at least one of an inertial measurement unit and a motor encoder; the common sampling rate is the highest sampling rate among the signal acquisition modules, and signal-source data sampled below the highest rate are unified by up-sampling.
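Claim 2's synchronization step unifies all sources to the highest sampling rate by up-sampling. A minimal numpy sketch of that idea using linear interpolation (the function name, rates, and signals are illustrative assumptions, not from the patent):

```python
import numpy as np

def upsample_to_rate(signal, src_rate, dst_rate):
    """Linearly interpolate a 1-D signal from src_rate (Hz) to dst_rate (Hz)."""
    n_src = len(signal)
    duration = n_src / src_rate
    n_dst = int(round(duration * dst_rate))
    t_src = np.arange(n_src) / src_rate
    t_dst = np.arange(n_dst) / dst_rate
    return np.interp(t_dst, t_src, signal)

# Example: bring a 100 Hz IMU channel up to a 1000 Hz sEMG rate.
imu = np.sin(2 * np.pi * np.arange(100) / 100.0)         # 1 s of 100 Hz data
imu_1k = upsample_to_rate(imu, src_rate=100, dst_rate=1000)
```

Linear interpolation is only one choice; band-limited resampling would serve the same synchronization role.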
- 3. The wearable movement intention recognition method based on bidirectional cross attention as recited in claim 2, wherein the preprocessing in step S2 comprises: applying Kalman filtering and standardization to data collected by the inertial measurement unit; performing format conversion on data collected by the motor encoder to obtain an angle signal and/or an angular-velocity signal; filtering and denoising the bioelectric signal with at least one of a notch filter, a low-pass filter, and a band-pass filter, followed by normalization; and completing motion-intention category labeling based on relevant characteristics of human actions.
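The denoise-then-normalize preprocessing of claim 3 can be sketched as follows; a plain moving-average smoother and z-score normalization are hedged stand-ins for the patent's notch/low-pass/band-pass and standardization stages:

```python
import numpy as np

def moving_average(x, k=5):
    """Simple FIR smoothing, standing in for the patent's filter stages."""
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")

def zscore(x):
    """Z-score normalization of a 1-D signal."""
    return (x - x.mean()) / (x.std() + 1e-8)

# Toy noisy bioelectric channel: sinusoid plus Gaussian noise.
rng = np.random.default_rng(0)
raw = np.sin(np.linspace(0, 4 * np.pi, 500)) + 0.3 * rng.standard_normal(500)
clean = zscore(moving_average(raw))
```

A real pipeline would use proper IIR notch and band-pass designs; the shape of the computation (filter, then normalize) is the point here.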
- 4. The wearable movement intention recognition method based on bidirectional cross attention as recited in claim 1, wherein in step S31, features are extracted by a sliding-window segmentation method, and T consecutive time windows are selected as one time-series input sample; with the dimension of the bioelectric-signal features extracted in each time window denoted $d_b$ and the feature dimension of the kinematic signal denoted $d_m$, the two corresponding modal feature sequences over the T consecutive windows are represented as $X_b = [x_b^1, \dots, x_b^T] \in \mathbb{R}^{T \times d_b}$ and $X_m = [x_m^1, \dots, x_m^T] \in \mathbb{R}^{T \times d_m}$, where $X_b$ denotes the bioelectric-signal feature sequence and $X_m$ denotes the kinematic-signal feature sequence.
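Claim 4's sliding-window feature extraction, turning a raw multi-channel recording into a $(T, d)$ feature sequence, might look like this minimal sketch (the window/hop sizes and the per-window mean/std features are illustrative assumptions; the patent does not specify them):

```python
import numpy as np

def window_features(signal, win, hop):
    """Split a (n_samples, n_channels) array into sliding windows and take
    simple per-window features (mean and std per channel)."""
    feats = []
    for start in range(0, signal.shape[0] - win + 1, hop):
        w = signal[start:start + win]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
    return np.stack(feats)

# Toy 4-channel sEMG recording, 1000 samples.
emg = np.random.default_rng(1).standard_normal((1000, 4))
X_b = window_features(emg, win=200, hop=100)   # -> (T, d_b) feature sequence
```

With 1000 samples, a 200-sample window, and a 100-sample hop, this yields T = 9 windows of d_b = 8 features each.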
- 5. The wearable movement intention recognition method based on bidirectional cross attention as recited in claim 4, wherein the time-position code in step S32 is defined as $PE(t, 2i) = \sin\big(t / 10000^{2i/d}\big)$ and $PE(t, 2i+1) = \cos\big(t / 10000^{2i/d}\big)$, where $t$ is the time-window index, $i$ is the dimension index, $d$ is the attention hidden-space dimension, and $T$ is the number of time windows; the position-coded bioelectric-signal feature sequence is $\tilde{X}_b = X_b + PE$, and the position-coded kinematic-signal feature sequence is $\tilde{X}_m = X_m + PE$.
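The sine/cosine time-position code of claim 5 matches the standard sinusoidal encoding; a small numpy version (assuming an even hidden dimension d):

```python
import numpy as np

def positional_encoding(T, d):
    """Sinusoidal time-position code: sin on even dims, cos on odd dims."""
    pe = np.zeros((T, d))
    pos = np.arange(T)[:, None]            # time-window index t
    i = np.arange(0, d, 2)[None, :]        # even dimension indices 2i
    angle = pos / np.power(10000.0, i / d)
    pe[:, 0::2] = np.sin(angle)
    pe[:, 1::2] = np.cos(angle)
    return pe

pe = positional_encoding(T=10, d=16)       # one code per time window
```

Because the code depends only on t and the dimension index, it can be precomputed once and added to both modal feature sequences.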
- 6. The wearable movement intention recognition method based on bidirectional cross attention as recited in claim 5, wherein the specific calculation of the forward cross-attention mechanism in step S331 is: $Q_b = \tilde{X}_b W_Q + b_Q$, $K_m = \tilde{X}_m W_K + b_K$, $V_m = \tilde{X}_m W_V + b_V$; the correlation of the bioelectric signal and the kinematic signal along the time dimension is computed to obtain the cross-attention weight matrix and the fused output: $A = \mathrm{softmax}\big(Q_b K_m^{\top} / \sqrt{d_a}\big)$, $F_{fwd} = A V_m$, where $Q_b$ is the bioelectric-signal query vector, $K_m$ is the kinematic-signal key vector, $V_m$ is the kinematic-signal value vector, $W_Q$, $W_K$, $W_V$ are weight matrices, $b_Q$, $b_K$, $b_V$ are bias vectors, $A$ is the attention weight, $F_{fwd}$ is the forward fusion feature, and $d_a$ is the attention hidden-space dimension.
- 7. The wearable movement intention recognition method based on bidirectional cross attention as recited in claim 6, wherein the specific calculation of the reverse cross-attention mechanism in step S332 is: $Q_m = \tilde{X}_m W'_Q + b'_Q$, $K_b = \tilde{X}_b W'_K + b'_K$, $V_b = \tilde{X}_b W'_V + b'_V$; the reverse cross attention is calculated and the fused output obtained: $A' = \mathrm{softmax}\big(Q_m K_b^{\top} / \sqrt{d_a}\big)$, $F_{rev} = A' V_b$, where $Q_m$ is the kinematic-signal query vector, $K_b$ is the bioelectric-signal key vector, $V_b$ is the bioelectric-signal value vector, $W'_Q$, $W'_K$, $W'_V$ are weight matrices, $b'_Q$, $b'_K$, $b'_V$ are bias vectors, $F_{rev}$ is the reverse fusion feature, and $d_a$ is the attention hidden-space dimension.
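Claims 6 and 7 together describe scaled dot-product cross attention run in both directions, with the outputs combined as in step S34. A self-contained numpy sketch (random matrices stand in for the learned projections; all dimensions are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_seq, kv_seq, d_a, rng):
    """One cross-attention direction: project, score, weight, sum."""
    Wq = rng.standard_normal((query_seq.shape[1], d_a)) * 0.1
    bq = rng.standard_normal(d_a) * 0.1
    Wk = rng.standard_normal((kv_seq.shape[1], d_a)) * 0.1
    bk = rng.standard_normal(d_a) * 0.1
    Wv = rng.standard_normal((kv_seq.shape[1], d_a)) * 0.1
    bv = rng.standard_normal(d_a) * 0.1
    Q, K, V = query_seq @ Wq + bq, kv_seq @ Wk + bk, kv_seq @ Wv + bv
    A = softmax(Q @ K.T / np.sqrt(d_a))      # (T, T) attention weights
    return A @ V, A

rng = np.random.default_rng(0)
Xb = rng.standard_normal((10, 16))           # bioelectric features (T, d_b)
Xm = rng.standard_normal((10, 12))           # kinematic features  (T, d_m)
F_fwd, A_fwd = cross_attention(Xb, Xm, d_a=32, rng=rng)  # forward: bio queries kin
F_rev, A_rev = cross_attention(Xm, Xb, d_a=32, rng=rng)  # reverse: kin queries bio
joint = np.concatenate([F_fwd, F_rev], axis=-1)          # S34 by concatenation
```

Each attention row is a probability distribution over the other modality's time windows, which is what lets one modality's state at time t attend to the full history of the other.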
- 8. The wearable movement intention recognition method based on bidirectional cross attention as recited in claim 1, wherein the fusion processing in step S34 comprises feature concatenation or weighted fusion; the intention recognition module comprises at least one of a fully connected neural network and a long short-term memory (LSTM) network; and the movement intention includes one or more of sitting, standing, walking, ascending stairs, descending stairs, ascending slopes, descending slopes, and sit-to-stand transitions.
- 9. The wearable movement intention recognition method based on bidirectional cross attention as recited in claim 1, wherein in step S5 the total loss function is $L_{total} = L_{cls} + \lambda L_{con}$, where $\lambda$ is a weight coefficient; $L_{cls}$ is the main classification loss, used to directly optimize classification accuracy by penalizing the deviation between the predicted probabilities and the true labels, of the specific form $L_{cls} = -\frac{1}{N} \sum_{n=1}^{N} \sum_{c=1}^{C} w_c\, y_{n,c} \log \hat{y}_{n,c}$, where $C$ is the number of movement-intention categories, $N$ is the total number of samples, $N_t$ is the number of samples in the transition phase, $w_c$ is the category weight used to address the sample-imbalance problem, $y_{n,c}$ is the true label, and $\hat{y}_{n,c}$ is the class probability predicted by the model; the modal cooperative consistency loss function is $L_{con} = 1 - \frac{1}{T} \sum_{t=1}^{T} \frac{\sum\big(h_b^t \odot h_m^t\big)}{\lVert h_b^t \rVert\, \lVert h_m^t \rVert}$, where $h_b^t$ and $h_m^t$ are respectively the bioelectric-signal and kinematic-signal feature vectors at time step $t$ after attention fusion, $\odot$ denotes element-wise multiplication, and $T$ is the time-window length; and the staged training strategy is to first freeze the network branches related to the kinematic signals and train the parameters related to the bioelectric signals and the cross attention, then unfreeze the kinematic-signal branches and jointly fine-tune all model parameters.
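The total loss of claim 9, a classification term plus a weighted modal-consistency term, can be sketched in numpy. The cosine-style consistency term is a reconstruction from the claim's description (element-wise product, per-time-step comparison), not necessarily the patent's exact formula, and all data below are toy values:

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_w):
    """Classification loss L_cls: class-weighted negative log-likelihood."""
    n = np.arange(len(labels))
    return -np.mean(class_w[labels] * np.log(probs[n, labels] + 1e-12))

def modal_consistency_loss(Hb, Hm):
    """Modal cooperative consistency loss: 1 minus the mean cosine agreement
    between fused bioelectric and kinematic features over T time steps."""
    num = (Hb * Hm).sum(axis=1)                       # element-wise product, summed
    den = np.linalg.norm(Hb, axis=1) * np.linalg.norm(Hm, axis=1) + 1e-12
    return 1.0 - np.mean(num / den)                   # 0 when perfectly aligned

# Toy data: 2 samples, 3 intention classes, equal class weights.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])
L_cls = weighted_cross_entropy(probs, labels, class_w=np.ones(3))

Hb = np.random.default_rng(0).standard_normal((10, 32))
L_con = modal_consistency_loss(Hb, Hb.copy())         # identical features -> ~0
L_total = L_cls + 0.5 * L_con                         # lambda = 0.5 (illustrative)
```

Driving the consistency term to zero pushes the two fused modal representations toward agreement at every time step, which is the "modal cooperation" the total loss is meant to optimize.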
- 10. A wearable movement intention recognition system based on bidirectional cross attention, for implementing the wearable movement intention recognition method of any one of claims 1-9, characterized by comprising a wearable exoskeleton body, at least two types of signal acquisition modules, a control module, and a processing module; the signal acquisition modules comprise a bioelectric-signal acquisition module and a kinematic-signal acquisition module, for acquiring human-body-related signals; the control module is used to send the synchronous trigger signal and control the signal acquisition modules to synchronously acquire data; and the processing module is used to execute the steps of the bidirectional-cross-attention wearable movement intention recognition method, so as to realize movement intention recognition.
Description
Wearable movement intention recognition method based on bidirectional cross attention

Technical Field

The invention belongs to the technical field of human movement intention recognition and robotic exoskeletons, and particularly relates to a wearable movement intention recognition method based on bidirectional cross attention.

Background

In the field of human movement intention recognition and robotic exoskeletons, wearable exoskeleton equipment must accurately recognize the various motion states of the human body and provide data support for rehabilitation assistance and daily-activity assistance; one of the core technologies is the effectiveness of the movement-intention recognition method. Related technical solutions have been applied in this field, for example the solution proposed in CN113011458A, "Load-driven exoskeleton human motion intention recognition method and exoskeleton system". That prior solution obtains foot GCF (ground contact force) signals and IMU signals through an exoskeleton sensing system. First, based on the periodicity of the signal features, human action states are divided into aperiodic activities (sitting, standing) and periodic activities (running, walking, going up and down stairs). For aperiodic activities, the knee-joint angles of both legs are calculated from the IMU signals, and the difference between the knee-joint angles when sitting and standing is used to distinguish them. For periodic activities, whether both feet are in support is judged from the foot GCF signals to distinguish running from the other activities; a fuzzy inference system is then adopted to identify walking, ascending stairs, or descending stairs according to the knee-joint angles at heel strike or toe strike in the foot GCF signals. However, the prior art has significant disadvantages.
The division of human action states depends mainly on the periodicity of the exoskeleton sensor signals; after the periodicity division, further judgment is made based only on a single signal source. The features of the aperiodic and periodic signals within the same state mode are not fully fused, and the characteristics of the two types of signals across different human action modes are not fully considered. The key reason for this disadvantage is that the guiding idea of the prior art is limited to judging the motion state according to whether different human motion states are periodic, and then judging by a single signal source; the lack of full mining and use of multi-source signal information in each motion state limits the accuracy and comprehensiveness of motion intention recognition, especially when intention changes abruptly during motion transition phases and is difficult to capture accurately.

Disclosure of Invention

Aiming at the defects of the prior art, the invention provides a wearable movement intention recognition method based on bidirectional cross attention, so as to solve the problems described in the background.
In order to achieve the above purpose, the invention provides the following technical scheme: a wearable movement intention recognition method based on bidirectional cross attention, comprising the following steps: S1, multi-source data synchronization: a control module sends a synchronous trigger signal, triggering at least two types of signal acquisition modules to synchronously begin acquiring human-body-related signals; S2, data cleaning: the synchronized multi-source data are preprocessed, the preprocessing comprising at least one of filtering, denoising, normalization, motion-intention category labeling, and window segmentation; S3, feature fusion: S31, extracting features from the preprocessed bioelectric signals and the preprocessed kinematic signals respectively, to obtain a bioelectric-signal feature sequence and a kinematic-signal feature sequence; S32, introducing temporal position codes into the bioelectric-signal feature sequence and the kinematic-signal feature sequence respectively, to obtain two position-coded feature sequences, wherein the temporal position codes are constructed from sine and cosine functions and represent the relative position relations among different time windows; S33, constructing a bidirectional cross-attention mechanism, comprising a forward cross-attention mechanism and a reverse cross-attention mechanism; S331, the forward cross-attention mechanism: taking the position-coded bioelectric-signal features as the query and the kinematic-signal features as the keys and values, and obtaining forward fusion features through projection, attention-weight calculation