
CN-121997166-A - Brain electrical signal motor imagery decoding method based on two-way second-order feature fusion

CN 121997166 A

Abstract

To address the challenges of low signal-to-noise ratio, non-stationarity, and cross-subject distribution differences in motor imagery electroencephalogram (EEG) signals, the method enhances robustness through data preprocessing, extracts multi-scale spatio-temporal features with a multi-branch graph convolution network, and separately constructs global and local second-order feature structures. The global second-order features capture overall rhythmic patterns through position encoding and long-range temporal modeling, while the local second-order features capture instantaneous dynamic changes through sliding-window slicing and aggregation. Finally, a gating fusion strategy adaptively integrates the two feature paths in the log-Euclidean space, and a classifier decodes the motor imagery category. By combining multi-scale spatio-temporal feature extraction with two-way second-order structural modeling, the method markedly improves decoding stability and generalization capability, and is suitable for scenarios such as real-time brain-computer interface control, neural rehabilitation, and intelligent assistive devices.

Inventors

  • ZHOU QIANWEI
  • CHEN ZHITAO
  • WU SHIQIANG
  • HU HAIGEN

Assignees

  • Zhejiang University of Technology (浙江工业大学)

Dates

Publication Date
2026-05-08
Application Date
2025-12-31

Claims (9)

  1. An electroencephalogram (EEG) motor imagery decoding method based on two-way second-order feature fusion, characterized by comprising the following steps: (1) performing random cropping and random time-segment zeroing on the raw EEG signal to enhance the model's robustness to local signal loss, then performing band-pass filtering and channel normalization to form training samples; (2) feeding the preprocessed EEG samples into a multi-branch graph convolution network, extracting multi-scale spatio-temporal embedded features through convolution kernels of different time scales, and fusing them into a unified spatio-temporal feature representation; (3) adding a learnable position encoding to the spatio-temporal embedded features, performing long-range temporal modeling, and obtaining global second-order features through covariance estimation and log-Euclidean mapping to describe the overall rhythmic structure; (4) slicing the spatio-temporal embedded features with sliding windows, computing a covariance matrix for each local window, applying the logarithmic mapping, and forming local second-order features through an aggregation module to capture short-period rhythm changes; (5) second-order feature fusion and classification decoding: integrating the global and local second-order features in the log-Euclidean space with a gating fusion strategy, and outputting the motor imagery category through a fully connected layer and a Softmax classifier.
  2. The EEG motor imagery decoding method based on two-way second-order feature fusion according to claim 1, characterized in that the specific operations of step (1) are: S101, random cropping: cut fixed-length segments from the continuous EEG signal; S102, random time-segment zeroing: randomly select a starting point and a length within the cropped signal and set that data to zero; S103, band-pass filtering: filter out irrelevant frequency bands; S104, channel normalization: compute the mean and standard deviation along the time dimension and apply Z-score normalization.
  3. The EEG motor imagery decoding method based on two-way second-order feature fusion according to claim 1, characterized in that the specific operations of step (2) are: S201, shared graph convolution: extract spatially correlated features using Chebyshev polynomial graph convolution; S202, multi-scale temporal convolution: use convolution kernels of different lengths to extract short-, medium-, and long-range temporal patterns in parallel; S203, feature fusion: apply batch normalization and an activation function to the multi-branch outputs and concatenate them along the feature dimension.
  4. The EEG motor imagery decoding method based on two-way second-order feature fusion according to claim 1, characterized in that the specific operations of step (3) are: S301, position-aware encoding: add a learnable position embedding to the spatio-temporal embedded features; S302, long-range temporal modeling: use a large convolution kernel to cover temporal dependencies across the whole sample window; S303, covariance stabilization: introduce a shrinkage term to enhance the robustness of the covariance matrix; S304, log-Euclidean mapping: map the covariance matrix into the symmetric matrix space.
  5. The EEG motor imagery decoding method based on two-way second-order feature fusion according to claim 1, characterized in that the specific operations of step (4) are: S401, sliding-window slicing: generate local segments along the time dimension according to a preset window length and stride; S402, local covariance: compute a local covariance matrix for each window and apply stabilization; S403, logarithmic mapping: convert each local covariance matrix into a symmetric matrix; S404, feature aggregation: aggregate the multiple local representations through a weighted average or an attention mechanism.
  6. The EEG motor imagery decoding method based on two-way second-order feature fusion according to claim 1, characterized in that the second-order feature fusion in step (5) adopts a gating fusion strategy that adaptively adjusts the fusion ratio of the global and local second-order features through a learnable coefficient β, with the fusion formula F = β·G2 + (1 − β)·L2, where β may be a scalar or a vector, G2 is the global second-order feature, and L2 is the local second-order feature.
  7. The EEG motor imagery decoding method based on two-way second-order feature fusion according to claim 1, characterized in that the classification decoding in step (5) comprises flattening the fused symmetric matrix into a feature vector, mapping it through a fully connected layer, and outputting class probabilities through Softmax.
  8. The EEG motor imagery decoding method based on two-way second-order feature fusion according to claim 1, characterized in that the temporal convolution kernel lengths in the multi-branch graph convolution network can be adaptively adjusted according to the EEG sampling rate.
  9. The EEG motor imagery decoding method based on two-way second-order feature fusion according to claim 4 or 5, characterized in that the shrinkage parameter λ in the covariance stabilization ranges from 0 to 1 and controls the degree of noise suppression.
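As an informal illustration of the preprocessing in claim 2 (not the patent's implementation), the four steps S101-S104 can be sketched in NumPy as follows; the crop length, band edges, zeroed fraction, and the FFT-mask band-pass (a stand-in for whatever filter the patent intends) are all illustrative assumptions:

```python
import numpy as np

def preprocess(eeg, fs=250, crop_len=1000, band=(4.0, 40.0), zero_frac=0.1, rng=None):
    """Sketch of claim 2 for one EEG trial of shape (channels, time).

    S101 random crop -> S102 random time-segment zeroing ->
    S103 band-pass -> S104 per-channel Z-score normalization.
    All default values here are illustrative, not from the patent.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_ch, n_t = eeg.shape

    # S101: random fixed-length crop along the time axis
    start = rng.integers(0, n_t - crop_len + 1)
    x = eeg[:, start:start + crop_len].copy()

    # S102: zero a randomly placed segment (robustness to local signal loss)
    seg_len = int(zero_frac * crop_len)
    z0 = rng.integers(0, crop_len - seg_len + 1)
    x[:, z0:z0 + seg_len] = 0.0

    # S103: simple FFT band-pass keeping only the task-relevant band
    freqs = np.fft.rfftfreq(crop_len, d=1.0 / fs)
    spec = np.fft.rfft(x, axis=-1)
    spec[:, (freqs < band[0]) | (freqs > band[1])] = 0.0
    x = np.fft.irfft(spec, n=crop_len, axis=-1)

    # S104: per-channel Z-score over the time dimension
    mu = x.mean(axis=-1, keepdims=True)
    sd = x.std(axis=-1, keepdims=True) + 1e-8
    return (x - mu) / sd
```

With mu/beta motor rhythms in mind, a band of roughly 4-40 Hz is a common choice, but the patent does not specify one.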
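The second-order machinery shared by claims 4, 5, and 9 (shrunk covariance, log-Euclidean mapping, sliding windows) can likewise be sketched; this is a minimal NumPy version, with an unweighted mean standing in for the patent's weighted-average/attention aggregation:

```python
import numpy as np

def log_euclidean_cov(feats, lam=0.1):
    """Shrunk covariance + matrix logarithm (S303-S304 / S402-S403).

    feats: (d, t) spatio-temporal embedding for one window.
    lam:   shrinkage parameter in (0, 1) controlling noise suppression (claim 9).
    Returns the matrix log of the shrunk covariance, a symmetric (d, d) matrix.
    """
    d, t = feats.shape
    x = feats - feats.mean(axis=-1, keepdims=True)
    cov = x @ x.T / (t - 1)

    # Shrink toward a scaled identity so the matrix stays well-conditioned
    cov = (1.0 - lam) * cov + lam * (np.trace(cov) / d) * np.eye(d)

    # Log-Euclidean mapping via eigendecomposition of the SPD matrix
    w, v = np.linalg.eigh(cov)
    return (v * np.log(w)) @ v.T

def local_second_order(feats, win, step, lam=0.1):
    """Sliding-window slicing (S401) plus per-window log-covariances,
    aggregated here by a plain mean (a stand-in for S404's weighted
    average or attention)."""
    d, t = feats.shape
    mats = [log_euclidean_cov(feats[:, s:s + win], lam)
            for s in range(0, t - win + 1, step)]
    return np.mean(mats, axis=0)
```

The global branch of claim 4 would call `log_euclidean_cov` once on the whole window after temporal modeling; the local branch of claim 5 calls `local_second_order` with a chosen window length and stride.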
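The gated fusion of claim 6 and the flatten/fully-connected/Softmax decoding of claim 7 combine into a short final stage; the sketch below uses a scalar β (the claim also allows a vector) and illustrative fully-connected parameters `W`, `b` that are not specified by the patent:

```python
import numpy as np

def gated_fuse_and_classify(G2, L2, beta, W, b):
    """Gated two-way fusion (claim 6) plus classification decoding (claim 7).

    G2, L2: global/local second-order features, symmetric (d, d) matrices
            already in the log-Euclidean space.
    beta:   learnable gating coefficient in [0, 1] (scalar here).
    W, b:   illustrative FC-layer parameters, shapes (d*d, n_classes), (n_classes,).
    """
    # F = beta * G2 + (1 - beta) * L2 -- convex combination of the two paths
    fused = beta * G2 + (1.0 - beta) * L2

    # Flatten the symmetric matrix into a feature vector and apply the FC layer
    logits = fused.reshape(-1) @ W + b

    # Softmax over motor imagery classes (numerically stabilized)
    e = np.exp(logits - logits.max())
    return e / e.sum()
```

Note that with β = 1 the decoder reduces to the global branch alone, and with β = 0 to the local branch, which is what makes the learnable gate an adaptive trade-off between overall rhythm structure and short-period dynamics.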

Description

Brain electrical signal motor imagery decoding method based on two-way second-order feature fusion

Technical Field

The invention belongs to the technical field of brain-computer interfaces for EEG signal decoding and motor control, and particularly relates to an EEG motor imagery decoding method based on two-way second-order feature fusion.

Background

A brain-computer interface (BCI) enables direct interaction between a person and external devices by analyzing brain electrical activity, and has important application value in fields such as neural rehabilitation, intelligent assistive control, and human-machine interaction. As the mainstream non-invasive acquisition modality, electroencephalography (EEG) offers low cost, high temporal resolution, and convenient acquisition, making it especially suitable for real-time monitoring and continuous interaction, so EEG has long played a key role in medical rehabilitation, intelligent control, and wearable devices. Among EEG paradigms, motor imagery (MI) attracts particular interest because it induces stable cortical rhythm changes without external stimuli or actual movement. By mentally simulating limb actions, MI activates the mu and beta rhythms of the sensorimotor cortex and exhibits the typical ERD/ERS pattern; its high consistency with real movement gives it good physiological interpretability and user operability, so MI is widely applied in rehabilitation training, prosthesis and exoskeleton control, brain-controlled wheelchairs, virtual interaction, and similar scenarios. Against this background, accurate decoding of MI-EEG signals is critical to the practical deployment of BCI technology.
Decoding performance directly determines the recognition accuracy of brain-control commands and the response quality of the system, and is essential to the stability of rehabilitation feedback and intelligent control systems. However, MI-EEG signals exhibit low signal-to-noise ratio, pronounced non-stationarity, and cross-subject differences; they are often contaminated by electromyographic and ocular artifacts, the feature distribution of the same subject drifts across sessions as the subject's state changes, and physiological differences produce marked distribution shifts between subjects. These factors make it difficult for conventional models that rely on fixed spatio-temporal structures or single statistical features to maintain stable performance in real scenarios, limiting the generalization ability and reliability of MI decoding. A technical solution addressing noise interference, cross-session drift, and cross-subject distribution differences in motor imagery EEG signals is therefore urgently needed.

Disclosure of Invention

In view of these problems, the invention aims to provide an EEG motor imagery decoding method based on two-way second-order feature fusion.
The method comprises the following steps: (1) construct the training framework and preprocess the raw EEG data, randomly cropping the signal and zeroing a data segment from a randomly chosen starting point to enhance the model's robustness to local signal loss; (2) extract multi-scale spatio-temporal features of the EEG data with a multi-branch graph convolution network to obtain a spatio-temporal embedded representation for higher-order statistical modeling; (3) construct the global second-order feature branch: apply position encoding and temporal modeling to the spatio-temporal embedded features, and obtain the global second-order structure through covariance estimation and log-Euclidean mapping; (4) construct the local second-order feature branch: slice the features with sliding windows, compute covariance and logarithmic mapping for each window, and form the local second-order structure through an aggregation module; (5) gate-fuse the global and local second-order features in the log-Euclidean space and output the motion category through a classifier. The specific technical scheme is as follows: an EEG motor imagery decoding method based on two-way second-order feature fusion comprises the following steps: (1) performing random cropping and random time-segment zeroing on the raw EEG signal to enhance the model's robustness to local signal loss, then performing band-pass filtering and channel normalization to form training samples; (2) feeding the preprocessed EEG samples into a multi-branch graph convolution network, extracting multi-scale spatio-temporal embedded features through convolution kernels of different time scales, and fusing them into a unified spatio-temporal feature representation