
CN-121326139-B - Method, device and equipment for issuing language-motion united imagination brain-computer interface instruction

CN121326139B

Abstract

The invention provides a method, a device, and equipment for issuing brain-computer interface commands under language-motion joint imagination. The method comprises: acquiring magnetoencephalography (MEG) data of a user performing a language-motion joint imagination task, inputting the MEG data into a language-motion joint decoding model to obtain a motion intention result, and issuing a brain-computer interface command based on that result. An adaptive position-encoding module in the model recognizes the cognitive state in the MEG data and extracts motor imagery soft boundary features, semantic prompt soft boundary features, and joint soft boundary features, respectively; these features provide accurate boundary information and help the model distinguish different imagination tasks. A motor imagery decoding module in the model extracts time-frequency features using the motor imagery soft boundary features, and a semantic decoding module extracts a discrete semantic coding sequence based on the semantic prompt soft boundary features. A joint decoding module combines these features to perform intention decoding, significantly improving both the separability of different imagination tasks and the accuracy of intention decoding.
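As an illustration only (not part of the patent), the stage recognition described above can be sketched as a per-timestep softmax over the three cognitive stages (semantic prompt, motor imagery, joint synchronization), yielding the first, second, and third probability weights that serve as the soft boundary feature weights. The `stage_logits` values below are hypothetical stand-ins for whatever scores a trained recognizer would emit.

```python
import numpy as np

def soft_boundary_weights(stage_logits):
    """Turn per-timestep stage scores of shape (T, 3) into probability weights.

    Column 0 -> semantic-prompt weight, column 1 -> motor-imagery weight,
    column 2 -> joint-synchronization weight. Each row sums to 1, giving
    'soft boundaries' between task stages instead of hard cut points.
    """
    z = stage_logits - stage_logits.max(axis=1, keepdims=True)  # numeric stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy example: 6 timesteps where the dominant stage drifts from prompt
# to motor imagery to joint synchronization.
logits = np.array([[3., 0., 0.],
                   [2., 1., 0.],
                   [1., 2., 0.],
                   [0., 3., 1.],
                   [0., 1., 2.],
                   [0., 1., 3.]])
w = soft_boundary_weights(logits)
```

Because the weights vary smoothly over time, a feature weighted by one column fades in and out rather than switching abruptly at a fixed boundary, which is the intuition behind the "soft boundary" terminology.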

Inventors

  • QIU SHUANG
  • ZHANG CHUNCHENG
  • HE HUIGUANG
  • WANG FAN
  • ZHUO YAN

Assignees

  • Institute of Automation, Chinese Academy of Sciences
  • Institute of Biophysics, Chinese Academy of Sciences

Dates

Publication Date
2026-05-05
Application Date
2025-08-22

Claims (8)

  1. A language-motion joint imagination brain-computer interface instruction issuing method, characterized by comprising the following steps: acquiring magnetoencephalography data of a user under a language-motion joint imagination task; inputting the magnetoencephalography data into a language-motion joint decoding model to obtain a motion intention result output by the language-motion joint decoding model; and issuing a brain-computer interface instruction based on the motion intention result; wherein the language-motion joint decoding model comprises an adaptive position-encoding module, a motor imagery decoding module, a semantic decoding module, and a joint decoding module; the adaptive position-encoding module is used for recognizing cognitive states in the magnetoencephalography data and extracting a motor imagery soft boundary feature, a semantic prompt soft boundary feature, and a joint soft boundary feature, respectively; the motor imagery decoding module is used for extracting motor imagery time-frequency features from the magnetoencephalography data based on the motor imagery soft boundary feature; the semantic decoding module is used for extracting, from the magnetoencephalography data and based on the semantic prompt soft boundary feature, discrete semantic coding sequence features related to the semantic instructions in the language-motion joint imagination task; and the joint decoding module is used for performing intention decoding on the motor imagery time-frequency features and the discrete semantic coding sequence features based on the joint soft boundary feature to obtain the motion intention result; recognizing the cognitive state of the magnetoencephalography data by the adaptive position-encoding module yields a first probability weight indicating a prompting stage, used as the feature weight of the semantic prompt soft boundary feature, a second probability weight indicating a motor imagery stage, used as the feature weight of the motor imagery soft boundary feature, and a third probability weight indicating a joint synchronization stage, used as the feature weight of the joint soft boundary feature; and determining the discrete semantic coding sequence features comprises the following steps: performing channel fusion and frequency-band extraction on the magnetoencephalography data, respectively, to obtain a channel fusion feature and a second frequency-band feature; performing probability weighting with the feature weight of the semantic prompt soft boundary feature to obtain second time-sequence position information; inputting the channel fusion feature, the second frequency-band feature, and the second time-sequence position information into a semantic attention module to obtain a pseudo time-sequence feature output by the semantic attention module; performing sparse representation compression on the semantic space of the language-motion joint imagination task to obtain a quantization space comprising a preset number of feature codes; and inputting the pseudo time-sequence feature into a variational autoencoder module that performs quantization using the quantization space to obtain a quantized feature, and decoding the quantized feature based on the second time-sequence position information to obtain the discrete semantic coding sequence features.
  2. The language-motion joint imagination brain-computer interface instruction issuing method according to claim 1, characterized in that determining the motor imagery time-frequency features comprises: performing frequency-band extraction on the magnetoencephalography data to obtain a first frequency-band feature; performing probability weighting with the feature weight of the motor imagery soft boundary feature to obtain first time-sequence position information; and inputting the first frequency-band feature and the first time-sequence position information into a time-frequency attention module to obtain the motor imagery time-frequency features output by the time-frequency attention module.
  3. The language-motion joint imagination brain-computer interface instruction issuing method according to any one of claims 1 to 2, wherein performing intention decoding on the motor imagery time-frequency features and the discrete semantic coding sequence features based on the joint soft boundary feature to obtain the motion intention result comprises: performing time alignment on the motor imagery time-frequency features and the discrete semantic coding sequence features according to the joint soft boundary feature to obtain an aligned motor imagery feature and an aligned semantic feature; performing nonlinear mapping on the aligned motor imagery feature and the aligned semantic feature to obtain nonlinear mapping features, and performing a tensor generation operation on the nonlinear mapping features to obtain a three-dimensional target feature, wherein the three-dimensional target feature comprises the aligned motor imagery feature, the aligned semantic feature, and a time feature; merging the aligned motor imagery feature dimension and the aligned semantic feature dimension of the three-dimensional target feature to obtain a two-dimensional target feature, wherein the two-dimensional target feature retains the dimension of the time feature; and performing intention decoding on the two-dimensional target feature to obtain the motion intention result.
  4. The language-motion joint imagination brain-computer interface instruction issuing method according to any one of claims 1 to 2, characterized in that acquiring the magnetoencephalography data of the user under the language-motion joint imagination task comprises: acquiring original magnetoencephalography data using an atomic magnetometer array; and preprocessing the original magnetoencephalography data to obtain the magnetoencephalography data, wherein the preprocessing comprises at least one of reference-sensor-based adaptive noise cancellation, band-pass filtering, power-line interference removal, and independent component analysis.
  5. The language-motion joint imagination brain-computer interface instruction issuing method according to any one of claims 1 to 2, characterized in that training the language-motion joint decoding model comprises: determining an initial language-motion joint decoding model, and acquiring sample magnetoencephalography data and motion intention labels, wherein the sample magnetoencephalography data are acquired offline based on action prompts corresponding to a motor imagery experimental paradigm, the motor imagery experimental paradigm is the imagination of basic movement directions in three-dimensional space, and the basic movement directions comprise pushing forward, pulling backward, moving left, moving right, lifting up, and pressing down; inputting the sample magnetoencephalography data into the initial language-motion joint decoding model for intention decoding to obtain a motion intention prediction result output by the initial model; and determining a classification loss based on the difference between the motion intention prediction result and the motion intention label, and training the initial language-motion joint decoding model based on the classification loss to obtain the language-motion joint decoding model.
  6. A language-motion joint imagination brain-computer interface instruction issuing device, characterized by comprising: an acquisition unit for acquiring magnetoencephalography data of a user under a language-motion joint imagination task; a decoding unit for inputting the magnetoencephalography data into a language-motion joint decoding model to obtain a motion intention result output by the language-motion joint decoding model; and an instruction issuing unit for issuing a brain-computer interface instruction based on the motion intention result; wherein the language-motion joint decoding model comprises an adaptive position-encoding module, a motor imagery decoding module, a semantic decoding module, and a joint decoding module; the adaptive position-encoding module is used for recognizing cognitive states in the magnetoencephalography data and extracting a motor imagery soft boundary feature, a semantic prompt soft boundary feature, and a joint soft boundary feature, respectively; the motor imagery decoding module is used for extracting motor imagery time-frequency features from the magnetoencephalography data based on the motor imagery soft boundary feature; the semantic decoding module is used for extracting, from the magnetoencephalography data and based on the semantic prompt soft boundary feature, discrete semantic coding sequence features related to the semantic instructions in the language-motion joint imagination task; and the joint decoding module is used for performing intention decoding on the motor imagery time-frequency features and the discrete semantic coding sequence features based on the joint soft boundary feature to obtain the motion intention result; recognizing the cognitive state of the magnetoencephalography data by the adaptive position-encoding module yields a first probability weight indicating a prompting stage, used as the feature weight of the semantic prompt soft boundary feature, a second probability weight indicating a motor imagery stage, used as the feature weight of the motor imagery soft boundary feature, and a third probability weight indicating a joint synchronization stage, used as the feature weight of the joint soft boundary feature; and the device further comprises a determining unit specifically configured to: perform channel fusion and frequency-band extraction on the magnetoencephalography data, respectively, to obtain a channel fusion feature and a second frequency-band feature; perform probability weighting with the feature weight of the semantic prompt soft boundary feature to obtain second time-sequence position information; input the channel fusion feature, the second frequency-band feature, and the second time-sequence position information into a semantic attention module to obtain a pseudo time-sequence feature output by the semantic attention module; perform sparse representation compression on the semantic space of the language-motion joint imagination task to obtain a quantization space comprising a preset number of feature codes; and input the pseudo time-sequence feature into a variational autoencoder module that performs quantization using the quantization space to obtain a quantized feature, and decode the quantized feature based on the second time-sequence position information to obtain the discrete semantic coding sequence features.
  7. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the language-motion joint imagination brain-computer interface instruction issuing method according to any one of claims 1 to 5.
  8. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the language-motion joint imagination brain-computer interface instruction issuing method according to any one of claims 1 to 5.
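To make the claimed pipeline concrete, here is a minimal toy sketch, not the patented implementation: frequency-band extraction stands in for the motor imagery decoding module, windowed features quantized against a codebook stand in for the discrete semantic coding sequence, and a simple concatenation plus linear readout stands in for joint decoding over the six basic movement directions. All data, the codebook, and the readout weights are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                                   # toy sampling rate, 1 s trial
x = rng.standard_normal((8, fs))           # 8 synthetic MEG channels

def band_power(sig, fs, lo, hi):
    """Per-channel power in [lo, hi] Hz via an FFT mask, a stand-in for
    the motor imagery module's frequency-band extraction."""
    freqs = np.fft.rfftfreq(sig.shape[-1], d=1.0 / fs)
    spec = np.abs(np.fft.rfft(sig, axis=-1)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return spec[:, band].mean(axis=-1)

def discrete_semantic_sequence(sig, codebook, win=50):
    """Windowed channel means mapped to nearest codebook entries, a crude
    stand-in for the quantization step that yields the discrete sequence."""
    n_win = sig.shape[-1] // win
    seq = sig[:, :n_win * win].reshape(sig.shape[0], n_win, win).mean(-1).T
    dist = ((seq[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dist.argmin(axis=1)             # (n_win,) discrete codes

mi_feat = band_power(x, fs, 8, 30)         # mu/beta-range motor feature, shape (8,)
codebook = rng.standard_normal((16, 8))    # 16 hypothetical semantic codes
codes = discrete_semantic_sequence(x, codebook)

# Joint decoding stand-in: fuse both feature streams and score the six
# basic directions (push/pull/left/right/lift/press) with a linear readout.
joint = np.concatenate([mi_feat, codebook[codes].mean(axis=0)])
readout = rng.standard_normal((6, joint.size))
intent = int((readout @ joint).argmax())
directions = ["push", "pull", "left", "right", "lift", "press"]
print(directions[intent])
```

In the actual invention the soft boundary weights gate which time samples contribute to each branch, and the quantization space is learned by a variational autoencoder rather than drawn at random; this sketch only shows how the two feature streams and the final classifier fit together.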

Description

Method, device and equipment for issuing language-motion united imagination brain-computer interface instruction

Technical Field

The invention relates to the technical field of brain-computer interfaces, and in particular to a method, a device, and equipment for issuing language-motion joint imagination brain-computer interface instructions.

Background

Brain-computer interface (BCI) technology provides an important means of functional compensation for patients with motor disorders by decoding brain neural activity to achieve direct control of external devices. Brain-computer interface systems based on motor imagery have unique advantages in rehabilitation, medical treatment, and similar fields, because they require no external stimulation and reflect the user's autonomous intent. Current mainstream non-invasive brain-computer interfaces rely mainly on electroencephalography (EEG) acquisition, which offers high temporal resolution and portable equipment, but is limited by the volume conduction effect of scalp tissue: spatial resolution is usually only at the centimeter level, making fine motor intentions difficult to distinguish. In particular, for imagination tasks involving multi-joint coordinated movement, the classification accuracy of traditional EEG-BCI systems is limited by the phenomenon of cognitive dissociation, which severely restricts their application in complex control scenarios. Magnetoencephalography (MEG) achieves millimeter-scale spatial localization accuracy by measuring the magnetic field changes generated by neuronal activity, and is unaffected by scalp tissue impedance.

Traditional superconducting quantum interference device (SQUID) MEG equipment performs excellently, but requires liquid helium cooling and restricts head movement of the subject, making it difficult to put into practical use. The atomic magnetometer, or optically pumped magnetometer (OPM), technology developed in recent years breaks through this limitation, achieving wearability while maintaining sub-millimeter spatial resolution. An OPM-MEG system, based on the optically pumped magnetometer principle, can operate at room temperature and provides a new technical path for motor imagery brain-computer interfaces. Practical application of a motor imagery brain-computer interface system requires fine encoding and high decoding accuracy. However, different motor imagery categories within a single motor imagery paradigm have highly similar features and low separability, making decoding difficult. Current encoding paradigms and decoding methods therefore restrict the practical application of motor imagery brain-computer interfaces, and a novel OPM-based interaction paradigm and decoding algorithm are needed to break through these technical bottlenecks.

Disclosure of Invention

The invention provides a method, a device, and equipment for issuing language-motion joint imagination brain-computer interface instructions, to overcome the defect in the prior art that different motor imagery categories within a single motor imagery paradigm have highly similar, poorly separable features, making decoding difficult.
The invention provides a language-motion joint imagination brain-computer interface instruction issuing method, comprising the following steps: acquiring magnetoencephalography data of a user under a language-motion joint imagination task; inputting the magnetoencephalography data into a language-motion joint decoding model to obtain a motion intention result output by the model; and issuing a brain-computer interface instruction based on the motion intention result. The language-motion joint decoding model comprises an adaptive position-encoding module, a motor imagery decoding module, a semantic decoding module, and a joint decoding module. The adaptive position-encoding module is used for recognizing cognitive states in the magnetoencephalography data and extracting soft boundary features of motor imagery, semantic prompt soft boundary features, and joint soft boundary features, respectively; time-frequency features of motor imagery are extracted from the magnetoencephalography data based on the soft boundary features of motor imagery; discrete semantic coding sequence features related to semantic instructions in the language-motion joint imagination task are extracted from the magnetoencephalography data based on the soft boundary features of semantic prompt; and carrying out intention decoding on the time-frequency features of motor image