CN-121996068-A - Eye movement tracking-based self-adaptive SSMVEP-MI fusion brain-computer interface method and system
Abstract
The invention provides an eye-tracking-based adaptive SSMVEP-MI fusion brain-computer interface method and system. The method determines the user's current effective gaze area in real time through eye tracking; based on that area, only the corresponding local SSMVEP visual stimulation target is activated and the associated specific motor-imagery prompt information is generated. EEG signals are acquired synchronously and SSMVEP features and MI features are extracted; the decoding path is then selected adaptively according to whether the number of accumulated labelled valid MI samples has reached a threshold: if the threshold has not been reached, SSMVEP feature decoding is used alone; if it has been reached, fusion decoding based on signal-quality evaluation is applied to the bimodal features. Finally, a control signal is output to drive external equipment or update the interactive interface. The invention markedly reduces visual interference, improves recognition accuracy and training efficiency in the early stage of motor imagery, and ensures the robustness and usability of the system through a dual adaptive mechanism based on sample count and signal quality.
Inventors
- HE YONGZHENG
- CAO JIAYI
- SONG AIXIA
- SHI YUNCHUAN
- WANG PENGGANG
Assignees
- 河南翔宇医疗设备股份有限公司 (Henan Xiangyu Medical Equipment Co., Ltd.)
Dates
- Publication Date: 20260508
- Application Date: 20260119
Claims (10)
- 1. An eye-movement-tracking-based adaptive SSMVEP-MI fusion brain-computer interface method, characterized by comprising the following steps: acquiring the gaze-point coordinates of a user in real time, mapping the gaze-point coordinates to at least three preset gaze areas, and determining a current effective gaze area; in response to the current effective gaze area, executing local stimulation control, namely activating only the one local SSMVEP visual stimulation target corresponding to the current effective gaze area while generating specific motor-imagery prompt information associated with that area, the SSMVEP visual stimulation targets of the non-current gaze areas remaining inactive; synchronously acquiring EEG signals of the user; extracting, from the EEG signals, SSMVEP features related to the local SSMVEP visual stimulation target and MI features related to the motor-imagery prompt information, respectively; judging whether the number of collected valid MI samples labelled with the motor-imagery prompt information reaches a preset threshold; and performing adaptive decoding according to the judgment result: if the number of valid MI samples does not reach the preset threshold, decoding based on the SSMVEP features to generate a first control instruction; if the number of valid MI samples reaches the preset threshold, decoding based on a fusion result of the SSMVEP features and the MI features to generate a second control instruction; and outputting a control signal to drive associated external equipment or update an interactive interface according to the first control instruction or the second control instruction.
- 2. The eye-tracking based adaptive SSMVEP-MI fusion brain-computer interface method according to claim 1, wherein decoding based on the fusion result of the SSMVEP features and the MI features comprises: respectively acquiring a first recognition result and its first quality index based on the SSMVEP features, and a second recognition result and its second quality index based on the MI features; and dynamically adjusting the weights of the first recognition result and the second recognition result in the final decision according to the first quality index and the second quality index, and performing weighted fusion.
- 3. The eye-tracking based adaptive SSMVEP-MI fusion brain-computer interface method according to claim 2, further comprising an adaptive mode-switching step of: continuously monitoring the first quality index and the second quality index; suspending activation of the local SSMVEP visual stimulation target and switching to a mode of decoding based on the MI features when the first quality index remains continuously below a first switching threshold; and switching to a mode of decoding based on the SSMVEP features when the second quality index remains continuously below a second switching threshold.
- 4. The eye-tracking based adaptive SSMVEP-MI fusion brain-computer interface method according to claim 2 or 3, wherein the first quality index comprises a signal-to-noise ratio or a classification confidence of the SSMVEP features, and the second quality index comprises a classification confidence of the MI features.
- 5. The method of claim 1, wherein the specific motor-imagery prompt information is at least one of a visual prompt, an auditory prompt, or a tactile prompt for directing the user to perform a specific motor-imagery task associated with the current effective gaze area.
- 6. An eye-tracking based adaptive SSMVEP-MI fusion brain-computer interface system for implementing the eye-tracking based adaptive SSMVEP-MI fusion brain-computer interface method according to any one of claims 1-5, the system comprising: an eye-movement tracking module configured to acquire the gaze-point coordinates of the user in real time; a gaze-area judging module configured to map the gaze-point coordinates to at least three preset gaze areas and output a current effective gaze area; a stimulation presentation and control module configured, in response to the current effective gaze area, to activate only the one local SSMVEP visual stimulation target corresponding to the current effective gaze area and generate the associated specific motor-imagery prompt information, the SSMVEP visual stimulation targets of the non-current gaze areas remaining inactive; an EEG acquisition module configured to acquire EEG signals of the user; a feature extraction module configured to extract SSMVEP features and MI features, respectively, from the EEG signals; an adaptive fusion decoding module configured to: record and judge whether the number of valid MI samples labelled with the motor-imagery prompt information reaches a preset threshold; decode based on the SSMVEP features to generate a first control instruction when the number of valid MI samples does not reach the preset threshold; and decode based on the fusion result of the SSMVEP features and the MI features to generate a second control instruction when the number of valid MI samples reaches the preset threshold; and a control output module configured to output a control signal to drive associated external equipment or update an interactive interface according to the first control instruction or the second control instruction.
- 7. The eye-tracking based adaptive SSMVEP-MI fusion brain-computer interface system according to claim 6, wherein the adaptive fusion decoding module comprises a quality evaluation unit and a fusion decision unit; the quality evaluation unit is configured to evaluate a first quality index corresponding to the SSMVEP features and a second quality index corresponding to the MI features; and the fusion decision unit is configured, when the number of valid MI samples reaches the preset threshold, to acquire a first recognition result based on the SSMVEP features and a second recognition result based on the MI features, and to perform weighted fusion of the first recognition result and the second recognition result according to the first quality index and the second quality index.
- 8. The eye-tracking based adaptive SSMVEP-MI fusion brain-computer interface system according to claim 7, wherein the adaptive fusion decoding module is further configured to perform adaptive mode switching: when the first quality index remains continuously below a first switching threshold, sending an instruction to the stimulation presentation and control module to pause the SSMVEP visual stimulus and controlling the fusion decision unit to switch to a mode of decoding based on the MI features; and when the second quality index remains continuously below a second switching threshold, controlling the fusion decision unit to switch to a mode of decoding based on the SSMVEP features.
- 9. The eye-tracking based adaptive SSMVEP-MI fusion brain-computer interface system according to claim 6, further comprising a motor-imagery prompt module configured to generate the specific motor-imagery prompt information in visual, auditory, or tactile form according to instructions of the stimulation presentation and control module.
- 10. The eye-tracking based adaptive SSMVEP-MI fusion brain-computer interface system according to claim 6, wherein the adaptive fusion decoding module is configured, each time the local SSMVEP visual stimulation target is activated and EEG signal acquisition is valid, to store the synchronously acquired EEG signal segment in association with the motor-imagery task label determined by the current effective gaze area, as one valid MI sample for accumulation.
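The gaze-mapping and local-stimulation steps of claim 1 can be illustrated with a minimal Python sketch. This is not part of the patent: the three-region left/center/right layout, the normalized coordinates, and all names are illustrative assumptions.

```python
# Hypothetical layout: three preset gaze areas as normalized x-ranges.
REGIONS = {
    "left":   (0.0, 1 / 3),
    "center": (1 / 3, 2 / 3),
    "right":  (2 / 3, 1.0),
}

def map_gaze_to_region(x_norm):
    """Map a normalized gaze-point x-coordinate to one of the preset
    gaze areas (claim 1: at least three areas)."""
    for name, (lo, hi) in REGIONS.items():
        if lo <= x_norm < hi or (hi == 1.0 and x_norm == 1.0):
            return name
    return None  # gaze point outside all preset areas

def activate_local_stimulus(region):
    """Activate only the SSMVEP target of the current effective gaze
    area; all other targets remain inactive (local stimulation control)."""
    return {name: (name == region) for name in REGIONS}
```

A two-dimensional layout would map (x, y) against rectangular areas instead, but the activation logic stays the same: exactly one target is active at a time.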
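The adaptive decoding path of claim 1 and the quality-weighted fusion of claims 2 and 7 can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the per-class score lists, the sample threshold, and the normalized-weight formula are assumptions.

```python
def adaptive_decode(ssmvep_scores, mi_scores, n_mi_samples,
                    mi_threshold=50, q_ssmvep=1.0, q_mi=1.0):
    """Return the decoded class index.

    Claim 1: until the number of labelled valid MI samples reaches the
    threshold, decode from SSMVEP features alone; afterwards, fuse both
    modalities. Claims 2/7: the fusion weights are adjusted dynamically
    from the two quality indices (here a simple normalization).
    """
    if n_mi_samples < mi_threshold:
        fused = ssmvep_scores                    # SSMVEP-only path
    else:
        w = q_ssmvep / (q_ssmvep + q_mi)         # dynamic weight
        fused = [w * s + (1.0 - w) * m
                 for s, m in zip(ssmvep_scores, mi_scores)]
    return max(range(len(fused)), key=fused.__getitem__)
```

The mode switching of claims 3 and 8 would wrap this with persistence checks: if `q_ssmvep` stays below its switching threshold for several consecutive trials, the SSMVEP stimulus is paused and `w` is forced to 0 (MI-only); symmetrically for `q_mi`.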
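The labelled-sample accumulation of claim 10, which feeds the threshold test of claim 1, might look like the following minimal sketch. The class name, threshold value, and sample representation are assumptions for illustration only.

```python
class MISampleStore:
    """Accumulate labelled MI samples (claim 10): each time a local
    SSMVEP target is active and EEG acquisition is valid, the EEG
    segment is stored with the task label implied by the gaze area."""

    def __init__(self, mi_threshold=50):
        self.samples = []
        self.mi_threshold = mi_threshold

    def record(self, eeg_segment, gaze_region, acquisition_valid):
        # The motor-imagery task label is determined by the current
        # effective gaze area, so no manual annotation is needed.
        if acquisition_valid:
            self.samples.append((gaze_region, eeg_segment))
        return len(self.samples)

    def mi_path_ready(self):
        # True once enough labelled samples exist to enable the
        # fusion-decoding path of claim 1.
        return len(self.samples) >= self.mi_threshold
```

This is what gives the system its "plug-and-play" character: labelled MI training data accumulate as a side effect of ordinary SSMVEP-driven use, instead of a separate calibration session.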
Description
Eye movement tracking-based self-adaptive SSMVEP-MI fusion brain-computer interface method and system

Technical Field

The invention relates to the technical field of brain-computer interfaces, and in particular to an adaptive SSMVEP-MI fusion brain-computer interface method and system based on eye tracking.

Background

Cerebral stroke, traumatic brain injury, and other nervous-system diseases often impair patients' limb motor function, severely affecting quality of life. The core goal of rehabilitation is to stimulate motor nerve pathways through active brain participation, promoting neural plasticity and recovery of motor function. In recent years, brain-computer interface technology has emerged as a novel means of neurorehabilitation: by decoding the patient's EEG signals, it realizes motor-intention recognition and limb control, providing a direct neural-feedback channel for rehabilitation training. Existing rehabilitation brain-computer interfaces are mainly based on the following signal paths. The steady-state visual evoked potential (SSVEP/SSMVEP) brain-computer interface evokes EEG responses through fixed-frequency flicker or motion visual stimuli; the signals are stable, the responses strong, the recognition accuracy high, and the approach suits most users. However, conventional SSVEPs typically employ full-screen or bilateral multi-target flicker stimulation, suffer from cross-target interference, readily cause visual fatigue, and have limited effect on motor function when the patient lacks active motor intent. In the Motor Imagery (MI) brain-computer interface, the user imagines limb movements without producing actual movement, and the EEG captures the corresponding electrical features for classification. This approach can activate motor brain regions and improve neural plasticity, making it well suited to rehabilitation training.
However, MI generally requires lengthy training-data acquisition, demands that the patient remain at rest to avoid myoelectric artifacts, causes marked fatigue, has low initial recognition accuracy, and easily undermines the patient's confidence. As for multimodal fusion, some studies have attempted to fuse SSVEP with MI, guiding motor imagery through visual stimuli to improve MI separability. These approaches, however, still suffer from heavy full-screen or bilateral stimulation interference and distraction, lack adaptive control based on the eye-movement gaze area, provide insufficiently distinctive MI characterization, and offer no plug-and-play capability for patients at the start of rehabilitation, requiring additional training. In summary, the prior art suffers from strong visual-stimulus interference and easy fatigue, low accuracy and time-consuming training in the early stage of motor-imagery recognition, and the lack of a gaze-based adaptive stimulus design that fully guides brain-region activation. A new method and system capable of high-accuracy side-specific MI decoding and SSMVEP fusion control under low-interference, low-fatigue, plug-and-play conditions is therefore urgently needed.

Disclosure of Invention

The invention aims to provide an adaptive SSMVEP-MI fusion brain-computer interface method and system based on eye-movement tracking that solves the above problems. The technical problems solved by the invention at least comprise: 1. How to improve the accuracy of initial Motor Imagery (MI) recognition: in a traditional MI brain-computer interface, the EEG features of a first-time user are not pronounced, and classification accuracy is typically only about 50-60%, so operation is unreliable and the rehabilitation-training effect suffers.
The invention therefore aims to improve the separability and recognition accuracy of MI signals at the initial stage of training by means of visual-attention guidance and local stimulation strategies, so as to realize a plug-and-play brain-computer interface. 2. How to reduce the interference and fatigue caused by visual stimuli: traditional SSVEP full-screen or multi-target blinking is prone to cross-target interference and visual fatigue, and may even pose an epileptic risk for some patients. The invention aims to adaptively activate local SSMVEP stimulation according to the side the user gazes at, thereby reducing the amount of stimulation and interference while improving comfort and signal stability. 3. How to shorten MI training-sample acquisition time and efficiently acquire labelled data: conventional MI data-acquisition phases typically last fifteen to twenty minutes or longer, requiring the user to remain still and repeat imagined actions, a process that is time-consuming and tedious. The invention aims to combine the SSMVEP classification result with the gaze side to realize synchro