CN-121300628-B - Mixed brain-computer interaction system and method oriented to air-ground cooperative robot control
Abstract
The invention discloses a hybrid brain-computer interaction system and method for air-ground cooperative robot control, belonging to the field of brain-computer fusion and human-computer interaction science. Through a human-machine interaction system based on a hybrid brain-computer interface, the invention enables hands-free operation of a ground robot by an operator.
Inventors
- Bi Luzheng
- Lian Kaixuan
- Ge Haorui
- Fei Weijie
Assignees
- Beijing Institute of Technology (北京理工大学)
Dates
- Publication Date
- 2026-05-08
- Application Date
- 2025-10-15
Claims (3)
- 1. A hybrid brain-computer interaction system for air-ground cooperative robot control, characterized by comprising: an electroencephalogram acquisition and analysis module for presenting visual stimuli to an operator, acquiring the operator's electroencephalogram signal, and analyzing the signal to obtain an electroencephalogram analysis command; the electroencephalogram acquisition and analysis module comprises: a visual stimulation unit for presenting a plurality of visual stimuli at different frequencies, each stimulus corresponding to a command for fine control of the ground robot; a signal acquisition unit for acquiring the operator's electroencephalogram signals; a signal analysis unit for decoding the electroencephalogram signal using the filter bank canonical correlation analysis (FBCCA) method to obtain the electroencephalogram analysis command; an eye-tracking module for acquiring the operator's eye movement signals and obtaining the eye movement coordinates corresponding to those signals; a fine control subsystem for performing decision-layer fusion of the electroencephalogram analysis command and the eye movement coordinates to generate a control command for controlling the ground robot; the fine control subsystem comprises: a statistical analysis unit for performing statistical analysis on the eye movement coordinates using a sliding window with the same step length as the electroencephalogram decoding; a control command output module for analyzing the electroencephalogram decoding result and the eye movement coordinates, performing decision-layer fusion, and outputting a fine control command, wherein the fine control commands comprise start/stop, accelerate, decelerate, turn left, and turn right; the decision-layer fusion comprises the following steps: when the last control command is no-control, judging that the current control state is the no-control state, and at this moment only judging the eye-gaze control region and outputting a command based on a 0.25 s eye-tracking window; when the last control command is one of the five control commands, judging that the current control state is the control state, and comprehensively considering the eye-gaze control region, the 0.25 s eye-tracking window, the 0.5 s eye-tracking window, and the SSVEP correlation coefficient; a video point-selection navigation subsystem for receiving returned video from the ground robot and the aerial robot, constructing navigation interaction based on the returned video according to a control command, generating a navigation target point sent to the ground robot, and realizing global navigation of the ground robot; the video point-selection navigation subsystem comprises: a coordinate recording unit for decoding the operator's point-confirmation intention from the electroencephalogram signal and recording the eye movement coordinate at that moment; a coordinate calculation unit for using the camera intrinsic parameters of the ground robot and the aerial robot to calculate the ground coordinates pointed to by the video coordinate point; a coordinate mapping unit for calculating the relative coordinate relation between the aerial robot and the ground robot and mapping the ground coordinates into target point coordinates in the ground robot's coordinate system; and a navigation unit for performing global navigation using the ground robot's navigation system.
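The signal analysis unit in claim 1 names the FBCCA method but discloses no parameters. A minimal illustrative sketch of filter bank canonical correlation analysis for SSVEP decoding, assuming a 250 Hz sampling rate, sub-bands with passbands [8k, 90] Hz, and the commonly used weight w(k) = k^-1.25 + 0.25 (all of these are assumptions, not taken from the patent):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # assumed sampling rate (Hz)

def cca_corr(X, Y):
    """Largest canonical correlation between X (samples x channels)
    and Y (samples x reference signals)."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def reference(freq, n_samples, n_harmonics=3):
    """Sine/cosine reference templates at a stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / FS
    refs = []
    for h in range(1, n_harmonics + 1):
        refs += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(refs)

def fbcca_score(eeg, freq, n_bands=5):
    """FBCCA score: weighted sum of squared CCA correlations over sub-bands."""
    Y = reference(freq, eeg.shape[0])
    score = 0.0
    for k in range(1, n_bands + 1):
        low = 8.0 * k  # sub-band k passes [8k, 90] Hz (assumed layout)
        b, a = butter(4, [low, 90.0], btype="band", fs=FS)
        Xk = filtfilt(b, a, eeg, axis=0)
        w = k ** -1.25 + 0.25  # assumed sub-band weight
        score += w * cca_corr(Xk, Y) ** 2
    return score

def classify(eeg, stim_freqs):
    """Pick the stimulus frequency whose references best match the EEG."""
    scores = [fbcca_score(eeg, f) for f in stim_freqs]
    return stim_freqs[int(np.argmax(scores))], scores
```

With one decoded frequency per flicker target, the argmax directly indexes the corresponding fine control command.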
- 2. The system of claim 1, further comprising a human-machine interaction subsystem, which comprises: a fine control interface for displaying the feedback video, the visual stimuli, and command feedback indicators; a video point-selection navigation interface for displaying the returned video, navigation point visual indicators, and navigation state indicators; a function switching interface for providing control mode switching, video source switching, and interface layout switching; and a state prompt component for displaying the system connection state and the task state.
- 3. A hybrid brain-computer interaction method for air-ground cooperative robot control, characterized by being used to implement the system of any one of claims 1-2, comprising the following steps: presenting visual stimuli to an operator through a display device, acquiring the operator's electroencephalogram signal through an electroencephalogram acquisition device, and analyzing the signal to obtain an electroencephalogram analysis command; collecting the operator's eye movement signals through an eye tracker and obtaining the eye movement coordinates corresponding to those signals; fusing the electroencephalogram analysis command and the eye movement coordinates in a decision-layer fusion manner to generate a control command for controlling the ground robot; and receiving returned video from the ground robot and the aerial robot, constructing navigation interaction based on the returned video according to a control command, generating a navigation target point sent to the ground robot, and realizing global navigation of the ground robot.
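The decision-layer fusion described in the claims can be read as a two-state rule: in the no-control state, only the 0.25 s gaze window is consulted; in the control state, both gaze windows and the SSVEP correlation coefficient are weighed together. A minimal sketch under assumed details (the agreement rule, the screen-region layout, and the 0.3 correlation threshold are illustrative choices, not disclosed by the patent):

```python
COMMANDS = ["start_stop", "accelerate", "decelerate", "turn_left", "turn_right"]
NO_CONTROL = "no_control"

def gaze_region(xy, regions):
    """Return the command whose screen region contains the gaze point, else no-control.
    `regions` maps command -> (x0, y0, x1, y1) pixel rectangle (assumed layout)."""
    x, y = xy
    for cmd, (x0, y0, x1, y1) in regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return cmd
    return NO_CONTROL

def fuse(last_cmd, gaze_025, gaze_05, ssvep_cmd, ssvep_rho, regions, rho_thresh=0.3):
    """Decision-layer fusion of gaze and SSVEP evidence (illustrative rule)."""
    if last_cmd == NO_CONTROL:
        # No-control state: decide from the 0.25 s gaze window alone.
        return gaze_region(gaze_025, regions)
    # Control state: consider both gaze windows and the SSVEP coefficient.
    c25 = gaze_region(gaze_025, regions)
    c50 = gaze_region(gaze_05, regions)
    if c25 == c50 == ssvep_cmd and ssvep_rho >= rho_thresh:
        return ssvep_cmd          # all evidence agrees: switch/confirm command
    if c25 == c50 == NO_CONTROL:
        return NO_CONTROL         # operator looked away from every control region
    return last_cmd               # ambiguous evidence: keep the current command
```

Keeping the previous command under ambiguous evidence is one plausible design choice; it biases the system toward stable motion rather than spurious switching.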
Description
Mixed brain-computer interaction system and method oriented to air-ground cooperative robot control
Technical Field
The invention belongs to the technical field of brain-computer fusion and human-machine interaction science, and particularly relates to a hybrid brain-computer interaction system and method for air-ground cooperative robot control.
Background
In recent years, owing to its high flexibility and rapid response, the air-ground cooperative system has played an important role in transportation and search-and-rescue tasks. Compared with a traditional single-robot system, an air-ground cooperative system integrates the complementary advantages of an aerial robot and a ground robot, remarkably improving overall system performance. However, in current research on automatic control of air-ground cooperative systems, machine intelligence cannot achieve emergency control under abnormal conditions or independent decision-making in complex tasks, so human-in-the-loop control remains indispensable. In existing robot systems, the operator's control actions are all oriented to a single operation object, i.e., the operator can operate only a single robot to complete a task. If a single person could control the cooperative operation of multiple robots, operator work efficiency would improve and errors caused by incomplete communication among operators would be avoided. Given the complex control requirements of an air-ground cooperative system, it is difficult for a single person to achieve complete control of the system through manual control alone; other efficient control modes need to be explored, and a reasonable interaction mode should be adopted to further improve the control precision of the air-ground cooperative system.
Disclosure of Invention
To solve the above technical problems, the invention provides a hybrid brain-computer interaction system and method for air-ground cooperative robot control, which address the problems in the prior art. To achieve the above object, in a first aspect, the invention provides a hybrid brain-computer interaction system for air-ground cooperative robot control, comprising: an electroencephalogram acquisition and analysis module for presenting visual stimuli to an operator, acquiring the operator's electroencephalogram signal, and analyzing the signal to obtain an electroencephalogram analysis command; an eye-tracking module for acquiring the operator's eye movement signals and obtaining the eye movement coordinates corresponding to those signals; a fine control subsystem for performing decision-layer fusion of the electroencephalogram analysis command and the eye movement coordinates to generate a control command for controlling the ground robot; and a video point-selection navigation subsystem for receiving returned video from the ground robot and the aerial robot, constructing navigation interaction based on the returned video according to a control command, generating a navigation target point sent to the ground robot, and realizing global navigation of the ground robot.
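The video point-selection pipeline in claim 1 (video pixel → ground coordinates via camera intrinsics → target point in the ground robot's frame) can be sketched as follows. This assumes a downward-facing aerial camera with a vertical optical axis, a flat ground plane, and known relative poses between the robots; the patent does not disclose its actual geometry model, so every symbol here is an illustrative assumption:

```python
import numpy as np

def pixel_to_ground(u, v, K, height):
    """Back-project pixel (u, v) onto the ground plane for a downward-facing
    aerial camera at `height` metres. K is the 3x3 intrinsic matrix."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # normalized camera ray
    scale = height / ray[2]                         # intersect the ground plane
    p = scale * ray
    return p[0], p[1]  # metric offset from the point directly below the camera

def to_robot_frame(ground_xy, uav_xy, ugv_xy, ugv_yaw):
    """Map a ground point (offset from the UAV's nadir, in world axes) into the
    ground robot's own frame using the robots' relative pose."""
    gx = uav_xy[0] + ground_xy[0] - ugv_xy[0]       # offset from UGV, world axes
    gy = uav_xy[1] + ground_xy[1] - ugv_xy[1]
    c, s = np.cos(-ugv_yaw), np.sin(-ugv_yaw)       # rotate world -> body frame
    return (c * gx - s * gy, s * gx + c * gy)
```

The resulting body-frame coordinates would then be handed to the ground robot's navigation stack as a goal point; a tilted camera or uneven terrain would require a full homography or a terrain model instead of this flat-plane shortcut.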
Preferably, the electroencephalogram acquisition and analysis module comprises: a visual stimulation unit for presenting a plurality of visual stimuli at different frequencies, each stimulus corresponding to a command for fine control of the ground robot; a signal acquisition unit for acquiring the operator's electroencephalogram signals; and a signal analysis unit for decoding the electroencephalogram signal using the filter bank canonical correlation analysis (FBCCA) method to obtain an electroencephalogram analysis command. Preferably, the fine control subsystem comprises: a statistical analysis unit for performing statistical analysis on the eye movement coordinates using a sliding window with the same step length as the electroencephalogram decoding; and a control command output module for analyzing the electroencephalogram decoding result and the eye movement coordinates, performing decision-layer fusion, and outputting a fine control command. Preferably, the fine control commands include start/stop, acceleration, deceleration, left turn, and right turn. Preferably, the decision-layer fusion comprises: when the last control command is no-control, judging that the current control state is the no-control state, and at this moment only judging the eye-gaze control region and outputting a command based on a 0.25 s eye-tracking window; when the last control command is one of the five control commands, judging that the current control state is the control state, and comprehensively considering the eye-gaze control region, the 0.25 s eye-tracking window, the