CN-122020129-A - Unmanned aerial vehicle control method and system based on EEG and fNIRS multi-mode feature fusion
Abstract
The invention relates to the technical field of unmanned aerial vehicle (UAV) control, and in particular to a UAV control method and system based on EEG and fNIRS multi-modal feature fusion. The method comprises: synchronously acquiring scalp electroencephalogram (EEG) and near-infrared blood-oxygen signals; extracting EEG frequency-band power spectral density to construct a fast-variable feature matrix; extracting time-series features of the oxygenated hemoglobin concentration variation to construct a slow-variable feature matrix; and compensating the slow-variable timestamps with a phase-space reconstruction algorithm to achieve alignment. The aligned slow variables serve as the query matrix and the fast variables as the key and value matrices, which are fused through an attention mechanism into a fusion feature matrix. The fusion feature matrix is fed into a dual-branch network that decodes discrete instructions and continuous variables respectively; these are spliced to generate an intention-decoding instruction. Finally, the concentration variation is integrated to obtain a cognitive load index, which dynamically assigns weights between the intention-decoding instruction and an underlying autonomous safety control law, generating the final instruction to be issued and executed. The invention addresses the problems of low control dimensionality and proneness to out-of-control crashes.
Inventors
- QI RENLONG
- MA MINGHUI
- LOU TAISHAN
- SHI LEI
- ZHU XIAOHUI
- WANG LEI
- ZHANG GUANGJU
- ZHANG YACHAO
Assignees
- 洛阳磐泰金属材料有限公司
- 郑州科技学院
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2026-04-09
Claims (10)
- 1. An unmanned aerial vehicle control method based on EEG and fNIRS multi-modal feature fusion, characterized by comprising the following steps: S1, synchronously acquiring scalp electroencephalogram signals and near-infrared blood-oxygen signals, and converting the near-infrared blood-oxygen signals into an oxygenated hemoglobin concentration variation; S2, extracting the frequency-band power spectral density of the scalp electroencephalogram signals to construct a fast-variable feature matrix, extracting time-series features of the oxygenated hemoglobin concentration variation to construct a slow-variable feature matrix, and compensating the timestamps of the slow-variable feature matrix with a phase-space reconstruction algorithm to achieve time alignment with the fast-variable feature matrix; S3, mapping the aligned slow-variable feature matrix into a query matrix, mapping the fast-variable feature matrix into a key matrix and a value matrix, and computing over the query, key and value matrices through an attention mechanism to obtain a fusion feature matrix; S4, inputting the fusion feature matrix into a dual-branch network, decoding discrete instructions in a first network branch and continuous variables in a second network branch, and splicing the discrete instructions and the continuous variables to generate an intention-decoding instruction; S5, integrating the oxygenated hemoglobin concentration variation to obtain a cognitive load index, using the cognitive load index to assign execution weights to the intention-decoding instruction and an underlying autonomous safety control law, and summing to generate a final flight control instruction to be issued and executed.
- 2. The unmanned aerial vehicle control method based on EEG and fNIRS multi-modal feature fusion according to claim 1, wherein in S1, converting the near-infrared blood-oxygen signal into an oxygenated hemoglobin concentration variation comprises: acquiring the light-attenuation variations of the first-wavelength and second-wavelength near-infrared light emitted by the near-infrared detection device after penetrating the operator's brain tissue; extracting the source-detector physical distance parameter and the differential path-length factor parameter of the near-infrared detection device; retrieving the deoxygenated hemoglobin extinction coefficients and oxygenated hemoglobin extinction coefficients corresponding respectively to the first-wavelength and second-wavelength near-infrared light; and combining the physical distance parameter, the differential path-length factor parameter and the light-attenuation variations, and solving the resulting simultaneous equations with the deoxygenated and oxygenated hemoglobin extinction coefficients to extract the oxygenated hemoglobin concentration variation.
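The simultaneous-equation solve in claim 2 is the modified Beer-Lambert law: at each wavelength the attenuation change is a linear mix of the HbO and HbR concentration changes, so two wavelengths yield a 2x2 linear system. A minimal numpy sketch; the extinction coefficients, source-detector distance, and DPF values below are illustrative placeholders, not the patent's parameters:

```python
import numpy as np

# Illustrative extinction coefficients [1/(mM*cm)] at two wavelengths
# (rows: wavelength, columns: [HbO, HbR]); real values come from
# published extinction tables.
EPSILON = np.array([[0.586, 1.548],   # ~760 nm: eps_HbO, eps_HbR
                    [1.058, 0.781]])  # ~850 nm

def delta_hbo(delta_od, distance_cm=3.0, dpf=6.0):
    """Solve the modified Beer-Lambert system for concentration changes.

    delta_od : array of shape (2,), the light-attenuation change at the
               two wavelengths (claim 2's 'light attenuation variation').
    Returns (dHbO, dHbR) in mM.
    """
    # delta_od = (EPSILON @ [dHbO, dHbR]) * d * DPF  ->  invert the system
    conc = np.linalg.solve(EPSILON * distance_cm * dpf, np.asarray(delta_od))
    return conc[0], conc[1]
```

Because the mixing is linear, the solve recovers the concentration pair exactly; per-subject DPF estimates replace the fixed default in practice.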
- 3. The unmanned aerial vehicle control method based on EEG and fNIRS multi-modal feature fusion according to claim 1, wherein in S2, extracting the frequency-band power spectral density of the scalp electroencephalogram signals to construct a fast-variable feature matrix and extracting time-series features of the oxygenated hemoglobin concentration variation to construct a slow-variable feature matrix comprises: band-pass filtering the scalp electroencephalogram signals, applying a short-time Fourier transform to the filtered signals, extracting the power spectral density within set frequency bands, and splicing the band power spectral densities as vectors along the time dimension to construct the fast-variable feature matrix; and setting a sliding time-window length and a sliding step for cutting the oxygenated hemoglobin concentration variation sequence, computing within each sliding window the average concentration amplitude feature and the concentration variation slope feature, and combining the two features to construct the slow-variable feature matrix.
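Claim 3's two feature matrices can be sketched as below; the window lengths, band edges, and Hann tapering are assumptions for illustration, not the patent's parameters:

```python
import numpy as np

def fast_features(eeg, fs, bands=((4, 8), (8, 13), (13, 30)), win=256, step=128):
    """Band power-spectral-density features of one EEG channel via a
    short-time FFT over sliding windows, stacked along time."""
    rows = []
    for start in range(0, len(eeg) - win + 1, step):
        seg = eeg[start:start + win] * np.hanning(win)
        psd = np.abs(np.fft.rfft(seg)) ** 2 / (fs * win)
        freqs = np.fft.rfftfreq(win, 1.0 / fs)
        rows.append([psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands])
    return np.array(rows)                       # shape: (n_windows, n_bands)

def slow_features(hbo, win=10, step=5):
    """Mean amplitude and linear slope of the HbO change in each window."""
    t = np.arange(win)
    rows = [(seg.mean(), np.polyfit(t, seg, 1)[0])
            for seg in (hbo[s:s + win] for s in range(0, len(hbo) - win + 1, step))]
    return np.array(rows)                       # shape: (n_windows, 2)
```

For a 10 Hz test tone the alpha-band (8-13 Hz) column dominates, matching the claim's per-band power extraction; the slope feature captures the rising/falling trend the claim pairs with the amplitude.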
- 4. The unmanned aerial vehicle control method of claim 1, wherein in S2, compensating the timestamps of the slow-variable feature matrix with a phase-space reconstruction algorithm to achieve time alignment with the fast-variable feature matrix comprises: determining the hemodynamic delay time inherent to the oxygenated hemoglobin concentration variation relative to the scalp electroencephalogram signal; setting the embedding dimension and delay-time parameters of the phase-space reconstruction algorithm, and mapping the slow-variable feature matrix into a high-dimensional phase space; calculating a phase-shift compensation amount in the high-dimensional phase space according to the hemodynamic delay time; and applying the phase-shift compensation amount in reverse to the timestamp of each sampling point of the slow-variable feature matrix to obtain a compensated slow-variable feature matrix whose timestamps are strictly aligned with those of the fast-variable feature matrix.
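A simplified reading of claim 4: embed the slow-feature series in a delay-coordinate phase space and shift the slow-variable timestamps back by the assumed hemodynamic delay. The embedding parameters and the 5 s delay below are placeholders, not values taken from the patent:

```python
import numpy as np

def delay_embed(x, dim=3, tau=2):
    """Map a 1-D slow-feature series into a 'dim'-dimensional phase space
    using delay coordinates (Takens-style reconstruction)."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def compensate_timestamps(slow_ts, hemo_delay_s=5.0):
    """Shift every slow-variable timestamp back by the assumed
    hemodynamic delay so it lines up with the EEG clock."""
    return np.asarray(slow_ts, dtype=float) - hemo_delay_s
```

The embedding exposes the trajectory structure used to estimate the phase shift; the compensation itself is then a uniform backward shift of the slow-variable clock.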
- 5. The unmanned aerial vehicle control method of claim 1, wherein in S3, computing over the query, key and value matrices through the attention mechanism to obtain the fusion feature matrix comprises: retrieving a pre-trained first weight matrix, second weight matrix and third weight matrix; multiplying the aligned slow-variable feature matrix by the first weight matrix to obtain the query matrix; multiplying the fast-variable feature matrix by the second and third weight matrices to obtain the key matrix and the value matrix; computing the dot product of the query matrix with the transpose of the key matrix, dividing the result by the feature-vector dimension scaling factor, and normalizing with a Softmax function to obtain a cross-modal attention weight distribution matrix; and multiplying the cross-modal attention weight distribution matrix by the value matrix to output the fusion feature matrix containing multi-modal complementary information.
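Claim 5 describes standard scaled dot-product cross-attention with the slow modality as the query and the fast modality as key/value. A minimal numpy version (weight matrices would come pre-trained, as the claim states; here they are just function arguments):

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))   # stable softmax
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(slow, fast, Wq, Wk, Wv):
    """Slow features form the query; fast features form keys/values."""
    Q, K, V = slow @ Wq, fast @ Wk, fast @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # scaled dot product
    return softmax(scores, axis=-1) @ V               # fused feature matrix
```

The output has one fused row per slow-variable time step, each a convex combination of the fast-variable value rows, which is exactly the "multi-modal complementary information" mixing the claim specifies.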
- 6. The unmanned aerial vehicle control method of claim 1, wherein in S4, decoding discrete instructions in the first network branch and continuous variables in the second network branch comprises: inputting the fusion feature matrix into the hidden layer of the first network branch to extract a classification feature vector, applying a Softmax classification layer to discretize the classification feature vector, outputting the UAV landing-state classification probabilities and horizontal-yaw-direction classification probabilities, and selecting the discrete instruction item with the highest probability for confirmation; and synchronously inputting the fusion feature matrix into the hidden layer of the second network branch to extract a regression feature vector, fitting the regression feature vector with a linear regression layer, outputting predicted values, and mapping the predicted values respectively to the throttle-depth control amount for vertical climb and the pitch-angle control amount for horizontal displacement to confirm the continuous variables.
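A toy version of claim 6's dual-branch head, with mean pooling and plain matrix products standing in for the trained hidden layers; all names, shapes, and the class labeling are hypothetical:

```python
import numpy as np

def decode(fused, W_cls, W_reg):
    """Dual-branch decode sketch: softmax classification for the discrete
    command, linear regression for the continuous throttle/pitch amounts."""
    pooled = fused.mean(axis=0)                 # pool fused features over time
    logits = pooled @ W_cls                     # classification branch
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    discrete = int(np.argmax(probs))            # highest-probability item
    continuous = pooled @ W_reg                 # regression branch:
    return discrete, continuous                 # [throttle_depth, pitch_cmd]
```

The two branches share the fused input but produce different output types, which is why the claim splices them afterwards rather than merging the heads.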
- 7. The unmanned aerial vehicle control method of claim 1, wherein in S4, splicing the discrete instructions and the continuous variables to generate the intention-decoding instruction comprises: extracting the enumeration-type state-bit value mapped from the discrete instruction according to the preset communication protocol format of the UAV's underlying flight control system; extracting the floating-point dynamic values mapped from the continuous variables; and packaging the enumeration-type state-bit value and the floating-point dynamic values into a data-frame structure, adding frame-header and frame-tail check marks, and generating the intention-decoding instruction in a standard communication packet format.
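Claim 7's data-frame packaging can be illustrated with Python's struct module; the frame layout, check-mark bytes, and field order below are invented for illustration and are not the actual flight-control protocol:

```python
import struct

# Hypothetical frame: header byte, enum state byte, two float32 commands, tail byte.
FRAME_HEADER, FRAME_TAIL = 0xAA, 0x55
FMT = "<BBffB"   # little-endian: header, discrete state, throttle, pitch, tail

def pack_intent(state: int, throttle: float, pitch: float) -> bytes:
    """Package the spliced discrete + continuous command into one frame."""
    return struct.pack(FMT, FRAME_HEADER, state, throttle, pitch, FRAME_TAIL)

def unpack_intent(frame: bytes):
    """Parse a frame, verifying the header/tail check marks."""
    header, state, throttle, pitch, tail = struct.unpack(FMT, frame)
    assert header == FRAME_HEADER and tail == FRAME_TAIL, "check marks failed"
    return state, throttle, pitch
```

The round trip is lossless for values representable in float32, and the header/tail bytes play the role of the claim's frame check marks.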
- 8. The unmanned aerial vehicle control method based on EEG and fNIRS multi-modal feature fusion according to claim 1, wherein in S5, integrating the oxygenated hemoglobin concentration variation to derive the cognitive load index comprises: screening out, according to the spatial-channel topology of the near-infrared blood-oxygen signals, the target oxygenated hemoglobin concentration variation data whose channels map to the prefrontal cortex; setting a sliding integration period length for load assessment; performing a continuous-time integration, over the sliding integration period, of the difference between the target oxygenated hemoglobin concentration variation data and the resting-state baseline concentration data; and extracting the integration result, applying min-max normalization, and taking the result mapped into the range from zero to one as the cognitive load index.
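A sketch of claim 8's load index: integrate the prefrontal HbO deviation from the resting baseline over the window, then min-max normalize into [0, 1]. Normalizing the final value against the running integral's own range is an assumption of this sketch, as is the sample interval:

```python
import numpy as np

def cognitive_load_index(hbo_prefrontal, baseline, dt=0.1):
    """Cumulative integral of the HbO deviation from baseline (rectangle
    rule, step dt), min-max normalized so the index lies in [0, 1]."""
    integral = np.cumsum(np.asarray(hbo_prefrontal) - baseline) * dt
    lo, hi = integral.min(), integral.max()
    return (integral[-1] - lo) / (hi - lo) if hi > lo else 0.0
```

A sustained HbO rise above baseline drives the index toward 1 (high load), a sustained dip drives it toward 0, matching the claim's zero-to-one mapping.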
- 9. The unmanned aerial vehicle control method of claim 1, wherein in S5, using the cognitive load index to assign execution weights to the intention-decoding instruction and the underlying autonomous safety control law, and summing to generate the final flight control instruction to be issued and executed, comprises: acquiring the system's preset load-sensitivity control parameter; multiplying the cognitive load index by the load-sensitivity control parameter to obtain an autonomous-control weight coefficient; subtracting the autonomous-control weight coefficient from one to obtain a manual-control weight coefficient; scalar-multiplying the manual-control weight coefficient with the intention-decoding instruction to obtain a manual-control component vector; scalar-multiplying the autonomous-control weight coefficient with the underlying autonomous safety control law to obtain an autonomous-defense component vector; and summing the manual-control and autonomous-defense component vectors element by element to generate the final flight control instruction.
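Claim 9's weight assignment reduces to a convex blend: the autonomous weight is the load index times the sensitivity gain (clipped to [0, 1] here as an added safeguard not stated in the claim), and the manual weight is its complement:

```python
import numpy as np

def blend_commands(intent_cmd, safety_cmd, load_index, sensitivity=1.0):
    """Element-wise convex blend of the brain-decoded command and the
    autonomous safety command, weighted by cognitive load."""
    alpha = min(max(load_index * sensitivity, 0.0), 1.0)  # autonomous weight
    return ((1.0 - alpha) * np.asarray(intent_cmd)        # manual component
            + alpha * np.asarray(safety_cmd))             # defense component
```

At zero load the operator's intent passes through unchanged; as load approaches 1 the autonomous safety law takes over entirely, which is the closed-loop defense behavior the invention claims.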
- 10. An unmanned aerial vehicle control system based on EEG and fNIRS multi-modal feature fusion, characterized in that it is used in the unmanned aerial vehicle control method based on EEG and fNIRS multi-modal feature fusion according to any one of claims 1-9, and comprises the following modules: a multi-modal brain-signal synchronous acquisition module for synchronously acquiring scalp electroencephalogram signals and near-infrared blood-oxygen signals and converting the near-infrared blood-oxygen signals into an oxygenated hemoglobin concentration variation; a spatio-temporal feature cross-modal alignment module for extracting the frequency-band power spectral density of the scalp electroencephalogram signals to construct a fast-variable feature matrix and extracting time-series features of the oxygenated hemoglobin concentration variation to construct a slow-variable feature matrix; a cross-attention feature fusion module for mapping the aligned slow-variable feature matrix into a query matrix, mapping the fast-variable feature matrix into a key matrix and a value matrix, and computing over the query, key and value matrices through an attention mechanism to obtain a fusion feature matrix; a multi-dimensional intention decoding and mapping module for inputting the fusion feature matrix into a dual-branch network, decoding discrete instructions in a first network branch and continuous variables in a second network branch, and splicing the discrete instructions and the continuous variables to generate intention-decoding instructions; and a cognitive-load adaptive flight control module for integrating the oxygenated hemoglobin concentration variation to obtain a cognitive load index, using the cognitive load index to assign execution weights to the intention-decoding instructions and the underlying autonomous safety control law, and summing to generate a final flight control instruction for issuing and executing.
Description
Unmanned aerial vehicle control method and system based on EEG and fNIRS multi-mode feature fusion

Technical Field

The invention relates to the technical field of unmanned aerial vehicle control, and in particular to an unmanned aerial vehicle control method and system based on EEG and fNIRS multi-modal feature fusion.

Background

In recent years, brain-computer interface technology has gradually been applied to the field of unmanned aerial vehicle control, providing a contact-free control means for special operation scenarios. However, existing UAV brain-control techniques still fall short in complex dynamic flight tasks. Traditional UAV brain-control pipelines mostly rely on single-modality electroencephalogram acquisition, which has poor noise immunity and yields little spatial localization information. Some improved schemes attempt to directly concatenate heterogeneous multi-modal features such as EEG and blood-oxygen signals, but because EEG signals (millisecond-scale response) and blood-oxygen signals (delays of several seconds) are inherently misaligned in space and time at the physical layer, simple concatenation causes the features to interfere with one another. In addition, most existing systems use open-loop control logic and lack quantitative monitoring of, and active defense against, deep cognitive fatigue in the operator. These defects leave the UAV without the high-dimensional control capability needed for fine adjustment of continuous variables in three-dimensional space; when the operator is mentally overloaded, erroneous brain-control instructions arising from feature-analysis errors are easily forced into execution, ultimately causing disordered flight trajectories and out-of-control crashes.
In view of the above, the prior art fails to systematically address the core difficulties of efficiently fusing spatio-temporally heterogeneous features, synchronously decoding multi-dimensional control commands, and closed-loop monitoring of the operator's physiological state.

Disclosure of Invention

To make up for these defects, the invention provides an unmanned aerial vehicle control method and system based on EEG and fNIRS multi-modal feature fusion, aiming to solve the problems of low control dimensionality and proneness to out-of-control crashes caused in the prior art by multi-modal spatio-temporal feature misalignment, limited control dimensionality, and the lack of closed-loop monitoring of physiological states. In a first aspect, the invention provides a method for controlling an unmanned aerial vehicle based on EEG and fNIRS multi-modal feature fusion, comprising the following steps: S1, synchronously acquiring scalp electroencephalogram signals and near-infrared blood-oxygen signals, and converting the near-infrared blood-oxygen signals into an oxygenated hemoglobin concentration variation; S2, extracting the frequency-band power spectral density of the scalp electroencephalogram signals to construct a fast-variable feature matrix, extracting time-series features of the oxygenated hemoglobin concentration variation to construct a slow-variable feature matrix, and compensating the timestamps of the slow-variable feature matrix with a phase-space reconstruction algorithm to achieve time alignment with the fast-variable feature matrix; S3, mapping the aligned slow-variable feature matrix into a query matrix, mapping the fast-variable feature matrix into a key matrix and a value matrix, and computing over the query, key and value matrices through an attention mechanism to obtain a fusion feature matrix; S4, inputting the fusion feature matrix into a dual-branch network, decoding discrete instructions in a first network branch and continuous variables in a second network branch, and splicing the discrete instructions and the continuous variables to generate an intention-decoding instruction; S5, integrating the oxygenated hemoglobin concentration variation to obtain a cognitive load index, using the cognitive load index to assign execution weights to the intention-decoding instruction and an underlying autonomous safety control law, and summing to generate a final flight control instruction to be issued and executed. By adopting this technical scheme, the phase-space reconstruction aligns the fast and slow features, the attention mechanism fuses them deeply, the dual-branch network decodes the multi-dimensional intention, and the weights are adaptively assigned based on load, thereby solving the problems of low control dimensionality and proneness to out-of-control crashes in the prior art. Optionally, in S1, converting the near-infrared blood-oxygen signal into an oxygenated hemoglobin concentration variation comprises: acquiring the light attenuation variation quantity of the near i