CN-121765648-B - State emotion classification method, system and model training method

CN121765648B

Abstract

The invention provides a state emotion classification method, a state emotion classification system and a model training method, which can be applied to the technical field of state emotion classification. The method comprises: encoding a physiological feature map to obtain physiological encoded data, wherein the physiological feature map is constructed from physiological feature data extracted from physiological signal data of a target object in a plurality of physiological channels, and physiological feature nodes in the physiological feature map represent the physiological feature data of the physiological channels; encoding an electroencephalogram feature map to obtain electroencephalogram encoded data, wherein the electroencephalogram feature map is constructed from electroencephalogram feature data extracted from electroencephalogram data of the target object in a plurality of electroencephalogram channels, and electroencephalogram feature nodes of the electroencephalogram feature map represent the electroencephalogram feature data of the electroencephalogram channels; fusing the physiological encoded data and the electroencephalogram encoded data to obtain multi-modal fusion encoded data; and obtaining a state classification result and an emotion classification result of the target object according to the multi-modal fusion encoded data.

Inventors

  • GAO ZHONGKE
  • LI HAOYU
  • HAO YUSHI
  • CUI XIAONAN

Assignees

  • 天津大学 (Tianjin University)

Dates

Publication Date
2026-05-05
Application Date
2026-02-28

Claims (9)

  1. A method of classifying a state emotion, comprising: encoding a physiological feature map to obtain physiological encoded data, wherein the physiological feature map is constructed from physiological feature data extracted from physiological signal data of a target object in each of a plurality of physiological channels, physiological feature nodes in the physiological feature map represent the physiological feature data of the physiological channels, and physiological feature edge weights of the physiological feature map represent the degree of association between the physiological feature data corresponding to the two connected physiological feature nodes; encoding an electroencephalogram feature map to obtain electroencephalogram encoded data, wherein the electroencephalogram feature map is constructed from electroencephalogram feature data extracted from electroencephalogram data of the target object in each of a plurality of electroencephalogram channels, electroencephalogram feature nodes of the electroencephalogram feature map represent the electroencephalogram feature data of the electroencephalogram channels, and electroencephalogram feature edge weights of the electroencephalogram feature map represent the degree of association between the electroencephalogram feature data corresponding to the two connected electroencephalogram feature nodes; fusing the physiological encoded data and the electroencephalogram encoded data to obtain multi-modal fusion encoded data; and obtaining a state classification result and an emotion classification result of the target object according to the multi-modal fusion encoded data, which comprises: obtaining, from the multi-modal fusion encoded data, first multi-modal fusion processed data based on a first processing sequence and second multi-modal fusion processed data based on a second processing sequence; concatenating the first multi-modal fusion processed data with the second multi-modal fusion processed data to obtain multi-modal fusion concatenated data; and obtaining the state classification result and the emotion classification result of the target object according to the multi-modal fusion concatenated data.
  2. The method of claim 1, wherein fusing the physiological encoded data and the electroencephalogram encoded data to obtain the multi-modal fusion encoded data comprises: performing cross-attention fusion on the physiological encoded data and the electroencephalogram encoded data to obtain electroencephalogram-physiological intermediate fusion data; and fusing the electroencephalogram-physiological intermediate fusion data and the electroencephalogram encoded data according to the feature dimension of the electroencephalogram-physiological intermediate fusion data and the feature dimension of the electroencephalogram encoded data, to obtain the multi-modal fusion encoded data.
  3. The method of claim 2, wherein performing cross-attention fusion on the physiological encoded data and the electroencephalogram encoded data to obtain the electroencephalogram-physiological intermediate fusion data comprises: performing cross attention between a query matrix determined from the electroencephalogram encoded data and a key matrix determined from the physiological encoded data to obtain a cross-attention weight matrix; weighting and summing a value matrix determined from the physiological encoded data using the cross-attention weight matrix to obtain context information for the query matrix; and obtaining the electroencephalogram-physiological intermediate fusion data from the context information of the query matrix and the electroencephalogram encoded data.
  4. The method of claim 2 or 3, wherein fusing the electroencephalogram-physiological intermediate fusion data and the electroencephalogram encoded data according to their feature dimensions to obtain the multi-modal fusion encoded data comprises: obtaining feature-dimension-based gating data according to the feature dimension of the electroencephalogram-physiological intermediate fusion data and the feature dimension of the electroencephalogram encoded data, wherein the gating data represents the components of the electroencephalogram-physiological intermediate fusion data and of the electroencephalogram encoded data used for fusion in each corresponding feature dimension; and performing weighted fusion of the electroencephalogram-physiological intermediate fusion data and the electroencephalogram encoded data using the gating data, to obtain the multi-modal fusion encoded data.
  5. The method of claim 4, wherein obtaining the feature-dimension-based gating data comprises: concatenating the electroencephalogram-physiological intermediate fusion data and the electroencephalogram encoded data along the feature dimension to obtain electroencephalogram-physiological spliced data; and performing a linear transformation on the electroencephalogram-physiological spliced data to obtain the gating data.
  6. A method according to any one of claims 1 to 3, further comprising: determining a first number of target physiological feature edge weights according to the ordering of at least one physiological feature edge weight, and obtaining the physiological feature map according to the target physiological feature edge weights and the physiological feature nodes corresponding to the target physiological feature edge weights; and/or determining the electroencephalogram feature edge weight between the electroencephalogram feature data corresponding to any two electroencephalogram channels according to frequency-domain electroencephalogram feature data and graph-domain electroencephalogram feature data included in the electroencephalogram feature data, wherein the frequency-domain electroencephalogram feature data represents the energy distribution of the electroencephalogram feature data over a plurality of preset physiological frequency bands, the graph-domain electroencephalogram feature data represents the connectivity of the electroencephalogram sampling moments corresponding to a plurality of electroencephalogram sampling values of the electroencephalogram feature data, and the connectivity of an electroencephalogram sampling moment is the number of other electroencephalogram sampling moments to which it can be connected, as determined according to the visibility relation within the electroencephalogram channel; determining a second number of target electroencephalogram feature edge weights according to the ordering of at least one electroencephalogram feature edge weight; and obtaining the electroencephalogram feature map according to the target electroencephalogram feature edge weights and the electroencephalogram feature nodes related to the target electroencephalogram feature edge weights.
  7. A method for training a state emotion classification model, comprising: encoding a sample physiological feature map to obtain sample physiological encoded data, wherein the sample physiological feature map is constructed from sample physiological feature data extracted from sample physiological signal data of a sample target object in each of a plurality of sample physiological channels, sample physiological feature nodes in the sample physiological feature map represent the sample physiological feature data of the sample physiological channels, and sample physiological feature edge weights of the sample physiological feature map represent the degree of association between the sample physiological feature data corresponding to the two connected sample physiological feature nodes; encoding a sample electroencephalogram feature map to obtain sample electroencephalogram encoded data, wherein the sample electroencephalogram feature map is constructed from sample electroencephalogram feature data extracted from sample electroencephalogram data of each of a plurality of sample electroencephalogram channels of the sample target object, sample electroencephalogram feature nodes of the sample electroencephalogram feature map represent the sample electroencephalogram feature data of the sample electroencephalogram channels, and sample electroencephalogram feature edge weights of the sample electroencephalogram feature map represent the degree of association between the sample electroencephalogram feature data corresponding to the two connected sample electroencephalogram feature nodes; fusing the sample physiological encoded data and the sample electroencephalogram encoded data to obtain sample multi-modal fusion encoded data; obtaining a sample state classification result and a sample emotion classification result of the sample target object according to the sample multi-modal fusion encoded data; and, based on a target loss function, training a deep learning model according to the sample state classification result, the sample emotion classification result, the sample multi-modal fusion encoded data, a state emotion classification feature, the sample physiological encoded data, the sample electroencephalogram encoded data and a sample label, wherein the state emotion classification feature is obtained according to the sample multi-modal fusion encoded data corresponding to the same sample label.
  8. The method of claim 7, wherein training the deep learning model based on the target loss function according to the sample state classification result, the sample emotion classification result, the sample multi-modal fusion encoded data, the state emotion classification feature, the sample physiological encoded data, the sample electroencephalogram encoded data and the sample label comprises: obtaining a target loss value according to the sample state classification result, the sample emotion classification result, the sample multi-modal fusion encoded data, the state emotion classification feature and the sample label, wherein the target loss function is determined from a state classification cross-entropy loss term, an emotion classification cross-entropy loss term, and at least one of a state emotion center contrast loss term or a cross-modal mutual information regularization loss term; the target loss value is correspondingly determined from a state classification cross-entropy loss value, an emotion classification cross-entropy loss value, and at least one of a state emotion center contrast loss value or a cross-modal mutual information regularization loss value; the state classification cross-entropy loss value is determined from the sample state classification result and the sample label, the emotion classification cross-entropy loss value is determined from the sample emotion classification result and the sample label, the state emotion center contrast loss value is determined from the sample multi-modal fusion encoded data and the state emotion classification feature, and the cross-modal mutual information regularization loss value is determined from the sample physiological encoded data and the sample electroencephalogram encoded data; and training the deep learning model according to the target loss value.
  9. A system for classifying a state emotion, the system comprising: a physiological signal data acquisition module configured to acquire physiological signal data of a target object; an electroencephalogram data acquisition module configured to acquire electroencephalogram data of the target object; and an electronic device comprising: one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-8.
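As an illustration only, the fusion pipeline of claims 2-5 (cross attention with the query from the EEG encoding and key/value from the physiological encoding, followed by feature-dimension gating) can be sketched as below. The identity Q/K/V projections, the residual connection, the sigmoid gate and the toy shapes are all assumptions; the claims specify only the operations, not their parameterization.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(eeg, phys):
    """Claims 2-3: query from EEG encoded data, key/value from physiological
    encoded data; identity projections used here for simplicity (assumption)."""
    q, k, v = eeg, phys, phys
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))  # cross-attention weight matrix
    context = attn @ v                              # weighted sum of the value matrix
    return context + eeg                            # residual -> intermediate fusion data

def gated_fuse(inter, eeg, w, b):
    """Claims 4-5: concatenate along the feature dimension, apply a linear
    transform, and use the result as per-dimension gating data (the sigmoid
    squashing is an assumption; the claims only say 'linear transformation')."""
    z = np.concatenate([inter, eeg], axis=-1)       # spliced data
    gate = 1.0 / (1.0 + np.exp(-(z @ w + b)))       # gating data in (0, 1)
    return gate * inter + (1.0 - gate) * eeg        # weighted fusion

# toy shapes: 4 EEG nodes, 6 physiological nodes, 8 features per node
rng = np.random.default_rng(0)
eeg = rng.normal(size=(4, 8))
phys = rng.normal(size=(6, 8))
inter = cross_attention_fuse(eeg, phys)
fused = gated_fuse(inter, eeg, rng.normal(size=(16, 8)), np.zeros(8))
```

Because the gate lies in (0, 1), each output feature is a convex combination of the intermediate fusion data and the EEG encoded data, which matches the "components used for fusion in the corresponding feature dimension" language of claim 4.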

Description

State emotion classification method, system and model training method

Technical Field

The invention relates to the technical field of state emotion classification, and in particular to a state emotion classification method, a state emotion classification system and a model training method.

Background

With the rapid development of intelligent cockpits, wearable devices and human-vehicle interaction systems, real-time monitoring of a driver's cognitive state has become a key link in improving safety and the human-vehicle interaction experience. In the long-distance transportation industry in particular, the driver's cognitive state (such as concentration and cognitive load) directly influences decision accuracy and response latency. However, methods for evaluating a driver's cognitive state in the related art suffer from strong hysteresis, difficulty of quantification and large individual differences.

Disclosure of Invention

In view of this, embodiments of the invention provide a state emotion classification method, a state emotion classification system and a model training method.
An aspect of embodiments of the invention provides a state emotion classification method, which comprises: encoding a physiological feature map to obtain physiological encoded data, wherein the physiological feature map is constructed from physiological feature data extracted from physiological signal data of a target object in a plurality of physiological channels, physiological feature nodes in the physiological feature map represent the physiological feature data of the physiological channels, and physiological feature edge weights of the physiological feature map represent the degree of association between the physiological feature data corresponding to the two connected physiological feature nodes; encoding an electroencephalogram feature map to obtain electroencephalogram encoded data, wherein the electroencephalogram feature map is constructed from electroencephalogram feature data extracted from electroencephalogram data of the target object in a plurality of electroencephalogram channels, electroencephalogram feature nodes of the electroencephalogram feature map represent the electroencephalogram feature data of the electroencephalogram channels, and electroencephalogram feature edge weights of the electroencephalogram feature map represent the degree of association between the electroencephalogram feature data corresponding to the two connected electroencephalogram feature nodes; fusing the physiological encoded data and the electroencephalogram encoded data to obtain multi-modal fusion encoded data; and obtaining a state classification result and an emotion classification result of the target object according to the multi-modal fusion encoded data.
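The feature-map construction described above (and elaborated in claim 6) can be sketched as follows. The natural-visibility rule for sampling-moment connectivity follows the claim's "visibility relation" language; using |Pearson correlation| as the degree-of-association edge weight is an assumption, since the disclosure does not fix the association measure.

```python
import numpy as np

def visibility_degrees(series):
    """Claim 6: the connectivity of a sampling moment is the number of other
    sampling moments it can 'see', i.e. every intermediate sample lies strictly
    below the straight line joining the pair (natural visibility relation)."""
    n = len(series)
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                series[k] < series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                for k in range(i + 1, j))
            if visible:
                deg[i] += 1
                deg[j] += 1
    return deg

def build_feature_graph(features, k):
    """Claims 1 and 6: nodes hold per-channel feature vectors; edge weights
    measure association (here |Pearson correlation|, an assumption), and only
    the k highest-ranked edges are kept ('target edge weights by ordering')."""
    n = features.shape[0]
    corr = np.abs(np.corrcoef(features))
    np.fill_diagonal(corr, 0.0)
    iu, ju = np.triu_indices(n, k=1)
    order = np.argsort(corr[iu, ju])[::-1][:k]  # rank edge weights, keep top k
    adj = np.zeros((n, n))
    for idx in order:
        i, j = iu[idx], ju[idx]
        adj[i, j] = adj[j, i] = corr[i, j]
    return adj
```

For example, `visibility_degrees([3, 1, 2])` connects all three pairs (the middle dip is below both joining lines), while a monotone series only connects adjacent samples.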
Another aspect of embodiments of the invention provides a training method for a state emotion classification model, which comprises: encoding a sample physiological feature map to obtain sample physiological encoded data, wherein the sample physiological feature map is constructed from sample physiological feature data extracted from sample physiological signal data of a sample target object in each of a plurality of sample physiological channels, sample physiological feature nodes in the sample physiological feature map represent the sample physiological feature data of the sample physiological channels, and sample physiological feature edge weights of the sample physiological feature map represent the degree of association between the sample physiological feature data corresponding to the two connected sample physiological feature nodes; encoding a sample electroencephalogram feature map to obtain sample electroencephalogram encoded data, wherein the sample electroencephalogram feature map is constructed from sample electroencephalogram feature data extracted from sample electroencephalogram data of each of a plurality of sample electroencephalogram channels of the sample target object, sample electroencephalogram feature nodes of the sample electroencephalogram feature map represent the sample electroencephalogram feature data of the sample electroencephalogram channels, and sample electroencephalogram feature edge weights of the sample electroencephalogram feature map represent the degree of association between the sample electroencephalogram feature data corresponding to the two connected sample electroencephalogram feature nodes; fusing the sample physiological encoded data and the sample electroencephalogram encoded data to obtain sample multi-modal fusion encoded data; obtaining a sample state classification result and a sample emotion classification result of the sample target object according to the sample multi-modal fusion encoded data; and, based on a target loss function, training a deep learning model according to the sample state classification result, the sample emotion classification result, the sample multi-modal fusion encoded data, the state emotion classification feature, the sample physiological encoded data and the sample electroencephalogram encoded data.
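The composite training objective of claims 7-8 can be sketched as below. The disclosure names the loss terms but not their formulas, so the margin-based center contrast form and the weighting `lam` are assumptions, and the cross-modal mutual-information regularizer is omitted for brevity.

```python
import numpy as np

def cross_entropy(logits, label):
    """Standard cross-entropy over softmax probabilities (claim 8's state and
    emotion classification cross-entropy loss terms)."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return -np.log(p[label])

def center_contrast_loss(embedding, centers, label, margin=1.0):
    """One reading of the 'state emotion center contrast loss item': pull the
    fused embedding toward the center of its own state-emotion class and push
    it away from the nearest other center (exact form not disclosed)."""
    pos = np.linalg.norm(embedding - centers[label])
    neg = min(np.linalg.norm(embedding - c)
              for i, c in enumerate(centers) if i != label)
    return pos + max(0.0, margin - neg)

def target_loss(state_logits, emotion_logits, embedding, centers,
                state_label, emotion_label, joint_label, lam=0.1):
    """Claim 8: state and emotion cross-entropy terms plus (here) the center
    contrast term, weighted by a hypothetical coefficient lam."""
    return (cross_entropy(state_logits, state_label)
            + cross_entropy(emotion_logits, emotion_label)
            + lam * center_contrast_loss(embedding, centers, joint_label))

# toy example: 2 state classes, 3 emotion classes, 4-dim fused embedding,
# 2 state-emotion class centers built from same-label fused encodings
centers = np.stack([np.ones(4), np.zeros(4)])
loss = target_loss(np.zeros(2), np.zeros(3), np.ones(4), centers, 0, 1, 0)
```

In the toy example the embedding sits exactly on its class center and well beyond the margin from the other center, so the contrast term vanishes and the loss reduces to the two uniform cross-entropy terms.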