CN-121971075-A - Method for decoding motion intention based on multi-mode information
Abstract
The invention belongs to the technical field of physiological electrical signals, and particularly relates to a method for decoding motion intention based on multimodal information. When processing the multimodal digital signals, the method first combines the video image signal and the motion signal to classify the motion state as a fine motion or a non-fine motion. If the motion is a fine motion, the decoding result of the motion intention is obtained by combining the video image signal and the electroencephalogram signal; if the motion is a non-fine motion, the decoding result is obtained by combining the motion signal and the electroencephalogram signal. Compared with the flat decoding result of a single model, the features are more focused: each classification model attends only to the core features required for its current classification task and is matched with a better-suited model and feature set, so recognition accuracy is improved.
Inventors
- XU HONGLAI
- LIU TAO
- WEI JIAN
- ZHANG XIRUI
Assignees
- 博睿康技术(上海)股份有限公司
Dates
- Publication Date
- 20260505
- Application Date
- 20260401
Claims (14)
- 1. A method for decoding a motion intention based on multimodal information, comprising: acquiring multimodal digital signals including a video image signal, a motion signal and an electroencephalogram signal; acquiring a classification result of the motion state by combining the video image signal and the motion signal, wherein the classification result is a fine motion or a non-fine motion; and obtaining a decoding result of the motion intention, namely: if the motion is a fine motion, obtaining the decoding result of the motion intention by combining the video image signal and the electroencephalogram signal; if the motion is a non-fine motion, obtaining the decoding result of the motion intention by combining the motion signal and the electroencephalogram signal.
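The two-stage routing in claim 1 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the model objects (`coarse_model`, `fine_model`, `gross_model`) and their `predict` interface are hypothetical stand-ins for the first, second and third classification models described later in the claims.

```python
import numpy as np

def decode_intent(video_feat, motion_feat, eeg_feat,
                  coarse_model, fine_model, gross_model):
    """Two-stage decoding: first classify the motion state from video +
    motion features, then pick the signal pair used for intent decoding."""
    # Stage 1: fine vs. non-fine classification from video + motion features
    state = coarse_model.predict(np.concatenate([video_feat, motion_feat]))
    if state == "fine":
        # Fine motion: the video signal carries the discriminative detail,
        # so it is fused with the EEG signal
        return fine_model.predict(np.concatenate([video_feat, eeg_feat]))
    # Non-fine motion: motion-sensor kinematics are fused with the EEG signal
    return gross_model.predict(np.concatenate([motion_feat, eeg_feat]))
```

The point of the routing is that each downstream model only ever sees the signal pair that is informative for its branch, which is what the abstract means by "each classification model only focuses on core features required by current classification".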
- 2. The method of claim 1, further comprising constructing a multi-level classification model library, wherein the multi-level classification model library comprises a primary model library and a secondary model library; the primary model library comprises at least one first classification model, and the secondary model library comprises at least one second classification model and at least one third classification model; the first classification model combines the video image signal and the motion signal to obtain the classification result of the motion state; the second classification model obtains a classification result of motion information based on the video image signal or the motion signal; and the third classification model obtains a classification result of neural information based on the electroencephalogram signal.
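One plain way to organize the two-level library of claim 2 is a nested mapping keyed first by level and then by motion state. The layout and key names below are hypothetical; the patent only specifies the logical grouping, not a data structure.

```python
# Hypothetical layout of the multi-level classification model library.
# Level 1 holds the first classification model (fine vs. non-fine);
# level 2 holds one (second, third) model pair per motion-state category.
model_library = {
    "level1": {"state_classifier": None},           # first classification model
    "level2": {
        "fine":     {"motion_info": None,           # second model: video-based
                     "neural_info": None},          # third model: EEG-based
        "non-fine": {"motion_info": None,           # second model: motion-based
                     "neural_info": None},          # third model: EEG-based
    },
}

def select_models(library, state):
    """Return the (motion-info, neural-info) model pair for a motion state."""
    group = library["level2"][state]
    return group["motion_info"], group["neural_info"]
```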
- 3. The method of claim 2, wherein acquiring the classification result of the motion state by combining the video image signal and the motion signal comprises: time-synchronizing the motion information, namely obtaining the motion signal corresponding to each video frame according to the sampling rate of the motion sensor and the sampling rate of the video image signal; acquiring a first motion information feature based on the video image signal; acquiring a second motion information feature based on the motion signal; taking the first motion information feature and the second motion information feature as input values of the first classification model, and the classification result of the motion state as its output value; training the first classification model on test data of both the video image signal and the motion signal; and detecting real-time data of both signals with the trained first classification model to obtain the classification result of the motion state.
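The time-synchronization step of claim 3 can be sketched as a sample-index mapping, assuming both streams share a common start time. The function name and the nearest-sample policy are illustrative assumptions; the patent only requires that each video frame be paired with a corresponding motion sample via the two sampling rates.

```python
import numpy as np

def sync_motion_to_frames(motion_fs, video_fps, n_frames):
    """Map each video frame index to the nearest motion-sensor sample index,
    assuming both streams start at t = 0 (claim 3's time synchronization)."""
    frame_times = np.arange(n_frames) / video_fps        # frame timestamps (s)
    sample_idx = np.round(frame_times * motion_fs).astype(int)
    return sample_idx

# e.g. a 1000 Hz motion sensor against 25 fps video: frame k uses sample 40*k
idx = sync_motion_to_frames(1000, 25, 4)
```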
- 4. The method of claim 3, wherein acquiring the first motion information feature based on the video image signal comprises: acquiring the information data of each frame of the video image signal; optically tracking marker points or markers so as to map the identified sites to video image sensor locations; locating the marked pixel points by image processing to calculate attitude information, position information and speed information; and inputting the attitude information, the position information and the speed information into a first optimization model for optimization, so as to obtain the first motion information feature. Acquiring the second motion information feature based on the motion signal comprises: acquiring the information data of the motion sensor at each moment; obtaining attitude information by integrating the angular velocity data; obtaining speed information from the attitude information and the acceleration data; integrating the speed information to obtain position information; and inputting the attitude information, the speed information and the position information into the first optimization model for optimization, so as to obtain the second motion information feature. The first optimization model comprises at least one of a Kalman filtering model and a particle filtering model.
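The integration chain of claim 4 (angular velocity → attitude, acceleration → velocity, velocity → position) can be sketched in a 1-D toy form. This is a deliberate simplification: a real strapdown implementation would use quaternion attitude, rotate body-frame acceleration into the world frame, subtract gravity, and smooth everything with the Kalman or particle filter the claim names; none of that is shown here.

```python
import numpy as np

def dead_reckon(gyro, accel, dt):
    """Naive 1-D dead-reckoning sketch of claim 4: integrate angular
    velocity to attitude, acceleration to velocity, velocity to position."""
    gyro = np.asarray(gyro, float)
    accel = np.asarray(accel, float)
    attitude = np.cumsum(gyro) * dt       # rad, integral of angular velocity
    velocity = np.cumsum(accel) * dt      # m/s, acceleration integrated once
    position = np.cumsum(velocity) * dt   # m, velocity integrated again
    return attitude, velocity, position
```

Because raw double integration drifts quickly, feeding these quantities through a Kalman or particle filter, as the claim's "first optimization model" does, is what keeps the motion information feature usable.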
- 5. The method of claim 2, wherein the secondary model library is divided into at least two secondary model groups according to the classification result of the motion state, each secondary model group comprising one second classification model and one third classification model; and for the second and third classification models in the same secondary model group, the classification results of the two models have the same number of categories.
- 6. The method of claim 5, wherein obtaining the classification result of the motion information based on the video image signal comprises: selecting a secondary model group, namely secondary model group I, according to the action categories of the fine actions, the group comprising second classification model I and third classification model I; acquiring a third motion information feature based on the video image signal; taking the third motion information feature as the input value of second classification model I and the action category of the fine action as its output value; training second classification model I on test data of the video images corresponding to the fine actions; and detecting real-time data of the video image signal with second classification model I to obtain the classification probability of the motion information.
- 7. The method of claim 5, wherein obtaining the classification result of the motion information based on the motion signal comprises: selecting a secondary model group, namely secondary model group II, according to the action categories of the non-fine actions, the group comprising second classification model II and third classification model II; acquiring a fourth motion information feature based on the motion signal; taking the fourth motion information feature as the input value of second classification model II and the action category of the non-fine action as its output value; training second classification model II on test data of the motion signal corresponding to the non-fine actions; and detecting real-time data of the motion signal with second classification model II to obtain the classification probability of the motion information.
- 8. The method according to claim 6 or 7, wherein obtaining the classification result of the neural information based on the electroencephalogram signal comprises: segmenting the test data of the electroencephalogram signal to obtain the neural information features of each segment and their action category labels; segmenting the real-time data of the electroencephalogram signal in the same way to obtain the neural information features of each real-time data segment; selecting the corresponding third classification model in the secondary model group according to the action category of the fine action or of the non-fine action; inputting the neural information features of a real-time data segment into the corresponding third classification model to obtain the classification probability of the current data segment and the classification probability of the previous data segment; counting the transitions between action categories from the action category labels of the test data segments; obtaining a transition probability matrix over the action categories from the transition counts; and fusing the classification probability of the current data segment, the classification probability of the previous data segment and the transition probability matrix with a second optimization model to obtain the optimized classification probability of the neural information, wherein the second optimization model is configured as a hidden Markov model.
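The transition-matrix construction and the HMM-style smoothing of claim 8 can be sketched as below. The function names are hypothetical, and the smoother is a single forward step (predict with the transition matrix, reweight by the current classifier output), which is one common reading of "fusing the current probability, the previous probability and the transition matrix"; the patent does not pin down the exact update rule.

```python
import numpy as np

def transition_matrix(labels, n_classes):
    """Count label-to-label transitions in the test-data segments (claim 8)
    and row-normalize the counts into a transition probability matrix."""
    counts = np.zeros((n_classes, n_classes))
    for a, b in zip(labels[:-1], labels[1:]):
        counts[a, b] += 1
    row = counts.sum(axis=1, keepdims=True)
    # rows with no observed transitions fall back to a uniform distribution
    return np.divide(counts, row,
                     out=np.full_like(counts, 1.0 / n_classes),
                     where=row > 0)

def hmm_smooth(prev_prob, cur_prob, A):
    """One HMM forward step: propagate the previous segment's posterior
    through A, then reweight by the current segment's classifier output."""
    pred = prev_prob @ A        # prior for the current segment
    post = pred * cur_prob      # fuse with the current classification prob.
    return post / post.sum()    # renormalize to a probability distribution
```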
- 9. The method of claim 8, wherein the decoding result of the motion intention is obtained by fusing the classification probability of the corresponding motion information with the classification probability of the neural information according to the classification result of the motion state, comprising: setting a recognition frame Θ = {A_1, A_2, ..., A_n}, wherein A_j is the j-th target state, i.e. an action category of the fine or non-fine actions, and n represents the number of state categories; respectively acquiring the corresponding classification probability distributions according to the action categories of the fine action or the non-fine action; setting the classification probability of the i-th information source as p_i(A_j), wherein p_i(A_j) is the classification probability of the i-th information source for state A_j and satisfies Σ_{j=1..n} p_i(A_j) = 1; converting the classification probabilities into basic probability assignments, i.e. m_i(A_j) = p_i(A_j), wherein m_i(A_j) is the support of the i-th information source for state A_j; and setting the fusion result of two information sources m_1 and m_2 as m(C) = Σ_{A∩B=C} m_1(A)·m_2(B) / (1 − K), wherein the conflict coefficient is K = Σ_{A∩B=∅} m_1(A)·m_2(B), A, B and C are subsets of the recognition frame, K indicates the degree of conflict between the different information sources, and m(C) represents the probability distribution after fusion, namely the decoding result of the motion intention.
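For singleton hypotheses (each mass assigned to exactly one action category, as when classifier probabilities are used directly as basic probability assignments), the combination rule in claim 9 reduces to the short sketch below. The function name is illustrative, and the singleton restriction is an assumption; a full Dempster-Shafer implementation over arbitrary subsets of the recognition frame would be more involved.

```python
import numpy as np

def ds_fuse(m1, m2):
    """Dempster's combination rule for two sources over singleton hypotheses
    (claim 9): the conflict coefficient K sums support for disagreeing
    categories, and the fused mass renormalizes agreement by 1 - K."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    agree = m1 * m2              # support where both sources pick the same A_j
    K = 1.0 - agree.sum()        # conflict: sum over pairs with A ∩ B = ∅
    if K >= 1.0:
        raise ValueError("total conflict: sources cannot be fused")
    return agree / (1.0 - K)
```

For example, fusing motion-information probabilities [0.8, 0.2] with neural-information probabilities [0.6, 0.4] gives K = 0.44 and a sharpened fused distribution of roughly [0.857, 0.143], illustrating how agreement between sources is amplified.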
- 10. A motion intention decoding system, comprising: a processor provided with a storage module, the processor being adapted to run the steps of the method according to any one of claims 1-9 to obtain the decoding result of the motion intention; and the storage module being adapted to store the multimodal digital signals and the multi-level classification model library so that the corresponding classification model can be invoked when the method is run.
- 11. An implantable rehabilitation training device, comprising: an electrode sensor implanted in the body and adapted to acquire intracranial electroencephalogram signals; a video sensor located outside the body and adapted to collect video image signals; a motion sensor located on a training peripheral and used to acquire motion signals; a lower computer comprising an in-vivo unit and an in-vitro unit, the in-vivo unit receiving the electroencephalogram signals and transmitting them to the in-vitro unit; an upper computer provided with the motion intention decoding system according to claim 10 so as to obtain the decoding result of the motion intention; and the training peripheral, which executes a training scheme according to the decoding result of the motion intention.
- 12. A computer device comprising a memory, a processor and a computer program stored on the memory, characterized in that the processor executes the computer program to carry out the steps of the method according to any one of claims 1-9.
- 13. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method according to any of claims 1-9.
- 14. A computer program product comprising a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1-9.
Description
Method for decoding motion intention based on multi-mode information

Technical Field
The invention belongs to the technical field of physiological electrical signals, and particularly relates to a method for decoding motion intention based on multimodal information.

Background
Decoding the user's motion intention in real time from neural signals has become a core technology for limb function reconstruction, peripheral control, neural rehabilitation and the like. Because decoding motion intention from a single-modality signal is easily disturbed by external interference and its accuracy thereby reduced, real-time motion intention decoding often improves accuracy through multimodal signal fusion. For example, patent CN117390575A discloses a motion intention recognition and rehabilitation method based on a brain-computer interface and multimodal interactive AI, which combines electroencephalogram information, electromyographic signals, motion signals and voice signals of normal and abnormal motions to train a deep learning model capable of motion intention recognition and motion rehabilitation, thereby improving the accuracy of motion intention recognition and the rehabilitation effect. As another example, patent CN119026014A discloses a method for detecting motion intention, a medical robot system, a storage medium and a product, which fuse the complementary advantages of electroencephalogram and electromyogram signals, overcoming the weakness of the electroencephalogram signal and the susceptibility of the electromyogram signal to the user's degree of muscle fatigue, and thus improving the accuracy of motion intention recognition.
As a further example, patent CN120448748B discloses a method and a device for configuring a motion intention decoding model based on electroencephalogram data; during training, the motion posture data obtained from the electroencephalogram data and from video analysis are jointly analyzed, so that the model system can rapidly capture the association between the neural signal and the motion trajectory, comprehensively capture the motion intention, and markedly improve recognition precision for various complex motion patterns. Clearly, compared with a single-modality signal, multimodal fusion of the electroencephalogram with other signals can effectively improve the accuracy of motion intention decoding and the robustness of the system, but the classification model becomes more complex, and it is difficult to find an optimal combination for different training actions simultaneously.

Disclosure of Invention
The invention provides a method for decoding motion intention based on multimodal information, aiming to solve this combination optimization problem for classification models. To solve the technical problem, the invention provides a method for decoding motion intention based on multimodal information, which comprises: acquiring multimodal digital signals, including a video image signal, a motion signal and an electroencephalogram signal; obtaining the classification result of the motion state, namely a fine motion or a non-fine motion, by combining the video image signal and the motion signal; and obtaining the decoding result of the motion intention, namely by combining the video image signal and the electroencephalogram signal if the motion is a fine motion, and by combining the motion signal and the electroencephalogram signal if the motion is a non-fine motion.
The method further comprises constructing a multi-level classification model library, wherein the multi-level classification model library comprises a primary model library and a secondary model library; the primary model library comprises at least one first classification model, and the secondary model library comprises at least one second classification model and at least one third classification model; the first classification model combines the video image signal and the motion signal to obtain the classification result of the motion state, the second classification model obtains the classification result of the motion information based on the video image signal or the motion signal, and the third classification model obtains the classification result of the neural information based on the electroencephalogram signal. Further, obtaining the classification result of the motion state by combining the video image signal and the motion signal comprises time-synchronizing the motion information, namely obtaining the motion signal corresponding to each video frame according to the sampling rate of the motion sensor and the sampling rate of the video image signal.