
CN-122008156-A - Adaptive control method for an upper limb exoskeleton based on multi-modal data fusion

CN 122008156 A

Abstract

The invention discloses an adaptive control method for an upper limb exoskeleton based on multi-modal data fusion, and relates to the technical field of upper limb exoskeleton control. The method comprises: acquiring multi-modal data, including the user's electromyographic (EMG) signals, joint motion signals, and motor output torque time-series data; combining the joint motion signals and the motor output torque time series in an inverse dynamics model of the upper limb exoskeleton to generate a human-machine interaction torque observation set; adaptively fusing the EMG signals and the joint motion signals on the basis of the observation set to obtain a fusion weight set and predict the user's desired command; performing multi-objective collaborative optimization on the observation set and the desired command to generate a control parameter set; mapping the desired command to a joint torque control command through an admittance control model; and driving the upper limb exoskeleton according to that command.

Inventors

  • Guo Chao
  • An Tonghui
  • Liu Die
  • Xia Zegang

Assignees

  • 贵州交通职业大学

Dates

Publication Date
2026-05-12
Application Date
2026-04-13

Claims (10)

  1. An adaptive control method for an upper limb exoskeleton based on multi-modal data fusion, characterized by comprising the following steps: acquiring multi-modal data of the upper limb exoskeleton, the multi-modal data comprising user electromyographic (EMG) signals, joint motion signals, and motor output torque time-series data; performing inverse dynamics recursion on the joint motion signals and the motor output torque time-series data through a pre-established inverse dynamics model of the upper limb exoskeleton to generate a human-machine interaction torque observation set; adaptively fusing the user EMG signals and the joint motion signals on the basis of the interaction torque observation set to generate a fusion weight set of the upper limb exoskeleton, and performing predictive analysis to obtain a user desired command; performing multi-objective collaborative optimization on the interaction torque observation set and the user desired command to generate a control parameter set comprising an impedance control set and an assist ratio set; mapping the user desired command to a joint torque control command through an admittance control model based on the control parameter set; and driving the upper limb exoskeleton according to the joint torque control command. (Illustrative sketches of the observation, fusion, optimization, and admittance steps follow the claims list.)
  2. The adaptive control method for an upper limb exoskeleton based on multi-modal data fusion according to claim 1, wherein the specific steps of generating the human-machine interaction torque observation set are as follows: extracting a joint angular acceleration time-series set of the upper limb exoskeleton from the joint motion signals; inputting the joint motion signals and the joint angular acceleration time-series set, together with a structural parameter set stored in a database, into the inverse dynamics model of the upper limb exoskeleton, and computing a reference driving joint torque time-series set; performing loss compensation on the motor output torque time-series data to generate an actual output torque time-series set; and collaboratively integrating the reference driving joint torque time-series set and the actual output torque time-series set to generate the human-machine interaction torque observation set of the upper limb exoskeleton.
  3. The adaptive control method for an upper limb exoskeleton based on multi-modal data fusion according to claim 2, wherein the specific steps of computing the reference driving joint torque time-series set are as follows: time-synchronizing the joint motion signals, the joint angular acceleration time-series set, and the structural parameter set, and marking them as a combined input; performing layered torque calculation and collaborative synthesis on the combined input through the inverse dynamics model to generate a per-joint torque time-series set of the upper limb exoskeleton; and synchronously superposing the per-joint torque time-series set to generate the reference driving joint torque time-series set of the upper limb exoskeleton.
  4. The adaptive control method for an upper limb exoskeleton based on multi-modal data fusion according to claim 1, wherein the specific steps of generating the fusion weight set are as follows: constructing an interaction torque feature subset from the human-machine interaction torque observation set; extracting an EMG activation confidence and a joint motion confidence from the user EMG signals and the joint motion signals, respectively; adaptively allocating weights over the interaction torque feature subset, the EMG activation confidence, and the joint motion confidence to generate an initial weight set; and performing optimization suppression on the initial weight set to generate the fusion weight set of the upper limb exoskeleton.
  5. The adaptive control method for an upper limb exoskeleton based on multi-modal data fusion according to claim 4, wherein the specific steps of generating the initial weight set are as follows: inputting the interaction torque feature subset into a preset nonlinear mapping model and extracting a torque regulation factor set of the upper limb exoskeleton; and, based on the torque regulation factors, performing collaborative mapping and constraint processing on the EMG activation confidence and the joint motion confidence to generate the initial weight set of the upper limb exoskeleton.
  6. The adaptive control method for an upper limb exoskeleton based on multi-modal data fusion according to claim 5, wherein the nonlinear mapping model comprises an input layer, a feature fusion layer, and an activation output layer, and the specific steps of extracting the torque regulation factor set are as follows: receiving and preprocessing the interaction torque feature subset in the input layer; performing attention-based association on the preprocessed feature subset in the feature fusion layer to generate a torque aggregation feature vector of the upper limb exoskeleton; and applying a nonlinear transformation to the torque aggregation feature vector in the activation output layer to output the torque regulation factor set of the upper limb exoskeleton.
  7. The adaptive control method for an upper limb exoskeleton based on multi-modal data fusion according to claim 1, wherein the specific steps of obtaining the user desired command are as follows: performing intention calculation on the user EMG signals and the joint motion signals to generate a user intended-motion feature set of the upper limb exoskeleton; performing modal co-processing on the user intended-motion feature set and the joint motion signals based on the fusion weight set to obtain a fused motion feature set; and performing time-series trend fitting and predictive smoothing on the fused motion feature set to obtain the user desired command of the upper limb exoskeleton.
  8. The adaptive control method for an upper limb exoskeleton based on multi-modal data fusion according to claim 1, wherein the specific steps of generating the control parameter set are as follows: reading the joint motion signals and extracting a human-machine cooperation feature set of the upper limb exoskeleton by combining the human-machine interaction torque observation set and the user desired command; performing adaptive analysis, optimization, and verification on the human-machine cooperation feature set to generate the impedance control set; and performing demand adaptation on the human-machine cooperation feature set to generate the assist ratio set of the upper limb exoskeleton.
  9. The adaptive control method for an upper limb exoskeleton based on multi-modal data fusion according to claim 8, wherein the specific steps of generating the impedance control set are as follows: setting an optimization variable set and constructing a multi-objective optimization function by combining the human-machine cooperation feature set; performing single-objective conversion on the multi-objective optimization function to generate an initial impedance control set of the upper limb exoskeleton; and calibrating the initial impedance control set against preset constraint conditions to generate the impedance control set of the upper limb exoskeleton.
  10. The adaptive control method for an upper limb exoskeleton based on multi-modal data fusion according to claim 1, wherein the specific steps of generating the joint torque control command are as follows: inputting the user desired command and the impedance control set into the admittance control model and extracting an initial joint torque control command of the upper limb exoskeleton; and reading the human-machine interaction torque observation set and, in combination with the assist ratio set, correcting the initial joint torque control command to obtain the joint torque control command of the upper limb exoskeleton.
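
The patent publishes no equations for the inverse dynamics model of claims 2 and 3. As a minimal sketch, the Python below computes the textbook closed-form inverse dynamics of a two-link planar arm (shoulder and elbow) and forms one interaction-torque sample as the loss-compensated actual torque minus the reference torque; the parameter names, the gear ratio `gear`, and the efficiency-based loss model `eta` are all assumptions, not the patent's.

```python
import numpy as np

def inverse_dynamics_2link(q, dq, ddq, params):
    """Reference driving joint torques for a two-link planar arm
    (shoulder, elbow) via tau = M(q)*ddq + C(q,dq)*dq + g(q)."""
    m1, m2 = params["m"]            # link masses [kg]
    l1 = params["l1"]               # upper-arm length [m]
    lc1, lc2 = params["lc"]         # centre-of-mass offsets [m]
    I1, I2 = params["I"]            # link inertias about the CoM [kg*m^2]
    g = 9.81
    c2, s2 = np.cos(q[1]), np.sin(q[1])

    # Mass matrix of the standard two-link model
    M = np.array([
        [m1*lc1**2 + I1 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2) + I2,
         m2*(lc2**2 + l1*lc2*c2) + I2],
        [m2*(lc2**2 + l1*lc2*c2) + I2, m2*lc2**2 + I2],
    ])
    h = m2 * l1 * lc2 * s2
    C = np.array([-h*dq[1]*(2*dq[0] + dq[1]), h*dq[0]**2])   # Coriolis/centrifugal
    G = np.array([(m1*lc1 + m2*l1)*g*np.cos(q[0]) + m2*lc2*g*np.cos(q[0]+q[1]),
                  m2*lc2*g*np.cos(q[0]+q[1])])               # gravity
    return M @ ddq + C + G

def interaction_torque(tau_motor, q, dq, ddq, params, gear=50.0, eta=0.85):
    """One sample of the interaction torque observation: loss-compensated
    actual joint torque minus the inverse-dynamics reference torque."""
    tau_actual = eta * gear * tau_motor        # loss compensation (assumed model)
    return tau_actual - inverse_dynamics_2link(q, dq, ddq, params)

# Fabricated structural parameters and one static sample, for illustration only
params = {"m": (2.0, 1.5), "l1": 0.30, "lc": (0.15, 0.14), "I": (0.02, 0.01)}
tau_int = interaction_torque(np.array([0.20, 0.08]), np.array([0.5, 0.8]),
                             np.zeros(2), np.zeros(2), params)
print(tau_int)   # positive entries: the drive pushes harder than the model demands
```

Iterating this over the synchronized time series yields the observation set of claim 2.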
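For the adaptive fusion of claims 4 and 5, one plausible reading is confidence-weighted allocation modulated by a torque regulation factor. The sketch below assumes a sigmoid as the "nonlinear mapping model" and a floor-and-renormalize step as the "optimization suppression"; both forms are illustrative guesses, not disclosed by the patent.

```python
import numpy as np

def fusion_weights(tau_feat, emg_conf, kin_conf, beta=4.0, w_floor=0.05):
    """Allocate EMG vs. joint-motion fusion weights from the two channel
    confidences, modulated by a torque regulation factor (assumed forms)."""
    # Nonlinear mapping of a normalised interaction-torque feature (0..1) to a
    # regulation factor: high interaction torque shifts trust to kinematics.
    reg = 1.0 / (1.0 + np.exp(-beta * (tau_feat - 0.5)))
    w = np.array([(1.0 - reg) * emg_conf,      # EMG weight
                  reg * kin_conf])             # joint-motion weight
    w = np.clip(w, w_floor, None)              # suppression: floor noisy channels
    return w / w.sum()                         # normalised fusion weight pair

print(fusion_weights(0.8, emg_conf=0.7, kin_conf=0.9))  # kinematics-dominant case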
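Claim 9 converts a multi-objective function into a single objective and calibrates the result against constraints. A hedged sketch with placeholder costs (the patent does not disclose its objective function): a weighted sum of a tracking term and an interaction-torque comfort term, searched over a stiffness grid, with damping set for critical damping and the result clipped to preset limits.

```python
import numpy as np

def impedance_from_multiobjective(track_err, tau_int, m_eff=1.0, zeta=1.0,
                                  w_track=0.6, w_comfort=0.4,
                                  k_grid=np.linspace(10.0, 200.0, 96),
                                  k_lim=(10.0, 200.0)):
    """Single-objective conversion of two competing goals (tracking accuracy vs.
    low interaction torque); the cost terms are placeholders, not the patent's."""
    k_ref = k_grid.mean()
    # Higher stiffness -> smaller tracking error; lower stiffness -> more compliance.
    J = (w_track * track_err**2 * (k_ref / k_grid)
         + w_comfort * tau_int**2 * (k_grid / k_ref))
    k = float(np.clip(k_grid[np.argmin(J)], *k_lim))     # constraint calibration
    b = 2.0 * zeta * np.sqrt(m_eff * k)                  # critically damped pair
    return k, b

print(impedance_from_multiobjective(track_err=0.05, tau_int=0.5))
```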
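For claim 10, a common admittance rendering is sketched below: the impedance set (M, B, K) filters the desired command into a compliant reference, a PD stage maps the reference to an initial joint torque command, and the assist ratio together with the observed interaction torque corrects it. The PD mapping and the sign convention of the correction term are assumptions.

```python
class AdmittanceJoint:
    """Discrete single-joint admittance loop (illustrative, not the patent's)."""
    def __init__(self, M=1.0, B=8.0, K=60.0, kp=40.0, kd=4.0, dt=0.005):
        self.M, self.B, self.K = M, B, K          # impedance set
        self.kp, self.kd, self.dt = kp, kd, dt    # PD gains, sample time
        self.q_ref, self.dq_ref = 0.0, 0.0        # compliant reference state

    def step(self, q_des, q, dq, tau_int, assist_ratio=0.6):
        # Admittance dynamics: M*ddq_ref + B*dq_ref + K*(q_ref - q_des) = tau_int
        ddq_ref = (tau_int - self.B*self.dq_ref
                   - self.K*(self.q_ref - q_des)) / self.M
        self.dq_ref += ddq_ref * self.dt
        self.q_ref += self.dq_ref * self.dt
        # Initial torque command from the compliant reference (assumed PD mapping)
        tau0 = self.kp*(self.q_ref - q) + self.kd*(self.dq_ref - dq)
        # Assist-ratio correction against the observed interaction torque
        return assist_ratio * tau0 - (1.0 - assist_ratio) * tau_int

joint = AdmittanceJoint()
print(joint.step(q_des=0.3, q=0.25, dq=0.0, tau_int=0.5))
```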

Description

Adaptive control method for an upper limb exoskeleton based on multi-modal data fusion

Technical Field

The invention relates to the technical field of upper limb exoskeleton control, in particular to an adaptive control method for an upper limb exoskeleton based on multi-modal data fusion.

Background

With the continuing progress of robotics, sensor technology, and human-machine interaction, upper limb exoskeleton systems show broad application prospects in medical rehabilitation, strength augmentation, operation assistance, and related fields. Natural, efficient, ergonomic human-machine collaboration hinges on the exoskeleton's ability to accurately understand the user's intention and provide compliant assistance matched to it. Because human upper limb movement is highly flexible and complex, accurate perception of motion intention and adaptive control of the exoskeleton are core to improving the practicality and user experience of such equipment. In recent years, multi-modal data fusion has become a research hotspot in this field: multi-source information reflects human motion intention more comprehensively and provides richer input dimensions for exoskeleton control.

Patent application CN120382467A discloses an intelligent wearable exoskeleton adaptive assistance regulation method and exoskeleton system. It obtains motion data samples of human walking; constructs a human motion model from the samples based on a human-machine closed-chain motion model; collects multi-modal data of the target user in real time while the exoskeleton is worn; invokes the human motion model and the human-machine closed-chain motion model to generate a human-machine motion-coordinated exoskeleton control instruction from the multi-modal data; and dynamically adjusts the exoskeleton's motion control parameters according to that instruction. By building an accurate human motion model on a human-machine closed-chain motion model and generating coordinated control instructions from it, the method can match the body types, muscle strength levels, and movement habits of different users, effectively improving wearing comfort and safety.
The limitations of this prior art include at least the following. No dedicated dynamics model is built for the motion characteristics of the upper limb, and multi-source heterogeneous data such as EMG, joint motion, and motor torque are not deeply fused with adaptive weight allocation; as a result, the human-machine interaction torque is difficult to observe accurately, recognition of the user's motion intention suffers hysteresis and bias, and the control instruction becomes disconnected from the user's actual motion. The prior art also does not generate, through multi-objective collaborative optimization, an impedance control set and an assist ratio set matched to upper limb motion, nor does it achieve precise torque mapping through an admittance control model, so the exoskeleton's assistive torque is mismatched with the user's real-time interaction torque and motion demands, and human-machine cooperation lacks smoothness. Finally, it lacks fine-grained processing such as loss compensation and layered torque calculation for upper limb motion, so control precision and individual adaptability are poor, and adaptive assistance requirements in complex, fine upper limb motion scenarios are difficult to meet.

Disclosure of Invention

Aiming at the defects of the prior art, the invention provides an adaptive control method for an upper limb exoskeleton based on multi-modal data fusion, which addresses the prior art's lack of a dedicated upper limb model and deep multi-data fusion, its low control precision and poor adaptability, and its difficulty in meeting the fine assistance requirements of the upper limb. The method comprises the following steps: obtaining multi-modal data of the upper limb exoskeleton, the multi-modal data comprising user EMG signals, joint motion signals, and motor output torque time-series data; and performing inverse dynamics recursion through a pre-established inverse dynamics model of the upper limb exoskeleton, based on the joint motion signals and the motor output torque time-series data, to generate a human-machine interaction torque observation set.
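
To make the ordering of these steps concrete, here is a minimal, self-contained loop with placeholder stand-ins for each stage; all numeric models are fabricated for illustration and would be replaced by the sketches given after the claims or by real implementations.

```python
import numpy as np

# Stage stand-ins (assumed, single-joint): observation and intention prediction
def observe_tau_int(q, dq, ddq, tau_motor):
    # loss-compensated actual torque minus a toy inverse-dynamics reference
    return 0.85 * 50.0 * tau_motor - (2.0 * ddq + 9.81 * 0.3 * np.cos(q))

def predict_command(w, emg, q):
    return q + w[0] * 0.05 * emg                 # placeholder trend step

q, dq, ddq = 0.4, 0.1, 0.0                       # one fake joint sample
emg, tau_motor = 0.3, 0.05

tau_int = observe_tau_int(q, dq, ddq, tau_motor)            # step 1: observation
w = np.array([0.6, 0.4])                                    # step 2: fusion weights
q_des = predict_command(w, emg, q)                          # step 2: desired command
K, B, assist = 60.0, 8.0, 0.6                               # step 3: parameter set
tau_cmd = (assist * (K * (q_des - q) - B * dq)
           - (1.0 - assist) * tau_int)                      # step 4: admittance map
print(f"joint torque command: {tau_cmd:.3f} N*m")           # step 5: drive control
```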