
CN-122020338-A - Wearable power assisting equipment intention recognition system, recognition method and power assisting method

CN 122020338 A

Abstract

The invention provides an intention recognition system, an intention recognition method and a power assisting method for wearable power assisting equipment. The system comprises a flexible distributed sensing array and a processor. The flexible distributed sensing array is configured to be distributed at key parts of the exoskeleton wearing system and collects raw sensing signals reflecting the muscle activity of a user. The processor, connected to the flexible distributed sensing array, is configured to extract a multi-dimensional feature vector from the raw sensing signal, the multi-dimensional feature vector comprising at least two features selected from the group consisting of mean absolute value, root mean square, variance, slope, peak value and short-time Fourier transform energy, and to determine the action intention of the user based on the multi-dimensional feature vector. Muscle deformation signals that precede macroscopic joint motion can be captured, so that the motion intention of the user is predicted before the actual motion occurs, early assistance response is achieved, and the naturalness and fluency of human-machine interaction are greatly improved.
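The six window features named in the abstract can be sketched as follows. This is a minimal illustration, not the patent's implementation: the window length, sampling rate, Hann weighting and the STFT-energy definition (spectral energy of one windowed segment) are assumptions introduced here.

```python
import numpy as np

def feature_vector(window: np.ndarray, fs: float = 1000.0) -> np.ndarray:
    """Compute the multi-dimensional feature vector for one signal window:
    mean absolute value, root mean square, variance, slope, peak value,
    and short-time Fourier transform energy."""
    t = np.arange(window.size) / fs
    mav = np.mean(np.abs(window))            # mean absolute value
    rms = np.sqrt(np.mean(window ** 2))      # root mean square
    var = np.var(window)                     # variance
    slope = np.polyfit(t, window, 1)[0]      # slope of a linear fit
    peak = np.max(np.abs(window))            # peak value
    # STFT energy: spectral energy of the Hann-windowed segment (assumed form).
    spectrum = np.fft.rfft(window * np.hanning(window.size))
    stft_energy = np.sum(np.abs(spectrum) ** 2) / window.size
    return np.array([mav, rms, var, slope, peak, stft_energy])
```

Per claim 1, any two or more of these six entries suffice to form the feature vector fed to the intention-determination stage.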

Inventors

  • ZHUO DA
  • XU JIALIANG
  • ZHU JIE
  • RUI YUEFENG

Assignees

  • 安乃达驱动技术(上海)股份有限公司
  • 安德博智能科技(上海)有限公司

Dates

Publication Date
2026-05-12
Application Date
2026-04-13

Claims (10)

  1. A wearable power assisting equipment intention recognition system, comprising: a flexible distributed sensing array configured to be distributed at key parts of the wearable power assisting equipment and to collect raw sensing signals reflecting muscle activity of a user; and a processor coupled to the flexible distributed sensing array, the processor configured to: extract a multi-dimensional feature vector from the raw sensing signal, the multi-dimensional feature vector comprising at least two features selected from the group consisting of mean absolute value, root mean square, variance, slope, peak value and short-time Fourier transform energy; and determine an action intention of the user based on the multi-dimensional feature vector.
  2. The wearable power assisting equipment intention recognition system of claim 1, wherein the processor is configured to determine the user's action intention by: inputting the time series of multi-dimensional feature vectors into a prediction model to estimate the respective activation intensity of one or more muscles, forming a corresponding activation intensity sequence for each of the one or more muscles; and inputting the activation intensity sequences of the one or more muscles into a decision model to infer the action intention.
  3. The wearable power assisting equipment intention recognition system of claim 1, wherein, prior to extracting the multi-dimensional feature vector, the processor is further configured to dynamically baseline-compensate the raw sensing signal to eliminate low-frequency drift.
  4. The wearable power assisting equipment intention recognition system of claim 1, wherein the locations covered by the flexible distributed sensing array comprise one or more of an upper limb muscle group, a lower limb muscle group and a torso muscle group.
  5. A wearable power assisting equipment intention recognition method employing the wearable power assisting equipment intention recognition system according to any one of claims 1 to 4, comprising the steps of: acquiring raw sensing signals reflecting muscle activity of a user through a flexible distributed sensing array distributed at key parts of the wearable power assisting equipment; extracting a multi-dimensional feature vector from the raw sensing signal, the multi-dimensional feature vector comprising at least two features selected from the group consisting of mean absolute value, root mean square, variance, slope, peak value and short-time Fourier transform energy; and determining an action intention of the user based on the multi-dimensional feature vector.
  6. The wearable power assisting equipment intention recognition method of claim 5, wherein the step of determining the action intention of the user comprises: receiving a time series of multi-dimensional feature vectors; inputting the time series of multi-dimensional feature vectors into a prediction model to estimate the respective activation intensity of one or more muscles, forming a corresponding activation intensity sequence for each of the one or more muscles; and inputting the activation intensity sequences of the one or more muscles into a decision model to infer the action intention.
  7. The wearable power assisting equipment intention recognition method of claim 5 or 6, further comprising dynamically baseline-compensating the raw sensing signal prior to extracting the multi-dimensional feature vector.
  8. A wearable power assisting equipment power assisting method employing the wearable power assisting equipment intention recognition system according to any one of claims 1 to 4, the power assisting method comprising the steps of: collecting a raw sensing signal; performing dynamic baseline compensation and standardization; computing a multi-dimensional feature vector; inputting the feature vector into a prediction model to estimate muscle activation intensity; inputting the activation intensity into a decision model to infer the action intention; generating a control instruction for driving an actuator according to the determined action intention; and executing the control instruction by the actuator.
  9. The power assisting method of claim 8, wherein: a flexible distributed sensing array arranged on the quadriceps, gluteus maximus and plantar region collects tiny skin-surface deformations caused by muscle contraction and changes in the plantar pressure distribution, and converts them into continuous electrical signals to form the raw sensing signals; dynamic baseline compensation and standardization are performed; a multi-dimensional feature vector is computed; the prediction model judges whether the activation intensities of the quadriceps and gluteus maximus show a continuously increasing trend; and when the decision model detects that the activation intensities of the quadriceps and gluteus maximus simultaneously exceed a preset threshold and the plantar pressure centre tends to move backwards, the action intention is judged to be standing up.
  10. The power assisting method of claim 8, wherein: a flexible distributed sensing array arranged on the gluteus maximus and quadriceps collects tiny skin-surface deformations caused by muscle contraction and converts them into continuous electrical signals to form the raw sensing signals; dynamic baseline compensation and standardization are performed; a multi-dimensional feature vector is computed; the prediction model judges whether the activation intensity sequences of the gluteus maximus and quadriceps femoris exhibit periodic fluctuation consistent with the step frequency; the decision model identifies the user's real-time step frequency by analysing the periodic signal, and simultaneously estimates the user's exertion intensity in real time by monitoring the overall level of the peak amplitude, mean absolute value and root mean square; and a periodic power assisting torque command is generated according to the step frequency and exertion intensity decoded in real time.
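The claim-8 pipeline, with the claim-9 standing-up decision, can be sketched as follows. The moving-average baseline, the rest-calibration standardization, RMS as the activation proxy, and the threshold values are all illustrative assumptions; the patent does not fix these choices.

```python
import numpy as np

def remove_baseline(x: np.ndarray, k: int = 50) -> np.ndarray:
    """Dynamic baseline compensation (claims 3 and 7): subtract a
    moving-average baseline to eliminate low-frequency drift."""
    baseline = np.convolve(x, np.ones(k) / k, mode="same")
    return x - baseline

def standardize(x: np.ndarray, mu: float, sigma: float) -> np.ndarray:
    """Standardize against calibration statistics recorded at rest."""
    return (x - mu) / sigma

def activation(x: np.ndarray) -> float:
    """Assumed proxy for muscle activation intensity: RMS of the window."""
    return float(np.sqrt(np.mean(x ** 2)))

def infer_stand_up(quad: np.ndarray, glute: np.ndarray, cop_shift: float,
                   mu: float = 0.0, sigma: float = 1.0,
                   threshold: float = 0.5) -> bool:
    """Claim-9 style decision: both activation intensities exceed a preset
    threshold while the plantar pressure centre moves backwards
    (cop_shift < 0) -> the intention is judged to be standing up."""
    a_q = activation(standardize(remove_baseline(quad), mu, sigma))
    a_g = activation(standardize(remove_baseline(glute), mu, sigma))
    return bool(a_q > threshold and a_g > threshold and cop_shift < 0.0)
```

In a real controller the resulting boolean would gate the generation of the actuator control instruction; here it merely illustrates the compensate → standardize → estimate → decide ordering of claim 8.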

Description

Wearable power assisting equipment intention recognition system, recognition method and power assisting method

Technical Field

The invention belongs to the technical field of wearable equipment and human-machine interaction, and particularly relates to an intention recognition system, a recognition method and a power assisting method for wearable power assisting equipment.

Background

Exoskeleton robots, as advanced wearable devices, can enhance body functions or provide motion assistance to mobility-impaired individuals. One of their core technologies is accurately and quickly sensing the movement intention of the wearer. In the prior art, exoskeleton robot systems generally use sensors such as motor encoders, inertial measurement units or plantar pressure plates to obtain motion information of users. However, these sensors mainly capture "result variables" after movements such as joint angles and limb postures have already occurred, so the control response has an inherent lag; rapid assistance highly synchronised with the user's intention is difficult to achieve, and the smoothness of human-machine interaction suffers.

To address the problem of response delay, some schemes attempt to predict intention using bioelectric signals such as surface electromyographic or electroencephalographic signals. However, such signals are easily disturbed by sweat, electromagnetic interference and other environmental factors in practical applications, are extremely sensitive to electrode attachment positions, exhibit large individual differences and require frequent calibration, so their robustness and practicality are poor.
Other schemes attempt to fuse multiple kinds of sensor information, for example combining an inertial measurement unit with a myoelectric sensor, but the signal processing is often simplistic: the raw or lightly processed signal is fed directly into a classification model for a one-step judgement. Such an approach can hardly distinguish a user's "active muscle contraction" from the "passive interaction force" between the equipment and the human body, and cannot fundamentally solve the lag in intention prediction. The prior art therefore still has obvious shortcomings in the sensing capability, predictive lead and depth of signal interpretation of wearable power assisting equipment, which limits the accuracy and safety of motion assistance.

Disclosure of the Invention

In view of the defects of the prior art, the invention aims to provide an intention recognition system, a recognition method and a power assisting method for wearable power assisting equipment.

The invention provides a wearable power assisting equipment intention recognition system, comprising: a flexible distributed sensing array configured to be distributed at key parts of the wearable power assisting equipment and to collect raw sensing signals reflecting muscle activity of a user; and a processor coupled to the flexible distributed sensing array, the processor configured to: extract a multi-dimensional feature vector from the raw sensing signal, the multi-dimensional feature vector comprising at least two features selected from the group consisting of mean absolute value, root mean square, variance, slope, peak value and short-time Fourier transform energy, and determine an action intention of the user based on the multi-dimensional feature vector.
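The walking-assist embodiment of claim 10 decodes a step frequency and an exertion intensity from the periodic activation-intensity sequence and emits a periodic torque command. A minimal sketch follows; the FFT-based frequency estimate, the mean-absolute-value intensity proxy, the gain and the sinusoidal torque profile are assumptions introduced here, not details given in the patent.

```python
import numpy as np

def step_frequency(activation: np.ndarray, fs: float) -> float:
    """Dominant frequency (Hz) of the activation-intensity sequence,
    taken as the user's real-time step frequency."""
    a = activation - activation.mean()          # drop the DC component
    spectrum = np.abs(np.fft.rfft(a))
    freqs = np.fft.rfftfreq(a.size, d=1.0 / fs)
    return float(freqs[np.argmax(spectrum)])

def assist_torque(activation: np.ndarray, fs: float, t: float,
                  gain: float = 2.0) -> float:
    """Periodic power assisting torque command locked to the decoded step
    frequency and scaled by exertion intensity (mean absolute value of the
    detrended activation sequence)."""
    f_step = step_frequency(activation, fs)
    intensity = float(np.mean(np.abs(activation - activation.mean())))
    return gain * intensity * np.sin(2.0 * np.pi * f_step * t)
```

A real controller would also phase-align the torque profile to the gait cycle; that alignment step is outside the scope of this sketch.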
In a preferred embodiment, the processor is configured to determine the action intention of the user by: inputting the time series of multi-dimensional feature vectors into a prediction model to estimate the respective activation intensity of one or more muscles, forming a corresponding activation intensity sequence for each of the one or more muscles; and inputting the activation intensity sequences of the one or more muscles into a decision model to infer the action intention.

In a preferred embodiment, the processor is further configured to dynamically baseline-compensate the raw sensing signal to eliminate low-frequency drift prior to extracting the multi-dimensional feature vector.

In a preferred embodiment, the locations covered by the flexible distributed sensing array comprise one or more of an upper limb muscle group, a lower limb muscle group and a torso muscle group.

The exoskeleton intention recognition method provided by the invention comprises the following steps: acquiring raw sensing signals reflecting muscle activity of a user through a flexible distributed sensing array distributed at key parts of the wearable power assisting equipment; extracting a multi-dimensional feature vector from the raw sensing signal, the multi-dimensional feature vector comprising at least two features selected from the group consisting of mean absolute value, root mean square, variance, slope, peak value and short-time Fourier transform energy, and