CN-121973156-A - Intelligent wearable robot system based on multimodal large model and human-in-the-loop control and implementation method

CN 121973156 A

Abstract

The invention discloses an intelligent wearable robot system based on a multimodal large model and human-in-the-loop control, together with an implementation method thereof. A device-edge-cloud collaborative framework is constructed in which edge nodes respond rapidly to user requests while a cloud-side large model performs high-complexity reasoning and global optimization. Device-side equipment exchanges model parameters with the cloud over a secure communication protocol, and data privacy is protected by combining federated learning with differential privacy, so that sensitive data are processed locally and transmitted securely while cloud-side model updates raise the global level of intelligence. Across application scenarios such as medical rehabilitation, industrial assistance, military operations, and daily-living assistance, the system markedly improves intention-recognition accuracy, control precision, interaction naturalness, and human-robot collaboration safety, and therefore has broad application prospects and practical value.

Inventors

  • XIA HAISHENG
  • CAI GUANJIE
  • LI ZHIJUN

Assignees

  • Tongji University (同济大学)

Dates

Publication Date
2026-05-05
Application Date
2026-03-16

Claims (10)

  1. An intelligent wearable robot system based on a multimodal large model and human-in-the-loop control, characterized in that a device-edge-cloud collaborative framework is constructed: edge nodes respond rapidly to user requests; a cloud-side large model performs high-complexity reasoning and global optimization; device-side equipment exchanges model parameters with the cloud over a secure communication protocol; and data privacy is protected by combining federated learning with differential privacy. The device side comprises the wearable mechanism and actuators of the intelligent wearable robot and a multimodal sensing module deployed on the robot body. The cloud-side large model comprises a multimodal large-model processing module and a human-in-the-loop control module, which bring the user's physiological state and subjective feedback into the control closed loop and enable secure cross-terminal collaborative training. The edge node comprises a feedback module that monitors actuator state in real time and returns it to the multimodal large-model processing module and the human-in-the-loop control module, forming closed-loop control.
  2. The intelligent wearable robot system based on a multimodal large model and human-in-the-loop control according to claim 1, characterized in that the multimodal large-model processing module performs spatio-temporal feature fusion, user intention recognition, and environment state perception on multimodal data using a deep neural network architecture, and achieves dynamic updating and personalized decision optimization through large-model fine-tuning and federated learning; and the human-in-the-loop control module constructs a hybrid control strategy comprising impedance control, model predictive control, iterative learning control, and reinforcement-learning-based online parameter adjustment, dynamically adjusting control parameters according to the user's physiological feedback and task requirements to achieve human-robot interaction and motion assistance synchronized with the user.
  3. The intelligent wearable robot system based on a multimodal large model and human-in-the-loop control according to claim 2, characterized in that the multimodal large-model processing module adopts a lightweight Transformer or a multimodal fusion neural network, uses a multi-head attention mechanism to achieve deep fusion and correlation mining across features of different modalities, and achieves low-power, high-real-time deployment on local wearable devices through methods including parameter pruning, model quantization, and distillation.
  4. The intelligent wearable robot system based on a multimodal large model and human-in-the-loop control according to claim 2, characterized in that the human-in-the-loop control module further comprises: a real-time physiological feedback unit that dynamically adjusts the assist force or control strategy according to indices including the user's heart rate, respiratory rate, and muscle fatigue level; a user active-input interface, comprising voice, gesture recognition, or a touch panel, through which the user independently adjusts movement mode, speed, strength, or assistance level; and an adaptive control engine that updates control parameters online based on a reinforcement learning strategy to optimize user experience and system energy efficiency.
  5. The intelligent wearable robot system based on a multimodal large model and human-in-the-loop control according to claim 1, characterized in that the multimodal sensing module comprises: a high-definition RGB/depth camera for identifying the position, shape, and motion trajectory of environmental objects; an array microphone for receiving user voice commands and detecting characteristics of environmental sound sources; an sEMG sensor for acquiring the user's muscle electrical activity signals to predict movement intent; and physiological sensors for real-time monitoring of user physiological states including heart rate, brain waves, and blood-flow characteristics.
  6. The intelligent wearable robot system based on a multimodal large model and human-in-the-loop control according to claim 1, characterized in that the feedback module comprises a force sensor, a position sensor, an angle encoder, and an inertial measurement unit.
  7. The intelligent wearable robot system based on a multimodal large model and human-in-the-loop control according to claim 1, characterized in that the actuator module adopts high-power-density brushless motors or flexible actuators, and reduces overall weight through carbon-fiber composite materials and lightweight structural design.
  8. The intelligent wearable robot system based on a multimodal large model and human-in-the-loop control according to claim 1, characterized in that each module is provided with a pluggable hardware interface for hardware adaptation and expansion in different application scenarios.
  9. An implementation method of the intelligent wearable robot system based on a multimodal large model and human-in-the-loop control, characterized by comprising the following steps: S1, multimodal data acquisition and preprocessing: acquire vision, voice, IMU, sEMG, EEG, and ECG signals through the multimodal sensing module, and perform preprocessing including time-sequence alignment, noise filtering, feature standardization, and artifact removal; S2, multimodal deep fusion and inference: feed the data preprocessed in step S1 into the multimodal large-model processing module, complete cross-modal feature extraction and semantic fusion, and output predictions of user intention, action mode, and environment state; S3, human-in-the-loop adaptive control: combine the user's physiological feedback and subjective preference information, and generate the final control strategy using impedance control, MPC, ILC, and RL-based online parameter adjustment; S4, action execution and safety control: the actuator module drives the robot joints according to the control strategy, achieving accurate motion output while detecting overload and anomalies; and S5, feedback acquisition and closed-loop optimization: the feedback module returns force, position, velocity, and fatigue-index information in real time, enabling the multimodal large model and the human-in-the-loop control module to perform strategy optimization and online correction.
  10. Application of the intelligent wearable robot system based on a multimodal large model and human-in-the-loop control, characterized in that the applications comprise: a rehabilitation exoskeleton for gait training and rehabilitation assistance; an intelligent bionic prosthesis for accurately reproducing natural limb motion and controlling grasping force; an industrial power-assist exoskeleton for relieving muscle fatigue during high-intensity work and improving work efficiency; and a military load-bearing exoskeleton for high-load marching and operation over complex terrain.
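Steps S1-S5 of claim 9 describe one pass of a closed-loop control cycle. The sketch below is a minimal Python illustration of that loop's data flow, with each module stood in for by a plain callable; all names and values here are illustrative assumptions, not taken from the patent.

```python
def run_control_cycle(sensors, model, controller, actuator, feedback):
    """One pass of the S1-S5 loop: sense, infer, decide, act, feed back."""
    data = sensors()              # S1: acquire and preprocess multimodal data
    intent = model(data)          # S2: cross-modal fusion and inference
    command = controller(intent)  # S3: human-in-the-loop adaptive control
    state = actuator(command)     # S4: actuate with safety monitoring
    feedback(state)               # S5: return state for closed-loop optimization
    return state

# Hypothetical stand-ins for each module, wired into one cycle.
log = []
state = run_control_cycle(
    sensors=lambda: {"semg": [0.2, 0.5]},
    model=lambda d: "flex_elbow",
    controller=lambda intent: {"torque": 1.5},
    actuator=lambda cmd: {"angle": 0.3, "torque": cmd["torque"]},
    feedback=log.append,
)
```

In a real system each callable would be a module boundary (sensing, inference, control, actuation, feedback), which is what lets S5's feedback drive online correction of S3 on the next cycle.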

Description

Intelligent wearable robot system based on multimodal large model and human-in-the-loop control and implementation method

Technical Field

The invention belongs to the field of wearable robotics and intelligent control, and particularly relates to an intelligent wearable robot system based on a multimodal large model and human-in-the-loop control, and an implementation method thereof.

Background

Traditional wearable robots have been developed mainly for application scenarios such as medical rehabilitation, industrial assistance, and daily-living assistance, but the prior art still generally relies on a single sensor or a simple algorithm for motion control, and the overall performance and level of intelligence of wearable robots remain to be improved. First, in intention recognition, a traditional wearable robot relies mainly on an Inertial Measurement Unit (IMU), a single force sensor, or simple motion-capture means to acquire motion data. Such low-dimensional input cannot fully resolve the user's actual motion intention and psychological state; under dynamic environments, complex motion patterns, or noise interference it is prone to misjudgment, slow response, or control deviation, and accurate, personalized motion assistance is difficult to achieve. In environment perception, most existing systems capture only mechanical state information and lack the ability to comprehensively perceive and fuse multi-source data such as vision, hearing, touch, force, and user physiological signals; they cannot construct complete environment understanding and scene models, so the robot adapts poorly and interacts unnaturally when facing complex task scenarios.
In human-robot interaction, traditional control systems generally adopt a unidirectional command-driven mode and lack mechanisms for sensing and adaptively responding to real-time user feedback (such as fatigue, comfort, and motion deviation); they cannot flexibly optimize the control strategy for different individuals and tasks, making user experience and safety difficult to guarantee. Finally, in terms of intelligence, traditional wearable robots lack the global reasoning and multimodal data-understanding capacity of large-scale models; their algorithms consist mostly of preset control logic, high-precision recognition of complex intentions and intelligent cooperative control cannot be achieved, and the depth of application of wearable robots in medical rehabilitation, complex industrial scenarios, and special operation tasks is limited.

The advent of the Multi-Modal Large Model (MM-LM) provides a new technical path for solving the above problems. Such a model can uniformly receive, fuse, and process multimodal information including vision, voice, inertial signals, bioelectric signals, and tactile/force sensing; through deep neural networks and cross-modal feature-association learning it gains higher-dimensional environment understanding and user-state reasoning capacity, can recognize user motion intention more accurately, and can predict task demands and adapt to different scenarios. Meanwhile, the human-in-the-loop control concept brings the user's subjective preferences, real-time feedback, and physiological state directly into the control closed loop, so that the system can adjust dynamically and optimize individually according to the user's immediate needs, greatly improving the naturalness and safety of human-robot collaboration.
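The cross-modal feature-association learning mentioned above can be illustrated with a single head of scaled dot-product cross-attention, in which one modality's features act as queries over another modality's features. This is a minimal NumPy sketch of the general mechanism, not the patent's actual model; token counts, dimensions, and variable names are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats, d_k):
    """Single-head scaled dot-product cross-attention: q_feats (one
    modality) attends over kv_feats (another modality) and returns a
    fused representation with q_feats' token count."""
    scores = q_feats @ kv_feats.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ kv_feats

# Hypothetical features: 4 visual tokens attend over 6 sEMG tokens.
rng = np.random.default_rng(0)
vision = rng.normal(size=(4, 8))
semg = rng.normal(size=(6, 8))
fused = cross_attention(vision, semg, d_k=8)
```

A multi-head variant would run several such attention maps in parallel over learned projections and concatenate the results, which is the "correlation mining across modalities" role the multi-head mechanism plays in claim 3.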
Therefore, how to realize multimodal sensing, deep fusion algorithms, and user-in-the-loop optimal control in the design of wearable robots has become a core technical problem and research focus urgently awaiting solution in the prior art.

Disclosure of Invention

The technical problem the invention aims to solve is to provide an intelligent wearable robot system based on a multimodal large model and human-in-the-loop control, together with an implementation method thereof, addressing the problems of multimodal sensing, deep fusion algorithms, and user-in-the-loop optimal control in the prior art. The invention adopts the following technical scheme to solve this problem: an intelligent wearable robot system based on a multimodal large model and human-in-the-loop control, which constructs a device-edge-cloud collaborative framework, responds rapidly to user requests through edge nodes, performs high-complexity reasoning and global optimization using a cloud-side large model, exchanges model parameters between device-side equipment and the cloud over a secure communication protocol, and achieves data privacy protection by combining federated learning and differential privacy
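The hybrid control scheme referred to throughout (impedance control whose parameters are adjusted online from physiological feedback) can be sketched as follows. The gain-scaling rule and every constant here are illustrative assumptions for exposition, not values or formulas taken from the patent.

```python
def assist_gain(fatigue, k_min=0.5, k_max=1.5):
    """Illustrative online parameter adjustment: scale the virtual
    stiffness up as a 0-1 muscle-fatigue index rises (assumed rule)."""
    fatigue = min(max(fatigue, 0.0), 1.0)
    return k_min + (k_max - k_min) * fatigue

def impedance_force(x_d, x, v_d, v, K, B):
    """Basic one-axis impedance law: a virtual spring K and damper B
    pull the joint toward the desired position x_d and velocity v_d."""
    return K * (x_d - x) + B * (v_d - v)

# Joint lags the target by 0.1 rad while the user is half fatigued.
K = 50.0 * assist_gain(fatigue=0.5)
force = impedance_force(x_d=1.0, x=0.9, v_d=0.5, v=0.4, K=K, B=5.0)
```

In the patent's scheme the equivalent of `assist_gain` would be learned and updated online (e.g. by the reinforcement-learning parameter-adjustment unit of claim 4) rather than fixed, but the closed-loop structure is the same: physiological feedback modulates the impedance parameters, and the impedance law shapes the assist force.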