
CN-122024133-A - Kinect-based treadmill exercise gait information acquisition and analysis method and system

CN122024133A

Abstract

The invention relates to the technical field of medical rehabilitation and exercise analysis, and in particular to a Kinect-based treadmill exercise gait information acquisition and analysis method and system. The system comprises a multi-view synchronous acquisition unit, which synchronously captures RGB image sequences and depth image sequences of a subject on the treadmill from different directions through at least two hardware-synchronized depth cameras, and a high-robustness posture estimation unit, which receives the RGB image sequences from the depth cameras and invokes an artificial-intelligence posture estimation model optimized on treadmill gait scene data. The invention abandons the camera's built-in black-box skeleton data and instead applies a more advanced AI model at the raw-image level, obtains redundant observations through multi-view fusion, and imposes strict biomechanical constraints, thereby fundamentally correcting the systematic bias and random noise of a single view and a single model. The output three-dimensional joint trajectories are smooth and stable, and the joint-angle calculation accuracy meets the basic requirements of clinical quantitative analysis.

Inventors

  • XIONG KAIFENG

Assignees

  • 浙江畅跑体育用品有限公司

Dates

Publication Date
2026-05-12
Application Date
2026-01-23

Claims (10)

  1. A Kinect-based treadmill exercise gait information acquisition and analysis system, characterized in that it comprises: a multi-view synchronous acquisition unit for synchronously capturing RGB image sequences and depth image sequences of a subject on the treadmill from different directions through at least two hardware-synchronized depth cameras; a high-robustness posture estimation unit for receiving the RGB image sequences from the depth cameras, invoking an artificial-intelligence posture estimation model optimized on treadmill gait scene data, and calculating, frame by frame, the coordinates of a plurality of human-body key points including hand and foot details; a multi-modal data intelligent fusion and three-dimensional reconstruction unit for receiving the key-point coordinates and the corresponding depth image sequences, and reconstructing a high-precision three-dimensional human skeleton motion sequence through cross-view data association and an embedded human-biomechanics constraint optimization algorithm; a refined gait feature analysis unit for calculating macroscopic gait parameters and fine kinematic parameters of the foot and pelvis regions based on the three-dimensional skeleton motion sequence; and an interaction and visualization unit for realizing real-time visualization of the data and generation of an evaluation report.
  2. The Kinect-based treadmill exercise gait information acquisition and analysis system as set forth in claim 1, wherein in the multi-view synchronous acquisition unit at least one depth camera is arranged at a sagittal view angle in front of the treadmill and another depth camera is arranged at a coronal view angle at the side of the treadmill, and the depth cameras are Azure Kinect DK.
  3. The Kinect-based treadmill exercise gait information acquisition and analysis system as set forth in claim 1, wherein the artificial-intelligence posture estimation model in the high-robustness posture estimation unit is MediaPipe BlazePose, the model integrates a temporal smoothing module, and the temporal smoothing module uses the estimation results of the current frame and the preceding frames to enhance the temporal consistency of the output posture.
  4. The Kinect-based treadmill exercise gait information acquisition and analysis system according to claim 1, wherein the multi-modal data intelligent fusion and three-dimensional reconstruction unit performs the following operations: for each target joint point, calculating the confidence weight of each view's observation from the neighborhood depth-gradient information of that joint point in each camera's depth map; weighting and fusing the observations from all views according to the confidence weights to obtain a three-dimensional initial observation of the joint point; constructing an optimization objective function whose first term minimizes the difference between the final joint coordinates and the three-dimensional initial observation, and whose second term is a biomechanical constraint term based on personalized bone length that constrains the distance between adjacent joint points; and solving the optimization objective function and outputting a final three-dimensional skeleton satisfying the constraints.
  5. The Kinect-based treadmill exercise gait information acquisition and analysis system as set forth in claim 4, wherein the biomechanical constraint term has the specific form E_bone = λ · (‖P_i − P_j‖ − L_ij)², where P_i and P_j are the coordinates of the adjacent joint points to be optimized, L_ij is the personalized bone length obtained through static calibration, and λ is an adjustable constraint weight factor.
  6. The Kinect-based treadmill exercise gait information acquisition and analysis system according to claim 1, wherein the fine kinematic parameters calculated by the refined gait feature analysis unit comprise: foot-ground contact sub-phase timing parameters, automatically segmented from the height and velocity trajectories of the heel and ball-of-foot key points; the foot deflection angle at the moment of heel contact, calculated as θ_foot = atan2(Z_m − Z_c, X_m − X_c), where X_m and Z_m are the horizontal-plane X-axis and Z-axis coordinates of the second metatarsal head, X_c and Z_c are the horizontal-plane X-axis and Z-axis coordinates of the calcaneus, and atan2 is the two-dimensional arctangent function giving the angle between the calcaneus-to-metatarsal-head vector and the X axis; the pelvic inclination angle in the coronal plane and the pelvic rotation angle in the horizontal plane; and a bilateral symmetry index SI = |θ_left − θ_right|, where θ_left is the foot deflection angle of the left foot, θ_right is the foot deflection angle of the right foot, and the absolute value eliminates the influence of sign.
  7. A Kinect-based treadmill gait information acquisition and analysis method based on the system of any one of claims 1 to 6, comprising the steps of: S01, system calibration and personalized bone parameter measurement; S02, controlling the treadmill to run at a constant speed and synchronously triggering all depth cameras to start data acquisition; S03, inputting the RGB images acquired by each camera into the high-robustness posture estimation unit to obtain multiple groups of two-dimensional key-point sequences; S04, inputting the key-point sequences and depth image sequences into the multi-modal data intelligent fusion and three-dimensional reconstruction unit and performing frame-by-frame fusion optimization to generate a high-precision three-dimensional skeleton sequence; S05, filtering the three-dimensional skeleton sequence, detecting gait events, and calculating macroscopic and fine gait parameters; S06, visualizing the results in real time and generating a gait analysis report.
  8. The Kinect-based treadmill exercise gait information acquisition and analysis method as recited in claim 7, wherein in step S01 the personalized bone parameter measurement specifically comprises: guiding the subject to stand still at the center of the treadmill and automatically calculating the static bone lengths of the thigh, calf, foot, upper arm and forearm from multi-view depth information, the bone length being calculated as L = √((x₁ − x₂)² + (y₁ − y₂)² + (z₁ − z₂)²), where P₁ = (x₁, y₁, z₁) and P₂ = (x₂, y₂, z₂) are the three-dimensional coordinates of adjacent joint points, each triple being the three axis components in the world coordinate system.
  9. The Kinect-based treadmill exercise gait information acquisition and analysis method as recited in claim 7, wherein in step S01 the fusion optimization process applies different biomechanical constraint intensities to different types of joints, the constraint intensity applied to the large joints of the hip, knee, shoulder and elbow being higher than that applied to the terminal key points of the fingers and toes.
  10. The Kinect-based treadmill exercise gait information acquisition and analysis method as recited in claim 7, wherein in step S05 the foot-ground contact sub-phases include a heel-strike phase, a foot-flat phase, a heel-off phase and a toe-off phase, automatically determined by logic that monitors, in real time, the height and vertical velocity of specific foot key points relative to the plane of the treadmill belt, combined with adaptive thresholds.
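The temporal smoothing of claim 3 combines the current frame's pose estimate with preceding frames. The patent does not specify the filter, so this sketch assumes a simple exponential moving average over keypoint coordinates; the function name and frame format are illustrative only.

```python
def smooth_pose(frames, alpha=0.6):
    """Exponential-moving-average smoothing of per-frame keypoints.

    frames: list of dicts {joint_name: (x, y)} as produced by a 2-D
    pose estimator; alpha is the weight of the current frame (higher
    alpha = less smoothing, lower latency).
    """
    smoothed = []
    prev = None
    for frame in frames:
        if prev is None:
            out = dict(frame)  # first frame passes through unchanged
        else:
            # blend current observation with the previous smoothed pose
            out = {j: tuple(alpha * c + (1 - alpha) * p
                            for c, p in zip(frame[j], prev[j]))
                   for j in frame}
        smoothed.append(out)
        prev = out
    return smoothed
```

A lower alpha suppresses the joint jitter described in the background section at the cost of a small lag at heel strike.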
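Claims 4, 5 and 8 describe confidence-weighted multi-view fusion constrained by calibrated bone lengths. The following is a minimal sketch, not the patent's solver: `bone_length` is the Euclidean distance of claim 8, `fuse_views` is a plain weighted average of per-view 3-D observations, and `enforce_bone_length` stands in for the constraint term λ·(‖P_i − P_j‖ − L_ij)² of claim 5 by rescaling a segment about its midpoint instead of solving the full objective.

```python
import math

def bone_length(p, q):
    # Claim 8: L = sqrt((x1-x2)^2 + (y1-y2)^2 + (z1-z2)^2)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def fuse_views(observations, weights):
    """Confidence-weighted fusion of per-view 3-D joint observations
    (the weights would come from neighborhood depth gradients)."""
    total = sum(weights)
    return tuple(sum(w * o[i] for o, w in zip(observations, weights)) / total
                 for i in range(3))

def enforce_bone_length(p, q, target_len):
    """Rescale segment p-q about its midpoint to the calibrated bone
    length -- a closed-form stand-in for the optimization of claim 4."""
    cur = bone_length(p, q)
    if cur == 0.0:
        return p, q  # degenerate segment; leave untouched
    mid = tuple((a + b) / 2 for a, b in zip(p, q))
    direction = tuple((a - b) / cur for a, b in zip(p, q))
    half = target_len / 2
    p_new = tuple(m + half * d for m, d in zip(mid, direction))
    q_new = tuple(m - half * d for m, d in zip(mid, direction))
    return p_new, q_new
```

A production version would minimize the data term and constraint term jointly (e.g. with a nonlinear least-squares solver) rather than projecting each bone independently.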
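The foot deflection angle and symmetry index of claim 6 reduce to a two-dimensional arctangent and an absolute difference. The translated claim is ambiguous about which horizontal-plane axis is the reference, so this sketch assumes the X axis; function names are illustrative.

```python
import math

def foot_deflection_angle(metatarsal_xz, calcaneus_xz):
    """Angle (degrees) between the calcaneus -> second-metatarsal-head
    vector and the horizontal-plane X axis, via atan2 (claim 6)."""
    dx = metatarsal_xz[0] - calcaneus_xz[0]
    dz = metatarsal_xz[1] - calcaneus_xz[1]
    return math.degrees(math.atan2(dz, dx))

def symmetry_index(theta_left, theta_right):
    """Bilateral symmetry index SI = |theta_left - theta_right|;
    the absolute value removes the toe-in/toe-out sign, as claimed."""
    return abs(theta_left - theta_right)
```

Evaluated at the heel-contact instant detected in step S05, these two values quantify foot rotation and its left/right asymmetry.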
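Claim 10 divides foot-ground contact into heel-strike, foot-flat, heel-off and toe-off phases from key-point heights and vertical velocities relative to the belt plane. This sketch uses fixed placeholder thresholds where the patent describes adaptive ones; the phase labels and threshold values are assumptions.

```python
def classify_contact_phase(heel_h, toe_h, heel_v, toe_v,
                           h_thr=0.02, v_thr=0.05):
    """Classify the foot-ground contact sub-phase (claim 10).

    heel_h/toe_h: key-point heights (m) above the treadmill belt plane;
    heel_v/toe_v: vertical velocities (m/s). A point counts as grounded
    when it is both low and nearly stationary vertically.
    """
    heel_down = heel_h < h_thr and abs(heel_v) < v_thr
    toe_down = toe_h < h_thr and abs(toe_v) < v_thr
    if heel_down and not toe_down:
        return "heel strike"
    if heel_down and toe_down:
        return "foot flat"
    if toe_down and not heel_down:
        return "heel off"
    return "swing"  # after toe off, neither point is grounded
```

Running this per frame and recording the transitions yields the sub-phase timing parameters of claim 6.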

Description

Kinect-based treadmill exercise gait information acquisition and analysis method and system

Technical Field

The invention relates to the technical field of medical rehabilitation and exercise analysis, and in particular to a Kinect-based treadmill exercise gait information acquisition and analysis method and system.

Background

Gait analysis is a key technology for evaluating motor function and guiding rehabilitation therapy. Traditional solutions rely on professional equipment such as Vicon optical systems and force plates, which are costly and require dedicated facilities, making them difficult to popularize. Depth-sensing technology (such as Microsoft Kinect) offers an alternative: it provides real-time three-dimensional data acquisition, non-contact measurement and convenient deployment, and is particularly suited to the fixed scene of a treadmill, which avoids the speed variability of free overground walking.

However, existing Kinect-based treadmill gait analysis suffers from insufficient accuracy and stability of skeleton tracking. The camera's built-in algorithm is not optimized for precise gait analysis: joint points tend to jitter, drift or even disappear under occlusion and cadence changes during treadmill motion; the position error of lower-limb joint points often exceeds the tolerance of biomechanical analysis; and rotation-angle estimation errors are significant, so clinical evaluation requirements cannot be met. In addition, the analysis dimensions are limited and fine-feature capture is lacking: existing systems output only the coordinates of the main large joints, cannot obtain key details such as foot rotation and pelvic rotation, provide a limited set of parameters, and struggle to quantify fine features important for assessing gait abnormality, such as foot-landing patterns and ground-clearance angles.
Disclosure of Invention

The invention aims to provide a Kinect-based treadmill exercise gait information acquisition and analysis method and system that are low-cost and highly practical, and to solve the problems that existing Kinect-based treadmill gait analysis has insufficient skeleton-tracking accuracy and stability, and that existing systems output only the coordinates of the main large joints and cannot acquire key details such as foot rotation and pelvic rotation.

To achieve this purpose, the invention provides the following technical scheme. A Kinect-based treadmill exercise gait information acquisition and analysis system comprises: a multi-view synchronous acquisition unit for synchronously capturing RGB image sequences and depth image sequences of a subject on the treadmill from different directions through at least two hardware-synchronized depth cameras; a high-robustness posture estimation unit for receiving the RGB image sequences from the depth cameras, invoking an artificial-intelligence posture estimation model optimized on treadmill gait scene data, and calculating, frame by frame, the coordinates of a plurality of human-body key points including hand and foot details; a multi-modal data intelligent fusion and three-dimensional reconstruction unit for receiving the key-point coordinates and the corresponding depth image sequences and reconstructing a high-precision three-dimensional human skeleton motion sequence through cross-view data association and an embedded human-biomechanics constraint optimization algorithm; a refined gait feature analysis unit for calculating macroscopic gait parameters and fine kinematic parameters of the foot and pelvis regions based on the three-dimensional skeleton motion sequence; and an interaction and visualization unit for realizing real-time visualization of the data and generation of an evaluation report.

Preferably, in the multi-view synchronous acquisition unit, at least one depth camera is arranged at a sagittal view angle directly in front of the treadmill, another depth camera is arranged at a coronal view angle at the side of the treadmill, and the depth cameras are Azure Kinect DK. Preferably, the artificial-intelligence posture estimation model in the high-robustness posture estimation unit is the MediaPipe BlazePose model, which integrates a temporal smoothing module; the temporal smoothing module uses the estimation results of the current frame and the preceding frames to enhance the temporal consistency of the output posture. Preferably, the multi-modal data intelligent fusion and three-dimensional reconstruction unit performs the following operations: for each target joint point, calculating the confidence weight of each view's observation according to the neighborhood depth g