CN-120853997-B - Motion state detection and evaluation method and system based on machine vision

CN120853997B

Abstract

The invention discloses a machine vision-based motion state detection and evaluation method and system, relating to the technical field of computer vision. The method comprises: capturing a motion video stream in real time and acquiring an illumination entropy value and an illumination threshold; defining an infrared spectrum and a visible spectrum according to the illumination threshold, and acquiring joint coordinate data and a face orientation angle; extracting physiological parameters based on the face orientation angle, while acquiring exercise intensity data from the rate of change of the joint coordinates; constructing a physiological-motion coupling model that outputs a physiological state matrix from the physiological parameters and exercise intensity data; predicting an exercise risk index from the physiological state matrix, evaluating the exercise risk level, and generating personalized advice; and pushing an exercise state report to the user terminal in real time according to the exercise risk level and personalized advice, and recording feedback instructions. By dynamically optimizing exercise safety and training scientificity, the invention forms a closed-loop exercise health monitoring system and realizes personalized exercise risk management.

Inventors

  • Qin Gaoe
  • Li Shijun
  • Liu Huasheng

Assignees

  • 江苏华郢智能技术有限公司 (Jiangsu Huaying Intelligent Technology Co., Ltd.)

Dates

Publication Date
2026-05-05
Application Date
2025-07-15

Claims (8)

  1. A machine vision-based motion state detection and evaluation method, characterized by comprising the following steps: capturing a motion video stream in real time, and acquiring an illumination entropy value and an illumination threshold; defining an infrared spectrum and a visible spectrum according to the illumination threshold, and acquiring joint coordinate data and a face orientation angle; extracting physiological parameters based on the face orientation angle, while acquiring exercise intensity data from the rate of change of the joint coordinates; constructing a physiological-motion coupling model, and outputting a physiological state matrix according to the physiological parameters and exercise intensity data; predicting an exercise risk index and evaluating the exercise risk level according to the physiological state matrix, and generating personalized advice; and according to the exercise risk level and personalized advice, pushing an exercise state report to the user terminal in real time and recording feedback instructions; wherein defining the infrared spectrum and the visible spectrum according to the illumination threshold and acquiring the joint coordinate data and the face orientation angle comprises: dynamically adjusting the image enhancement mode by comparing the illumination entropy value with the illumination threshold, to obtain a joint heat map; defining the infrared spectrum and the visible spectrum through comparative analysis of the illumination entropy value and the illumination threshold, and outputting a spectrum mode selection instruction; based on the spectrum mode selection instruction, dynamically acquiring spectrum entropy weights through physiological-optical dual-source credibility evaluation, performing joint heat map region segmentation and spectrum differentiation enhancement, and outputting a dual-spectrum enhanced image; and acquiring the joint coordinate data and the face orientation angle from the dual-spectrum enhanced image and the spectrum entropy weights, in combination with the joint heat map, through a posture-physiological joint feature extraction method; and wherein constructing the physiological-motion coupling model and outputting the physiological state matrix according to the physiological parameters and exercise intensity data comprises: building a dual-stream neural network architecture with a heterogeneous feature extractor, combining the temporal characteristics of the physiological parameters and the dynamic characteristics of the exercise intensity data; loading the physiological parameters into the dual-stream neural network architecture to obtain physiological feature vectors through a one-dimensional temporal convolution operation, while inputting the exercise intensity data into a bidirectional LSTM (Long Short-Term Memory) network for temporal modeling to obtain motion feature vectors; cascading the physiological feature vectors and the motion feature vectors, and obtaining a dynamic coupling weight matrix based on a cross-attention mechanism; performing dual-stream feature fusion according to the dynamic coupling weight matrix to construct the physiological-motion coupling model; and performing index training on the physiological-motion coupling model using a multi-layer neural network, and acquiring the physiological state matrix from the physiological parameters and exercise intensity data through the trained physiological-motion coupling model.
  2. The machine vision-based motion state detection and evaluation method according to claim 1, wherein capturing the motion video stream in real time and acquiring the illumination entropy value and the illumination threshold comprises the steps of: acquiring initial spectrum data from the motion video stream, and obtaining gray image data through a weighted gray conversion method; based on the gray image data, acquiring a sub-block entropy matrix, and generating the illumination entropy value according to a weighted average formula; and extracting the moving target region based on the illumination entropy value, and acquiring the illumination threshold in combination with a joint positioning feedback optimization method.
  3. The machine vision-based motion state detection and evaluation method according to claim 1, wherein extracting the physiological parameters based on the face orientation angle while acquiring the exercise intensity data from the rate of change of the joint coordinates comprises the steps of: based on the face orientation angle, acquiring the physiological parameters through dynamic ROI positioning combined with a sub-pixel-level optical flow tracking method; and based on the joint coordinate data, obtaining the rate of change of the joint coordinates, and acquiring the exercise intensity data through a physiological-mechanical coupling algorithm in combination with the physiological parameters.
  4. The machine vision-based exercise state detection and evaluation method according to claim 1, wherein predicting the exercise risk index and evaluating the exercise risk level according to the physiological state matrix to generate the personalized advice comprises the steps of: based on the physiological state matrix, the primary stage of a three-level evaluation method predicts the exercise risk index using a gradient boosting decision tree; based on the exercise risk index, the secondary stage of the three-level evaluation method divides the exercise risk level through a fuzzy logic decision tree; and according to the exercise risk level, the tertiary stage of the three-level evaluation method matches a personalized scheme through a rule engine, synchronously fuses advice content from the user's historical data, and outputs personalized exercise advice.
  5. The machine vision-based motion state detection and evaluation method according to claim 4, wherein the user terminal pushing the exercise state report in real time according to the exercise risk level and the personalized advice and recording the feedback instruction comprises the steps of: based on the exercise risk level and the personalized exercise advice, generating an exercise state report through an augmented reality engine and pushing it to the user terminal in real time; and after receiving the exercise state report, the user performs multi-modal interaction operations, and the feedback instructions are recorded and synchronized to a database for physiological-motion coupling model optimization.
  6. A machine vision-based motion state detection and evaluation system based on the machine vision-based motion state detection and evaluation method according to any one of claims 1 to 5, characterized by comprising: a data acquisition module for capturing the motion video stream in real time and acquiring the illumination entropy value and the illumination threshold; a feature extraction module for defining the infrared spectrum and the visible spectrum according to the illumination threshold and acquiring the joint coordinate data and the face orientation angle; a parameter tracking module for extracting the physiological parameters based on the face orientation angle while acquiring the exercise intensity data from the rate of change of the joint coordinate data; a model training module for constructing the physiological-motion coupling model and outputting the physiological state matrix according to the physiological parameters and exercise intensity data; a risk assessment module for predicting the exercise risk index and evaluating the exercise risk level according to the physiological state matrix to generate the personalized advice; and an optimization feedback module for pushing the exercise state report to the user terminal in real time according to the exercise risk level and personalized advice and recording the feedback instructions; wherein defining the infrared spectrum and the visible spectrum according to the illumination threshold and acquiring the joint coordinate data and the face orientation angle comprises: dynamically adjusting the image enhancement mode by comparing the illumination entropy value with the illumination threshold, to obtain a joint heat map; defining the infrared spectrum and the visible spectrum through comparative analysis of the illumination entropy value and the illumination threshold, and outputting a spectrum mode selection instruction; based on the spectrum mode selection instruction, dynamically acquiring spectrum entropy weights through physiological-optical dual-source credibility evaluation, performing joint heat map region segmentation and spectrum differentiation enhancement, and outputting a dual-spectrum enhanced image; and acquiring the joint coordinate data and the face orientation angle from the dual-spectrum enhanced image and the spectrum entropy weights, in combination with the joint heat map, through a posture-physiological joint feature extraction method; and wherein constructing the physiological-motion coupling model and outputting the physiological state matrix according to the physiological parameters and exercise intensity data comprises: building a dual-stream neural network architecture with a heterogeneous feature extractor, combining the temporal characteristics of the physiological parameters and the dynamic characteristics of the exercise intensity data; loading the physiological parameters into the dual-stream neural network architecture to obtain physiological feature vectors through a one-dimensional temporal convolution operation, while inputting the exercise intensity data into a bidirectional LSTM (Long Short-Term Memory) network for temporal modeling to obtain motion feature vectors; cascading the physiological feature vectors and the motion feature vectors, and obtaining a dynamic coupling weight matrix based on a cross-attention mechanism; performing dual-stream feature fusion according to the dynamic coupling weight matrix to construct the physiological-motion coupling model; and performing index training on the physiological-motion coupling model using a multi-layer neural network, and acquiring the physiological state matrix from the physiological parameters and exercise intensity data through the trained physiological-motion coupling model.
  7. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the machine vision-based motion state detection and evaluation method according to any one of claims 1 to 5 when executing the computer program.
  8. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the machine vision-based motion state detection and evaluation method according to any one of claims 1 to 5.
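As an illustration of the illumination entropy acquisition recited in claim 2 (weighted gray conversion, a sub-block entropy matrix, then a weighted average), the following is a minimal sketch. The patent does not disclose its grayscale coefficients, block size, or weighting formula, so the standard luma weights, the 32-pixel block, and the brightness-based block weights below are all illustrative assumptions.

```python
import numpy as np

def illumination_entropy(frame, block=32, bins=256):
    """Per-frame illumination entropy: weighted grayscale conversion,
    per-block Shannon entropy, then a weighted average over blocks."""
    # Weighted gray conversion (standard luma weights; the patent does
    # not specify its coefficients).
    gray = (0.299 * frame[..., 0] + 0.587 * frame[..., 1]
            + 0.114 * frame[..., 2]).astype(np.uint8)
    h, w = gray.shape
    entropies, weights = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            sub = gray[y:y + block, x:x + block]
            hist, _ = np.histogram(sub, bins=bins, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            entropies.append(-(p * np.log2(p)).sum())
            # Hypothetical weighting: brighter blocks count more.
            weights.append(sub.mean() + 1e-6)
    e, wgt = np.array(entropies), np.array(weights)
    return float((e * wgt).sum() / wgt.sum())
```

In the claimed method this scalar would then be compared against the illumination threshold to select the spectrum mode; here it is simply returned for inspection.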
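The coupling step of claim 1 (cascading the two feature streams and deriving a dynamic coupling weight matrix from a cross-attention mechanism) can likewise be sketched in NumPy. The projection matrices below are random stand-ins for the trained parameters the claim assumes, and every dimension is hypothetical; a single attention head is used for brevity.

```python
import numpy as np

def cross_attention_fusion(phys_feat, motion_feat, d_k=16, seed=0):
    """Cross-attention between a physiological feature sequence (queries)
    and a motion feature sequence (keys/values), followed by feature
    cascading of the original and attended streams."""
    rng = np.random.default_rng(seed)
    # Random projections stand in for learned weights.
    Wq = rng.standard_normal((phys_feat.shape[-1], d_k)) / np.sqrt(d_k)
    Wk = rng.standard_normal((motion_feat.shape[-1], d_k)) / np.sqrt(d_k)
    q = phys_feat @ Wq            # queries from the physiological stream
    k = motion_feat @ Wk          # keys from the motion stream
    scores = q @ k.T / np.sqrt(d_k)
    # Row-wise softmax -> dynamic coupling weight matrix.
    scores -= scores.max(axis=-1, keepdims=True)
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    # Dual-stream fusion: cascade physiological features with the
    # attention-weighted motion features.
    fused = np.concatenate([phys_feat, w @ motion_feat], axis=-1)
    return w, fused
```

In the full claim, the two inputs would come from a one-dimensional temporal convolution branch and a bidirectional LSTM branch respectively; both branches are omitted here.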

Description

Motion state detection and evaluation method and system based on machine vision

Technical Field

The invention relates to the technical field of computer vision, and in particular to a motion state detection and evaluation method and system based on machine vision.

Background

Machine vision-based motion state detection and evaluation technology now occupies a vital position in intelligent health applications, providing a technical foundation for exercise health monitoring through non-contact sensing. It integrates leading-edge directions such as multispectral imaging, biomechanical analysis, and temporal modeling, and is widely applied to intelligent fitness equipment, rehabilitation training guidance, and competitive athletic performance optimization. With the development of wearable devices and edge computing, real-time assessment of motion state has become a core module of intelligent health systems, playing a key role in improving training scientificity and reducing the risk of sports injury. The current technology system has established a standardized processing framework from environmental awareness to risk assessment, forming a deep cross-fusion of computer vision and sports science.

In machine vision-based motion state detection and evaluation, traditional methods lack a dynamic weight coupling mechanism between the illumination entropy value and the target's biological features during spectrum fusion, so joint coordinate positioning precision fluctuates in complex illumination scenes. Moreover, model parameters are statically frozen after the physiological-motion coupling model is trained, so incremental optimization cannot be driven by real-time user feedback data. When an individual's movement pattern becomes suddenly abnormal, the risk assessment result deviates from the true physiological state, and detection errors must be compensated by manual calibration.

Disclosure of Invention

The present invention has been made in view of the above-described problems in the prior art. The invention therefore provides a machine vision-based motion state detection and evaluation method that solves the problems of joint positioning precision fluctuation and poor adaptation to sudden abnormalities. To solve these technical problems, the invention provides the following technical scheme.

In a first aspect, the invention provides a machine vision-based motion state detection and evaluation method, comprising the steps of: capturing a motion video stream in real time and acquiring an illumination entropy value and an illumination threshold; defining an infrared spectrum and a visible spectrum according to the illumination threshold, and acquiring joint coordinate data and a face orientation angle; extracting physiological parameters based on the face orientation angle, while acquiring exercise intensity data from the rate of change of the joint coordinates; constructing a physiological-motion coupling model, and outputting a physiological state matrix according to the physiological parameters and exercise intensity data; predicting an exercise risk index and evaluating the exercise risk level according to the physiological state matrix, and generating personalized advice; and pushing an exercise state report to the user terminal in real time according to the exercise risk level and personalized advice, and recording feedback instructions.
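The risk evaluation step above (predicting a risk index from the physiological state matrix, dividing it into levels, and matching personalized advice) can be sketched as follows. The description specifies a gradient boosting decision tree and a fuzzy logic decision tree for the first two stages; this minimal sketch replaces both with illustrative fixed thresholds, and the level names, cut-off values, and advice entries are all hypothetical assumptions.

```python
def risk_level(risk_index):
    """Second stage stand-in: divide a risk index in [0, 1] into levels
    with simple thresholds (the patent uses a fuzzy logic decision tree;
    the cut-offs here are illustrative)."""
    if risk_index < 0.3:
        return "low"
    if risk_index < 0.7:
        return "medium"
    return "high"

# Third stage stand-in: hypothetical rule-engine entries mapping each
# level to advice text.
ADVICE_RULES = {
    "low": "maintain current training load",
    "medium": "insert recovery intervals and re-check physiological parameters",
    "high": "stop exercise and seek professional assessment",
}

def evaluate(risk_index):
    """Combine level division and rule-engine matching into one call."""
    level = risk_level(risk_index)
    return level, ADVICE_RULES[level]
```

In the claimed method the first stage would supply `risk_index` from a trained gradient boosting model over the physiological state matrix, and the advice would additionally be fused with the user's historical data.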
As a preferred scheme of the machine vision-based motion state detection and evaluation method, capturing the motion video stream in real time and acquiring the illumination entropy value and the illumination threshold comprises the steps of: acquiring initial spectrum data from the motion video stream, and obtaining gray image data through a weighted gray conversion method; based on the gray image data, acquiring a sub-block entropy matrix through an image information entropy quantization rule, and generating the illumination entropy value according to a weighted average formula; and based on the illumination entropy value, extracting the moving target region by an inter-frame difference method, and acquiring the illumination threshold in combination with a joint positioning feedback optimization method. As a preferred scheme of the machine vision-based motion state detection and evaluation method, defining the infrared spectrum and the visible spectrum according to the illumination threshold and acquiring the joint coordinate data and the face orientation angle comprises the steps of: dynamically adjusting the image enhancement mode by comparing the illumination entropy value with the illumination threshold to obtain a joint heat map; defining the infrared spectrum and the visible spectrum through comparative analysis of the illumination entropy value and