
CN-121999966-A - Static training auxiliary device and method thereof

CN121999966A

Abstract

The application discloses a static training auxiliary device and a method thereof. The method acquires a training action image of a training object captured by a camera, extracts a training action reference image from a background database, extracts human body posture features of the training action image and the training action reference image to obtain a training action human body posture depth feature map and a training reference human body posture depth feature map, calculates a training action semantic difference feature map between the two depth feature maps, and determines whether the training action of the training object is standard based on the training action semantic difference feature map. In this way, whether the training action of the training object is standard can be judged intelligently from the training action semantic difference information, providing timely feedback to the training object so that the elderly can correctly understand and master the standing pile action, reducing potential risks caused by incorrect postures or actions.
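The processing chain described above can be illustrated with a minimal NumPy sketch. This is not the patented implementation: the encoders are stood in for by random feature maps, and all shapes, variable names, and the toy two-class linear classifier are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the depth feature maps produced by the two pose encoders,
# laid out as (channels, height, width); the shapes are illustrative only.
f_action = rng.standard_normal((8, 4, 4))
f_reference = rng.standard_normal((8, 4, 4))

# Per-position subtraction yields the semantic difference feature map.
f_diff = f_action - f_reference

# A hypothetical stand-in classifier: flatten, one linear layer, softmax
# over two classes ("standard" vs. "non-standard" action).
w = rng.standard_normal((2, f_diff.size)) * 0.01
logits = w @ f_diff.ravel()
probs = np.exp(logits - logits.max())
probs /= probs.sum()
is_standard = bool(probs[0] > probs[1])
```

The real device would replace the random arrays with encoder outputs and the single linear layer with the trained classifier, but the data flow (subtract, then classify) is the same.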

Inventors

  • Chen Chuanbang

Assignees

  • The First Affiliated Hospital of Wenzhou Medical University (温州医科大学附属第一医院)

Dates

Publication Date
2026-05-08
Application Date
2024-02-02

Claims (8)

  1. A static training aid method, comprising: acquiring a training action image of a training object captured by a camera; extracting a training action reference image from a background database; extracting human body posture features of the training action image and the training action reference image to obtain a training action human body posture depth feature map and a training reference human body posture depth feature map; calculating a training action semantic difference feature map between the training action human body posture depth feature map and the training reference human body posture depth feature map; and determining whether the training action of the training object is standard based on the training action semantic difference feature map.
  2. The static training aid method of claim 1, wherein extracting human body posture features of the training action image and the training action reference image to obtain the training action human body posture depth feature map and the training reference human body posture depth feature map comprises: passing the training action image and the training action reference image through a training object human body component positioner based on a DensePose network to obtain a training action human body posture estimation feature map and a training reference human body posture estimation feature map; and passing the training action human body posture estimation feature map and the training reference human body posture estimation feature map through a training posture twin detection model comprising a first human body posture depth feature encoder and a second human body posture depth feature encoder to obtain the training action human body posture depth feature map and the training reference human body posture depth feature map.
  3. The static training aid method of claim 2, wherein passing the training action human body posture estimation feature map and the training reference human body posture estimation feature map through the training posture twin detection model comprising the first human body posture depth feature encoder and the second human body posture depth feature encoder to obtain the training action human body posture depth feature map and the training reference human body posture depth feature map comprises: passing the training action human body posture estimation feature map through the first human body posture depth feature encoder to obtain the training action human body posture depth feature map; and passing the training reference human body posture estimation feature map through the second human body posture depth feature encoder to obtain the training reference human body posture depth feature map.
  4. The static training aid method of claim 3, wherein calculating the training action semantic difference feature map between the training action human body posture depth feature map and the training reference human body posture depth feature map comprises: calculating the training action semantic difference feature map using the following difference formula: F_d = F_1 ⊖ F_2, wherein F_1 is the training action human body posture depth feature map, F_2 is the training reference human body posture depth feature map, F_d is the training action semantic difference feature map, and ⊖ denotes per-position subtraction.
  5. The static training aid method of claim 4, wherein determining whether the training action of the training object is standard based on the training action semantic difference feature map comprises: passing the training action semantic difference feature map through an adaptive attention module to obtain a training action semantic difference adaptive reinforcement feature map; performing feature distribution optimization on the training action semantic difference adaptive reinforcement feature map to obtain an optimized training action semantic difference adaptive reinforcement feature map; and passing the optimized training action semantic difference adaptive reinforcement feature map through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the training action of the training object is standard.
  6. The static training aid method of claim 5, wherein passing the training action semantic difference feature map through the adaptive attention module to obtain the training action semantic difference adaptive reinforcement feature map comprises: processing the training action semantic difference feature map with the following adaptive attention formula: v = Pool(F), w = σ(Wv + b), F' = w' ⊗ F, wherein F is the training action semantic difference feature map, Pool denotes the pooling operation, v is the pooled vector, W is a weight matrix, b is an offset vector, σ denotes the activation function, w is the initial element weight feature vector, w_i is the feature value at the i-th position of the initial element weight feature vector, w' is the corrected element weight feature vector, F' is the training action semantic difference adaptive reinforcement feature map, and ⊗ denotes multiplying each feature matrix of the training action semantic difference feature map along the channel dimension by the corresponding feature value in the corrected element weight feature vector.
  7. The static training aid method of claim 6, wherein passing the optimized training action semantic difference adaptive reinforcement feature map through the classifier to obtain the classification result, the classification result being used for indicating whether the training action of the training object is standard, comprises: expanding the optimized training action semantic difference adaptive reinforcement feature map into a classification feature vector according to row vectors or column vectors; performing fully-connected encoding on the classification feature vector using a plurality of fully-connected layers of the classifier to obtain an encoded classification feature vector; and passing the encoded classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
  8. A static training aid device, characterized in that it is configured to perform the static training aid method according to any one of claims 1 to 7.
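The adaptive attention module of claims 5 and 6 follows a common channel-attention pattern: pool the feature map to a vector, map it through a learned linear layer and activation to per-channel weights, then rescale each channel. The sketch below assumes global average pooling, a sigmoid activation, and a simple normalization as the "correction" step; all of these specifics, and the shapes, are assumptions for illustration rather than the patent's exact formulation.

```python
import numpy as np

def adaptive_attention(f, w_mat, b):
    """Channel attention in the spirit of claims 5-6 (shapes illustrative).

    f: (C, H, W) semantic difference feature map.
    w_mat: (C, C) weight matrix; b: (C,) offset vector.
    """
    pooled = f.mean(axis=(1, 2))                      # pooling -> (C,) vector
    initial = 1.0 / (1.0 + np.exp(-(w_mat @ pooled + b)))  # activation -> initial weights
    corrected = initial / (initial.sum() + 1e-8)      # assumed "correction" step
    # Multiply each channel's feature matrix by its corrected weight.
    return corrected[:, None, None] * f

rng = np.random.default_rng(1)
f = rng.standard_normal((8, 4, 4))
out = adaptive_attention(f, rng.standard_normal((8, 8)), np.zeros(8))
```

Each output channel is the corresponding input channel scaled by a single learned weight, which is what "multiplying along the channel dimension" in claim 6 amounts to.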

Description

Static training auxiliary device and method thereof

Technical Field

The application relates to the technical field of intelligent auxiliary devices, and in particular to a static training auxiliary device and a method thereof.

Background

Frailty is a geriatric syndrome of increased bodily vulnerability caused by degenerative changes. Its key feature is that the physiological reserve function of the elderly is reduced, so that even small external stimuli can affect them negatively; frailty is therefore increasingly regarded by scholars as a potentially serious public health problem. The third edition of geriatric medicine details approaches to early intervention in frailty of the elderly, which can be carried out through nutritional support, exercise, medication, or combinations thereof, among which physical exercise is the most effective and popular. Exercise modes are various, and finding an exercise method that is safe, easy to accept, and efficient for the daily exercise of the elderly, and developing suitable exercises, remains a great difficulty. The standing pile (copper Zhong Gong), as a martial arts action, gathers accumulated wisdom and practice; it is not only a basic action for increasing lower-limb strength and fighting stability in martial arts, but also a common practice method for health care. Standing pile training requires professional guidance and training. However, it is currently difficult in most areas to find instructors with rich experience and a high professional level, and real-time feedback cannot be provided to each elderly person individually. In addition, in some remote or resource-scarce areas, specialized instruction and training facilities may be lacking. This limits the elderly's opportunities to receive standing pile guidance and training. Accordingly, a static training aid and a method therefor are desired.
Disclosure of Invention

The application provides a static training auxiliary device and a method thereof. The method acquires a training action image of a training object captured by a camera, extracts a training action reference image from a background database, extracts human body posture features of the training action image and the training action reference image to obtain a training action human body posture depth feature map and a training reference human body posture depth feature map, calculates a training action semantic difference feature map between the two depth feature maps, and determines whether the training action of the training object is standard based on the training action semantic difference feature map. In this way, whether the training action of the training object is standard can be judged intelligently from the training action semantic difference information, providing timely feedback to the training object so that the elderly can correctly understand and master the standing pile action, reducing potential risks caused by incorrect postures or actions.
The application also provides a static training auxiliary method, which comprises the following steps: acquiring a training action image of a training object captured by a camera; extracting a training action reference image from a background database; extracting human body posture features of the training action image and the training action reference image to obtain a training action human body posture depth feature map and a training reference human body posture depth feature map; calculating a training action semantic difference feature map between the training action human body posture depth feature map and the training reference human body posture depth feature map; and determining whether the training action of the training object is standard based on the training action semantic difference feature map. In the static training auxiliary method, extracting the human body posture features of the training action image and the training action reference image to obtain the training action human body posture depth feature map and the training reference human body posture depth feature map comprises the steps of passing the training action image and the training action reference image through a training object human body component positioner based on a DensePose network to obtain a training action human body posture estimation feature map and a training reference human body posture estimation feature map, and passing the training action human body posture estimation feature map and the training reference human body posture estimation feature map through a training posture twin detection model comprising a first human body posture depth feature encoder and a sec