
CN-121033942-B - Personalized running training service method, system and medium

CN121033942B

Abstract

The invention belongs to the field of running data processing and discloses a personalized running training service method, system and medium. The method comprises the following steps: S1, acquiring a video frame sequence and a speed sequence of an athlete moving on a treadmill; S2, segmenting the video frame sequence based on the speed sequence to obtain a plurality of sub-video frame sequences; S3, extracting the foreground of the video frames in each sub-video frame sequence to obtain foreground images; and S4, analyzing each foreground image to obtain an analysis result of the athlete's running posture. By exploiting the high foreground similarity within each sub-video frame sequence when acquiring foreground images, the invention avoids performing inter-frame difference processing on every image frame, which effectively improves both the efficiency of foreground-image acquisition and the completeness of the foreground images.

Inventors

  • HU AIPING

Assignees

  • 北京奥康达体育科技有限公司

Dates

Publication Date
2026-05-05
Application Date
2025-09-23

Claims (10)

  1. A personalized running training service method, comprising: S1, acquiring a video frame sequence and a speed sequence of an athlete moving on a treadmill; S2, segmenting the video frame sequence based on the speed sequence to obtain a plurality of sub-video frame sequences, comprising: the first sub-video frame sequence contains N video frames, where N is a preset number; for the nth sub-video frame sequence, with n greater than or equal to 2, determining the number of video frames it contains comprises: acquiring the first frame and the last frame of the (n-1)th sub-video frame sequence, the last frame being indexed by the total number of video frames contained in the (n-1)th sub-video frame sequence; respectively obtaining the foreground regions of the first frame and of the last frame; calculating a first control parameter based on the two foreground regions, comprising: first, calculating the similarity siml between the two foreground regions; second, calculating the distance dm between the centers of the two foreground regions; third, calculating the first control parameter as a weighted combination of siml and dm using a first weight and a second weight, where nz denotes normalization; acquiring from the speed sequence the sub-speed sequence corresponding to the video frames of the (n-1)th sub-video frame sequence; calculating a second control parameter based on the sub-speed sequence, comprising: first, obtaining a weighted rotational speed from the sub-speed sequence, the weighted rotational speed being computed over the ith rotational speed in the sub-speed sequence, where NS is the total number of rotational speeds in the sub-speed sequence; second, obtaining a rotational speed fluctuation value for the sub-speed sequence, comprising: fitting a linear regression line to the rotational speeds of the sub-speed sequence, and taking the normalized absolute value of the slope of the regression line as the rotational speed fluctuation value k; third, calculating the second control parameter from the weighted rotational speed, the fluctuation value k, the maximum rotational speed in the sub-speed sequence, and a third weight; and calculating the number of video frames contained in the nth sub-video frame sequence based on the first control parameter and the second control parameter; S3, respectively extracting the foreground of the video frames in each sub-video frame sequence to obtain foreground images; S4, respectively analyzing each foreground image to obtain an analysis result of the athlete's running posture.
  2. The personalized running training service method of claim 1, wherein the speed sequence comprises a time sequence of rotational speeds of the running belt of the treadmill.
  3. The personalized running training service method according to claim 1, wherein the number of video frames contained in the video frame sequence is the same as the number of rotational speeds contained in the speed sequence; each video frame in the video frame sequence corresponds to one rotational speed in the speed sequence, obtained as follows: for a video frame generated at time t, the corresponding rotational speed is the rotational speed of the treadmill's running belt at time t.
  4. The personalized running training service method according to claim 1, wherein respectively obtaining the foreground regions of the first frame and of the last frame comprises, for each of the two video frames: acquiring a first region using an inter-frame difference algorithm; performing region growing on the pixel points at the edge of the first region to obtain a second region; and screening the second region to obtain the foreground region.
  5. The personalized running training service method according to claim 4, wherein acquiring the first region using the inter-frame difference algorithm comprises: if the video frame is the first frame in the video frame sequence, taking the second frame in the video frame sequence as the comparison frame; otherwise, taking the immediately preceding video frame in the video frame sequence as the comparison frame; calculating, based on the comparison frame, the gray difference value of each pixel point; and taking the pixel points whose gray difference value is greater than an adaptive gray threshold as the pixel points of the first region.
  6. The personalized running training service method according to claim 5, wherein calculating the gray difference value of each pixel point comprises: for the pixel at a given coordinate in the video frame, computing its gray difference value from the gray value of that pixel and the gray value of the pixel at the same coordinate in the comparison frame.
  7. The personalized running training service method according to claim 4, wherein performing region growing on the pixels at the edge of the first region to obtain the second region comprises: acquiring a set A of the pixel points at the edge of the first region; and taking each pixel point in A in turn as a seed point for region growing to obtain the second region.
  8. The personalized running training service method of claim 1, wherein the value of N is 150.
  9. A personalized running training service system, characterized by comprising an acquisition module, a segmentation module, an extraction module and an analysis module; the acquisition module is used for acquiring a video frame sequence and a speed sequence of an athlete moving on a treadmill; the segmentation module is used for segmenting the video frame sequence based on the speed sequence to obtain a plurality of sub-video frame sequences, comprising: the first sub-video frame sequence contains N video frames, where N is a preset number; for the nth sub-video frame sequence, with n greater than or equal to 2, determining the number of video frames it contains comprises: acquiring the first frame and the last frame of the (n-1)th sub-video frame sequence, the last frame being indexed by the total number of video frames contained in the (n-1)th sub-video frame sequence; respectively obtaining the foreground regions of the first frame and of the last frame; calculating a first control parameter based on the two foreground regions, comprising: first, calculating the similarity siml between the two foreground regions; second, calculating the distance dm between the centers of the two foreground regions; third, calculating the first control parameter as a weighted combination of siml and dm using a first weight and a second weight, where nz denotes normalization; acquiring from the speed sequence the sub-speed sequence corresponding to the video frames of the (n-1)th sub-video frame sequence; calculating a second control parameter based on the sub-speed sequence, comprising: first, obtaining a weighted rotational speed from the sub-speed sequence, the weighted rotational speed being computed over the ith rotational speed in the sub-speed sequence, where NS is the total number of rotational speeds in the sub-speed sequence; second, obtaining a rotational speed fluctuation value for the sub-speed sequence, comprising: fitting a linear regression line to the rotational speeds of the sub-speed sequence, and taking the normalized absolute value of the slope of the regression line as the rotational speed fluctuation value k; third, calculating the second control parameter from the weighted rotational speed, the fluctuation value k, the maximum rotational speed in the sub-speed sequence, and a third weight; and calculating the number of video frames contained in the nth sub-video frame sequence based on the first control parameter and the second control parameter; the extraction module is used for respectively extracting the foreground of the video frames in each sub-video frame sequence to obtain foreground images; the analysis module is used for respectively analyzing each foreground image to obtain an analysis result of the athlete's running posture.
  10. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the steps of the method according to any one of claims 1-8.
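The adaptive segmentation loop described in claim 1 can be sketched outside the claim language as follows. This is a minimal illustration, not the patented computation: the published text renders the frame-count formula only as an image, so the length-update rule here is a caller-supplied callback (`next_len`), and the function name, its parameters, and the toy shrink rule are all illustrative assumptions.

```python
def split_into_subsequences(num_frames, first_len, next_len):
    """Split a video of num_frames frames into sub-sequences.

    The first sub-sequence holds a preset number of frames (first_len,
    playing the role of N; claim 8 uses N = 150). Each later
    sub-sequence's length is derived from the previous one via next_len,
    a stand-in for the patent's control-parameter formula.
    Returns a list of (start, end) half-open index ranges.
    """
    bounds = []
    start = 0
    length = first_len
    while start < num_frames:
        end = min(start + length, num_frames)
        bounds.append((start, end))
        length = max(1, next_len(length))  # never allow an empty segment
        start = end
    return bounds

# Demo with a toy rule that shrinks each segment by roughly 20%.
print(split_into_subsequences(10, 4, lambda L: int(L * 0.8)))
# → [(0, 4), (4, 7), (7, 9), (9, 10)]
```

Keeping the update rule as a callback mirrors the claim's structure: the control parameters only influence *how many* frames the next sub-sequence receives, not which frames they are.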

Description

Personalized running training service method, system and medium

Technical Field

The present invention relates to the field of running data processing, and in particular to a personalized running training service method, system and medium.

Background

When analyzing the exercise posture of a person on a treadmill, a video frame sequence of the person exercising is generally captured by a camera and then analyzed to determine how the person's running posture should be improved. In the prior art, to improve analysis efficiency, a foreground region is usually extracted from each video frame before analysis; this reduces the number of pixels that must be processed during feature extraction. The usual extraction method is inter-frame difference: each frame is compared with the previous frame in sequence, and the regions with larger changes are taken as the foreground region. This method has two drawbacks: many pixel points must be compared for every frame, which limits foreground-extraction efficiency, and the resulting foreground region tends to contain holes, which degrades the quality of the subsequently extracted features and makes the running-posture analysis less accurate.

Disclosure of Invention

The invention aims to disclose a personalized running training service method, system and medium that solve the technical problems described in the background above.
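The inter-frame difference technique discussed in the background can be illustrated with a short NumPy sketch. The function name, the fixed threshold of 25, and the toy frames are illustrative assumptions; the patent itself uses an adaptive gray threshold (claim 5) rather than a constant.

```python
import numpy as np

def frame_difference_foreground(curr, prev, threshold=25):
    """Classic inter-frame difference: pixels whose grayscale change
    between two consecutive frames exceeds a threshold are marked as
    foreground. curr and prev are 2-D uint8 grayscale frames of equal
    shape; returns a boolean foreground mask."""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > threshold

# Tiny demo: a bright 2x2 "runner" patch moves one pixel to the right.
prev = np.zeros((6, 6), dtype=np.uint8)
curr = np.zeros((6, 6), dtype=np.uint8)
prev[2:4, 1:3] = 200
curr[2:4, 2:4] = 200
mask = frame_difference_foreground(curr, prev)
print(mask.sum())  # → 4 (the trailing and leading edges of the patch)
```

Note the demo also exhibits the "holes" drawback the background describes: the overlap of the patch between the two frames (the unchanged middle column) is not flagged, so only the moving edges survive in the mask.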
To achieve the above purpose, the present invention provides the following technical solutions. In a first aspect, the invention provides a personalized running training service method, comprising: S1, acquiring a video frame sequence and a speed sequence of an athlete moving on a treadmill; S2, segmenting the video frame sequence based on the speed sequence to obtain a plurality of sub-video frame sequences, comprising: the first sub-video frame sequence contains N video frames, where N is a preset number; for the nth sub-video frame sequence, with n greater than or equal to 2, determining the number of video frames it contains comprises: acquiring the first frame and the last frame of the (n-1)th sub-video frame sequence, the last frame being indexed by the total number of video frames contained in the (n-1)th sub-video frame sequence; respectively obtaining the foreground regions of the first frame and of the last frame; calculating a first control parameter based on the two foreground regions; acquiring from the speed sequence the sub-speed sequence corresponding to the video frames of the (n-1)th sub-video frame sequence; calculating a second control parameter based on the sub-speed sequence; and calculating the number of video frames contained in the nth sub-video frame sequence based on the first control parameter and the second control parameter; S3, respectively extracting the foreground of the video frames in each sub-video frame sequence to obtain foreground images; S4, respectively analyzing each foreground image to obtain an analysis result of the athlete's running posture. Preferably, the speed sequence comprises a time sequence of rotational speeds of the treadmill's running belt.
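The second control parameter combines a weighted rotational speed with a fluctuation value k derived from a linear regression over the sub-speed sequence. Since the exact formulas appear only as images in the published text, the sketch below merely assembles the stated ingredients: the weighting scheme (a plain mean here), the normalization of the slope, the combination with the maximum speed, and the weight `w3` are all assumptions standing in for the unspecified details.

```python
import numpy as np

def second_control_parameter(speeds, w3=0.5):
    """Hedged sketch of the claim's second control parameter.

    speeds: the sub-speed sequence (positive rotational speeds).
    w3: stand-in for the patent's unspecified third weight.
    """
    speeds = np.asarray(speeds, dtype=float)
    ns = len(speeds)                       # NS in the claim
    # Placeholder weighting: a plain mean (the true weights are elided).
    weighted = speeds.mean()
    # Slope of the least-squares regression line over the frame index.
    slope = np.polyfit(np.arange(ns), speeds, 1)[0]
    # Fluctuation value k: normalized absolute slope, squashed into [0, 1).
    k = abs(slope) / (abs(slope) + 1.0)
    # Illustrative combination of relative speed and fluctuation.
    return w3 * (weighted / speeds.max()) + (1 - w3) * k

# Constant speed: zero slope, so only the relative-speed term remains.
print(round(second_control_parameter([3.0, 3.0, 3.0]), 3))  # → 0.5
```

The intended behavior is plausible from the claim's structure: a steady belt (zero slope, k near 0) yields a small parameter, while an accelerating or fluctuating belt raises it, which would shorten the next sub-sequence so that frames within it stay similar.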
Preferably, the number of video frames contained in the video frame sequence is the same as the number of rotational speeds contained in the speed sequence; each video frame in the video frame sequence corresponds to one rotational speed in the speed sequence, obtained as follows: for a video frame generated at time t, the corresponding rotational speed is the rotational speed of the treadmill's running belt at time t. Preferably, respectively obtaining the foreground regions of the first frame and of the last frame comprises, for each of the two video frames: acquiring a first region using an inter-frame difference algorithm; performing region growing on the pixel points at the edge of the first region to obtain a second region; and screening the second region to obtain the foreground region. Preferably, acquiring the first region using the inter-frame difference algorithm comprises: if the video frame is the first frame in the video frame sequence, taking the second frame in the video frame sequence as the comparison frame; otherwise, taking the immediately preceding video frame in the video frame sequence as the comparison frame; calculating, based on the comparison frame, the gray difference value of each pixel point; and taking the pixel points whose gray difference value is greater than an adaptive gray threshold as the pixel points of the first region. Preferably, the gray level difference of each pixel point is calculated separately