CN-122015777-A - Method for detecting standing long jump take-off violations based on vision
Abstract
The invention discloses a vision-based standing long jump take-off violation detection method. Video images are collected by a camera placed at the side of the standing long jump test area, and the four vertex coordinates of the rectangular take-off area and the edge-point coordinates of the take-off line are calibrated. Based on a 19-key-point recognition model, the key-point coordinates of the tester during take-off are acquired, covering the left and right shoulders, elbows, wrists, hips, knees, feet and toes. The method then judges whether the feet are inside the take-off area, whether take-off has occurred, whether the feet moved before take-off and whether the hands touched the ground before take-off. After the system is started, a take-off violation score is computed by integrating all judgment results: if, within a set time window, the feet stay inside the area, no step movement occurs and the hands do not touch the ground, the attempt is judged compliant; otherwise it is judged a violation and an early warning is output. The method automatically and accurately detects illegal standing long jump take-off behaviors and improves the fairness and efficiency of physical fitness testing.
Inventors
- WU SHURUI
- ZHANG BOCHENG
- LIANG FAN
Assignees
- 广东先知大数据股份有限公司
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2026-01-29
Claims (10)
- 1. A vision-based method for detecting a standing long jump take-off violation, characterized by comprising the following steps: 1) setting a camera at the side of a standing long jump test area, collecting video images containing the take-off area, and calibrating the four vertex coordinates of the take-off area rectangle, denoted respectively as the upper-left vertex (x1, y1), the upper-right vertex (x2, y1), the lower-left vertex (x1, y2) and the lower-right vertex (x2, y2), together with the two edge-point coordinates of the take-off line, namely the edge point (x2, y3) near the camera and the edge point (x2, y4) far from the camera; 2) obtaining, based on a human body key-point recognition model, the coordinates of 19 key points of the tester during the jump, including the left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left foot, right foot, left toe and right toe; 3) judging, according to the per-frame key-point coordinates, namely the left shoulder (slx_i, sly_i), right shoulder (srx_i, sry_i), left elbow (elx_i, ely_i), right elbow (erx_i, ery_i), left wrist (hlx_i, hly_i), right wrist (hrx_i, hry_i), left hip (ulx_i, uly_i), right hip (urx_i, ury_i), left knee (klx_i, kly_i), right knee (krx_i, kry_i), left foot (flx_i, fly_i), right foot (frx_i, fry_i), left toe (blx_i, bly_i) and right toe (brx_i, bry_i), whether the tester's feet lie inside the take-off area rectangle; the feet are judged to be inside the area if both the horizontal and vertical coordinates of the foot key points fall within the rectangle; 4) judging whether the tester has taken off: if the abscissas of all lower-limb and trunk key points are greater than the abscissa of the take-off line, the tester is judged to have taken off; 5) judging whether the tester made a step movement before take-off by calculating the displacement of the toe and ankle key points relative to the initial frame and comparing it with a set threshold; if the displacement is smaller than the threshold, no step movement is judged to have occurred; 6) judging whether the tester's hands touched the ground before the take-off line by calculating the posture angle of the trunk and the geometric relationship between the wrists and the ground; if the set conditions are met, a hand touchdown is judged; 7) from the n1-th frame after the system issues the start instruction, comprehensively calculating the take-off violation score from the above judgment results; if, within the set time window, the feet remain inside the area, no step movement occurs and the hands do not touch the ground, the take-off is judged legal; otherwise it is judged illegal and a violation early warning is output.
- 2. The vision-based standing long jump take-off violation detection method according to claim 1, wherein the 19-key-point recognition model in step 2) is obtained by adding two key points, the left toe and the right toe, to the traditional 17-point human body model under the YOLO framework and training on manually annotated take-off stage image samples.
- 3. The vision-based standing long jump take-off violation detection method according to claim 1, wherein in step 3), after the system issues the start command, the positions of the person's key points relative to the take-off area rectangle are obtained for each frame i, and whether both feet lie inside the rectangle is judged by calculating a two-foot position score g1_i (formula omitted in the available text), where g1_i = 1 indicates that both feet of the person are inside the detection area; otherwise both feet are judged to be out of bounds.
- 4. The vision-based standing long jump take-off violation detection method according to claim 1, wherein in step 4), whether the person has taken off is judged by calculating a take-off score g2_i (formula omitted in the available text), where g2_i = 1 indicates that the person has taken off; otherwise the person is judged not to have taken off.
- 5. The vision-based standing long jump take-off violation detection method according to claim 1, wherein step 5) judges whether a step movement occurred before take-off by calculating a step movement score g3_i (formula omitted in the available text), where ts1 is a first set judgment threshold, obtained as the maximum fluctuation distance of the toe coordinates before take-off, and ts2 is a second set judgment threshold, obtained as the maximum fluctuation distance of the ankle coordinates before take-off; g3_i = 1 indicates that the person's feet did not move; otherwise a step movement is judged to have occurred.
- 6. The vision-based standing long jump take-off violation detection method according to claim 5, wherein in step 6), whether the tester's hands touched the ground before the take-off line is judged by calculating a touchdown score g4_i (formulas omitted in the available text), where g41_i denotes the left-hand touchdown score and g42_i the right-hand touchdown score, and gx41_i and gx42_i denote the left-hand and right-hand ground-contact abscissas respectively; ts3 is a third set judgment threshold, obtained from pre-take-off images of hands touching the ground before the line as the cosine corresponding to the maximum angle between the trunk and the vertical ground normal; ts4 is a fourth set judgment threshold, obtained from the same images as the maximum ratio of the fingertip-to-wrist distance to the wrist-to-elbow distance.
- 7. The vision-based standing long jump take-off violation detection method according to claim 6, wherein in step 7), whether a take-off violation occurred is judged, starting from the n1-th frame after the tester's take-off detection begins, by calculating a take-off violation score g5_i (formula omitted in the available text), where ts5 is a fifth set judgment threshold, obtained as the minimum ratio of violation-action frames within the n1-frame time window corresponding to each frame across violation take-off videos, and n1 is a positive integer obtained as the minimum number of consecutive violation-action frames when a violation occurs; g5_i = 1 indicates that the person's take-off involves a violation, otherwise g5_i = 0 and the take-off is judged compliant.
- 8. The vision-based standing long jump take-off violation detection method according to claim 7, wherein when g5_i = 1, i.e. a take-off violation is judged to exist, the system immediately sends a violation early warning to the test terminal by audible-visual alarm or wireless message, and automatically resets to the state of waiting for the next take-off.
- 9. The vision-based standing long jump take-off violation detection method according to claim 7, wherein all the thresholds ts1 to ts5 and the frame number n1 can be dynamically configured by an administrator at system initialization according to the venue, camera resolution and test level, and are stored in a local configuration file or a remote database.
- 10. The vision-based standing long jump take-off violation detection method according to claim 9, wherein the camera is installed at a height of 1.0 to 1.5 m, the vertical distance from its optical axis to the take-off line is 0.8 to 1.2 m, and the take-off area rectangle is ensured to occupy at least 60% of the picture.
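The geometric checks in claims 3 and 4 can be sketched in Python. The patent does not reproduce the formulas for g1_i and g2_i, so this is an illustrative interpretation only: it assumes an axis-aligned take-off rectangle in image coordinates and hypothetical key-point names; none of the identifiers below come from the patent.

```python
# Illustrative sketch of the claim 3/4 checks. The patent omits the exact
# formulas, so this assumes an axis-aligned take-off rectangle and a simple
# per-frame dict mapping key-point names to (x, y) tuples. All names here
# are hypothetical, not from the patent.

def in_rect(x, y, x1, y1, x2, y2):
    """True if (x, y) lies inside the take-off area rectangle
    spanned by (x1, y1) and (x2, y2) in image coordinates."""
    return x1 <= x <= x2 and y1 <= y <= y2

def feet_in_area_score(kp, rect):
    """g1_i: 1 if both foot key points are inside the rectangle, else 0."""
    x1, y1, x2, y2 = rect
    feet = [kp["left_foot"], kp["right_foot"]]
    return int(all(in_rect(x, y, x1, y1, x2, y2) for x, y in feet))

def takeoff_score(kp, line_x):
    """g2_i: 1 if every lower-limb and trunk key point has crossed the
    take-off line (abscissa greater than line_x), else 0."""
    parts = ["left_hip", "right_hip", "left_knee", "right_knee",
             "left_foot", "right_foot", "left_toe", "right_toe"]
    return int(all(kp[p][0] > line_x for p in parts))
```

Extending `parts` with shoulder key points would match a reading of claim 1 in which the trunk is represented by the shoulders as well as the hips.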
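Claims 5 and 6 compare per-frame displacements and a trunk posture angle against calibrated thresholds. A minimal sketch under the same assumptions (hypothetical key-point names; the patent's exact g3_i and g4_i formulas are not given, and the "foot" key points stand in here for the ankles):

```python
import math

def step_movement_score(kp0, kp_i, ts1, ts2):
    """g3_i: 1 if the toe and ankle displacements of frame i relative to
    the initial frame stay below thresholds ts1 (toes) and ts2 (ankles),
    else 0 (a step movement is judged). The foot key points are used as
    ankle proxies; this is an assumption, not stated in the patent."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    toes_ok = all(dist(kp0[k], kp_i[k]) < ts1
                  for k in ("left_toe", "right_toe"))
    ankles_ok = all(dist(kp0[k], kp_i[k]) < ts2
                    for k in ("left_foot", "right_foot"))
    return int(toes_ok and ankles_ok)

def trunk_lean_cosine(kp):
    """Cosine of the angle between the trunk (mid-hip to mid-shoulder
    vector) and the vertical ground normal; claim 6 compares such a
    cosine against ts3 as part of the hand-touchdown judgment."""
    def mid(a, b):
        return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    sx, sy = mid(kp["left_shoulder"], kp["right_shoulder"])
    hx, hy = mid(kp["left_hip"], kp["right_hip"])
    vx, vy = sx - hx, sy - hy
    norm = math.hypot(vx, vy)
    # In image coordinates the vertical "up" direction is (0, -1),
    # so the dot product with the trunk vector reduces to -vy.
    return (-vy) / norm if norm else 0.0
```

A fully upright trunk gives a cosine of 1.0; a strongly bent trunk drives the cosine toward 0, which, combined with the wrist-to-ground geometry, signals a possible hand touchdown.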
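The sliding-window decision of claim 7 can be sketched as follows. Here ts5 and n1 play the roles of the calibrated threshold and window length from the claim, while the per-frame violation flags (which would come from the g1 to g4 checks) and all function names are hypothetical:

```python
from collections import deque

def violation_scores(frame_flags, n1, ts5):
    """g5_i over a stream of per-frame violation flags (1 means at least
    one of the per-frame checks failed in that frame). g5_i = 1 once the
    fraction of violation frames in the last n1 frames reaches ts5;
    before the window is full, g5_i stays 0."""
    window = deque(maxlen=n1)   # keeps only the last n1 flags
    out = []
    for flag in frame_flags:
        window.append(flag)
        full = len(window) == n1
        out.append(int(full and sum(window) / n1 >= ts5))
    return out
```

For example, with n1 = 3 and ts5 = 0.6, a run of violation frames trips the score only from the third frame onward, which matches the claim's intent of suppressing single-frame detection noise.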
Description
Method for detecting standing long jump take-off violations based on vision

Technical Field

The invention relates to the technical field of physical fitness measurement, in particular to a vision-based method for detecting standing long jump take-off violations.

Background

The standing long jump is an important item in physical education and fitness testing, and the standardization of the take-off action directly influences the accuracy and fairness of the test result. Traditional standing long jump tests mainly rely on manual observation and subjective judgment to identify take-off violations, but as test standardization requirements continue to rise, intelligent detection methods based on computer vision have gradually become a research hotspot. Several vision-based standing long jump test schemes exist in the prior art. Chinese patent CN113137923A discloses a method for measuring standing long jump performance that uses a camera and machine vision to realize intelligent performance measurement, including setting the test area and motion parameters, collecting a user test video and analyzing it to obtain the result. Chinese patent CN114712769A proposes a computer-vision-based intelligent distance measurement method and system for the standing long jump, which combines computer vision with human body key-point recognition to photograph the landing point on the mat and confirm it in order to calculate the score. Chinese patent CN110624203B discloses a method and apparatus for a free standing long jump test that uses a camera and machine vision to perform rule judgment and performance measurement, removing the traditional test's reliance on a take-off line.
Chinese patent CN117379769A discloses a computer-vision-based method for analyzing and ranging standing long jump motions, based on temporal action localization of human skeleton key points, so that motion analysis and distance measurement of the whole jump can be realized; it can both determine fouls and automatically measure distance. Chinese patent CN117582649B provides a computer-vision-based three-stage frog jump test method and system in which two cameras monitor the preparation area and the jump area respectively and judge take-off violations and jump-discontinuity violations in real time. However, the prior art still has the following defects in standing long jump take-off violation detection. First, it focuses mainly on score measurement and landing point detection; detection of violations in the take-off stage is not accurate enough, and the ability to precisely recognize fine violation actions such as foot movement and hand touchdown is lacking. Second, existing human body key-point recognition models mainly adopt the traditional 17-point model and lack detection points for key parts such as the toes, making precise judgment of violations such as stepping on the take-off line difficult. Third, the prior art lacks a scoring mechanism that integrates multiple violation judgment conditions and cannot provide stable, reliable violation detection results in complex test environments, which affects the practicality and accuracy of the detection system.
Disclosure of Invention

In order to solve the technical problems of limited observation, strong subjectivity and inconsistent standards in traditional standing long jump take-off violation judgment, which mainly depends on teachers' on-site observation and subjective judgment, the invention provides a vision-based standing long jump take-off violation detection method. The aim of the invention is realized by the following technical scheme. The vision-based standing long jump take-off violation detection method comprises the following steps: 1) setting a camera at the side of a standing long jump test area, collecting video images containing the take-off area, and calibrating the four vertex coordinates of the take-off area rectangle, denoted respectively as the upper-left vertex (x1, y1), the upper-right vertex (x2, y1), the lower-left vertex (x1, y2) and the lower-right vertex (x2, y2), together with the two edge-point coordinates of the take-off line, namely the edge point (x2, y3) near the camera and the edge point (x2, y4) far from the camera; 2) obtaining, based on a human body key-point recognition model, the coordinates of 19 key points of the tester during the jump, including the left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip