CN-122018695-A - Gesture data acquisition method, system and storage medium based on virtual reality technology
Abstract
The application relates to the technical field of robots and discloses a gesture data acquisition method, a gesture data acquisition system, and a computer-readable storage medium based on virtual reality technology. The method comprises: determining current real gesture data while a user executes a preset task; mapping the current real gesture data to current virtual gesture data of a virtual human body model in a virtual reality device; driving the virtual human body model to follow the user's movement according to the current virtual gesture data; if the current virtual gesture data does not meet an action quality requirement, designating the current virtual gesture data as re-recording gesture data and generating action re-recording guidance information corresponding to the re-recording gesture data; and displaying the action re-recording guidance information to the user via the virtual reality device so as to guide the user to re-execute the re-recording task segment of the preset task corresponding to the re-recording gesture data. In this way, the quality of the collected gesture data is improved.
Inventors
- JIN SIBO
- PENG SHIJIA
- WANG CHAORAN
- ZHANG ZHANPENG
Assignees
- 深圳超维动力智能科技有限公司
Dates
- Publication Date: 2026-05-12
- Application Date: 2026-02-11
Claims (10)
- 1. A gesture data acquisition method based on virtual reality technology, characterized by comprising the following steps: determining current real gesture data while a user executes a preset task, based on a joint tracking device worn by the user; mapping the current real gesture data to current virtual gesture data of a virtual human body model in a virtual reality device, wherein the virtual reality device is configured to be worn on the user's head and to display the virtual human body model to the user; driving the virtual human body model in the virtual reality device to follow the user's movement according to the current virtual gesture data; judging whether the current virtual gesture data meets a preset action quality requirement; if the current virtual gesture data does not meet the action quality requirement, designating the current virtual gesture data as re-recording gesture data and generating action re-recording guidance information corresponding to the re-recording gesture data; displaying the action re-recording guidance information to the user via the virtual reality device so as to guide the user to re-execute the re-recording task segment of the preset task corresponding to the re-recording gesture data; and storing the real gesture data corresponding to the virtual gesture data that meets the action quality requirement during the user's execution of the preset task, so as to complete gesture data acquisition.
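The claim above can be read as a per-frame loop: map each real pose onto the avatar, check quality, store passing frames, and flag failing frames for re-recording. The following is a minimal, self-contained sketch of that loop; every name and value here (`map_to_virtual`, `meets_quality`, the pose layout, the 2.5 m wrist threshold) is an illustrative assumption, not taken from the patent.

```python
# Hypothetical sketch of the claim-1 acquisition loop.
# A "pose" is modeled as a dict of joint name -> (x, y, z) in metres.

def map_to_virtual(real_pose, scale=1.0):
    """Map real tracker coordinates onto the virtual mannequin (identity mapping here)."""
    return {joint: tuple(c * scale for c in xyz) for joint, xyz in real_pose.items()}

def meets_quality(virtual_pose, max_wrist_height=2.5):
    """Toy quality rule: a wrist higher than 2.5 m is treated as a tracking glitch."""
    x, y, z = virtual_pose.get("left_wrist", (0.0, 0.0, 0.0))
    return z <= max_wrist_height

def acquire(frames):
    """Store frames that pass the check; flag the rest for re-recording."""
    stored, rerecord = [], []
    for real_pose in frames:
        virtual_pose = map_to_virtual(real_pose)   # would also drive the avatar
        if meets_quality(virtual_pose):
            stored.append(real_pose)
        else:
            rerecord.append(real_pose)             # would trigger guidance display in VR
    return stored, rerecord

frames = [
    {"left_wrist": (0.1, 0.2, 1.1)},   # plausible pose
    {"left_wrist": (0.1, 0.2, 9.9)},   # glitch -> flagged for re-recording
]
good, bad = acquire(frames)
```

In the full system the `rerecord` branch would also generate and display the re-recording guidance described in the claim; here it only collects the flagged frames.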
- 2. The method of claim 1, wherein the re-recording task segment corresponds to a plurality of virtual gesture data, and wherein, after the action re-recording guidance information is displayed to the user via the virtual reality device to guide the user to re-execute the re-recording task segment of the preset task corresponding to the re-recording gesture data, the method further comprises: generating prompt information corresponding to the re-recording gesture data, wherein the prompt information indicates that the current posture of the virtual human body model is an unqualified posture; and driving the virtual human body model in the virtual reality device to move again according to the plurality of virtual gesture data corresponding to the re-recording task segment, and presenting the prompt information in the virtual reality device when the virtual human body model moves into the posture corresponding to the re-recording gesture data.
- 3. The method of claim 1, wherein storing the real gesture data corresponding to the virtual gesture data that meets the action quality requirement during the user's execution of the preset task to complete gesture data acquisition comprises: identifying task segments repeatedly executed by the user during execution of the preset task to obtain at least one repeated task segment group, wherein the repeated task segment group comprises a task segment executed a plurality of times, and each executed task segment corresponds to a plurality of virtual gesture data; determining, for each executed task segment in the repeated task segment group, the quantity of re-recording gesture data among its corresponding virtual gesture data as the re-recording quantity of that executed task segment; and storing the real gesture data corresponding to the task segment with the smallest re-recording quantity in the repeated task segment group, together with the real gesture data corresponding to the task segments that were not repeatedly executed during the user's execution of the preset task, so as to complete gesture data acquisition.
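The selection rule in claim 3 reduces to a per-group minimum: within each group of repeated executions of the same task segment, keep the execution with the fewest re-recorded poses. A sketch under assumed record fields (`"run"` and `"rerecord_count"` are illustrative, not from the patent):

```python
# Keep, per repeated-task-segment group, the execution with the fewest
# re-recorded poses (claim-3 selection rule; data layout is hypothetical).

def keep_best_executions(groups):
    return [min(group, key=lambda e: e["rerecord_count"]) for group in groups]

groups = [[
    {"run": 1, "rerecord_count": 4},
    {"run": 2, "rerecord_count": 1},   # fewest re-records -> kept
    {"run": 3, "rerecord_count": 2},
]]
best = keep_best_executions(groups)
```

Segments executed only once fall outside any group and would be stored unconditionally, as the claim states.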
- 4. The method of claim 1, wherein judging whether the current virtual gesture data meets the preset action quality requirement comprises: performing a preliminary judgment on the current virtual gesture data according to a preset motion rationality rule to obtain a preliminary judgment result; if the preliminary judgment result is that the current virtual gesture is reasonable, determining that the current virtual gesture data meets the action quality requirement; and if the preliminary judgment result is that the current virtual gesture is unreasonable, inputting the current virtual gesture data into a pre-trained visual language model to obtain a model judgment result; if the model judgment result is that the current virtual gesture is reasonable, determining that the current virtual gesture data meets the action quality requirement, and if the model judgment result is that the current virtual gesture is unreasonable, determining that the current virtual gesture data does not meet the action quality requirement.
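The control flow in claim 4 is a two-stage cascade: a cheap rule check runs first, and only frames the rule flags as unreasonable are escalated to the (presumably expensive) visual language model, whose verdict is final. A sketch with stub checkers; the rule and thresholds below are assumptions for illustration:

```python
# Claim-4 cascade: rule check first, VLM only on rule failures (stubs only).

def check_quality(pose, rule_ok, vlm_ok):
    if rule_ok(pose):
        return True            # rule says reasonable -> accept immediately
    return vlm_ok(pose)        # rule failed -> the VLM's verdict decides

calls = []                     # count how often the "VLM" is invoked

def rule(p):
    return p["z"] < 2.0        # toy rationality rule

def vlm(p):
    calls.append(p)            # stand-in for a visual language model query
    return p["z"] < 3.0

ok1 = check_quality({"z": 1.0}, rule, vlm)   # passes rule; VLM never called
ok2 = check_quality({"z": 2.5}, rule, vlm)   # fails rule; VLM accepts
ok3 = check_quality({"z": 3.5}, rule, vlm)   # fails rule and VLM
```

The point of the design is cost: the model is queried only for the minority of frames the cheap rule cannot clear.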
- 5. The method of claim 4, wherein the motion rationality rule includes a joint movement rule and a joint distance rule, and performing the preliminary judgment on the current virtual gesture data according to the preset motion rationality rule to obtain the preliminary judgment result comprises: determining the joint movement degree of the current virtual gesture according to key joint data in the previous virtual gesture data and key joint data in the current virtual gesture data; determining the joint distances among a plurality of key joints in the current virtual gesture according to the key joint data in the current virtual gesture data; and if the joint movement degree meets the joint movement rule and the joint distances among the plurality of key joints meet the joint distance rule, judging that the current virtual gesture is reasonable.
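The two preliminary checks in claim 5 can be sketched with plain Euclidean geometry. The definitions below are assumptions (the patent does not define "joint movement degree"; here it is taken as the largest per-joint displacement between consecutive frames, and the joint distance rule as a single `[lo, hi]` interval):

```python
import math

# Hypothetical claim-5 checks: inter-frame movement degree + joint distances.

def joint_movement_degree(prev_pose, cur_pose):
    """Largest per-joint displacement between consecutive frames (assumed definition)."""
    return max(math.dist(prev_pose[j], cur_pose[j]) for j in cur_pose)

def distances_ok(pose, pairs, lo=0.05, hi=1.0):
    """Every listed key-joint pair distance lies inside [lo, hi] metres."""
    return all(lo <= math.dist(pose[a], pose[b]) <= hi for a, b in pairs)

prev = {"l_shoulder": (0.0, 0.0, 1.4), "l_elbow": (0.0, 0.0, 1.1)}
cur  = {"l_shoulder": (0.0, 0.0, 1.4), "l_elbow": (0.05, 0.0, 1.1)}

degree = joint_movement_degree(prev, cur)   # elbow moved 0.05 m, shoulder 0 m
reasonable = degree <= 0.2 and distances_ok(cur, [("l_shoulder", "l_elbow")])
```

A frame failing either check would be passed to the visual language model per claim 4 rather than rejected outright.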
- 6. The method of claim 5, wherein the joint movement rule includes that the joint movement degree does not exceed a preset joint movement degree; the joint distance rule includes that the joint distances among the plurality of key joints all lie within a preset joint distance interval, or the joint distance rule includes that a current distance proportion lies within a preset reasonable proportion interval corresponding to a distance proportion item; the process of judging whether the current distance proportion lies within the preset reasonable proportion interval corresponding to the distance proportion item comprises: determining, based on the current virtual gesture data and according to the preset distance proportion item, a first distance of a first joint set and a second distance of a second joint set required to form the distance proportion item, wherein the first distance is a geometric measure calculated from the spatial coordinates of the joints contained in the first joint set, and the second distance is a geometric measure calculated from the spatial coordinates of the joints contained in the second joint set; determining the ratio of the first distance to the second distance as the current distance proportion of the distance proportion item; judging whether the current distance proportion lies within the preset reasonable proportion interval corresponding to the distance proportion item; and if so, determining that the joint distances among the plurality of key joints meet the joint distance rule; wherein the distance proportion item defines the first joint set and the second joint set, and the relation between the first joint set and the second joint set includes at least one of: an adjacency relation on the same limb, a symmetry relation about the body midline, or a geometric relation jointly forming a specific functional triangle.
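The distance-proportion check in claim 6 is a ratio test: compute a geometric measure for each joint set, divide, and compare against a preset interval. The sketch below reads "geometric measure" as polyline length through the set's joints and uses an upper-arm/forearm pair on the same limb; both the measure and the `[0.9, 1.5]` interval are assumptions for illustration:

```python
import math

# Hypothetical claim-6 ratio check between two joint sets on the same limb.

def set_span(pose, joints):
    """Polyline length through the listed joints (assumed 'geometric measure')."""
    pts = [pose[j] for j in joints]
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def ratio_in_range(pose, set_a, set_b, lo, hi):
    """Current distance proportion = span(set_a) / span(set_b), tested against [lo, hi]."""
    r = set_span(pose, set_a) / set_span(pose, set_b)
    return lo <= r <= hi, r

pose = {
    "l_shoulder": (0.0, 0.0, 1.40),
    "l_elbow":    (0.0, 0.0, 1.10),   # upper arm: 0.30 m
    "l_wrist":    (0.0, 0.0, 0.85),   # forearm:  0.25 m
}
ok, r = ratio_in_range(pose, ("l_shoulder", "l_elbow"),
                       ("l_elbow", "l_wrist"), 0.9, 1.5)
# upper-arm / forearm ratio = 0.30 / 0.25 = 1.2, inside [0.9, 1.5]
```

A ratio test of this kind is scale-invariant, so it stays valid across users of different heights, which a fixed distance interval would not.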
- 7. The method of claim 5, wherein the joint tracking device comprises a plurality of joint trackers to be worn on key body locations of the user, the key body locations comprising at least one of the head, torso, arms, hands, pelvis, and lower limbs, and the key joint data being the data corresponding to the key body locations.
- 8. A gesture data acquisition system based on virtual reality technology, characterized by comprising a server, a joint tracking device, and a virtual reality device, wherein the server is in communication connection with the joint tracking device and the virtual reality device respectively; the joint tracking device is configured to be worn by a user, to collect current real gesture data while the user executes a preset task, and to send the current real gesture data to the server; the server is configured to perform the method of any one of claims 1 to 7, wherein the server generates driving data for driving the virtual human body model according to the current real gesture data, and sends the driving data and the action re-recording guidance information corresponding to the re-recording gesture data to the virtual reality device; and the virtual reality device is configured to display the motion state of the virtual human body model and the action re-recording guidance information, and to drive the virtual human body model to follow the user's movement according to the driving data.
- 9. A gesture data acquisition system based on virtual reality technology, characterized by comprising a joint tracking device and a virtual reality device, wherein the joint tracking device and the virtual reality device are in communication connection; the joint tracking device is configured to be worn by a user, to collect current real gesture data while the user executes a preset task, and to send the current real gesture data to the virtual reality device; and the virtual reality device is configured to perform the method of any one of claims 1 to 7.
- 10. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the gesture data acquisition method based on virtual reality technology according to any one of claims 1 to 7.
Description
Gesture data acquisition method, system and storage medium based on virtual reality technology
Technical Field
The embodiments of the application relate to the technical field of robots, and in particular to a gesture data acquisition method, a gesture data acquisition system, and a computer-readable storage medium based on virtual reality technology.
Background
With the rapid development of computer technology, sensor technology, artificial intelligence, and robotics, humanoid robot technology continues to improve; in particular, breakthroughs in deep learning, computer vision, and related technologies have given humanoid robots stronger perception, learning, and decision-making capabilities. Humanoid robots have broad application prospects in many fields, such as home services, medical care, and public safety, and as living standards rise and labor costs increase, the demand for robots that can replace humans in repetitive, tedious, or dangerous work keeps growing. To train a robot to complete a specific task, gesture data corresponding to that task must be collected to teach the robot. In the prior art, however, gesture data is usually collected according to a preset procedure or human experience; problems such as abnormal joints, abrupt posture changes, or discontinuous states during collection are ignored, and data quality defects are often discovered only through offline playback or post-processing analysis after collection is complete. As a result, a large amount of invalid or low-value data beyond the robot's capability range is generated during collection, and only a small fraction of the collected gesture data is usable. How to improve the quality of the collected gesture data is therefore a technical problem to be solved.
Disclosure of Invention
In view of the above problems, embodiments of the present application provide a gesture data acquisition method, a gesture data acquisition system, and a computer-readable storage medium based on virtual reality technology, so as to solve the prior-art problem that the quality of collected gesture data is too low. According to one aspect of the embodiments of the application, a gesture data acquisition method based on virtual reality technology is provided, comprising: determining current real gesture data while a user executes a preset task, based on a joint tracking device worn by the user; mapping the current real gesture data to current virtual gesture data of a virtual human body model in a virtual reality device, wherein the virtual reality device is configured to be worn on the user's head and to display the virtual human body model to the user; driving the virtual human body model in the virtual reality device to follow the user's movement according to the current virtual gesture data; judging whether the current virtual gesture data meets a preset action quality requirement; if the current virtual gesture data does not meet the action quality requirement, designating the current virtual gesture data as re-recording gesture data and generating action re-recording guidance information corresponding to the re-recording gesture data; displaying the action re-recording guidance information to the user via the virtual reality device so as to guide the user to re-execute the re-recording task segment of the preset task corresponding to the re-recording gesture data; and storing the real gesture data corresponding to the virtual gesture data that meets the action quality requirement during the user's execution of the preset task, so as to complete gesture data acquisition.
Preferably, after the user is guided, via the virtual reality device, to re-execute the re-recording task segment of the preset task corresponding to the re-recording gesture data, the method further comprises: generating prompt information corresponding to the re-recording gesture data, wherein the prompt information indicates that the current posture of the virtual human body model is an unqualified posture; driving the virtual human body model in the virtual reality device to move again according to the virtual gesture data corresponding to the re-recording task segment; and displaying the prompt information in the virtual reality device when the virtual human body model moves into the posture corresponding to the re-recording gesture data. Preferably, storing the real gesture data corresponding to the virtual gesture data that meets the action quality requirement during the user's execution of the preset task to complete gesture data acquisition comprises: identifying task segments repeatedly executed by the user during execution of the preset task to obtain at least one repeated task