CN-122020241-A - Robot simulation data generalization method, device and storage medium
Abstract
The embodiments of the present application provide a robot simulation data generalization method, an apparatus, a storage medium, and a computer program product. The method comprises: obtaining scene parameters of a target task in a robot simulation environment; decomposing the target task into a plurality of subtasks, each associated with at least one specific operation object; for each subtask, selecting a matching source demonstration fragment from a pre-stored robot demonstration data fragment library according to the task type of the subtask and its associated operation object; based on the difference between the actual pose of the target operation object associated with the current subtask and its demonstrated pose in the selected source demonstration fragment, performing a space coordinate transformation centered on the target operation object on the robot motion track in the source fragment, and injecting noise to generate a preliminary generalized track; and performing physical verification and optimization on all preliminary generalized tracks to obtain a simulation track data set for imitation learning training of the robot.
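The object-centered retargeting and noise injection described above can be sketched in Python as follows. This is a minimal illustration, not the patent's implementation: all function and variable names are hypothetical, and poses are assumed to be 4×4 homogeneous transformation matrices.

```python
import numpy as np

def retarget_trajectory(traj, demo_obj_pose, actual_obj_pose, noise_std=0.005, seed=0):
    """Re-express a demonstrated robot trajectory around a new object pose (a sketch).

    traj: (N, 4, 4) homogeneous end-effector poses from the source demonstration.
    demo_obj_pose / actual_obj_pose: 4x4 poses of the operation object in the
    demonstration and in the current scene.  Returns the retargeted (N, 4, 4)
    trajectory with small Gaussian noise injected into the translations.
    """
    rng = np.random.default_rng(seed)
    # Rigid transform mapping the demonstrated object frame onto the actual one
    # (the "rigid body transformation matrix" of claim 5).
    delta = actual_obj_pose @ np.linalg.inv(demo_obj_pose)
    # Apply the same transform to every pose along the demonstrated track.
    out = np.einsum("ij,njk->nik", delta, traj)
    # Noise injection: perturb the translations to diversify the generalized track.
    out[:, :3, 3] += rng.normal(0.0, noise_std, size=(len(traj), 3))
    return out
```

With `noise_std=0` the function reduces to a pure rigid retargeting, which makes the geometric effect easy to check: moving the object by an offset moves every waypoint by the same offset.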
Inventors
- LIU XIANGBIAO
- LIU HAO
- WANG LONG
- WU DINGGUI
- DONG ZHENGRONG
Assignees
- 中科云谷科技有限公司 (Zhongke Yungu Technology Co., Ltd.)
Dates
- Publication Date: 2026-05-12
- Application Date: 2025-12-31
Claims (10)
- 1. A method for generalizing simulation data of a robot, the method comprising: in a robot simulation environment, acquiring scene parameters of a target task, wherein the scene parameters at least comprise an object identifier and an initial pose of an operation object; decomposing the target task into a plurality of subtasks to be executed sequentially by the robot, wherein each subtask is associated with at least one specific operation object; for each subtask, selecting a matching source demonstration fragment from a pre-stored robot demonstration data fragment library according to the task type of the subtask and the associated specific operation object; based on the difference between the actual pose of the target operation object associated with the current subtask and the demonstrated pose of the target operation object in the selected source demonstration fragment, performing a space coordinate transformation centered on the target operation object on the motion track of the robot in the source demonstration fragment, and injecting noise to generate a preliminary generalized track adapted to the current subtask; and performing physical verification and track optimization on the preliminary generalized tracks of all subtasks to obtain a simulation track data set for imitation learning training of the robot.
- 2. The generalization method of claim 1, further comprising a step of constructing the robot demonstration data fragment library, the constructing step comprising: collecting original demonstration data generated when a user operates the robot to complete a task; identifying, in the original demonstration data, task boundary signals triggered by a state change of an operation object or by a preset instruction; and dividing the original demonstration data into a plurality of independent demonstration data fragments according to the task boundary signals, extracting the operation object identifier, the demonstrated pose, and the robot motion track associated with each fragment, and storing them into the robot demonstration data fragment library.
- 3. The generalization method of claim 2, wherein dividing the original demonstration data into a plurality of independent demonstration data fragments according to the task boundary signals and extracting the operation object identifier, the demonstrated pose, and the robot motion track associated with each fragment comprises: for each demonstration data fragment, establishing a semantic binding relation between the fragment and its associated operation object, and performing a coordinate transformation on the robot motion track of the fragment based on an object-centered coordinate system of the associated operation object, so as to establish a geometric binding relation between the fragment and the associated operation object.
- 4. The method of claim 1, wherein selecting a matching source demonstration fragment from a pre-stored robot demonstration data fragment library comprises: executing a selection strategy based on the task type of the subtask, the associated object attributes, and the current scene constraints, wherein the selection strategy comprises at least one of a nearest-neighbor object matching strategy, a robot distance optimization strategy, and a success rate weighting strategy; and according to the computation result of the selection strategy, determining, for each subtask, at least one source demonstration fragment with the highest matching degree as the selection result.
- 5. The method of claim 1, wherein the space coordinate transformation centered on the target operation object comprises: determining a rigid body transformation matrix between the demonstrated pose of the target operation object in the source demonstration fragment and the actual pose of the target operation object in the current subtask; and converting the motion track of the robot in the source demonstration fragment from a coordinate system centered on the demonstrated object to a new coordinate system centered on the current target operation object through the rigid body transformation matrix, to obtain a redirected track.
- 6. The generalization method of claim 1, wherein the physical verification comprises: verifying the generated preliminary generalized tracks in a simulation environment; and screening out one or more tracks meeting preset quality standards based on the verification result, wherein the preset quality standards at least comprise task completion degree, collision-freeness of the motion track, and motion smoothness.
- 7. The generalization method of claim 1, further comprising: performing standardized packaging and metadata indexing on the simulation track data that has passed physical verification and track optimization, to construct a standardized simulation training data set for robot imitation learning.
- 8. A robot simulation data generalization apparatus, comprising: a memory configured to store instructions; and a processor configured to invoke the instructions from the memory and, when executing the instructions, to implement the robot simulation data generalization method according to any one of claims 1 to 7.
- 9. A machine-readable storage medium having instructions stored thereon which, when executed by a processor, cause the processor to perform the robot simulation data generalization method of any one of claims 1 to 7.
- 10. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the robot simulation data generalization method according to any one of claims 1 to 7.
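The nearest-neighbor object matching strategy named in claim 4 could be sketched as follows. This is a hypothetical illustration: the fragment record layout (`task_type`, `object_id`, `demo_pos` keys) is an assumption for the sketch, not a structure taken from the patent.

```python
import numpy as np

def select_fragment(fragments, subtask_type, object_id, object_pos):
    """Nearest-neighbor object matching (a sketch of one claim-4 strategy).

    Among stored fragments whose task type and operation object identifier
    match the subtask, return the fragment whose demonstrated object position
    lies closest to the object's actual position; None if nothing matches.
    """
    candidates = [f for f in fragments
                  if f["task_type"] == subtask_type and f["object_id"] == object_id]
    if not candidates:
        return None
    # Euclidean distance between demonstrated and actual object positions.
    dists = [np.linalg.norm(np.asarray(f["demo_pos"]) - np.asarray(object_pos))
             for f in candidates]
    return candidates[int(np.argmin(dists))]
```

The other two strategies of claim 4 (robot distance optimization, success rate weighting) would replace the distance score with a robot-reachability cost or a recorded success rate, keeping the same argmin/argmax selection shape.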
Description
Robot simulation data generalization method, device and storage medium

Technical Field

The present application relates to the field of robot technologies, and in particular to a method and apparatus for generalizing simulation data of a robot, a storage medium, and a computer program product.

Background

Robotics is rapidly evolving toward complex physical interaction and autonomous intelligence, which has driven unprecedented demand for large-scale, high-fidelity simulation data. Under an intelligent robot model, the robot must learn physical laws and execute complex tasks through active, continuous multi-modal interaction with the environment, so the simulation platform must not only provide realistic vision but also accurately simulate multiple complex factors such as physical interaction, body dynamics, sensor noise, and environmental behavior. However, the currently prevalent simulation data generation approaches depend heavily on manual parameter setting and large numbers of repetitive experiments; when facing highly heterogeneous, dynamically changing real tasks and long-tail scenes, they expose bottlenecks of low data generation efficiency, high cost, and difficulty in systematically covering the large number of environment variables. Such approaches are usually tightly coupled with a specific simulation environment or task setting, so that data often has to be redesigned and regenerated when the task is replaced or the model is adjusted, resulting in a low reuse rate, long update cycles, and poor flexibility. The generated data often lacks sufficient generalization capability and can hardly support transfer learning of the model across scenes, which easily causes perception deviation, action misalignment, and interaction disorder of the trained model in actual deployment, seriously affecting adaptability, safety, and task success rate.
Disclosure of Invention

The embodiments of the present application aim to provide a robot simulation data generalization method, an apparatus, a storage medium, and a computer program product, so as to solve the technical problems that existing simulation data generation methods are inefficient and costly, that the generated data lacks diversity and is strongly coupled to a specific task or scene, that the robot model is therefore difficult to generalize effectively to new environments or different tasks, and that a significant gap between simulation and reality degrades the model's real-world performance.

To achieve the above object, a first aspect of the present application provides a robot simulation data generalization method, the method comprising: in a robot simulation environment, acquiring scene parameters of a target task, wherein the scene parameters at least comprise an object identifier and an initial pose of an operation object; decomposing the target task into a plurality of subtasks to be executed sequentially by the robot, wherein each subtask is associated with at least one specific operation object; for each subtask, selecting a matching source demonstration fragment from a pre-stored robot demonstration data fragment library according to the task type of the subtask and the associated specific operation object; based on the difference between the actual pose of the target operation object associated with the current subtask and the demonstrated pose of the target operation object in the selected source demonstration fragment, performing a space coordinate transformation centered on the target operation object on the robot motion track in the source demonstration fragment, and injecting noise to generate a preliminary generalized track adapted to the current subtask; and performing physical verification and track optimization on the preliminary generalized tracks of all subtasks to obtain simulation track data for imitation learning training of the robot.

In some embodiments, the method further comprises constructing the robot demonstration data fragment library, which comprises: collecting original demonstration data generated when a user operates the robot to complete a task; identifying, in the original demonstration data, task boundary signals triggered by a state change of an operation object or by a preset instruction; dividing the original demonstration data into a plurality of independent demonstration data fragments according to the task boundary signals; and extracting the operation object identifier, the demonstrated pose, and the robot motion track associated with each fragment and storing them into the robot demonstration data fragment library.
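The fragment library construction described above (segmenting a raw demonstration at task boundary signals and storing per-fragment object identifier, demonstrated pose, and motion track) can be sketched as follows. This is a minimal illustration under assumed data layouts; the per-frame record keys are hypothetical.

```python
def segment_demonstration(frames, boundary_signals):
    """Split a raw demonstration into independent fragments (a sketch of claim 2).

    frames: list of per-timestep records, each assumed to carry the robot state
    plus the currently manipulated object's identifier and pose.
    boundary_signals: sorted frame indices where an object state change or a
    preset instruction was detected.  Returns one library entry per interval.
    """
    library = []
    # Boundary signals cut the demonstration into consecutive intervals.
    bounds = [0] + list(boundary_signals) + [len(frames)]
    for start, end in zip(bounds[:-1], bounds[1:]):
        if end <= start:
            continue  # skip empty intervals from duplicate signals
        chunk = frames[start:end]
        library.append({
            "object_id": chunk[0]["object_id"],    # associated operation object
            "demo_pose": chunk[0]["object_pose"],  # demonstrated pose at fragment start
            "trajectory": [f["robot_state"] for f in chunk],  # robot motion track
        })
    return library
```

Claim 3's geometric binding would then apply one further step per entry: re-expressing each stored trajectory in the object-centered frame of `demo_pose`, so that fragments become reusable across object placements.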