CN-122018681-A - Equipment operation real-time guiding teaching method and system based on augmented reality
Abstract
The invention discloses an equipment operation real-time guiding teaching method and system based on augmented reality, belonging to the technical field of augmented reality. The method comprises: determining a spatial semantic association relation based on equipment spatial information, and generating a decomposition sequence of operation steps by combining an operation task description; performing spatial registration between operator viewing-angle information and the spatial semantic association relation to determine a set of visible operation components; generating a viewing-angle-adaptive guide anchor point based on the spatial occlusion relation between the target operation component and the visible components; calculating a spatial deviation vector from the guide anchor point and operator hand pose information, and generating a real-time operation correction instruction that is superimposed for display in an augmented reality visualization form; and continuously monitoring the spatial proximity between the hand and the target operation component, identifying the operation completion state, and updating the execution position. The invention achieves real-time, accurate operation guidance and improves the efficiency of equipment operation training.
Inventors
- NI GUOFU
- HOU YUJIE
- ZHAO XIA
Assignees
- 宜科树人(苏州)教育科技有限公司 (Yike Shuren (Suzhou) Education Technology Co., Ltd.)
Dates
- Publication Date: 2026-05-12
- Application Date: 2026-01-14
Claims (10)
- 1. An equipment operation real-time guiding teaching method based on augmented reality, characterized by comprising the following steps: determining a spatial semantic association relation of equipment based on equipment spatial information, and generating a decomposition sequence of operation steps according to an operation task description and the spatial semantic association relation; performing spatial registration between operator viewing-angle information acquired in real time and the spatial semantic association relation, determining a set of visible operation components within the current field of view, and generating a viewing-angle-adaptive guide anchor point based on the spatial occlusion relation between the target operation component of the current step in the decomposition sequence and the set of visible operation components; calculating, based on the guide anchor point and operator hand pose information acquired in real time, a spatial deviation vector between the operator's hand and the target operation component corresponding to the guide anchor point, and generating a real-time operation correction instruction according to the spatial deviation vector, wherein the real-time operation correction instruction is superimposed on the display position of the guide anchor point marker in an augmented reality visualization form; continuously monitoring the spatial proximity between the hand pose information and the target operation component corresponding to the guide anchor point, and identifying an operation completion state when the operator's hand is detected entering a preset interaction space range of the target operation component; and updating the current execution position of the decomposition sequence according to the operation completion state, and redetermining the target operation component and the presentation content of the augmented reality guide information based on the updated current execution position.
- 2. The method of claim 1, wherein determining a spatial semantic association relation of equipment based on equipment spatial information and generating a decomposition sequence of operation steps according to an operation task description and the spatial semantic association relation comprises: performing functional attribute labeling on each operable component in the equipment spatial information, mapping each operable component to a predefined operation function class, and extracting the spatial position of each operable component as a spatial coordinate feature; calculating spatial distance and direction relations between the operable components based on the spatial coordinate features, and establishing spatial topological connections between the operable components according to those relations; performing association mapping between the functional attribute labels and the spatial topological connections to form the spatial semantic association relation; parsing a target state defined in the operation task description and decomposing it into a plurality of sub-target states; matching the target operation component corresponding to each sub-target state according to the functional attribute labels in the spatial semantic association relation, and determining the operation order among the target operation components according to the spatial topological connections; and generating the decomposition sequence of operation steps based on the operation order and the functional attribute labels of each target operation component.
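The association-building and task-decomposition steps of claim 2 can be sketched as follows. This is a minimal illustration, not the patented implementation: the `Component` model, the distance-threshold rule for topological connection, and all names (`build_association`, `decompose_task`, `link_radius`) are hypothetical choices made here for clarity.

```python
from dataclasses import dataclass
from itertools import combinations
import math

# Hypothetical data model: each operable component carries a spatial
# position and a predefined operation function class (claim 2's
# "functional attribute label").
@dataclass
class Component:
    name: str
    position: tuple  # (x, y, z) spatial coordinate feature
    function: str    # predefined operation function class

def build_association(components, link_radius=0.5):
    """Establish a spatial topological connection between any two
    components whose distance is below link_radius (one simple stand-in
    for the claim's distance/direction relations)."""
    edges = set()
    for a, b in combinations(components, 2):
        if math.dist(a.position, b.position) <= link_radius:
            edges.add(frozenset((a.name, b.name)))
    return edges

def decompose_task(sub_goals, components):
    """Match each sub-target state's required function class to a
    component, yielding the operation-step sequence in sub-goal order."""
    by_function = {c.function: c for c in components}
    return [by_function[g] for g in sub_goals if g in by_function]
```

A real system would derive the connection rule from the equipment model rather than a fixed radius, but the shape of the data flow is the same: label, connect, then match sub-goals to labeled components.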
- 3. The method of claim 1, wherein spatially registering the operator viewing-angle information acquired in real time with the spatial semantic association relation, determining the set of visible operation components within the current field of view, and generating a viewing-angle-adaptive guide anchor point based on the spatial occlusion relation between the target operation component of the current step in the decomposition sequence and the set of visible operation components comprises: extracting spatial viewpoint coordinates and a view direction vector from the operator viewing-angle information, and determining a field-of-view projection region based on them; acquiring the spatial positions of the operable components recorded in the spatial semantic association relation, judging the spatial inclusion relation between each spatial position and the field-of-view projection region, and screening out candidate visible components whose spatial positions fall within the projection region; for each candidate visible component, determining the set of operation components visible in the current field of view based on the spatial relation between the spatial viewpoint coordinates and the spatial position of the candidate visible component; acquiring the target operation component of the current step from the decomposition sequence and, when the target operation component does not belong to the set of visible operation components, determining, based on the spatial topological connections in the spatial semantic association relation, the spatial position of an associated visible component that is spatially associated with the target operation component and belongs to the set, so as to generate the viewing-angle-adaptive guide anchor point; and, when the target operation component belongs to the set of visible operation components, generating the viewing-angle-adaptive guide anchor point according to the spatial position of the target operation component.
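The anchor-selection logic of claim 3 (use the target's own position when it is visible, otherwise fall back to a topologically associated visible component) reduces to a short lookup. This is a sketch under assumed data shapes: `visible_set` maps visible component names to positions, and `topology` maps a component to its spatially associated neighbours; both names are illustrative.

```python
def select_anchor(target, visible_set, topology):
    """Return (anchor_name, anchor_position): the target itself when it is
    in view, otherwise the first topologically associated component that
    is visible, or None when nothing suitable is in the field of view."""
    if target in visible_set:
        return target, visible_set[target]
    for neighbour in topology.get(target, ()):
        if neighbour in visible_set:
            return neighbour, visible_set[neighbour]
    return None
```

In practice the fallback would likely rank neighbours (e.g. by distance to the occluded target) rather than take the first match.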
- 4. The method of claim 3, wherein, for each candidate visible component, determining the set of operation components visible in the current field of view based on the spatial relation between the spatial viewpoint coordinates and the spatial position of the candidate visible component comprises: traversing the candidate visible components and, for any candidate visible component, determining a line-of-sight vector pointing from the spatial viewpoint coordinates to the spatial position of that candidate visible component; acquiring the spatial positions and geometric shape information of the geometric components in the three-dimensional geometric structure of the equipment other than the candidate visible component; performing a spatial intersection operation between the line-of-sight vector and the spatial positions and geometric shape information of those other geometric components, and judging whether a spatial intersection point arises between the line-of-sight vector and the other geometric components as it propagates from the spatial viewpoint coordinates to the spatial position of the candidate visible component; when the result of the spatial intersection operation indicates that no spatial intersection point exists, determining the visibility state of the candidate visible component as visible, and otherwise as invisible; and adding the candidate visible components whose state is visible to a temporary visible set and, after all candidate visible components have been traversed, determining the temporary visible set as the set of visible operation components within the current field of view.
- 5. The method of claim 1, wherein calculating a spatial deviation vector between the operator's hand and the target operation component corresponding to the guide anchor point based on the guide anchor point and the operator hand pose information acquired in real time, and generating a real-time operation correction instruction according to the spatial deviation vector, the real-time operation correction instruction being superimposed in an augmented reality visualization form on the display position of the guide anchor point marker, comprises: extracting the spatial position of the target operation component from the guide anchor point, and extracting the current spatial coordinates and hand posture direction of the operator's hand from the hand pose information; calculating a spatial position deviation between the current spatial coordinates of the hand and the spatial position of the target operation component, calculating a posture angle deviation between the hand posture direction and a predefined operation posture direction required to complete the current step, and constructing the spatial deviation vector from the spatial position deviation and the posture angle deviation; generating path guidance information indicating the direction and magnitude of hand movement from the position correction component of the spatial deviation vector, generating posture adjustment information indicating the direction and magnitude of hand rotation from the posture correction component of the spatial deviation vector, and combining the path guidance information and the posture adjustment information into the real-time operation correction instruction; and converting the real-time operation correction instruction into an augmented reality visualization element and superimposing it on the display position of the guide anchor point marker.
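The deviation computation in claim 5 combines a translation term (hand position vs. target position) with a rotation term (hand direction vs. required operation direction). A sketch of that arithmetic, with the dictionary keys and the degrees convention being illustrative choices rather than anything specified by the patent:

```python
import math

def correction(hand_pos, hand_dir, target_pos, required_dir):
    """Build claim-5-style correction data: where to move the hand
    (position correction component) and how far the hand direction is
    from the required operation direction (posture correction component)."""
    # Position deviation: vector from hand to target, plus its magnitude.
    move = tuple(t - h for h, t in zip(hand_pos, target_pos))
    distance = math.sqrt(sum(m * m for m in move))
    # Posture angle deviation via the dot product of the two directions.
    dot = sum(a * b for a, b in zip(hand_dir, required_dir))
    na = math.sqrt(sum(a * a for a in hand_dir))
    nb = math.sqrt(sum(b * b for b in required_dir))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))
    return {"move": move, "distance": distance, "rotate_deg": angle}
```

The `move` vector would drive the path guidance arrow and `rotate_deg` the posture adjustment hint in the overlaid visualization.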
- 6. The method of claim 1, wherein continuously monitoring the spatial proximity between the hand pose information and the target operation component corresponding to the guide anchor point, and identifying an operation completion state when the operator's hand is detected entering the preset interaction space range of the target operation component, comprises: continuously acquiring the hand pose information updated in real time, and extracting the real-time spatial coordinates of the operator's hand from it; acquiring the spatial position of the target operation component from the guide anchor point, and determining the preset interaction space range of the target operation component based on that spatial position and the geometric boundary information of the target operation component; calculating the spatial distance between the real-time spatial coordinates of the hand and the spatial position of the target operation component, and determining the spatial proximity between the hand pose information and the target operation component based on the spatial distance and the spatial inclusion relation of the real-time spatial coordinates of the hand relative to the preset interaction space range; when the spatial proximity indicates that the real-time spatial coordinates of the hand fall within the preset interaction space range, extracting hand action features from the hand pose information and matching them against the predefined operation action pattern of the current step; and identifying the operation completion state based on the result of that matching.
- 7. The method of claim 6, wherein calculating the spatial distance between the real-time spatial coordinates of the hand and the spatial position of the target operation component, and determining the spatial proximity between the hand pose information and the target operation component based on the spatial distance and the spatial inclusion relation of the real-time spatial coordinates of the hand relative to the preset interaction space range, comprises: calculating the Euclidean distance between the real-time spatial coordinates of the hand and the spatial position of the target operation component; acquiring spatial boundary description information of the preset interaction space range, which defines its geometric shape and extent in three-dimensional space; judging the point-to-region inclusion relation from the real-time spatial coordinates of the hand and the spatial boundary description information, and determining whether the real-time spatial coordinates of the hand lie within the preset interaction space range; calculating the shortest distance from the real-time spatial coordinates of the hand to the boundary of the preset interaction space range, determining a distance proximity index from the Euclidean distance, and determining a spatial position relation index from the inclusion judgment result and the shortest distance; and quantizing the distance proximity index and the spatial position relation index to obtain the spatial proximity between the hand pose information and the target operation component.
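The proximity quantization of claims 6 and 7 can be illustrated with an axis-aligned box as the preset interaction space range. The blend of a distance index with an inclusion index follows the claim's structure, but the equal weighting, the box shape, and the `max_range` normalization are all assumptions made for this sketch:

```python
import math

def proximity(hand, target, box_min, box_max, max_range=1.0):
    """Claim-7-style spatial proximity score in [0, 1]: blends a distance
    proximity index (Euclidean distance to the target, normalized by
    max_range) with a spatial position relation index (whether the hand
    lies inside the interaction box). Weights are illustrative."""
    dist = math.dist(hand, target)
    inside = all(lo <= h <= hi
                 for h, lo, hi in zip(hand, box_min, box_max))
    distance_index = max(0.0, 1.0 - dist / max_range)
    position_index = 1.0 if inside else 0.0
    return 0.5 * distance_index + 0.5 * position_index
```

A monitoring loop would then trigger the action-feature matching of claim 6 once the score crosses a threshold (e.g. once `inside` becomes true).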
- 8. An augmented reality-based equipment operation real-time guidance teaching system for implementing the method of any one of claims 1 to 7, comprising: a first unit for determining a spatial semantic association relation of equipment based on equipment spatial information and generating a decomposition sequence of operation steps according to an operation task description and the spatial semantic association relation; a second unit for performing spatial registration between operator viewing-angle information acquired in real time and the spatial semantic association relation, determining a set of visible operation components within the current field of view, and generating a viewing-angle-adaptive guide anchor point based on the spatial occlusion relation between the target operation component of the current step in the decomposition sequence and the set of visible operation components; a third unit for calculating, based on the guide anchor point and operator hand pose information acquired in real time, a spatial deviation vector between the operator's hand and the target operation component corresponding to the guide anchor point, and generating a real-time operation correction instruction according to the spatial deviation vector, wherein the real-time operation correction instruction is superimposed on the display position of the guide anchor point marker in an augmented reality visualization form; a fourth unit for continuously monitoring the spatial proximity between the hand pose information and the target operation component corresponding to the guide anchor point, and identifying an operation completion state when the operator's hand is detected entering the preset interaction space range of the target operation component; and a fifth unit for updating the current execution position of the decomposition sequence according to the operation completion state, and redetermining the target operation component and the presentation content of the augmented reality guide information based on the updated current execution position.
- 9. An electronic device, comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to perform the method of any one of claims 1 to 7.
- 10. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method of any of claims 1 to 7.
Description
Equipment operation real-time guiding teaching method and system based on augmented reality

Technical Field

The invention relates to the technical field of augmented reality, and in particular to an equipment operation real-time guiding teaching method and system based on augmented reality.

Background

With the rapid development of industrial manufacturing and equipment maintenance, the operation and maintenance of complex equipment place increasing demands on the expertise of operators. Traditional equipment operation training relies mainly on paper manuals, video courses, or master-apprentice mentoring, and these methods have many limitations in practice. In recent years, augmented reality technology has provided a new solution for equipment operation training: by superimposing virtual guide information onto the real operating scene, it can effectively improve an operator's learning efficiency and operating accuracy. Current augmented reality teaching systems have seen some application in industry, typically presenting operation instructions on the equipment in the form of virtual labels. The operator wears augmented reality glasses or uses a mobile device to view the superimposed virtual information and completes the equipment operation tasks according to preset steps. Such systems usually employ image recognition or spatial localization techniques to align the virtual information with the real equipment, and present the corresponding guidance content to the operator according to a predefined operation flow. However, existing augmented reality operation guidance techniques still suffer from a number of drawbacks.
In the prior art, a fixed-viewing-angle guiding mode is generally adopted that does not fully account for how changes in the operator's actual viewing angle affect which components are visible. When the operator observes the equipment from a different angle, some target operation components are occluded by other structures and cannot be seen, yet the guide information is still displayed at the occluded position; the operator then struggles to understand the instruction, which harms operating efficiency and accuracy.

The prior art also lacks a real-time feedback mechanism for the operator's actual actions during guidance. Usually only preset operation steps are displayed in sequence, and no dynamic corrective guidance can be provided from the actual deviation between the operator's hand position and the target component. The operator is therefore prone to positional or action errors, and in precision-operation scenarios the absence of real-time correction can seriously degrade operation quality.

Furthermore, in the prior art the judgment of step completion relies mainly on time delays or manual confirmation and lacks intelligent recognition based on spatial position relations, so the system cannot accurately judge whether the operator has really completed the current step: it may advance to the next step before the operation is finished, or linger on the current step after it is done, undermining the consistency and effectiveness of the whole teaching flow.

Disclosure of Invention

The embodiment of the invention provides an equipment operation real-time guiding teaching method and system based on augmented reality, which can solve at least some of the problems existing in the prior art.
In a first aspect of the embodiments of the present invention, there is provided an augmented reality-based equipment operation real-time guidance teaching method, comprising: determining a spatial semantic association relation of equipment based on equipment spatial information, and generating a decomposition sequence of operation steps according to an operation task description and the spatial semantic association relation; performing spatial registration between operator viewing-angle information acquired in real time and the spatial semantic association relation, determining a set of visible operation components within the current field of view, and generating a viewing-angle-adaptive guide anchor point based on the spatial occlusion relation between the target operation component of the current step in the decomposition sequence and the set of visible operation components; calculating, based on the guide anchor point and operator hand pose information acquired in real time, a spatial deviation vector between the operator's hand and the target operation component corresponding to the guide anchor point, and generating a real-time operation correction instruction according to the spatial deviation vector, wherein the real-time operation correction instruction is superimposed on a display position of a guide