
CN-115424714-B - Robot interaction method, device, equipment and storage medium based on virtual reality

CN 115424714 B

Abstract

The application relates to a robot interaction method, device, equipment and storage medium based on virtual reality. The method comprises: obtaining operation information, and controlling a robot to execute a corresponding operation on a virtual object according to the operation information; performing virtual-real registration on at least one display device to determine the pose of each display device relative to a real three-dimensional space; in the process of controlling the robot to execute the corresponding operation, generating, for each display device, an operation animation matched with that device's pose, wherein the operation animation reflects the state change process of the virtual object, under the corresponding pose, after the corresponding operation is received; and transmitting the operation animation corresponding to each display device to that device for display. The method enables multiple users in different poses to each see an operation animation matched with their own pose, thereby improving learning efficiency.

Inventors

  • Request for anonymity
  • Request for anonymity
  • Request for anonymity
  • Request for anonymity

Assignees

  • 苏州微创畅行机器人有限公司 (Suzhou MicroPort Orthobot Co., Ltd.)

Dates

Publication Date
2026-05-08
Application Date
2022-09-07

Claims (12)

  1. A robot interaction method based on virtual reality, the method comprising: acquiring operation information, and controlling a robot to execute a corresponding operation on a virtual object according to the operation information; performing virtual-real registration on at least one display device to determine the pose of each display device relative to a real three-dimensional space; in the process of controlling the robot to execute the corresponding operation, generating, according to the pose corresponding to each display device, an operation animation matched with that pose, wherein the operation animation reflects the state change process of the virtual object, under the corresponding pose, after the corresponding operation is received; and transmitting the operation animation corresponding to each display device to that display device for display; wherein performing virtual-real registration on at least one display device to determine the pose of each display device relative to the real three-dimensional space comprises: performing virtual-real registration on the at least one display device according to a plurality of registration markers posted in different orientations, to determine the registration marker associated with each display device; and calculating the pose of each display device relative to the real three-dimensional space according to the orientation of the registration marker associated with that display device; and wherein each display device comprises a camera, and performing virtual-real registration on the at least one display device according to the plurality of registration markers posted in different orientations, to determine the registration marker associated with each display device, comprises: acquiring the image shot by the camera of each display device, identifying a front mark of a registration marker in the image shot by each display device, and determining the registration marker pointed to by the front mark as the registration marker associated with the corresponding display device.
  2. The method of claim 1, wherein a supplementary marker is provided at the periphery of each display device, and wherein performing virtual-real registration on the at least one display device according to the plurality of registration markers posted in different orientations, to determine the registration marker associated with each display device, comprises: acquiring the images shot by the cameras of the display devices, and determining, based on the content of the images, which display devices are occluded and which are not; performing registration-marker recognition on the image corresponding to a non-occluded display device to obtain a first front mark, and taking the registration marker pointed to by the first front mark as the registration marker associated with that non-occluded display device; and performing supplementary-marker recognition on the image corresponding to an occluded display device to obtain a second front mark, determining the target display device on which the supplementary marker pointed to by the second front mark is located, and, if the target display device is not occluded, taking the registration marker associated with the target display device as the registration marker associated with the occluded display device.
  3. The method of claim 2, wherein performing registration-marker recognition on the image corresponding to the non-occluded display device to obtain the first front mark comprises: performing image threshold segmentation on the image to obtain a plurality of connected domains; and performing perspective transformation on a connected domain whose number and size of corner points meet preset conditions, to obtain the first front mark.
  4. The method of claim 1, wherein the display devices comprise a non-occluded display device, and wherein calculating the pose of each display device relative to the real three-dimensional space according to the orientation of the registration marker associated with that display device comprises: acquiring the image shot by the non-occluded display device, and identifying first two-dimensional position information of corner points of the registration marker in that image; determining a first position conversion matrix according to the first two-dimensional position information and first three-dimensional position information of the corner points of the registration marker in the real three-dimensional space; and determining the pose of the non-occluded display device relative to the real three-dimensional space according to the first position conversion matrix.
  5. The method of claim 4, wherein determining the first position conversion matrix according to the first two-dimensional position information and the first three-dimensional position information of the corner points of the registration marker in the real three-dimensional space comprises: establishing a three-dimensional coordinate system with the center of the camera of the non-occluded display device as the origin and the plane of the registration marker as the XY plane, and obtaining the first three-dimensional position information of the corner points of the registration marker in that coordinate system; and determining the first position conversion matrix according to the conversion relation between the three-dimensional coordinate system and the two-dimensional coordinate system, the first two-dimensional position information, and the first three-dimensional position information.
  6. The method of claim 4, wherein the display devices further comprise an occluded display device, and wherein calculating the pose of each display device relative to the real three-dimensional space according to the orientation of the registration marker associated with that display device further comprises: acquiring the image shot by the occluded display device, and identifying second two-dimensional position information of corner points of the supplementary marker in that image; determining a second position conversion matrix according to the second two-dimensional position information and second three-dimensional position information of the corner points of the supplementary marker in the real three-dimensional space; and determining the pose of the occluded display device relative to the real three-dimensional space based on the first position conversion matrix and the second position conversion matrix.
  7. The method of claim 6, wherein determining the second position conversion matrix according to the second two-dimensional position information and the second three-dimensional position information of the corner points of the supplementary marker in the real three-dimensional space comprises: establishing a three-dimensional coordinate system with the center of the camera of the occluded display device as the origin and the plane of the supplementary marker as the XY plane, and obtaining the second three-dimensional position information of the corner points of the supplementary marker in that coordinate system; and determining the second position conversion matrix according to the conversion relation between the three-dimensional coordinate system and the two-dimensional coordinate system, the second two-dimensional position information, and the second three-dimensional position information.
  8. The method of claim 1, wherein calculating the pose of each display device relative to the real three-dimensional space according to the orientation of the registration marker associated with that display device comprises: determining the orientation of the registration marker associated with each display device, and determining the pose of each display device relative to the real three-dimensional space by combining the two-dimensional and three-dimensional coordinates of the same point in that orientation with the conversion relation between the two-dimensional coordinate system and the three-dimensional coordinate system.
  9. A robot interaction device based on virtual reality, the device comprising: an acquisition module, configured to acquire operation information and control a robot to execute a corresponding operation on a virtual object according to the operation information; a registration module, configured to perform virtual-real registration on at least one display device to determine the pose of each display device relative to a real three-dimensional space; an operation animation generation module, configured to generate, in the process of controlling the robot to execute the corresponding operation and according to the pose corresponding to each display device, an operation animation matched with that pose, wherein the operation animation reflects the state change process of the virtual object, under the corresponding pose, after the corresponding operation is received; and a display module, configured to transmit the operation animation corresponding to each display device to that display device for display; wherein the registration module is further configured to perform virtual-real registration on the at least one display device according to a plurality of registration markers posted in different orientations, to determine the registration marker associated with each display device, and to calculate the pose of each display device relative to the real three-dimensional space according to the orientation of the registration marker associated with that display device; and wherein each display device comprises a camera, and the registration module is further configured to acquire the image shot by the camera of each display device, identify a front mark of a registration marker in the image shot by each display device, and determine the registration marker pointed to by the front mark as the registration marker associated with the corresponding display device.
  10. The device of claim 9, wherein each display device comprises a camera and a supplementary marker is disposed at the periphery of the display device, the registration module being further configured to: acquire the images shot by the cameras of the display devices, and determine, based on the content of the images, which display devices are occluded and which are not; perform registration-marker recognition on the image corresponding to a non-occluded display device to obtain a first front mark, and take the registration marker pointed to by the first front mark as the registration marker associated with that non-occluded display device; and perform supplementary-marker recognition on the image corresponding to an occluded display device to obtain a second front mark, determine the target display device on which the supplementary marker pointed to by the second front mark is located, and, if the target display device is not occluded, take the registration marker associated with the target display device as the registration marker associated with the occluded display device.
  11. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 8 when the computer program is executed.
  12. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 8.
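
Claims 4 to 7 derive position conversion matrices from 2D-3D corner correspondences and, for an occluded device, determine its pose by combining the first and second conversion matrices. The combination step can be sketched in pure Python with 4x4 homogeneous transforms; the matrix values below are illustrative only, and in practice each conversion matrix would come from solving the 2D-3D correspondences over the identified marker corners (the claims do not prescribe a specific solver):

```python
# Pose chaining for an occluded display device (claims 6-7, illustrative):
# the occluded device's pose is obtained by composing the first position
# conversion matrix (registration marker) with the second one (supplementary
# marker on an unoccluded device).

def mat_mul(a, b):
    """Multiply two 4x4 homogeneous transform matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def make_transform(rotation, translation):
    """Build a 4x4 homogeneous matrix from a 3x3 rotation and a translation."""
    m = [[rotation[i][j] for j in range(3)] + [translation[i]] for i in range(3)]
    m.append([0.0, 0.0, 0.0, 1.0])
    return m

def apply_transform(m, p):
    """Apply a 4x4 homogeneous transform to a 3D point."""
    v = [p[0], p[1], p[2], 1.0]
    out = [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]
    return out[:3]

# Illustrative matrices: identity rotations, translations only.
identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
first_conversion = make_transform(identity, [10.0, 0.0, 0.0])   # claim 4
second_conversion = make_transform(identity, [0.0, 5.0, 0.0])   # claim 6

# Pose of the occluded device relative to the real 3D space (claim 6).
occluded_pose = mat_mul(first_conversion, second_conversion)
print(apply_transform(occluded_pose, (0.0, 0.0, 0.0)))  # -> [10.0, 5.0, 0.0]
```

With identity rotations, the composed pose is simply the sum of the two translations; with real marker data, the rotation parts of the two matrices would be composed in the same way.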

Description

Robot interaction method, device, equipment and storage medium based on virtual reality

Technical Field

The present application relates to the field of virtual reality technologies, and in particular to a robot interaction method, apparatus, device, and storage medium based on virtual reality.

Background

With the development of computer technology, virtual reality technology has emerged: a computer simulation system that creates a virtual world by generating a simulated environment into which the user is immersed. It is widely applied in industrial manufacturing, medicine, aerospace and defense, real estate, and other fields. In scenarios where a virtual object needs to be displayed, or a specified operation performed on it, virtual-reality-based interaction often involves observing the change of the virtual object; because virtual reality provides a risk-free operation environment, safety risk is reduced. In the traditional interaction mode, a user executes an operation on a virtual object and the virtual object gives corresponding feedback, which generally shows the change of the virtual object from the operator's viewing angle. This mode allows only interaction between the operator and the virtual object; other viewers cannot participate, so interaction efficiency is low.

Disclosure of Invention

Based on the above, it is necessary to provide a robot interaction method, device, equipment, and storage medium based on virtual reality that enable multiple users, in different poses, to see operation animations matched with their own poses, thereby improving interaction efficiency.
In a first aspect, the present application provides a robot interaction method based on virtual reality, the method comprising: acquiring operation information, and controlling the robot to execute a corresponding operation on the virtual object according to the operation information; performing virtual-real registration on at least one display device to determine the pose of each display device relative to a real three-dimensional space; in the process of controlling the robot to execute the corresponding operation, generating, according to the pose corresponding to each display device, an operation animation matched with that pose, wherein the operation animation reflects the state change process of the virtual object, under the corresponding pose, after the corresponding operation is received; and transmitting the operation animation corresponding to each display device to that display device for display. In one embodiment, performing virtual-real registration on at least one display device to determine the pose of each display device relative to the real three-dimensional space includes: performing virtual-real registration on the at least one display device according to a plurality of registration markers posted in different orientations, to determine the registration marker associated with each display device; and calculating the pose of each display device relative to the real three-dimensional space according to the position of the registration marker associated with each display device.
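
The first-aspect flow above (acquire operation information, register each display device, render a pose-matched animation per device, transmit each animation to its device) can be sketched as follows. All class and function names are illustrative, not taken from the patent, and the registration and rendering steps are stand-ins for the real computations:

```python
# Minimal sketch of the claimed flow: one animation per display device,
# each rendered for that device's registered pose. Names are illustrative.

def register_device(device_id, marker_orientation):
    """Stand-in for virtual-real registration: yields a 'pose' per device."""
    return {"device": device_id, "pose": marker_orientation}

def generate_animation(operation, pose):
    """Stand-in for rendering the virtual object's state change under a pose."""
    return f"{operation} viewed from {pose['pose']}"

def interact(operation, devices):
    """Register each device, then produce a pose-matched animation for each."""
    poses = [register_device(d, o) for d, o in devices.items()]
    return {p["device"]: generate_animation(operation, p) for p in poses}

animations = interact("drill", {"hmd-1": "front", "hmd-2": "left"})
print(animations["hmd-2"])  # -> drill viewed from left
```

The point of the design is that registration happens once per device, while every operation fans out into one animation per registered pose, so each viewer sees the state change from their own viewpoint.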
In one embodiment, the display device includes a camera and a supplementary marker is provided at the periphery of the display device, and performing virtual-real registration on at least one display device according to a plurality of registration markers posted in different orientations, to determine the registration marker associated with each display device, includes: acquiring the images shot by the cameras of all display devices, and determining, based on the content of the images, which display devices are occluded and which are not; performing registration-marker recognition on the image corresponding to a non-occluded display device to obtain a first front mark, and taking the registration marker pointed to by the first front mark as the registration marker associated with the non-occluded display device; and performing supplementary-marker recognition on the image corresponding to an occluded display device to obtain a second front mark, determining the target display device on which the supplementary marker pointed to by the second front mark is located, and, if the target display device is not occluded, using the registration marker associated with the target display device as the registration marker associated with the occluded display device. In one embodiment, performing registration-marker recognition on the image corresponding to the non-occluded display device to obtain the first front mark includes: Perf
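
The occlusion fallback in the embodiment above can be sketched in a few lines: an occluded device borrows the registration marker of the non-occluded device whose supplementary marker its camera can still see. The device and marker names are illustrative:

```python
def resolve_markers(direct, borrowed_from):
    """Associate a registration marker with each display device.

    direct: device -> registration marker seen directly (None if occluded).
    borrowed_from: occluded device -> target device whose supplementary
    marker its camera sees. Returns device -> registration marker, or None
    when the fallback also fails (target device itself occluded or unknown).
    """
    resolved = {}
    for device, marker in direct.items():
        if marker is not None:
            resolved[device] = marker          # non-occluded: use own marker
        else:
            target = borrowed_from.get(device)
            # Fallback is valid only when the target device is not occluded.
            resolved[device] = direct.get(target)
    return resolved

seen = {"hmd-1": "marker-A", "hmd-2": None}
fallback = {"hmd-2": "hmd-1"}  # hmd-2's camera sees hmd-1's supplementary marker
print(resolve_markers(seen, fallback))
# -> {'hmd-1': 'marker-A', 'hmd-2': 'marker-A'}
```

Note that the claim only permits the fallback when the target device is itself non-occluded; the sketch mirrors that by returning None when the target has no directly seen marker.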