KR-20260063399-A - METHOD FOR CONTROLLING MOBILE ROBOT USING GESTURE AND MOBILE ROBOT RECOGNIZING GESTURE
Abstract
A method for controlling a mobile robot using intuitive, user-friendly gestures is disclosed. The disclosed method includes the steps of: acquiring an image of a user making a gesture that points to a destination of the mobile robot with an upper limb; generating a 3D model of the user's posture from the image; estimating the location of the destination using a plurality of keypoints placed on the upper limb of the 3D model; and controlling movement of the mobile robot to the destination.
Inventors
- 김재호
- 이동훈
- 김세중
- 원주연
Assignees
- 세종대학교산학협력단
Dates
- Publication Date: 2026-05-07
- Application Date: 2024-10-30
Claims (10)
- A method for controlling a mobile robot using a gesture, comprising: acquiring an image of a user making a gesture that points to a destination of the mobile robot with an upper limb; generating a 3D model of the user's posture from the image; estimating a location of the destination using a plurality of keypoints placed on the upper limb of the 3D model; and controlling movement of the mobile robot to the destination.
- The method of claim 1, wherein the destination is located on a floor surface.
- The method of claim 2, wherein estimating the location of the destination comprises: calculating a straight line passing through keypoints placed at different joints among the keypoints placed on the upper limb; and estimating an intersection point of the straight line and the floor surface as the location of the destination.
- The method of claim 3, wherein calculating the straight line comprises calculating a straight line passing through keypoints placed at a shoulder joint and a wrist joint of the upper limb.
- The method of claim 2, wherein calculating the straight line comprises: determining, according to an angle of an elbow joint of the upper limb, a plurality of keypoints for estimating the location of the destination from among the keypoints placed on the upper limb; and calculating a straight line passing through the determined keypoints.
- The method of claim 5, wherein determining the plurality of keypoints comprises, when the angle of the elbow joint of the upper limb is less than or equal to a critical angle, determining a keypoint placed at a wrist joint of the upper limb and a keypoint placed at a finger joint of the upper limb as the keypoints for estimating the location of the destination.
- The method of claim 6, wherein the finger joint of the upper limb is one of the joints of an index finger.
- A method for controlling a mobile robot using a gesture, comprising: acquiring an image of a user making a gesture that points to a destination of the mobile robot with an upper limb; estimating a three-dimensional posture of the upper limb using the image; estimating a location of the destination located on a floor surface using the three-dimensional posture; and controlling movement of the mobile robot to the destination.
- The method of claim 8, wherein controlling the movement comprises recognizing a number of fingers spread out from a hand of the upper limb and controlling a movement speed of the mobile robot accordingly.
- A mobile robot that recognizes a gesture, comprising: a camera configured to generate an image of a user making a gesture that points to a destination of the mobile robot with an upper limb; a memory; and a processor electrically connected to the memory, wherein the processor generates a 3D model of the user's posture from the image, estimates a location of the destination using a plurality of keypoints placed on the upper limb of the 3D model, and controls movement of the mobile robot to the destination.
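The destination estimation recited in claims 3 and 4 reduces to a line-plane intersection: the line through the shoulder and wrist keypoints is extended until it meets the floor. The following is a minimal sketch, not the patent's implementation; the coordinates, function name, and the assumption that the floor is the plane z = 0 are all illustrative.

```python
# Estimate the pointed-at destination as the intersection of the
# shoulder->wrist line with the floor plane z = 0 (cf. claims 3 and 4).
# Keypoints are (x, y, z) tuples in metres; all values are illustrative.

def estimate_destination(shoulder, wrist):
    sx, sy, sz = shoulder
    wx, wy, wz = wrist
    dz = sz - wz
    if dz <= 0:
        # The arm is not pointing downward, so the line never
        # reaches the floor; no destination can be estimated.
        return None
    # Solve shoulder + t * (wrist - shoulder) for z = 0.
    t = sz / dz
    return (sx + t * (wx - sx), sy + t * (wy - sy))

# Shoulder 1.4 m above the floor, wrist 1.0 m high and 0.3 m forward:
dest = estimate_destination((0.0, 0.0, 1.4), (0.3, 0.0, 1.0))
print(dest)  # roughly (1.05, 0.0): about 1.05 m in front of the user
```

The same routine applies unchanged to the wrist and index-finger keypoints selected under claims 5 and 6; only the two input points differ.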
Description
The present invention relates to a method for controlling a mobile robot using a gesture and to a mobile robot that recognizes a gesture, and more particularly to a method for setting a destination of a mobile robot with a gesture and controlling the mobile robot, and to a mobile robot that recognizes the destination from the gesture.

With the Fourth Industrial Revolution, robot technology is gaining prominence, and various robots and the services they provide, such as factory-automation robots, restaurant serving robots, and logistics-center transport robots, are being developed. Vision sensors such as cameras are commonly installed on robots so that the robots can control their own movement and their operation can be monitored. In some situations, however, a user must control a robot directly, in which case a separate input device such as a mobile terminal or keyboard is typically used. Control methods that rely on such input devices make it difficult to control multiple robots simultaneously and are not user-friendly; robot control methods based on user gestures, which are more intuitive, are therefore preferred.

Related prior art includes Korean Published Patent Nos. 2011-0055062 and 2024-0142297, Korean Registered Patent Nos. 10-2408327 and 10-2261797, and the non-patent document "Lifting from the Deep: Convolutional 3D Pose Estimation from a Single Image," D. Tome, C. Russell, and L. Agapito, CVPR 2017 Proceedings, pp. 2500-2509.

FIG. 1 is a drawing for explaining a mobile robot that recognizes a gesture according to an embodiment of the present invention. FIG. 2 is a diagram illustrating a mobile robot control method using a gesture according to an embodiment of the present invention.
FIG. 3 is a diagram showing keypoints placed on the upper limb of a three-dimensional model. FIG. 4 is a diagram illustrating a mobile robot control method using a gesture according to another embodiment of the present invention.

The present invention is susceptible to various modifications and may have various embodiments; specific embodiments are illustrated in the drawings and described in detail. However, this is not intended to limit the invention to the specific embodiments, and it should be understood that the invention includes all modifications, equivalents, and substitutions that fall within its spirit and scope. Similar reference numerals are used for similar components throughout the drawings. Hereinafter, embodiments of the present invention will be described in detail with reference to the attached drawings.

Referring to FIG. 1, a mobile robot according to one embodiment of the present invention includes a camera (110), a memory (120), and a processor (130), and may further include a means of movement, such as wheels, for moving the mobile robot. The camera (110) generates an image of the user (140). The user (140) can make a gesture pointing to the destination (150) of the mobile robot with an upper limb (141). The user can point to the destination (150) with a preset upper limb, either the right or the left, and this gesture may be one in which the user (140) extends the upper limb (141) toward the destination (150). The processor (130), electrically connected to the memory (120), detects the user (140) in the image generated by the camera (110), and can do so using various object detection algorithms.
To make the controlling user easy to detect in a crowded environment, the user (140) can make a predefined gesture for user recognition, and the processor (130) can recognize the person making that gesture and detect him or her as the user (140). The processor (130) estimates the three-dimensional posture of the upper limb (141) of the user (140) from the image and can generate a three-dimensional model of the user's posture from the estimation result. The three-dimensional model representing the posture of the user (140) may be a skeleton model composed of keypoints and edges connecting those keypoints, with three-dimensional coordinates assigned to the keypoints. As an example, the processor (130) can generate the three-dimensional model using a deep learning model that produces a three-dimensional model from the two-dimensional posture of the user (140) in the image. The locations of the keypoints placed in the 3D model may correspond to the locations of the keypoints in the training 3D models included in the training data used to train the deep learning model.
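Given such a skeleton model, the keypoint selection of claims 5 and 6 can be sketched directly: compute the angle at the elbow from the shoulder-elbow and wrist-elbow segments, then choose the keypoint pair that defines the pointing line. The function names, dictionary layout, and the 120-degree critical angle below are illustrative assumptions, not values from the patent.

```python
import math

# Angle at the elbow between the shoulder-elbow and wrist-elbow
# segments of the skeleton model, in degrees. Keypoints are (x, y, z).
def elbow_angle(shoulder, elbow, wrist):
    a = [s - e for s, e in zip(shoulder, elbow)]
    b = [w - e for w, e in zip(wrist, elbow)]
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    cos_t = max(-1.0, min(1.0, dot / (na * nb)))  # clamp rounding error
    return math.degrees(math.acos(cos_t))

# Per claims 5 and 6: with a bent elbow (angle <= critical angle) the
# pointing line uses the wrist and index-finger keypoints; otherwise
# the shoulder and wrist keypoints. The 120-degree default is an
# illustrative threshold, not a value stated in the patent.
def select_keypoints(kp, critical_angle=120.0):
    angle = elbow_angle(kp["shoulder"], kp["elbow"], kp["wrist"])
    if angle <= critical_angle:
        return ("wrist", "index_finger")
    return ("shoulder", "wrist")

arm_straight = {"shoulder": (0.0, 0.0, 1.4), "elbow": (0.2, 0.0, 1.2),
                "wrist": (0.4, 0.0, 1.0)}   # elbow angle ~180 degrees
arm_bent = {"shoulder": (0.0, 0.0, 1.4), "elbow": (0.0, 0.0, 1.1),
            "wrist": (0.3, 0.0, 1.1)}       # elbow angle ~90 degrees
print(select_keypoints(arm_straight))  # ('shoulder', 'wrist')
print(select_keypoints(arm_bent))      # ('wrist', 'index_finger')
```

The two selected keypoints then feed the line-floor intersection of claims 3 and 4 to obtain the destination on the floor surface.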