US-20260124763-A1 - ROBOT AND CONTROL METHOD THEREFOR
Abstract
A robot is provided. The robot includes a camera, a depth sensor, a memory, and a processor configured to perform an interaction with a first user with a highest degree of interest from among a plurality of users present in the vicinity of the robot, obtain gazing information of the plurality of users while performing the interaction with the first user, obtain distance information of the plurality of users, determine an engagement level of the first user for the interaction by using the gazing information and the distance information of the first user from among the plurality of users, determine a degree of interest of another user by using gazing information and distance information of the first user and the other user from among the plurality of users, end the interaction with the first user, and perform an interaction with the other user based on the degree of interest of the other user.
Inventors
- Heungwoo HAN
- Moogeun SONG
- Sangyun Lee
- Yujung LEE
Assignees
- SAMSUNG ELECTRONICS CO., LTD.
Dates
- Publication Date
- 20260507
- Application Date
- 20251231
- Priority Date
- 20201211
Claims (20)
- 1. A robot comprising: memory storing at least one instruction; and at least one processor, wherein the at least one processor, individually and/or collectively, is configured to execute the at least one instruction and to cause the robot to: perform an interaction with a user, identify information on another user and a degree of interest of the other user, and end, based on a degree of interest of the user being less than a threshold value, the interaction with the user, and provide a service, the same as a service provided to the user, to the other user based on the identified information.
- 2. The robot of claim 1, further comprising: a camera, and a depth sensor, wherein the at least one processor, individually and/or collectively, is configured to cause the robot to: perform an interaction with the user with a highest degree of interest from among a plurality of users present in the vicinity of the robot by using the camera and the depth sensor, obtain gazing information of the plurality of users by using the camera while performing the interaction with the user, and obtain distance information of the plurality of users by using the depth sensor, and identify an engagement level of the user for the interaction by using the gazing information and the distance information of the user from among the plurality of users, and identify a degree of interest of the other user by using gazing information and distance information of the other user from among the plurality of users.
- 3. The robot of claim 1, wherein the at least one processor, individually and/or collectively, is configured to cause the robot to: match information on the other user and the degree of interest of the other user for each time interval with time information and store the matched information, and provide the service, same as the service provided to the user, to the other user based on the matched information.
- 4. The robot of claim 1, wherein the at least one processor, individually and/or collectively, is configured to cause the robot to: store, while performing the interaction with the user, a first map comprising gazing information of the plurality of users, respectively, and a second map comprising distance information of the respective users in the memory, generate a fusion map by fusing the first map and the second map, and store the fusion map in the memory, and perform an interaction with the other user based on the fusion map.
- 5. The robot of claim 4, wherein the at least one processor is further configured to: accumulate the fusion map for each respective time and store it in the memory, and delete the first map and the second map from the memory as time passes.
- 6. The robot of claim 4, wherein the at least one processor, individually and/or collectively, is configured to cause the robot to: obtain an image comprising the plurality of users by using a camera, obtain gazing information of the plurality of users by analyzing the image, obtain distance information of the plurality of users by using a depth sensor, obtain density information of the plurality of users based on the distance information of the plurality of users, calculate a degree of interest of the plurality of users, respectively, based on the gazing information, the density information, and the distance information of the plurality of users, and identify the user with the highest degree of interest from among the plurality of users based on the degree of interest of the respective users.
- 7. The robot of claim 6, wherein the density information comprises density weight values that correspond to the respective users, and wherein the at least one processor, individually and/or collectively, is configured to cause the robot to: calculate a first density weight value that corresponds to the user based on at least one from among a distance between the user and the other user and a number of other users comprised within a pre-set range from the user, and calculate the degree of interest of the user based on the first density weight value.
- 8. The robot of claim 7, wherein the processor is further configured to calculate a higher degree of interest for the user as the first density weight value becomes greater.
- 9. The robot of claim 6, wherein the at least one processor, individually and/or collectively, is configured to cause the robot to: analyze the obtained image; and obtain gesture information of a user of the plurality of users based on the analysis of the obtained image.
- 10. The robot of claim 9, wherein the obtained gesture information includes a gesture of the user calling the robot.
- 11. The robot of claim 7, wherein the at least one processor, individually and/or collectively, is configured to cause the robot to: calculate a first degree of interest of the user and a second degree of interest of the other user, wherein a first density weight value of the user is greater than a second density weight value of the other user, wherein a distance from the user to the robot is greater than a distance from the other user to the robot, and wherein the first degree of interest of the user is greater than the second degree of interest of the other user.
- 12. The robot of claim 4, wherein the at least one processor, individually and/or collectively, is configured to cause the robot to: obtain time information that corresponds to the degree of interest of the respective users and store the time information in the memory, and identify, based on the time information, another user with a degree of interest greater than or equal to a threshold value within a pre-set range from a time point at which the interaction with the user is ended as an interaction target.
- 13. The robot of claim 4, further comprising: a microphone, wherein the at least one processor, individually and/or collectively, is configured to cause the robot to: obtain a speech signal by using the microphone, identify a user corresponding to the speech signal from among the plurality of users, and calculate the degree of interest of the identified user based on the speech signal.
- 14. A method of controlling a robot, the method comprising: performing an interaction with a user; identifying information on another user and a degree of interest of the other user; and ending, based on a degree of interest of the user being less than a threshold value, the interaction with the user, and providing a service, the same as a service provided to the user, to the other user based on the identified information.
- 15. The method of claim 14, further comprising: performing an interaction with the user with a highest degree of interest from among a plurality of users present in the vicinity of the robot by using a camera and a depth sensor, obtaining gazing information of the plurality of users by using the camera while performing the interaction with the user, and obtaining distance information of the plurality of users by using the depth sensor, and identifying an engagement level of the user for the interaction by using the gazing information and the distance information of the user from among the plurality of users, and identifying the degree of interest of the other user by using gazing information and distance information of the other user from among the plurality of users.
- 16. The method of claim 14, wherein the identifying the information on the other user comprises: matching information on the other user and the degree of interest of the other user for each time interval with time information and storing the matched information, and wherein the providing the service comprises: providing the service, same as the service provided to the user, to the other user based on the matched information.
- 17. The method of claim 14, further comprising: storing, while performing the interaction with the user, a first map comprising gazing information of the plurality of users, respectively, and a second map comprising distance information of the respective users; and generating a fusion map by fusing the first map and the second map, and storing the fusion map, wherein the performing an interaction with the other user comprises performing the interaction with the other user based on the fusion map.
- 18. The method of claim 17, wherein the fusion map for each respective time is accumulated and stored, and wherein the first map and the second map are deleted from a memory as time passes.
- 19. The method of claim 16, wherein the obtaining the degree of interest of the plurality of users, respectively, comprises: obtaining an image comprising the plurality of users by using a camera; obtaining gazing information of the plurality of users by analyzing the image; obtaining distance information of the plurality of users by using a depth sensor; obtaining density information of the plurality of users based on the distance information of the plurality of users; and calculating a degree of interest of the plurality of users, respectively, based on the gazing information, the density information, and the distance information of the plurality of users.
- 20. A non-transitory computer readable recording medium storing computer instructions that cause a robot to perform an operation when executed by a processor of an electronic apparatus, wherein the operation comprises: performing an interaction with a user; identifying information on another user and a degree of interest of the other user; and ending, based on a degree of interest of the user being less than a threshold value, the interaction with the user, and providing a service, the same as a service provided to the user, to the other user based on the identified information.
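Claims 6 through 11 describe scoring each user from gazing, density, and distance information, with a larger density weight yielding a higher degree of interest, so that a user inside a crowd can outrank a closer isolated user. The patent does not specify the scoring formula; the sketch below assumes a simple weighted combination, and all names, weights, and thresholds are illustrative, not from the filing.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    user_id: str
    gazing_at_robot: float   # 0.0-1.0, fraction of time gazing at the robot (camera)
    distance_m: float        # distance from the robot (depth sensor)
    position: tuple          # (x, y) position, used only for density

def density_weight(obs, others, preset_range_m=1.5):
    """Density weight grows with the number of other users within a
    pre-set range of this user (claims 7 and 8)."""
    nearby = sum(
        1 for o in others
        if ((o.position[0] - obs.position[0]) ** 2
            + (o.position[1] - obs.position[1]) ** 2) ** 0.5 <= preset_range_m
    )
    return 1.0 + nearby  # more neighbours -> larger weight

def degree_of_interest(obs, all_obs):
    others = [o for o in all_obs if o.user_id != obs.user_id]
    w = density_weight(obs, others)
    # Gazing and proximity raise the score; the density weight lets a user
    # in a crowd outrank a closer isolated user (claim 11).
    return w * (obs.gazing_at_robot + 1.0 / (1.0 + obs.distance_m))

def pick_interaction_target(all_obs):
    """Identify the user with the highest degree of interest (claim 6)."""
    return max(all_obs, key=lambda o: degree_of_interest(o, all_obs)).user_id
```

Under these assumptions, a user gazing at the robot from inside a three-person crowd two metres away outranks an equally attentive isolated user one metre away, which is exactly the crowd-favouring behaviour the background section says related-art score-only selection lacked.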
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation application of prior application Ser. No. 18/188,104, filed on March 22, 2023, which is a continuation application, claiming priority under 35 U.S.C. § 365(c), of International Application No. PCT/KR2021/000221, filed on January 8, 2021, which is based on and claims the benefit of Korean patent application number 10-2020-0172983, filed on December 11, 2020, in the Korean Intellectual Property Office, the disclosure of which is incorporated by reference herein in its entirety.
BACKGROUND
1. Field
The disclosure relates to a robot and a control method therefor. More particularly, the disclosure relates to a robot that collects and stores information on users in the vicinity of the robot even while performing an interaction, and performs the interaction based on the information on the users, and a control method therefor.
2. Description of Related Art
With developments in electronic technology, robots that perform interactions with users in real life are being actively used. As an example, a service robot guides users with route directions within a specific space such as an airport or a museum, or provides users with information on the corresponding space. If a robot is to perform an interaction with a user, a process of identifying an interaction target is first necessary. When a plurality of users were present in the vicinity of a robot of the related art, the robot calculated a score for each user based on the user's position, distance, or the like, and selected the user with the highest score as the interaction target from among the plurality of users. However, the robot of the related art did not take information on user density into consideration when calculating scores for the plurality of users.
For example, even if many users were concentrated in a crowd, a specific user who was not included in that crowd but was positioned close to the robot would be selected as the interaction target, so the larger group was not simultaneously provided with service. Accordingly, there is a problem of service usefulness decreasing. In addition, the robot of the related art did not collect information on other users present in the vicinity of the robot while performing an interaction with the interaction target. Because the robot of the related art moved to an initial position when the interaction with the interaction target was completed, other users positioned near the interaction target who desired to receive service from the robot were inconvenienced in having to move to the initial position of the robot. Accordingly, there is a growing need for a robot that can further increase service usefulness and improve user convenience, and a control method therefor.
The above information is presented as background information only to assist with an understanding of the disclosure. No determination has been made, and no assertion is made, as to whether any of the above might be applicable as prior art with regard to the disclosure.
SUMMARY
Aspects of the disclosure are to address at least the above-mentioned problems and/or disadvantages and to provide at least the advantages described below. Accordingly, an aspect of the disclosure is to provide a robot that collects information on a user and performs a more natural interaction with the user. Additional aspects will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of the presented embodiments. In accordance with an aspect of the disclosure, a robot is provided.
The robot includes a camera, a depth sensor, a memory, and a processor, and the processor is configured to perform an interaction with a first user with a highest degree of interest from among a plurality of users present in the vicinity of the robot by using the camera and the depth sensor, obtain gazing information of the plurality of users by using the camera while performing the interaction with the first user, obtain distance information of the plurality of users by using the depth sensor, determine an engagement level of the first user for the interaction by using the gazing information and the distance information of the first user from among the plurality of users, determine a degree of interest of another user by using gazing information and distance information of the first user and the other user from among the plurality of users, end, based on the engagement level of the first user being less than a threshold value, the interaction with the first user, and perform an interaction with the other user based on the degree of interest of the other user. The processor may be configured to store, while performing the interaction with the first user, a first map including gazing information of the plurality of users, respectively, and a second ma
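Claims 4, 5, and 12 describe the bookkeeping behind this hand-off: a first map of gazing information and a second map of distance information are fused into a time-stamped fusion map, the source maps are deleted, and a new target is picked from users whose degree of interest met a threshold near the time the current interaction ended. A minimal sketch of that control flow follows; the map structures, the threshold, the time window, and the interest function are all assumed for illustration and are not specified in the filing.

```python
class FusionMapStore:
    """Accumulates a fusion map per timestamp; the first (gaze) and second
    (distance) maps are deleted once fused (claims 4 and 5)."""

    def __init__(self):
        self.history = []  # list of (timestamp, {user_id: (gaze, distance)})

    def fuse_and_store(self, timestamp, gaze_map, distance_map):
        # Fuse entries present in both maps into one record per user.
        fused = {uid: (gaze_map[uid], distance_map[uid])
                 for uid in gaze_map if uid in distance_map}
        self.history.append((timestamp, fused))
        gaze_map.clear()      # source maps are deleted after fusing
        distance_map.clear()
        return fused

    def next_target(self, current_user, end_time, interest_fn,
                    threshold=0.5, window_s=5.0):
        """Pick another user whose degree of interest met the threshold within
        a pre-set range of the interaction's end time (claim 12)."""
        for ts, fused in reversed(self.history):
            if abs(end_time - ts) > window_s:
                continue
            candidates = {uid: interest_fn(gaze, dist)
                          for uid, (gaze, dist) in fused.items()
                          if uid != current_user}
            if candidates:
                best = max(candidates, key=candidates.get)
                if candidates[best] >= threshold:
                    return best
        return None
```

With this store, the robot can end an interaction whose engagement level has dropped below the threshold and move directly to a nearby interested user recorded in the fusion-map history, rather than returning to its initial position as in the related art.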