US-12623343-B2 - Robot and method for controlling thereof

US 12623343 B2

Abstract

A robot is provided. The robot includes a microphone, a camera, a communication interface including a circuit, a memory storing at least one instruction, and a processor, wherein the processor is configured to acquire a user voice through the microphone, identify a task corresponding to the user voice, determine whether the robot can perform the identified task, and control the communication interface to transmit information on the identified task to an external robot based on the determination result.

Inventors

  • Jiho CHU
  • Jongmyeong KO
  • Sangkyung Lee
  • Koeun CHOI

Assignees

  • SAMSUNG ELECTRONICS CO., LTD.

Dates

Publication Date
2026-05-12
Application Date
2023-03-27
Priority Date
2021-08-25

Claims (15)

  1. A robot comprising: a microphone; a camera; a communication interface including a circuit; a memory storing at least one instruction; and a processor, wherein the at least one instruction, when executed by the processor, causes the robot to: acquire a user voice through the microphone, identify a task corresponding to the user voice, determine whether the robot is capable of performing the identified task, and control the communication interface to transmit information on the identified task from the robot to an external robot based on the determination result, and, based on determining that the robot is not capable of performing the identified task: identify location information for a target place based on the user voice and characteristic information of the user, identify an external robot, from among a plurality of external robots that are to be communicatively connected to the robot, the identified external robot being closest to the target place, perform a communicative connection with the identified external robot, control the robot to move to the target place based on the location information, and control the communication interface to transmit the information on the identified task and the characteristic information of the user to an external robot located around the target place.
  2. The robot of claim 1, wherein the characteristic information of the user comprises at least one of a height, a weight, an age, a sex, or a hairstyle of the user, and wherein the at least one instruction, when executed by the processor, further causes the robot to: acquire an image of the user photographed through the camera, and input the image into a neural network model trained to acquire characteristic information for objects and acquire the characteristic information of the user.
  3. The robot of claim 1, wherein the memory stores location information for a plurality of places included in a map in which the robot is located, and wherein the at least one instruction, when executed by the processor, further causes the robot to: analyze the user voice and identify at least one place among the plurality of places, calculate scores for the at least one place based on the characteristic information of the user, identify one place having a highest score among the at least one place as the target place, and acquire location information for the target place from the memory.
  4. The robot of claim 3, wherein the memory stores a weight corresponding to the characteristic information of the user for calculating the scores for the at least one place, and wherein the at least one instruction, when executed by the processor, further causes the robot to: receive information related to an operation of the user in the target place from the external robot, and update the weight based on the information related to the operation of the user.
  5. The robot of claim 1, wherein the at least one instruction, when executed by the processor, further causes the robot to: in order for the external robot to identify the user, control the communication interface to transmit an image, acquired by photographing the user through the camera, to the external robot.
  6. The robot of claim 1, further comprising: a light emitting part, wherein the at least one instruction, when executed by the processor, further causes the robot to: in order for the external robot to identify the user, control the light emitting part to irradiate a guide light around the user.
  7. The robot of claim 1, wherein the memory stores information on tasks that the robot is capable of performing, and wherein the at least one instruction, when executed by the processor, further causes the robot to: compare a task corresponding to the user voice with the stored information, and determine whether the robot is capable of performing the task corresponding to the user voice.
  8. A method for controlling a robot, the method comprising: acquiring a user voice; identifying a task corresponding to the user voice; determining whether the robot is capable of performing the identified task; transmitting information on the identified task from the robot to an external robot based on the determination result; based on determining that the robot is not capable of performing the identified task, identifying location information for a target place based on the user voice and characteristic information of the user; identifying an external robot, from among a plurality of external robots that are to be communicatively connected to the robot, the identified external robot being closest to the target place; performing a communicative connection with the identified external robot; moving the robot to the target place based on the location information; and transmitting the information on the identified task and the characteristic information of the user to an external robot located around the target place.
  9. The method of claim 8, further comprising: acquiring an image by photographing the user; and inputting the image into a neural network model trained to acquire characteristic information for objects and acquiring the characteristic information of the user.
  10. The method of claim 8, wherein the robot stores location information for a plurality of places included in a map in which the robot is located, and wherein the identifying of the location information for the target place comprises: analyzing the user voice and identifying at least one place among the plurality of places, calculating scores for the at least one place based on the characteristic information of the user, identifying one place having a highest score among the at least one place as the target place, and acquiring location information for the target place among the location information for the plurality of places.
  11. The method of claim 10, wherein the robot stores a weight corresponding to the characteristic information of the user for calculating the scores for the at least one place, and wherein the method further comprises: receiving information related to an operation of the user in the target place from the external robot, and updating the weight based on the information related to the operation of the user.
  12. The method of claim 8, further comprising: identifying the external robot, from among a plurality of external robots that are to be communicatively connected to the robot, the identified external robot being closest to a target place; and performing a communicative connection with the identified external robot.
  13. The method of claim 10, further comprising identifying a category based on the user voice.
  14. The method of claim 13, wherein the calculating of the scores for the at least one place comprises calculating scores for each of a plurality of places corresponding to the identified category.
  15. The method of claim 11, wherein the received information related to the operation of the user comprises at least one of a time that the user stayed in the target place, a time spent for the external robot to perform a task corresponding to a user voice, whether the user purchased a product in the target place, an evaluation of satisfaction with a service, or whether the user repeatedly requested a same task.
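The weighted place-scoring and feedback loop described in claims 3, 4, 10, 11, and 15 can be sketched as follows. This is an illustrative sketch only: the feature encoding, place affinities, and the additive weight-update rule are assumptions for demonstration, not part of the claims.

```python
import numpy as np

# Hypothetical affinity vector per place, one entry per encoded user trait
# (e.g. cues derived from height, age, hairstyle). Values are made up.
PLACES = {
    "toy store":  np.array([1.0, 0.0, 0.9]),
    "pharmacy":   np.array([0.1, 1.0, 0.2]),
    "food court": np.array([0.6, 0.5, 0.7]),
}

def score_places(user_features: np.ndarray, weights: np.ndarray) -> dict:
    """Score each candidate place by the weighted match between user traits
    and place affinities (cf. claims 3 and 10)."""
    return {name: float(np.dot(weights * user_features, affinity))
            for name, affinity in PLACES.items()}

def pick_target(user_features: np.ndarray, weights: np.ndarray) -> str:
    """Identify the place with the highest score as the target place."""
    scores = score_places(user_features, weights)
    return max(scores, key=scores.get)

def update_weights(weights, user_features, satisfied: bool, lr=0.1):
    """Nudge the stored weights using feedback on the user's operation
    reported by the external robot (cf. claims 4, 11, and 15)."""
    direction = 1.0 if satisfied else -1.0
    return weights + lr * direction * user_features

weights = np.array([0.5, 0.5, 0.5])
child_features = np.array([1.0, 0.0, 0.8])   # hypothetical encoded traits
target = pick_target(child_features, weights)
weights = update_weights(weights, child_features, satisfied=True)
```

With these made-up numbers the "toy store" scores highest and is picked as the target place, after which the weights for the matching traits are reinforced.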

Description

CROSS-REFERENCE TO RELATED APPLICATION(S)

This application is a continuation application, claiming priority under § 365(c), of an International application No. PCT/KR2022/012612, filed on Aug. 24, 2022, which is based on and claims the benefit of a Korean patent application number 10-2021-0112665, filed on Aug. 25, 2021, in the Korean Intellectual Property Office, the disclosure of which is incorporated by reference herein in its entirety.

BACKGROUND

1. Field

The disclosure relates to a robot and a method for controlling thereof. More particularly, the disclosure relates to a robot which transmits information to an external robot for providing a service corresponding to a user command, and a method for controlling thereof.

2. Description of Related Art

Spurred by the development of robot technologies, robots providing services are now being actively used. For example, service robots provide various services such as serving food in a restaurant or guiding a user at a mart.

The above information is presented as background information only to assist with an understanding of the disclosure. No determination has been made, and no assertion is made, as to whether any of the above might be applicable as prior art with regard to the disclosure.

SUMMARY

In an environment in which several robots exist, the tasks that each robot can perform may differ from one another. In this situation, if a specific robot cannot perform a task corresponding to a user command, the user may experience inconvenience.

Aspects of the disclosure are to address at least the above-mentioned problems and/or disadvantages and to provide at least the advantages described below. Accordingly, an aspect of the disclosure is to provide a technology for providing a continuous service to a user even in a case where a robot cannot perform a task corresponding to a user command.
Another aspect of the disclosure is to provide a robot that enables a user to be provided with a service even in a case where a task corresponding to a user command cannot be performed.

Additional aspects will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of the presented embodiments. Meanwhile, the technical tasks of the disclosure are not limited to those mentioned above, and other technical tasks that are not mentioned would be clearly understood, from the descriptions below, by those having ordinary skill in the technical field to which the disclosure belongs.

In accordance with an aspect of the disclosure for resolving the aforementioned technical task, a robot is provided. The robot includes a microphone, a camera, a communication interface including a circuit, a memory storing at least one instruction, and a processor, wherein the processor is configured to acquire a user voice through the microphone, identify a task corresponding to the user voice, determine whether the robot can perform the identified task, and control the communication interface to transmit information on the identified task to an external robot based on the determination result.

The processor may, based on determining that the robot cannot perform the identified task, identify location information for a target place based on the user voice and characteristic information of the user, control the robot to move to the target place based on the location information, and control the communication interface to transmit the information on the identified task and the characteristic information of the user to an external robot located around the target place.
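The delegation flow summarized above (check capability, then hand the task to the external robot nearest the target place) can be sketched as follows. The class names, task registry, and Euclidean distance metric are illustrative assumptions, not details from the disclosure.

```python
import math
from dataclasses import dataclass

@dataclass
class ExternalRobot:
    name: str
    position: tuple  # (x, y) on a shared map; coordinates are hypothetical

def can_perform(task: str, supported_tasks: set) -> bool:
    """Compare the identified task against stored capability info (cf. claim 7)."""
    return task in supported_tasks

def closest_robot(robots, target_place):
    """Pick the external robot nearest to the target place (cf. claim 1)."""
    return min(robots, key=lambda r: math.dist(r.position, target_place))

def handle_command(task, supported_tasks, robots, target_place, user_info):
    if can_perform(task, supported_tasks):
        return ("perform_locally", task)
    helper = closest_robot(robots, target_place)
    # After moving to the target place, hand over the task information
    # together with the user's characteristic information.
    return ("delegate", helper.name, task, user_info)

robots = [ExternalRobot("serving-bot", (2.0, 1.0)),
          ExternalRobot("guide-bot", (9.0, 9.0))]
result = handle_command("serve_food", {"guide_way"}, robots, (1.0, 1.0),
                        {"height": "tall", "hairstyle": "short"})
```

Here the robot cannot serve food itself, so it delegates to "serving-bot", the external robot closest to the target place, passing along the user's characteristic information so that robot can identify the user.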
The characteristic information of the user may include at least one of the height, the weight, the age, the sex, or the hairstyle of the user, and the processor may acquire an image by photographing the user through the camera, and input the image into a neural network model trained to acquire characteristic information for objects and acquire the characteristic information of the user.

The memory may store location information for a plurality of places included in a map in which the robot is located, and the processor may analyze the user voice and identify at least one place among the plurality of places, calculate scores for the at least one place based on the characteristic information of the user, identify one place having the highest score among the at least one place as the target place, and acquire location information for the target place from the memory.

The memory may store a weight corresponding to the characteristic information of the user for calculating the scores for the at least one place, and the processor may receive information related to an operation of the user in the target place from the external robot, and update the weight based on the information related to the operation of the user.

The processor may identify the external robot, which is the closest to the target place, among a plurality of external robots that can be