EP-4741879-A2 - CLEANING ROBOT AND METHOD FOR PERFORMING TASK THEREOF

EP 4741879 A2

Abstract

A method for performing a task of a cleaning robot is provided. The method according to an embodiment includes generating a navigation map for driving the cleaning robot using a result of at least one sensor detecting a task area in which an object is arranged, obtaining recognition information of the object by applying an image of the object captured by at least one camera to a trained artificial intelligence model, generating a semantic map indicating the environment of the task area by mapping an area of the object included in the navigation map with the recognition information of the object, and performing a task of the cleaning robot based on a control command of a user using the semantic map. An example of the trained artificial intelligence model may be a deep-learning neural network model in which a plurality of network nodes having weighted values are disposed in different layers and exchange data according to a convolution relationship, but the disclosure is not limited thereto.
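
For illustration only, the mapping step in the abstract (overlaying the object's recognition information onto its area in the navigation map) might be sketched as follows. The names Detection and build_semantic_map are illustrative assumptions, not part of the disclosure.

    # A minimal sketch of the semantic-map idea in the abstract; the data
    # model is an assumption, not the patented implementation.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str    # recognition result from the trained AI model
        cell: tuple   # (row, col) of the object's area in the navigation map

    def build_semantic_map(nav_map, detections):
        """Annotate occupied cells of the navigation map with object labels."""
        semantic = {}
        for det in detections:
            r, c = det.cell
            if nav_map[r][c] == 1:       # only annotate cells the sensor marked occupied
                semantic[det.cell] = det.label
        return semantic

    nav_map = [[0] * 10 for _ in range(10)]   # 0 = free, 1 = occupied
    nav_map[3][5] = 1                         # sensor detected an obstacle here
    print(build_semantic_map(nav_map, [Detection("table", (3, 5))]))
    # {(3, 5): 'table'}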

Inventors

  • HONG, Soonhyuk
  • GARG, Shivam
  • KIM, Eunseo

Assignees

  • Samsung Electronics Co., Ltd.

Dates

Publication Date
2026-05-13
Application Date
2019-09-20

Claims (15)

  1. A cleaning robot comprising: at least one sensor; at least one camera; memory; a driving unit; and at least one processor, wherein the at least one processor is configured to: determine a structure within a home based on information obtained through the at least one sensor or the at least one camera; obtain recognition information regarding an object included in an image obtained by the at least one camera by inputting the obtained image into a trained artificial intelligence model, and obtain recognition information regarding at least one area of the structure within the home based on the obtained recognition information regarding the object; generate a map including recognition information regarding the at least one area and including a position of the object obtained using the at least one sensor or the at least one camera and recognition information regarding the object, wherein the map includes the determined structure within the home; and perform a cleaning operation for one or more areas based on the map.
  2. The cleaning robot of claim 1, wherein the at least one processor is configured to: based on a user voice instructing a cleaning operation for the one or more areas being input, control the driving unit to move to the one or more areas based on the map; and based on moving to the one or more areas, perform a cleaning operation for the one or more areas.
  3. The cleaning robot of claim 2, wherein the user voice includes information regarding a cleaning operation and a name of the one or more areas.
  4. The cleaning robot of any of claims 1 to 3, wherein the at least one processor is configured to: obtain recognition information regarding the at least one area by inputting an image of the at least one area obtained by the at least one camera into a trained artificial intelligence model; and generate the map including a name of the at least one area based on recognition information regarding the at least one area.
  5. The cleaning robot of any of claims 1 to 4, wherein the at least one processor is configured to: recognize a first structure as a first wall and recognize a second structure as a second wall using the at least one sensor and the at least one camera; and detect an empty space in an area in which the first wall and the second wall intersect.
  6. The cleaning robot of any of claims 1 to 5, further comprising: a communication interface, wherein the at least one processor is configured to: control the communication interface to transmit the map to a user terminal; and wherein the user terminal displays a map received from the cleaning robot.
  7. The cleaning robot of claim 6, wherein, based on recognition information regarding a plurality of objects being obtained, an obstacle or a structure determined based on the recognition information is displayed on a map received by the user terminal.
  8. The cleaning robot of claim 7, wherein the at least one processor is configured to: based on information regarding a dangerous object among the plurality of objects being obtained, control the communication interface to transmit the information regarding the dangerous object to the user terminal; and wherein the map received by the user terminal displays the information regarding the dangerous object.
  9. The cleaning robot of claim 7 or 8, wherein the at least one processor is configured to: based on a user command for designating at least a part of the at least one area as an avoidance area being received from the user terminal, control the driving unit to travel to remaining areas excluding the avoidance area.
  10. The cleaning robot of any of claims 7 to 9, wherein the user terminal provides a user interface for changing information regarding an object included in the map or information regarding the at least one area; and wherein the user interface includes information regarding a plurality of candidate objects or a plurality of candidate areas that are changeable.
  11. A method of controlling a cleaning robot including at least one sensor and at least one camera, the method comprising: determining a structure within a home based on information obtained through the at least one sensor or the at least one camera; obtaining recognition information regarding an object included in an image obtained by the at least one camera by inputting the obtained image into a trained artificial intelligence model, and obtaining recognition information regarding at least one area of the structure within the home based on the obtained recognition information regarding the object; generating a map including recognition information regarding the at least one area and including a position of the object obtained using the at least one sensor or the at least one camera and recognition information regarding the object, wherein the map includes the determined structure within the home; and performing a cleaning operation for one or more areas based on the map.
  12. The method of claim 11, wherein the performing comprises: based on a user voice instructing a cleaning operation for the one or more areas being input, moving to the one or more areas based on the map; and based on moving to the one or more areas, performing a cleaning operation for the one or more areas.
  13. The method of claim 12, wherein the user voice includes information regarding a cleaning operation and a name of the one or more areas.
  14. The method of any of claims 11 to 13, wherein the obtaining recognition information comprises: obtaining recognition information regarding the at least one area by inputting an image of the at least one area obtained by the at least one camera into a trained artificial intelligence model; and generating the map including a name of the at least one area based on recognition information regarding the at least one area.
  15. The method of any of claims 11 to 14, comprising: recognizing a first structure as a first wall and recognizing a second structure as a second wall using the at least one sensor and the at least one camera; and detecting an empty space in an area in which the first wall and the second wall intersect.
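
Claims 5 and 15 recite recognizing two structures as walls and detecting an empty space in the area where the walls intersect. As illustration only, the following minimal sketch assumes the walls are an axis-aligned row and column of an occupancy grid; the claims do not prescribe this representation, and find_gap_at_intersection is a hypothetical helper.

    # A minimal sketch, assuming axis-aligned walls on an occupancy grid
    # (an illustrative assumption; the claims do not prescribe this).
    def find_gap_at_intersection(grid, wall_row, wall_col, window=1):
        """Return unoccupied cells lying on either wall within `window`
        cells of the intersection (candidate empty space, e.g. a doorway)."""
        rows, cols = len(grid), len(grid[0])
        gaps = set()
        for c in range(max(0, wall_col - window), min(cols, wall_col + window + 1)):
            if grid[wall_row][c] == 0:       # gap along the first (horizontal) wall
                gaps.add((wall_row, c))
        for r in range(max(0, wall_row - window), min(rows, wall_row + window + 1)):
            if grid[r][wall_col] == 0:       # gap along the second (vertical) wall
                gaps.add((r, wall_col))
        return sorted(gaps)

    # Example: two recognized walls crossing at (2, 3), with one cell the
    # sensors never marked occupied -- a candidate empty space.
    grid = [[0] * 6 for _ in range(6)]
    for c in range(6):
        grid[2][c] = 1                       # first structure recognized as a wall
    for r in range(6):
        grid[r][3] = 1                       # second structure recognized as a wall
    grid[2][3] = 0                           # unsensed cell at the intersection
    print(find_gap_at_intersection(grid, 2, 3))   # [(2, 3)]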

Description

[Technical Field]
Devices and methods consistent with what is disclosed herein relate to a cleaning robot and a task method thereof, and more particularly, to a cleaning robot that performs an appropriate task using information on objects (e.g., obstacles) near the cleaning robot, and a controlling method thereof.

[Background Art]
With the development of robot technology, robots have become common in homes as well as in specialized technical fields and industries requiring a significant workforce. In particular, service robots that provide housekeeping services to users, cleaning robots, pet robots, etc. are widely used. For a cleaning robot in particular, it is important to identify detailed information on nearby objects such as foreign substances, structures, and obstacles, and to perform a task suitable for each object. A conventional cleaning robot, however, is limited in obtaining detailed information about an object because of its limited combination of sensors. In other words, a conventional cleaning robot has no information about what kind of object it encounters and simply drives around objects in the same avoidance pattern, relying solely on the detection (sensing) capability of its sensors. Accordingly, there is a need to identify an object near the cleaning robot, determine a task suitable for that object that the cleaning robot can perform, and drive the cleaning robot around or away from objects more effectively.

[Disclosure of Invention]
[Solution to Problem]
An aspect of the exemplary embodiments relates to providing a cleaning robot that performs a task suited to a peripheral object using a plurality of sensors of the cleaning robot, and a controlling method thereof.
According to an exemplary embodiment, there is provided a method for performing a task of a cleaning robot, the method including: generating a navigation map for driving the cleaning robot based on receiving sensor data from at least one sensor that detects or senses a task area in which an object is arranged; obtaining recognition information of the object by applying an image of the object captured by at least one camera to a trained artificial intelligence model; generating a semantic map indicating the environment of the task area by mapping an area of the object included in the navigation map with the recognition information of the object; and performing a task of the cleaning robot based on a control command of a user using the semantic map.
According to an exemplary embodiment, there is provided a cleaning robot including at least one sensor, a camera, and at least one processor configured to: generate a navigation map for driving the cleaning robot based on receiving sensor data of the at least one sensor detecting (or sensing) a task area in which an object is arranged; obtain recognition information of the object by applying an image of the object captured by the camera to a trained artificial intelligence model; provide a semantic map indicating the environment of the task area by mapping an area of the object included in the navigation map with the recognition information of the object; and perform a task of the cleaning robot based on a control command of a user using the semantic map.
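
The final step above (performing a task based on a control command of a user using the semantic map) can be pictured as a name lookup followed by navigation. The sketch below is a hypothetical illustration; move_to() and clean() merely stand in for the driving unit and cleaning hardware and are not part of the disclosure.

    # A hypothetical illustration of task execution from a user command:
    # the area name in the command is looked up in the semantic map and
    # the robot visits the mapped cells.
    def move_to(cell):
        print(f"moving to {cell}")

    def clean(cell):
        print(f"cleaning {cell}")

    def perform_task(command: str, semantic_map: dict) -> str:
        for area_name, cells in semantic_map.items():
            if area_name in command:          # e.g. "clean the living room"
                for cell in cells:
                    move_to(cell)             # drive to each mapped cell
                    clean(cell)               # run the cleaning operation
                return f"cleaned {area_name}"
        return "area not found in semantic map"

    semantic_map = {"living room": [(3, 5), (3, 6)], "kitchen": [(8, 1)]}
    print(perform_task("clean the living room", semantic_map))
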
According to the above-described various exemplary embodiments, a cleaning robot may perform the most suitable task, such as removing or avoiding one or more objects, considering the recognition information and/or additional information of a nearby object.
According to an exemplary embodiment, there is provided a method including: receiving, by a cleaning robot, a captured image from a camera or sensor of the cleaning robot; transmitting, by the cleaning robot, the captured image to an external server; obtaining, by the external server, recognition result information by inputting the captured image into a trained artificial intelligence model, the recognition result information including information on the object; transmitting, by the server, the recognition result information to the cleaning robot; generating, by the cleaning robot, a semantic map including information indicating a position of the object in the task area, based on mapping an area corresponding to the object included in a navigation map with the recognition information of the object; and performing, by the cleaning robot, a task based on a control command of a user using the semantic map (a minimal sketch of this exchange follows the drawing description below).
According to the above-described various exemplary embodiments, a cleaning robot may provide a semantic map indicating the environment of a task area. Accordingly, a user may control a task of the cleaning robot using names, etc. of an object or a place on the provided semantic map, so that usability may be significantly improved.
[Brief Description of Drawings]
FIG. 1 is a view to explain a method of recognizing and detecting (or sensing) an object o
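
Returning to the server-assisted flow described above (the robot transmits the captured image and receives recognition result information), a minimal sketch of such an exchange follows. The transport, endpoint URL, and JSON response format are assumptions; the disclosure does not specify them.

    # A minimal sketch of the image-recognition exchange, assuming a simple
    # HTTP POST with a JSON reply; the endpoint URL below is hypothetical.
    import json
    import urllib.request

    def recognize_remotely(image_bytes: bytes) -> dict:
        """Send a captured image to the external server and return the
        recognition result information produced by its trained AI model."""
        req = urllib.request.Request(
            "http://server.example/recognize",     # hypothetical endpoint
            data=image_bytes,
            headers={"Content-Type": "application/octet-stream"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())         # e.g. {"label": "table"}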