EP-3774202-B1 - CLEANING ROBOT AND METHOD FOR PERFORMING TASK THEREOF
Inventors
- HONG, Soonhyuk
- GARG, Shivam
- KIM, Eunseo
Dates
- Publication Date: 2026-05-06
- Application Date: 2019-09-20
Claims (13)
- A method for performing a task of a cleaning robot (100) including at least one first sensor and at least one second sensor of a plurality of sensors, the method comprising: generating a navigation map for driving the cleaning robot based on receiving sensor data from the at least one first sensor (110) of the plurality of sensors that detects a task area in which an object (200) is arranged (S2701); obtaining recognition information of the object (200) by applying an image of the object (200) captured by at least one camera (120) to a trained artificial intelligence model (S2702); generating a semantic map indicating environment information of the task area by mapping an area of the object (200) included in the navigation map with the recognition information of the object (S2703); and performing a task of the cleaning robot based on a control command of a user using the semantic map (S2704), wherein the method further comprises: detecting the object (200) using the at least one second sensor (110) selected from the plurality of sensors (110) based on the recognition information of the object; setting a priority with respect to the plurality of sensors (110) according to the recognition information of the object; and obtaining additional information on the object using a result detected by the at least one second sensor (110) according to the priority among the plurality of sensors (110), wherein the plurality of sensors (110) comprises various kinds of sensors.
- The method as claimed in claim 1, further comprising: obtaining recognition information of a place included in the task area using the recognition information of the object (S2802), wherein the generating of the semantic map comprises generating the semantic map indicating the environment of the task area using the recognition information of the place and the recognition information of the object (200).
- The method as claimed in claim 1, wherein the generating of the semantic map comprises generating the semantic map indicating the environment of the task area by mapping the area of the object included in the navigation map with the recognition information of the object, based on at least one of a location of the object (200) or a form of the object (200), according to a detection result of the at least one first sensor (110).
- The method as claimed in claim 1, wherein the obtaining of the recognition information of the object (200) comprises obtaining the recognition information of the object by applying the image of the object captured by the camera (120) to the trained artificial intelligence model provided in a memory of the cleaning robot or an external server.
- The method as claimed in claim 1, further comprising: identifying an object boundary corresponding to the object (200) in the navigation map, wherein the generating of the semantic map comprises generating the semantic map indicating the environment of the task area by mapping an area of the object determined by the object boundary with the recognition information of the object (200).
- The method as claimed in claim 1, wherein the recognition information of the object comprises at least one of a name, a type, or a feature of the object (200).
- The method as claimed in claim 1, wherein the control command of the user is a command for requesting execution of the task with respect to an area relating to a specific object, or a command for requesting execution of the task with respect to a specific place.
- The method as claimed in claim 1, wherein the plurality of sensors (110) includes at least one of an InfraRed (IR) stereo sensor, an ultrasonic sensor, a light detection and ranging (LIDAR) sensor, or a position sensitive diode (PSD) sensor.
- A cleaning robot (100) comprising: a plurality of sensors (110) including at least one first sensor and at least one second sensor; a camera (120); and at least one processor (140) configured to: generate a navigation map for driving the cleaning robot based on receiving sensor data from the at least one first sensor of the plurality of sensors (110), the sensor data including information regarding a result of the at least one first sensor (110) detecting a task area in which an object (200) is arranged, obtain recognition information of the object (200) by applying an image of the object (200) captured by the camera (120) to a trained artificial intelligence model, provide a semantic map indicating the environment of the task area by mapping an area of the object (200) included in the navigation map with the recognition information of the object, and perform a task of the cleaning robot based on a control command of a user using the semantic map, wherein the at least one processor (140) is further configured to: detect the object (200) using the at least one second sensor selected from the plurality of sensors (110) based on the recognition information of the object; set a priority with respect to the plurality of sensors (110) according to the recognition information of the object; and obtain additional information on the object using a result detected by the at least one second sensor according to the priority among the plurality of sensors (110), wherein the plurality of sensors (110) comprises various kinds of sensors.
- The cleaning robot as claimed in claim 9, wherein the at least one processor (140) is further configured to: obtain recognition information of a place included in the task area using the recognition information of the object (200), and generate the semantic map indicating the environment of the task area using the recognition information of the place and the recognition information of the object (200).
- The cleaning robot as claimed in claim 9, wherein the processor (140) is further configured to generate the semantic map indicating the environment of the task area by mapping the area of the object (200) included in the navigation map with the recognition information of the object, based on at least one of a location of the object or a form of the object (200), according to a detection result of the at least one first sensor (110).
- The cleaning robot as claimed in claim 9, wherein the processor (140) is further configured to obtain the recognition information of the object (200) by applying the image of the object captured by the camera (120) to the trained artificial intelligence model provided in a memory of the cleaning robot or an external server.
- The cleaning robot as claimed in claim 9, wherein the processor (140) is further configured to: identify an object boundary corresponding to the object (200) in the navigation map, and generate the semantic map indicating the environment of the task area by mapping an area of the object (200) determined by the object boundary with the recognition information of the object (200).
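To make the claimed flow concrete, the following is a minimal Python sketch of the method of claim 1 (steps S2701 to S2704). All class, function, and variable names here are hypothetical illustrations introduced for this sketch, not part of the patent.

```python
from dataclasses import dataclass, field

Coordinate = tuple[int, int]

@dataclass
class NavigationMap:
    """Occupancy grid built from the first sensor's scan (S2701)."""
    occupied_cells: set[Coordinate] = field(default_factory=set)

@dataclass
class SemanticMap:
    """Navigation-map areas annotated with object recognition info (S2703)."""
    labeled_areas: dict[str, set[Coordinate]] = field(default_factory=dict)

def generate_navigation_map(first_sensor_scan: list[Coordinate]) -> NavigationMap:
    # S2701: mark every cell the first sensor reports as occupied.
    return NavigationMap(occupied_cells=set(first_sensor_scan))

def recognize_object(image, model) -> str:
    # S2702: apply the captured image to a trained AI model; the result is a
    # label such as "sofa" (a name, type, or feature per claim 6).
    return model.predict(image)

def generate_semantic_map(nav_map: NavigationMap,
                          object_cells: list[Coordinate],
                          label: str) -> SemanticMap:
    # S2703: map the object's area within the navigation map to its
    # recognition information.
    area = nav_map.occupied_cells & set(object_cells)
    return SemanticMap(labeled_areas={label: area})

def perform_task(semantic_map: SemanticMap, command: str) -> None:
    # S2704: act on a user command that names an object on the semantic map,
    # e.g. "clean around the sofa".
    for label, area in semantic_map.labeled_areas.items():
        if label in command:
            print(f"cleaning {len(area)} cells around the {label}")
```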
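The second-sensor selection and priority step of claims 1 and 9 could look like the sketch below; the priority table, sensor names, and detect() interface are assumptions made for illustration (claim 8 only lists the kinds of sensors).

```python
# Recognition-dependent sensor priority: which second sensor is consulted
# first for additional information depends on what the object was recognized
# as. The table entries below are invented examples, not from the patent.
SENSOR_PRIORITY: dict[str, list[str]] = {
    "cable":  ["ir_stereo", "psd"],        # thin obstacle: depth sensing first
    "vase":   ["lidar", "ultrasonic"],     # fragile object: precise ranging first
    "carpet": ["ultrasonic", "ir_stereo"], # surface type: echo response first
}

def obtain_additional_info(label: str, sensors: dict) -> dict:
    """Query the second sensors in priority order for the recognized label
    and return the first valid reading as the object's additional information."""
    for sensor_name in SENSOR_PRIORITY.get(label, []):
        reading = sensors[sensor_name].detect()
        if reading is not None:
            return {"sensor": sensor_name, "reading": reading}
    return {}  # no second sensor produced additional information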
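Claim 7 distinguishes commands aimed at an area relating to a specific object from commands aimed at a specific place; combined with the place recognition of claim 2, resolving such a command against the semantic map might look like this hypothetical helper (the data shapes are assumptions):

```python
def resolve_target(command: str, objects: dict, places: dict) -> set:
    """Return the set of map cells a user command refers to.

    `objects` maps object names (claim 6) to areas; `places` maps place
    names obtained per claim 2 to areas. Both shapes are assumptions.
    """
    for name, area in {**objects, **places}.items():
        if name in command:  # e.g. "clean in front of the TV" / "clean the kitchen"
            return area
    return set()             # unknown target: caller may fall back to a full clean
```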
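Claims 4 and 12 allow the trained model to reside either in the robot's own memory or on an external server. One way to sketch that split, with an assumed HTTP endpoint and response format:

```python
import json
import urllib.request

def recognize(image_bytes: bytes, local_model=None,
              server_url: str | None = None) -> str:
    """Run recognition on-device if a local model is loaded; otherwise
    delegate to an external server (endpoint and payload are hypothetical)."""
    if local_model is not None:
        # Model held in the cleaning robot's memory (first option of claim 4).
        return local_model.predict(image_bytes)
    # External-server option: POST the raw image and parse the returned label.
    request = urllib.request.Request(
        server_url,
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["label"]
```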
Description
[Technical Field]
Devices and methods consistent with what is disclosed herein relate to a cleaning robot and a task method thereof, and more particularly, to a cleaning robot that performs an appropriate task using information on objects (e.g., obstacles) near the cleaning robot, and a controlling method thereof.
[Background Art]
With the development of robot technology, robots have come into common use in homes as well as in specialized technical fields and industries requiring a significant workforce. In particular, service robots that provide housekeeping services to users, cleaning robots, pet robots, etc. have been widely adopted. For a cleaning robot, it is especially important to identify detailed information on nearby objects such as foreign substances, structures, and obstacles, and to perform a task suitable for each object. However, a conventional cleaning robot is limited in obtaining detailed information on an object because of its limited combination of sensors. In other words, a conventional cleaning robot has no information about what kind of object it encounters; it simply avoids every object in the same pattern, relying solely on the detection capability of its sensors. Accordingly, it is necessary to identify an object near the cleaning robot, determine a task suitable for that object which the cleaning robot can perform, and drive the cleaning robot, or avoid the object, more effectively. US 2018/200884 A1 discloses robots that inherit knowledge from their predecessors, with cleaning robots given as an example.
[Disclosure of Invention]
[Solution to Problem]
An aspect of the exemplary embodiments relates to providing a cleaning robot that performs a task suitable for a peripheral object using a plurality of sensors of the cleaning robot, and a controlling method thereof. According to an exemplary embodiment, there is provided a method for performing a task of a cleaning robot as defined in claim 1. According to an exemplary embodiment, there is provided a cleaning robot as defined in claim 9. According to the above-described various exemplary embodiments, a cleaning robot may perform the most suitable task, such as removing or avoiding one or more objects, in consideration of the recognition information and/or additional information of an object (e.g., a nearby object). In addition, a cleaning robot may provide a semantic map indicating the environment of a task area. A user may then control a task of the cleaning robot using the names of objects or places on the provided semantic map, so that usability may be significantly improved.
[Brief Description of Drawings]
- FIG. 1 is a view to explain a method of recognizing and detecting (or sensing) an object or an obstacle of a cleaning robot according to an embodiment of the disclosure;
- FIG. 2A and FIG. 2B are block diagrams to explain a configuration of a cleaning robot according to an embodiment of the disclosure;
- FIG. 3 is a detailed block diagram to explain a configuration of a cleaning robot according to an embodiment of the disclosure;
- FIG. 4A, FIG. 4B, FIG. 4C, and FIG. 4D are views to explain that a cleaning robot obtains additional information on an object based on a detection result through an InfraRed (IR) stereo sensor according to an embodiment of the disclosure;
- FIG. 5A, FIG. 5B, and FIG. 5C are views to explain that a cleaning robot obtains additional information on an object based on a detection result through a LIDAR sensor;
- FIG. 6A and FIG. 6B are views to explain that a cleaning robot obtains additional information on an object based on a detection result through an ultrasonic sensor;
- FIG. 7A, FIG. 7B, and FIG. 7C are views illustrating that a cleaning robot recognizes the structure of a house;
- FIG. 8A and FIG. 8B are views to explain that a cleaning robot generates a semantic map based on the structure of a house and additional information on an object according to an embodiment of the disclosure;
- FIG. 9A and FIG. 9B are views to explain that a cleaning robot informs a user of a dangerous material on the floor;
- FIG. 10 is a view to explain that a cleaning robot designates an area not to be cleaned according to an embodiment of the disclosure;
- FIG. 11A and FIG. 11B are block diagrams illustrating a training module and a recognition module according to various embodiments of the disclosure;
- FIG. 12 is a view illustrating an example in which a cleaning robot and a server operate in association with each other to train and recognize data;
- FIG. 13 is a flowchart to explain a network system using a recognition model according to an embodiment of the disclosure;
- FIG. 14 is a flowchart to explain an example in which a cleaning robot provides a search result for a first area using a recognition model according to an embodiment of the disclosure