EP-4735967-A1 - ASCERTAINING THE SPATIAL LOCATION OF TARGET OBJECTS FOR AUTONOMOUS ROBOTS
Abstract
The invention relates to a method for navigating an autonomous mobile robot (1) to a target object (8) or for detecting the target object (8) by means of the autonomous mobile robot (1) in a surrounding area (2) within a technical facility using sensors, having the steps of: a) generating a three-dimensional map of the surrounding area (2) by means of the autonomous mobile robot (1), wherein the autonomous mobile robot (1) also ascertains its own position in the surrounding area (2) in the process, b) detecting the surrounding area (2) using image-capturing means (3) of the autonomous mobile robot (1) and transmitting currently detected image information to a remote human operator (6), c) specifying the target object (8) by means of the remote human operator (6) using the image information, d) ascertaining the distance between the target object (8) and the autonomous mobile robot (1) using a time-of-flight method, e) ascertaining the position of the target object (8) in the surrounding area (2) while taking into consideration at least the ascertained distance and the position of the autonomous mobile robot (1) in the surrounding area (2), and f) navigating the autonomous mobile robot (1) to the target object (8) or detecting the target object (8) by means of the autonomous mobile robot (1) using sensors on the basis of the previously ascertained position of the target object (8).
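Step d) of the abstract rests on the time-of-flight principle: the distance follows from the round-trip time of an emitted light pulse. A minimal sketch of that relation, with illustrative names not taken from the patent:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second, in vacuum

def tof_distance(round_trip_seconds):
    """Distance from the round-trip time of a light pulse.

    The pulse travels to the target and back, hence the factor 1/2.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```

For example, a round-trip time of about 6.67 nanoseconds corresponds to a target roughly one metre away.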
Inventors
- BIERWEILER, THOMAS
- KIRCHER, Yannick
- MUSSLER, Marvin
Assignees
- Siemens Aktiengesellschaft
Dates
- Publication Date
- 20260506
- Application Date
- 20230905
Claims (14)
- 1. A method for navigating an autonomous, mobile robot (1) to a target object (8) or for sensory detection of the target object (8) by the autonomous, mobile robot (1) in an environment (2) within a technical system, comprising: a) creating a three-dimensional map of the environment (2) by the autonomous, mobile robot (1), wherein the autonomous, mobile robot (1) also determines its own position in the environment (2), b) capturing the environment (2) by image capture means (3) of the autonomous, mobile robot (1) and transmitting currently captured image information to a remote human operator (6), c) specifying the target object (8) by the remote human operator (6) with the aid of the image information, in that a laser beam (7) emitted by the autonomous, mobile robot (1) by means of a laser (4) is directed onto the target object (8) based on specifications from the remote human operator (6), d) determining a distance between the target object (8) and the autonomous, mobile robot (1) by means of a time-of-flight method, e) determining a position of the target object (8) in the environment (2) taking into account the determined distance, an orientation of the laser (4) and a position of the autonomous, mobile robot (1) in the environment (2), f) navigating the autonomous, mobile robot (1) to the target object (8) or sensory detection of the target object (8) by sensor means of the autonomous, mobile robot (1) on the basis of the previously determined position of the target object (8).
- 2. A method for navigating an autonomous, mobile robot (1) to a target object (8) or for sensory detection of the target object (8) by the autonomous, mobile robot (1) in an environment (2) within a technical installation, comprising: a) creating a three-dimensional map of the environment (2) by the autonomous, mobile robot (1), wherein the autonomous, mobile robot (1) also determines its own position in the environment (2), b) capturing the environment (2) by depth camera means (3) of the autonomous, mobile robot (1) and transmitting currently captured image information to a remote human operator (6), c) specifying the target object (8) by the remote human operator (6) with the aid of the image information, in that the remote human operator (6) informs the autonomous, mobile robot (1) at which position in the image information currently captured by the depth camera means (3) the target object (8) is located, d) determining a distance between the target object (8) and the autonomous, mobile robot (1) by means of a time-of-flight method of the depth camera means (3), e) determining a position of the target object (8) in the environment taking into account the determined distance, the specifications of the depth camera means (3) and a position of the autonomous, mobile robot (1) in the environment (2), f) navigating the autonomous, mobile robot (1) to the target object (8) or sensory detection of the target object (8) by sensor means of the autonomous, mobile robot (1) on the basis of the previously determined position of the target object (8).
- 3. The method according to claim 1 or 2, wherein the autonomous mobile robot (1) creates the three-dimensional map by means of scanning the environment (2) based on emitted radiation and detection of the radiation reflected by the environment (2).
- 4. The method of claim 3, wherein at least a portion of the radiation has a radiation frequency in the visible spectrum.
- 5. The method according to claim 3 or 4, excluding claim 2, wherein the autonomous mobile robot (1) creates the three-dimensional map by means of a laser scanner method.
- 6. The method according to claim 1 or any one of claims 3 to 5, excluding claim 2, wherein the autonomous mobile robot (1) can change the orientation of the laser (4) independently of the orientation of the entire autonomous mobile robot (1).
- 7. The method according to claim 1 or any one of claims 3 to 5, excluding claim 2, wherein the orientation of the laser (4) is changed by the autonomous mobile robot (1) changing its entire orientation.
- 8. Autonomous, mobile robot (1) for use in a technical system, which is designed: - to create a three-dimensional map of an environment (2) of the autonomous mobile robot (1) and to determine its own position in the environment (2); - to capture the environment (2) by image capture means (3) and to transmit currently captured image information to a remote human operator (6); - to receive a target object (8) from the remote human operator (6) for navigation or for sensory detection by directing a laser beam (7) emitted by the autonomous mobile robot (1) by means of a laser (4) onto the target object (8), the autonomous mobile robot (1) following instructions from the remote human operator (6); - to determine a distance between the target object (8) and itself by means of a time-of-flight method; - to determine a position of the target object (8) in the environment (2) taking into account the determined distance, an orientation of the laser (4) and a position of the autonomous, mobile robot (1) in the environment (2); - to navigate to the target object (8) on the basis of the previously determined position of the target object (8) or to detect the target object (8) by sensor means of the autonomous, mobile robot (1).
- 9. Autonomous mobile robot (1) which is designed: - to create a three-dimensional map of an environment (2) of the autonomous mobile robot (1) and to determine its own position in the environment (2); - to capture the environment (2) by depth camera means (3) and to transmit currently captured image information to a remote human operator (6); - to receive a target object (8) from the remote human operator (6) for navigation or for sensory detection, in that the remote human operator (6) informs the autonomous mobile robot (1) at which position the target object (8) is located in the image information currently captured by the depth camera means (3); - to determine a distance between the target object (8) and itself using a time-of-flight method; - to determine a position of the target object (8) in the environment (2) taking into account the determined distance, the specifications of the depth camera means (3) and a position of the autonomous, mobile robot (1) in the environment (2); - to navigate to the target object (8) on the basis of the previously determined position of the target object (8) or to detect the target object (8) by sensor means of the autonomous, mobile robot (1).
- 10. Autonomous mobile robot (1) according to claim 8 or 9, which is designed to create the three-dimensional map by means of scanning the environment (2) based on emitted radiation and detection of the radiation reflected from the environment (2).
- 11. Autonomous mobile robot (1) according to claim 10, wherein at least a portion of the radiation has a radiation frequency in the visible spectrum.
- 12. Autonomous mobile robot (1) according to claim 10 or 11, excluding claim 9, which is designed to create the three-dimensional map by means of a laser scanner method.
- 13. Autonomous mobile robot (1) according to claim 8 or any one of claims 10 to 12, excluding claim 9, which is designed to change the orientation of the laser (4) independently of an orientation of the entire autonomous mobile robot (1).
- 14. Autonomous mobile robot (1) according to claim 8 or any one of claims 10 to 12, excluding claim 9, which is configured to change the orientation of the laser (4) by changing the orientation of the entire autonomous mobile robot (1).
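In the depth-camera variant (claims 2 and 9), the operator marks a pixel in the transmitted image and the depth measured at that pixel is back-projected into a 3D point in the camera frame. A minimal sketch under a pinhole-camera assumption; the function name and the intrinsic parameters (focal lengths fx, fy in pixels, principal point cx, cy) are illustrative and not taken from the patent:

```python
def pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project an image pixel (u, v) with a measured depth (metres)
    into a 3D point in the camera frame, using a pinhole camera model.

    fx, fy: focal lengths in pixels; cx, cy: principal point.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

A pixel at the principal point maps straight ahead of the camera: `pixel_to_point(320, 240, 2.0, 500.0, 500.0, 320.0, 240.0)` yields the point (0.0, 0.0, 2.0). The resulting camera-frame point would still have to be transformed into the map frame using the robot's pose, as in step e).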
Description
Determination of the spatial position of target objects for autonomous robots

The invention relates to a method for navigating an autonomous robot to a target object in an environment within a technical system. The invention also relates to an autonomous robot for use in a technical system.

To use functions such as capturing images of a specific object on an autonomous mobile robot (AMR), the first step is to scan the respective application area, i.e. the underlying environment. For this purpose, a 3D map is created from laser scans. During operation, the autonomous mobile robot continuously captures additional environmental scans and compares them with the existing 3D map of the entire environment. This provides the autonomous mobile robot with information about its relative position within the environment. In a final step, specific objects necessary for mission completion must be inserted into the 3D environment. Depending on their location in space, these objects are not always freely accessible or are difficult to reach.

It is known that an autonomous mobile robot uses tags to determine the position of objects in space when creating a mission. These tags must be manually attached to the object by a human, which is a complex process. The tag is then scanned by the autonomous mobile robot, which thereby determines the distance to the object. From the autonomous mobile robot's own position and orientation and from the direction and orientation of its camera, for example, the position of the tag in the room can then be determined.

DE 10 2005 014 146 A1 discloses a target object detection system for detecting a target object in a surrounding area of a detector using an identifier provided on the target object.

The invention is based on the object of providing an efficient method for navigating an autonomous, mobile robot to a target object, and a correspondingly designed autonomous, mobile robot which can navigate particularly easily to the specified target object.
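The localization step described above, comparing a current environmental scan against the existing 3D map, can be illustrated with a deliberately simplified 2D scan-matching sketch; real systems use far more efficient matchers, and all names here are illustrative, not from the patent:

```python
import math

def score_pose(scan, map_points, pose):
    """Mean squared distance from scan points, transformed into the map
    frame by the candidate pose (x, y, yaw), to their nearest map points.
    Lower is better; a brute-force toy stand-in for real scan matching."""
    x, y, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    total = 0.0
    for px, py in scan:
        gx, gy = x + c * px - s * py, y + s * px + c * py
        total += min((gx - mx) ** 2 + (gy - my) ** 2
                     for mx, my in map_points)
    return total / len(scan)

def localize(scan, map_points, candidate_poses):
    """Pick the candidate pose whose transformed scan best overlays the map."""
    return min(candidate_poses, key=lambda p: score_pose(scan, map_points, p))
```

With a scan that exactly matches part of the map, the true pose scores zero and is selected from the candidate set.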
This object is achieved by a method for navigating an autonomous, mobile robot to a target object in an environment within a technical installation according to claim 1 and by a method for navigating an autonomous, mobile robot to a target object in an environment within a technical installation according to claim 2. Furthermore, the object is achieved by an autonomous, mobile robot for use in a technical installation according to claim 8 and by an autonomous, mobile robot for use in a technical installation according to claim 9. Advantageous further developments emerge from the dependent claims.

A method according to the invention comprises the following method steps: a) creating a three-dimensional map of the environment by the autonomous, mobile robot, wherein the autonomous, mobile robot also determines its own position in the environment, b) capturing the environment by image capture means of the autonomous, mobile robot and transmitting currently captured image information to a remote human operator, c) specifying the target object by the remote operator with the aid of the image information, in that a laser beam emitted by the autonomous mobile robot by means of a laser is directed onto the target object based on instructions from the operator, d) determining a distance between the target object and the autonomous mobile robot using a time-of-flight method, e) determining a position of the target object in the environment taking into account the determined distance, an orientation of the laser and a position of the autonomous mobile robot in the environment, f) navigating the autonomous mobile robot to the target object or sensory detection of the target object by sensor means of the autonomous mobile robot on the basis of the previously determined position of the target object.

An autonomous, mobile robot is defined here as a robot that can move independently within its environment within a technical system.
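Steps d) and e) of the laser variant amount to projecting the time-of-flight distance along the laser's beam direction from the robot's known pose. A minimal geometric sketch, assuming the laser's pan angle is measured relative to the robot's heading and its tilt relative to the horizontal; all names are illustrative, not from the patent:

```python
import math

def target_position(robot_x, robot_y, robot_z, robot_yaw,
                    laser_pan, laser_tilt, distance):
    """Map-frame position of the target: project the measured
    time-of-flight distance along the laser's beam direction.

    Angles are in radians; laser_pan is relative to the robot's
    heading and laser_tilt to the horizontal (assumptions here).
    """
    heading = robot_yaw + laser_pan           # absolute bearing of the beam
    horizontal = distance * math.cos(laser_tilt)
    return (robot_x + horizontal * math.cos(heading),
            robot_y + horizontal * math.sin(heading),
            robot_z + distance * math.sin(laser_tilt))
```

For a robot at the origin facing along the x-axis, with the laser pointing straight ahead and level, a measured distance of 5 m places the target at (5, 0, 0) in the map frame.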
Furthermore, the robot may have devices that enable it to perform actions such as grasping, cutting, stirring, or welding independently. As explained above, a key task that such an autonomous, mobile robot (hereinafter referred to as "robot") is to perform is navigating to a specific target object within the technical system. To do this, the robot must know both its own position and the position of the target object within its environment. In a first step, the robot creates a three-dimensional map of the environment in which it is located; in doing so, the robot determines its own position within the environment. The robot can create the three-dimensional map by scanning the environment based on emitted radiation and detecting the radiation reflected by the environment. For this purpose, the robot has a suitable radiation source that emits radiation into the robot's surroundings. Using corresponding reflections from objects such as machines, pipes, or walls in the environment, which are detected by a detector on the robot, the robot can c