KR-20260066085-A - Driving robot and its control method

KR 20260066085 A

Abstract

The present disclosure relates to a driving robot capable of automatically changing from a driving path-based driving method to an intuitive AI-based driving method depending on the driving environment, and a control method thereof. The robot comprises a sensor unit for acquiring surrounding environment information, an input unit for receiving map data, a driving motor unit for moving the robot, and a control unit for determining a driving method based on the surrounding environment information and the map data and controlling the driving motor unit. The control unit checks the robot's position based on the surrounding environment information and the map data, and determines a driving method based on whether the robot's position is confirmed. If the determined driving method is an AI-based driving method resulting from a failure to confirm the robot's position, the control unit inputs the direction information of the destination, the distance information to the destination, and the surrounding environment information into a pre-trained neural network model to determine the optimal driving direction, and controls the driving motor unit so that the robot moves in the determined driving direction.
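As a rough orientation only (not part of the patent text), the driving-method selection described in the abstract can be sketched in Python. All function names are hypothetical, and the stub policy merely stands in for the pre-trained neural network model:

```python
def choose_driving_method(position_confirmed):
    # Path-based driving when localization succeeds; AI-based otherwise.
    return "path_based" if position_confirmed else "ai_based"

def ai_driving_direction(goal_bearing_deg, goal_distance_m, surroundings, policy):
    # The pre-trained model receives the destination's direction and distance
    # plus surrounding-environment information and returns a driving direction.
    return policy([goal_bearing_deg, goal_distance_m, *surroundings])

# Hypothetical stand-in for the pre-trained neural network model:
toy_policy = lambda features: features[0]  # simply head toward the goal bearing

print(choose_driving_method(False))                             # ai_based
print(ai_driving_direction(30.0, 5.0, [1.2, 0.8], toy_policy))  # 30.0
```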

Inventors

  • 최원수
  • 황금성
  • 남희진
  • 우성호

Assignees

  • 주식회사 베어로보틱스코리아 (Bear Robotics Korea Co., Ltd.)

Dates

Publication Date
2026-05-12
Application Date
2023-09-19

Claims (15)

  1. A driving robot comprising: a sensor unit for acquiring surrounding environment information; an input unit for receiving map data; a driving motor unit for moving the robot; and a control unit for determining a driving method based on the surrounding environment information and the map data and controlling the driving motor unit, wherein the control unit checks the robot's position based on the surrounding environment information and the map data, determines the driving method based on whether the robot's position is confirmed, and, if the determined driving method is an AI-based driving method resulting from a failure to confirm the robot's position, inputs direction information of the destination, distance information to the destination, and the surrounding environment information into a pre-trained neural network model to determine an optimal driving direction, and controls the driving motor unit so that the robot moves in the determined driving direction.
  2. The driving robot of claim 1, wherein, when determining the robot's position, the control unit acquires the surrounding environment information and the map data, and matches the map data with the surrounding environment information to determine the robot's current position.
  3. The driving robot of claim 1, wherein, in determining the driving method, the control unit selects a driving path-based first driving method if the robot's position is successfully verified, and selects an artificial intelligence-based second driving method if the robot's position is not verified.
  4. The driving robot of claim 1, wherein, if the determined driving method is the driving path-based driving method resulting from successful verification of the robot's position, the control unit generates a driving path based on the robot's position information, the map data, destination position information, and the surrounding environment information, and controls the driving motor unit so that the robot moves along the generated driving path.
  5. The driving robot of claim 1, wherein, when determining the optimal driving direction, the control unit determines a first driving direction based on the direction information of the destination and the distance information to the destination, checks for the presence of an obstacle located in the first driving direction based on the surrounding environment information, and changes the first driving direction to a second driving direction if an obstacle exists in the first driving direction.
  6. The driving robot of claim 5, wherein, when checking for the presence of an obstacle located in the first driving direction, the control unit checks whether the obstacle is a corridor wall if an obstacle exists in the first driving direction, and recognizes the direction along the corridor wall as the destination direction if the obstacle is a corridor wall.
  7. The driving robot of claim 6, wherein, when checking whether the obstacle is a corridor wall, the control unit recognizes a straight line detected in the first driving direction based on the surrounding environment information as the obstacle, checks whether the detected straight line is longer than a set length, and recognizes the obstacle as a corridor wall if the detected straight line is longer than the set length.
  8. The driving robot of claim 1, wherein the neural network model is pre-trained by a reinforcement learning method that determines a driving direction based on the direction information of the destination, the distance information to the destination, and the surrounding environment information, and receives a reward value or a penalty value according to the determined driving direction.
  9. The driving robot of claim 8, wherein the neural network model is pre-trained by a reinforcement learning method that additionally receives noise information from a noise generator, determines a driving direction based on the noise information, the direction information of the destination, the distance information to the destination, and the surrounding environment information, and receives a reward value or a penalty value according to the determined driving direction.
  10. The driving robot of claim 8, wherein the neural network model is pre-trained by a reinforcement learning method that additionally receives a driving direction indicator, determines a driving direction based on the driving direction indicator, the direction information of the destination, the distance information to the destination, and the surrounding environment information, and receives a reward value or a penalty value according to the determined driving direction.
  11. The driving robot of claim 8, wherein the neural network model is pre-trained by a reinforcement learning method in which, when the surrounding environment information is input, the surrounding environment information is pre-processed to be horizontally flipped, the horizontally flipped surrounding environment information is periodically learned, a driving direction is determined based on the horizontally flipped surrounding environment information, the direction information of the destination, the distance information to the destination, and the surrounding environment information, and a reward value or a penalty value is received according to the determined driving direction.
  12. The driving robot of claim 1, wherein the control unit includes a noise generator that generates noise information, and the control unit acquires the noise information from the noise generator and inputs the noise information, result value information including a reward or penalty according to reinforcement learning, the direction information of the destination, the distance information to the destination, and the surrounding environment information into the neural network model, thereby determining the driving direction while reinforcement-learning the neural network model.
  13. The driving robot of claim 1, wherein the control unit includes: a noise generator that generates noise information; and a driving direction indicator generator that generates driving direction indicators including a straight direction indicator, a right-turn direction indicator, and a left-turn direction indicator for pre-training the neural network model for rotation, and the control unit acquires the noise information from the noise generator, acquires a driving direction indicator from the driving direction indicator generator, and inputs the noise information, the driving direction indicator, result value information including a reward or penalty according to reinforcement learning, the direction information of the destination, the distance information to the destination, and the surrounding environment information into the neural network model, thereby determining the driving direction by reinforcement learning.
  14. The driving robot of claim 1, wherein the control unit includes: a noise generator that generates noise information; a driving direction indicator generator that generates driving direction indicators including a straight direction indicator, a right-turn direction indicator, and a left-turn direction indicator for pre-training the neural network model for rotation; a replay buffer unit that collects and stores surrounding sensor information of the robot used for training the neural network model; a preprocessing unit that extracts the surrounding sensor information from the replay buffer unit and preprocesses it so that it is flipped left and right; and a learning unit that inputs the left-right flipped surrounding sensor information into the neural network model for pre-training, and the control unit acquires the noise information from the noise generator, acquires a driving direction indicator from the driving direction indicator generator, and, when a learning result value is acquired by the learning unit, inputs the noise information, the driving direction indicator, the left-right flipped surrounding sensor information, result value information including a reward or penalty according to reinforcement learning, the direction information of the destination, the distance information to the destination, and the surrounding environment information into the neural network model, thereby determining the driving direction by reinforcement learning.
  15. A control method for a driving robot including a driving motor unit, the method comprising: acquiring surrounding environment information and map data; determining the robot's position based on the surrounding environment information and the map data; determining a driving method based on whether the robot's position is confirmed; if the determined driving method is an AI-based driving method resulting from a failure to verify the robot's position, determining an optimal driving direction by inputting direction information of the destination, distance information to the destination, and the surrounding environment information into a pre-trained neural network model; and controlling the driving motor unit so that the driving robot moves in the determined driving direction.
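For illustration only, the obstacle and corridor-wall logic of claims 5 to 7 can be sketched as a small decision function. The function name, the degree-based direction representation, and the 2.0 m wall-length threshold are assumptions, not values from the patent:

```python
def resolve_driving_direction(first_dir_deg, obstacle_ahead,
                              line_length_m=None, wall_dir_deg=None,
                              second_dir_deg=None, wall_min_length_m=2.0):
    """Sketch of claims 5-7; the 2.0 m wall threshold is illustrative."""
    if not obstacle_ahead:
        return first_dir_deg                # no obstacle: keep the first direction
    if line_length_m is not None and line_length_m > wall_min_length_m:
        # A detected straight line longer than the set length is treated as a
        # corridor wall; the direction along the wall becomes the goal direction.
        return wall_dir_deg
    return second_dir_deg                   # plain obstacle: take a second direction

print(resolve_driving_direction(0.0, False))                                        # 0.0
print(resolve_driving_direction(0.0, True, line_length_m=5.0, wall_dir_deg=90.0))   # 90.0
print(resolve_driving_direction(0.0, True, line_length_m=0.5, second_dir_deg=45.0)) # 45.0
```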

Description

Driving Robot and Its Control Method

The present disclosure relates to a driving robot capable of driving by automatically changing from a driving path-based driving method to an intuitive artificial intelligence-based driving method depending on the driving environment, and a control method thereof.

Generally, a robot is a machine that automatically processes or performs a given task based on its own capabilities, and its applications are typically classified into various fields such as industrial, medical, space, and underwater. Recently, there has been an increasing number of robots capable of communicating or interacting with humans through voice or gestures. These include various types of robots, such as guide robots placed in specific locations to provide users with various information, or home robots installed in homes.

For a robot to drive, it must find its location on a map, there must be a path to the destination location on the map, and the robot must be able to control its motors to follow that path. The robot's position on the map can be determined through a Simultaneous Localization and Mapping (SLAM) function that compares the robot's sensor information, including camera images, lidar data, and distances to obstacles around the robot, with map information. In addition, the process of generating a path to a destination is divided into a Global Path, which generates a path from the robot's position on the map to the destination position, and a Local Path, which generates a path to avoid obstacles around the robot. The robot controls its motors through a motion controller to follow the finally generated Local Path.

However, if the robot fails to recognize its own location or to generate the Global and Local Paths, it cannot determine a route to the destination, so it stops for safety reasons until it receives assistance from an outsider.
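For orientation only, the Global Path stage described above can be illustrated with a minimal breadth-first search over an occupancy grid. The grid representation and the search algorithm are assumptions for this sketch, not the patent's method:

```python
from collections import deque

def global_path(grid, start, goal):
    """Shortest route over an occupancy grid (0 = free, 1 = obstacle).
    Illustrative stand-in for the Global Path stage; returns None if no route."""
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in seen):
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None  # no route: a path-generation failure as described above

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = global_path(grid, (0, 0), (2, 0))
print(path)  # detours around the wall in the middle row
```

A Local Path planner would then refine each step of this route against live sensor data to avoid obstacles not present on the map.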
In this case, services such as robot delivery and guidance are suspended indefinitely until external assistance is available, or the robot must be moved to another location and restarted to re-recognize its position. Therefore, there is a need for driving robots capable of autonomous driving using an intuitive artificial intelligence method, even in driving environments where robot position recognition and path generation fail.

FIG. 1 is a drawing for explaining a driving robot according to one embodiment of the present disclosure. FIG. 2 is a diagram illustrating a control unit that performs a driving path-based driving method of a driving robot according to one embodiment of the present disclosure. FIG. 3 is a diagram illustrating a control unit that performs an artificial intelligence-based driving method of a driving robot according to one embodiment of the present disclosure. FIGS. 4 and 5 are drawings for explaining the conditions for determining the driving method of a driving robot according to one embodiment of the present disclosure. FIG. 6 is a drawing for explaining a hybrid driving method of a driving robot according to one embodiment of the present disclosure. FIGS. 7 to 10 are drawings for explaining the neural network model learning process of a driving robot according to one embodiment of the present disclosure. FIGS. 11 to 13 are drawings for explaining corridor driving of a driving robot according to one embodiment of the present disclosure. FIG. 14 is a flowchart illustrating the control process of a driving robot according to one embodiment of the present disclosure.

Hereinafter, embodiments disclosed in this specification will be described in detail with reference to the attached drawings. Identical or similar components, regardless of drawing symbols, are assigned the same reference numerals, and redundant descriptions thereof will be omitted.
The suffixes "module" and "part" used for components in the following description are assigned or used interchangeably solely for the ease of drafting the specification and do not inherently possess distinct meanings or roles. Furthermore, in describing embodiments disclosed in this specification, if it is determined that a detailed description of related prior art could obscure the essence of the embodiments disclosed in this specification, such detailed description will be omitted. Additionally, the attached drawings are intended only to facilitate understanding of the embodiments disclosed in this specification; the technical concept disclosed in this specification is not limited by the attached drawings, and it should be understood that they include all modifications, equivalents, and substitutions that fall within the concept and technical scope of this disclosure. Terms including ordinal numbers, such as first, second, etc., may be used to describe various components, but said components are not limited by said terms. These terms are used solely for the purpose of distingui