US-12620130-B2 - Locating method and apparatus for robot, and storage medium
Abstract
A locating method and apparatus for a robot, and a computer-readable storage medium. The locating method includes: determining current possible pose information of the robot according to current ranging data collected by a ranging unit; determining, according to first current image data collected by an image collection unit, first historical image data matching with the first current image data, the first historical image data being collected by the image collection unit at a historical moment; obtaining first historical pose information of the robot at a moment when the first historical image data is collected; and in response to quantity of the current possible pose information being at least two pieces, matching the first historical pose information with each piece of the current possible pose information, and using matched current possible pose information as current target pose information.
Inventors
- Han Luo
- Jingying CAO
- Ronglei TONG
- Lina Cao
- Jun Chen
- Jiayao MA
- Shuangshuang Wang
Assignees
- BEIJING ROBOROCK INNOVATION TECHNOLOGY CO., LTD.
Dates
- Publication Date: 2026-05-05
- Application Date: 2021-04-08
- Priority Date: 2020-09-01
Claims (20)
- 1 . A locating method for a robot, wherein the robot is equipped with an image collection unit and a ranging unit, the method comprising: determining, according to current ranging data collected by the ranging unit and current orientation information of the robot, current possible pose information of the robot; determining, according to first current image data collected by the image collection unit, first historical image data that matches with the first current image data, wherein the first historical image data is collected by the image collection unit at a historical moment; obtaining first historical pose information of the robot at a moment when the first historical image data is collected; and matching, in response to quantity of the current possible pose information being at least two pieces, the first historical pose information with each piece of the current possible pose information, and using matched current possible pose information as current target pose information.
- 2 . The method according to claim 1 , further comprising: determining, according to the current target pose information, a current location of the robot.
- 3 . The method according to claim 1 , further comprising: collecting, by the image collection unit, second current image data when the matched current possible pose information is more than one piece; determining, according to the second current image data, second historical image data that matches with the second current image data, wherein the second historical image data is collected by the image collection unit at a historical moment; obtaining second historical pose information of the robot at a moment when the second historical image data is collected; and performing, according to the second historical pose information, further screening on the more than one piece of matched current possible pose information.
- 4 . The method according to claim 3 , further comprising: performing, by using a hill-climbing algorithm, screening on the more than one piece of matched current possible pose information, until one piece of the current possible pose information is selected as the target pose information.
- 5 . The method according to claim 3 , wherein in response to duration for screening exceeding predetermined duration when performing, according to the second historical pose information, screening on each piece of the current possible pose information, using current possible pose information in the more than one piece of matched current possible pose information, having best match with the second historical pose information, as the current target pose information.
- 6 . The method according to claim 1 , further comprising: checking the current target pose information by: obtaining, according to the current ranging data, a current local environment map, and determining, in response to matching between the current local environment map and a three-dimensional environment map constructed by using the first historical image data and the first historical pose information, that the current target pose information is accurate.
- 7 . The method according to claim 1 , further comprising: collecting the current orientation information of the robot by using a sensor device.
- 8 . The method according to claim 1 , wherein matching between the first current image data and the first historical image data comprises at least one of: similarity between the first current image data and the first historical image data exceeding a first threshold; and both the first current image data and the first historical image data comprising one or more same photographed objects.
- 9 . The method according to claim 1 , wherein obtaining the first historical pose information of the robot according to a mapping relationship between image data and pose information.
- 10 . The method according to claim 1 , wherein similarity between each piece of the matched current possible pose information and the first historical pose information exceeds a second threshold.
- 11 . A robot, equipped with an image collection unit and a ranging unit, comprising: a processor; and a memory configured to store executable instructions for the processor; wherein the processor is configured to: determine, according to current ranging data collected by the ranging unit and current orientation information of the robot, current possible pose information of the robot; determine, according to first current image data collected by the image collection unit, first historical image data that matches with the first current image data, wherein the first historical image data is collected by the image collection unit at a historical moment; obtain first historical pose information of the robot at a moment when the first historical image data is collected; and match, in response to quantity of the current possible pose information being at least two pieces, the first historical pose information with each piece of the current possible pose information, and use matched current possible pose information as current target pose information.
- 12 . The robot according to claim 11 , wherein the processor is further configured to: determine, according to the current target pose information, a current location of the robot.
- 13 . The robot according to claim 11 , wherein the processor is further configured to: collect, by the image collection unit, second current image data when the matched current possible pose information is more than one piece; determine, according to the second current image data, second historical image data that matches with the second current image data, wherein the second historical image data is collected by the image collection unit at a historical moment; obtain second historical pose information of the robot at a moment when the second historical image data is collected; and perform, according to the second historical pose information, further screening on the more than one piece of matched current possible pose information.
- 14 . The robot according to claim 13 , wherein the processor is further configured to: perform, by using a hill-climbing algorithm, screening on the more than one piece of matched current possible pose information, until one piece of the current possible pose information is selected as the target pose information.
- 15 . The robot according to claim 13 , wherein the processor is further configured to: in response to duration for screening exceeding predetermined duration when performing, according to the second historical pose information, screening on each piece of the current possible pose information, use current possible pose information in the more than one piece of matched current possible pose information, having best match with the second historical pose information, as the current target pose information.
- 16 . The robot according to claim 11 , wherein the processor is further configured to: check the current target pose information by: obtain, according to the current ranging data, a current local environment map, and determine, in response to matching between the current local environment map and a three-dimensional environment map constructed by using the first historical image data and the first historical pose information, that the current target pose information is accurate.
- 17 . The robot according to claim 11 , wherein the processor is further configured to: collect the current orientation information of the robot by using a sensor device.
- 18 . The robot according to claim 11 , wherein matching between the first current image data and the first historical image data comprises at least one of: similarity between the first current image data and the first historical image data exceeding a first threshold; and both the first current image data and the first historical image data comprising one or more same photographed objects.
- 19 . The robot according to claim 11 , wherein the processor is further configured to: obtain the first historical pose information of the robot according to a mapping relationship between image data and pose information.
- 20 . A non-transitory computer-readable storage medium having computer instructions stored thereon, which when executed by a processor, cause the processor to be configured to: determine, according to current ranging data collected by the ranging unit and current orientation information of the robot, current possible pose information of the robot; determine, according to first current image data collected by the image collection unit, first historical image data that matches with the first current image data, wherein the first historical image data is collected by the image collection unit at a historical moment; obtain first historical pose information of the robot at a moment when the first historical image data is collected; and match, in response to quantity of the current possible pose information being at least two pieces, the first historical pose information with each piece of the current possible pose information, and use matched current possible pose information as current target pose information.
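Claims 3 to 5 above describe narrowing down more than one matched candidate pose by collecting further image data and screening with a hill-climbing algorithm, falling back to the best match so far when a time budget is exceeded. The following is a minimal sketch of that screening loop, not the patented implementation; the `score_fn` callback (standing in for matching a candidate pose against the pose recorded with newly collected image data), the candidate-halving step, and the one-second budget are all illustrative assumptions.

```python
import time

def screen_candidates(candidates, score_fn, max_seconds=1.0):
    """Iteratively narrow multiple matched candidate poses (cf. claims 3-5).

    score_fn(pose) rates how well a candidate agrees with the pose
    recorded for newly matched historical image data; higher is better.
    If the time budget runs out (cf. claim 5), the best-scoring
    candidate seen so far is returned.
    """
    deadline = time.monotonic() + max_seconds
    while len(candidates) > 1:
        # Rank candidates from best to worst match.
        scored = sorted(candidates, key=score_fn, reverse=True)
        if time.monotonic() >= deadline:
            return scored[0]  # timeout: best match so far wins
        # Hill-climbing style step: keep the better half and re-screen.
        candidates = scored[:max(1, len(scored) // 2)]
    return candidates[0]
```

In practice each screening round would be driven by a fresh image collection rather than a fixed score function, but the control flow (repeat until one candidate remains, with a timeout fallback) is the part the claims describe.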
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

The present disclosure is the U.S. national phase application of International Application No. PCT/CN2021/085932, filed on Apr. 8, 2021, which claims priority to Chinese Patent Application No. 202010906553.9, filed on Sep. 1, 2020, the contents of which are incorporated herein by reference in their entireties for all purposes.

TECHNICAL FIELD

The present disclosure relates to the field of robot technology, and in particular, to a locating method and apparatus for a robot, and a storage medium.

BACKGROUND

With the development of technology, a variety of robots with an autonomous moving function have emerged, such as automatic cleaning devices like automatic sweeping robots and automatic mopping robots. An automatic cleaning device can perform a cleaning operation autonomously by actively sensing the surrounding environment. For example, in the related art, simultaneous localization and mapping (SLAM) is used to construct a map of the environment to be cleaned, and the cleaning operation is performed according to the constructed map.

SUMMARY

The present disclosure provides a locating method and apparatus for a robot, and a computer-readable storage medium.
According to one aspect of embodiments of the present disclosure, a locating method for a robot is provided; the robot is equipped with an image collection unit and a ranging unit, the method including: determining, according to current ranging data collected by the ranging unit and current orientation information of the robot, current possible pose information of the robot; determining, according to first current image data collected by the image collection unit, first historical image data that matches with the first current image data, wherein the first historical image data is collected by the image collection unit at a historical moment; obtaining first historical pose information of the robot at a moment when the first historical image data is collected; and matching, in response to quantity of the current possible pose information being at least two pieces, the first historical pose information with each piece of the current possible pose information, and using matched current possible pose information as current target pose information. 
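The core of the method aspect above is resolving ambiguity: when the ranging data yields at least two possible poses, each candidate is matched against the historical pose associated with the matched historical image, and the consistent candidate is taken as the target pose. A minimal sketch of that matching step is below; the pose representation (x, y, heading) and the position and angle tolerances are illustrative assumptions, not values specified in the disclosure.

```python
import math

def select_target_pose(candidate_poses, historical_pose,
                       pos_tol=0.3, angle_tol=math.radians(15)):
    """Keep the candidate pose(s) consistent with the historical pose.

    Each pose is (x, y, theta). pos_tol (meters) and angle_tol
    (radians) are illustrative thresholds standing in for the
    similarity criterion of the disclosure.
    """
    hx, hy, htheta = historical_pose
    matched = []
    for x, y, theta in candidate_poses:
        dist = math.hypot(x - hx, y - hy)
        # Wrap the heading difference into [-pi, pi] before comparing.
        dtheta = abs((theta - htheta + math.pi) % (2 * math.pi) - math.pi)
        if dist <= pos_tol and dtheta <= angle_tol:
            matched.append((x, y, theta))
    return matched
```

If exactly one candidate survives, it is the current target pose; if several survive, further screening (e.g. with second image data, as in the dependent claims) would be applied.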
According to another aspect of embodiments of the present disclosure, a locating apparatus for a robot is provided; the robot is equipped with an image collection unit and a ranging unit; the locating apparatus including: an image data determination unit configured to determine, according to current image data collected by the image collection unit, historical image data that matches with the current image data, wherein the historical image data is collected by the image collection unit at a historical moment; a pose obtaining unit configured to obtain historical pose information of the robot at a moment when the historical image data is collected; a pose determination unit configured to determine, according to current ranging data collected by the ranging unit and current orientation information of the robot, current possible pose information of the robot; a determination unit configured to determine quantity of the current possible pose information; and a matching unit configured to: match the historical pose information with each piece of the current possible pose information, and use matched current possible pose information as current target pose information.

According to another aspect of embodiments of the present disclosure, a robot is provided; the robot is equipped with an image collection unit and a ranging unit; the robot including: a processor; and a memory configured to store executable instructions for the processor; and the processor executes the executable instructions to implement the method according to any of the foregoing embodiments.

According to another aspect of embodiments of the present disclosure, a computer-readable storage medium having computer instructions stored thereon is provided, which when executed by a processor, cause the processor to implement the method according to any of the foregoing embodiments.
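The image data determination unit and pose obtaining unit described above amount to retrieving, from stored history, the frame most similar to the current image and reading back the pose recorded with it (the mapping relationship between image data and pose information of claim 9, with the similarity threshold of claim 8). A hedged sketch follows; the descriptor representation, the `similarity` callback, and the 0.8 threshold are illustrative assumptions.

```python
def retrieve_historical_pose(current_descriptor, history,
                             similarity, threshold=0.8):
    """Find the best-matching historical frame and return its pose.

    history maps a frame id to (descriptor, pose), modelling the
    stored mapping between image data and pose information.
    similarity is any image-similarity measure in [0, 1]; the 0.8
    threshold stands in for the disclosure's "first threshold".
    Returns None when no historical frame matches well enough.
    """
    best_id, best_score = None, threshold
    for frame_id, (descriptor, _pose) in history.items():
        score = similarity(current_descriptor, descriptor)
        if score >= best_score:
            best_id, best_score = frame_id, score
    return None if best_id is None else history[best_id][1]
```

The returned historical pose is what the matching unit then compares against each candidate pose from the ranging data.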
It should be understood that the previous general description and the following detailed description are merely exemplary and explanatory, and are not intended to limit the present disclosure.

BRIEF DESCRIPTION OF DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of the present specification, illustrate embodiments consistent with the present disclosure and serve, together with the specification, to explain principles of the present disclosure.

FIG. 1 is a schematic diagram when a hijacking event occurs on a robot according to an example embodiment.

FIG. 2 is a flowchart of a locating method for a robot according to an example embodiment.

FIGS. 3A and 3B are a flowchart of a locating method for a robot according to another example embodiment.

FIG. 4 is a block diagram of a locating apparatus for a robo