
US-12625502-B2 - Patrol inspection method, device and computer-readable storage medium

US 12625502 B2

Abstract

A patrol inspection method includes: determining that an offset exists between a position of a to-be-inspected point and a position of a predetermined inspection point; obtaining a simulated three-dimensional (3D) object model by scanning a surrounding environment of a robot at the to-be-inspected point; comparing the simulated 3D object model and a pre-stored 3D object model to obtain adjustment information, the pre-stored 3D object model being 3D object information obtained by scanning the surrounding environment of the robot at the predetermined inspection point; based on the adjustment information, adjusting a pose of the robot; and capturing a two-dimensional (2D) image of an inspection target by the robot after adjustment.
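The flow in the abstract amounts to one corrective loop per inspection stop. The sketch below is illustrative only: the `Pose` container, the robot driver methods (`position_offset`, `scan_environment`, `adjust_pose`, `capture_image`), and the reduction of the 3D-model comparison to a simple pose difference are assumptions made for the sketch, not APIs or algorithms defined in the patent.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Planar pose: position in meters, heading in radians (assumed layout)."""
    x: float
    y: float
    heading: float

def inspect_point(robot, reference_pose, threshold=0.05):
    """One inspection stop, following the flow in the abstract:
    detect offset -> scan -> compare -> adjust -> capture."""
    # 1. Check whether the reached point deviates from the planned one.
    if robot.position_offset() > threshold:
        # 2. Scan the surroundings; here the "simulated 3D object model"
        #    is collapsed to the pose recovered from the scan.
        scanned_pose = robot.scan_environment()
        # 3. Compare with the model recorded at the predetermined inspection
        #    point to obtain a differential rotation and a horizontal shift.
        rotation = reference_pose.heading - scanned_pose.heading
        shift = (reference_pose.x - scanned_pose.x,
                 reference_pose.y - scanned_pose.y)
        # 4. Correct the robot's pose with the derived adjustment.
        robot.adjust_pose(rotation, shift)
    # 5. Photograph the inspection target from the (corrected) pose.
    return robot.capture_image()
```

In the patented method the comparison step operates on full 3D object models built from markers in the environment; collapsing it to a pose difference here only keeps the control flow visible without committing to a particular registration algorithm.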

Inventors

  • Shuting HUANG
  • Fan Yang
  • Zepei FAN

Assignees

  • LENOVO (BEIJING) LIMITED

Dates

Publication Date
2026-05-12
Application Date
2023-03-08
Priority Date
2022-03-29

Claims (18)

  1. A patrol inspection method, implemented at a robot, comprising: moving to a to-be-inspected point according to a simultaneous location and mapping (SLAM) map, the SLAM map including a predetermined movement track of the robot and position information corresponding to a plurality of predetermined inspection points on the predetermined movement track, the to-be-inspected point corresponding to a predetermined inspection point of the plurality of predetermined inspection points; determining that an offset exists between a position of the to-be-inspected point and a position of the predetermined inspection point; obtaining a simulated three-dimensional (3D) object model by scanning a surrounding environment of the robot at the to-be-inspected point, the scanning the surrounding environment of the robot including scanning at least one marker in the surrounding environment, wherein the simulated 3D object model is obtained based at least in part on the at least one marker; comparing the simulated 3D object model and a pre-stored 3D object model to obtain adjustment information, the pre-stored 3D object model being 3D object information obtained by scanning the surrounding environment of the robot at the predetermined inspection point; adjusting a pose of the robot based on the adjustment information, such that the robot moves to an adjusted to-be-inspected point after the adjustment of the pose of the robot; and capturing a first two-dimensional (2D) image of an inspection target by the robot after the adjustment of the pose of the robot.
  2. The patrol inspection method according to claim 1, wherein determining that the offset exists between the position of the to-be-inspected point and the position of the predetermined inspection point comprises: after the robot moves to the to-be-inspected point according to the SLAM map, obtaining a photographing pose of the predetermined inspection point corresponding to the to-be-inspected point, and a 2D image model of the inspection target corresponding to the predetermined inspection point; obtaining a second 2D image of the inspection target captured by the robot with the photographing pose at the to-be-inspected point; using the 2D image model to perform an image matching on the obtained second 2D image to obtain a first match result; and when the first match result indicates that an image offset between the 2D image model and the obtained second 2D image satisfies a predetermined threshold, determining that the offset exists between the position of the to-be-inspected point and the position of the predetermined inspection point.
  3. The patrol inspection method according to claim 2, further comprising: using the 2D image model to perform the image matching on the first 2D image captured by the robot after the adjustment of the pose of the robot to obtain a second match result; when the second match result indicates that a first image offset between the 2D image model and the obtained first 2D image satisfies the predetermined threshold, determining that the offset exists between the position of the adjusted to-be-inspected point and the position of the predetermined inspection point; and when the second match result indicates that the first image offset between the 2D image model and the obtained first 2D image does not satisfy the predetermined threshold, determining that no offset exists between the position of the adjusted to-be-inspected point and the position of the predetermined inspection point.
  4. The patrol inspection method according to claim 1, wherein the pre-stored 3D object model is obtained by: obtaining a 3D image model by scanning the surrounding environment of the robot at each of the plurality of predetermined inspection points on the SLAM map; at each of the plurality of predetermined inspection points, associating coordinates of the respective predetermined inspection point according to the SLAM map with the respective obtained 3D image model to obtain a respective registered 3D image model, such that a plurality of registered 3D image models are obtained for the plurality of predetermined inspection points; selecting a first registered 3D image model for the predetermined inspection point corresponding to the to-be-inspected point from the plurality of registered 3D image models based on the SLAM map; and determining the selected first registered 3D image model as the pre-stored 3D object model.
  5. The patrol inspection method according to claim 1, wherein comparing the simulated 3D object model and the pre-stored 3D object model to obtain the adjustment information comprises: comparing the simulated 3D object model and the pre-stored 3D object model to obtain a differential rotation parameter and a horizontal shift parameter; and determining the differential rotation parameter and the horizontal shift parameter as the adjustment information of the robot.
  6. The patrol inspection method according to claim 1, further comprising: obtaining a respective third 2D image of a respective inspection target captured by the robot at each of the plurality of predetermined inspection points and a respective photographing pose corresponding to the respective predetermined inspection point; based on the respective third 2D image, obtaining a respective 2D image model corresponding to each of the plurality of predetermined inspection points; and storing the respective 2D image model and the respective corresponding photographing pose along with a relationship thereof in a database.
  7. A patrol inspection device, implemented at a robot, comprising: a memory storing computer instructions; and a processor coupled to the memory; wherein when being executed by the processor, the computer instructions cause the processor to: cause the robot to move to a to-be-inspected point according to a simultaneous location and mapping (SLAM) map, the SLAM map including a predetermined movement track of the robot and position information corresponding to a plurality of predetermined inspection points on the predetermined movement track, the to-be-inspected point corresponding to a predetermined inspection point of the plurality of predetermined inspection points; determine that an offset exists between a position of the to-be-inspected point and a position of the predetermined inspection point; obtain a simulated three-dimensional (3D) object model by scanning a surrounding environment of the robot at the to-be-inspected point, the scanning the surrounding environment of the robot including scanning at least one marker in the surrounding environment, wherein the simulated 3D object model is obtained based at least in part on the at least one marker; compare the simulated 3D object model and a pre-stored 3D object model to obtain adjustment information, the pre-stored 3D object model being 3D object information obtained by scanning the surrounding environment of the robot at the predetermined inspection point; based on the adjustment information, adjust a pose of the robot, such that the robot moves to an adjusted to-be-inspected point after the adjustment of the pose of the robot; and capture a first two-dimensional (2D) image of an inspection target by the robot after the adjustment of the pose of the robot.
  8. The patrol inspection device according to claim 7, wherein when determining that the offset exists between the position of the to-be-inspected point and the position of the predetermined inspection point, the processor is further configured to: after the robot moves to the to-be-inspected point according to the SLAM map, obtain a photographing pose of the predetermined inspection point corresponding to the to-be-inspected point, and a 2D image model of the inspection target corresponding to the predetermined inspection point; obtain a second 2D image of the inspection target captured by the robot with the photographing pose at the to-be-inspected point; use the 2D image model to perform an image matching on the obtained second 2D image to obtain a first match result; and when the first match result indicates that an image offset between the 2D image model and the obtained second 2D image satisfies a predetermined threshold, determine that the offset exists between the position of the to-be-inspected point and the position of the predetermined inspection point.
  9. The patrol inspection device according to claim 8, wherein the processor is further configured to: use the 2D image model to perform the image matching on the first 2D image captured by the robot after the adjustment of the pose of the robot to obtain a second match result; when the second match result indicates that a first image offset between the 2D image model and the obtained first 2D image satisfies the predetermined threshold, determine that the offset exists between the position of the adjusted to-be-inspected point and the position of the predetermined inspection point; and when the second match result indicates that the first image offset between the 2D image model and the obtained first 2D image does not satisfy the predetermined threshold, determine that no offset exists between the position of the adjusted to-be-inspected point and the position of the predetermined inspection point.
  10. The patrol inspection device according to claim 7, wherein the pre-stored 3D object model is obtained by: obtaining a respective 3D image model by scanning the surrounding environment of the robot at each of the plurality of predetermined inspection points on the SLAM map; at each of the plurality of predetermined inspection points, associating coordinates of the respective predetermined inspection point according to the SLAM map with the respective obtained 3D image model to obtain a respective registered 3D image model, such that a plurality of registered 3D image models are obtained for the plurality of predetermined inspection points; selecting a first registered 3D image model for the predetermined inspection point corresponding to the to-be-inspected point from the plurality of registered 3D image models based on the SLAM map; and determining the selected first registered 3D image model as the pre-stored 3D object model.
  11. The patrol inspection device according to claim 7, wherein when comparing the simulated 3D object model and the pre-stored 3D object model to obtain the adjustment information, the processor is further configured to: compare the simulated 3D object model and the pre-stored 3D object model to obtain a differential rotation parameter and a horizontal shift parameter; and determine the differential rotation parameter and the horizontal shift parameter as the adjustment information of the robot.
  12. The patrol inspection device according to claim 7, wherein the processor is further configured to: obtain a respective third 2D image of a respective inspection target captured by the robot at each of the plurality of predetermined inspection points and a respective photographing pose corresponding to the respective predetermined inspection point; based on the respective third 2D image, obtain a respective 2D image model corresponding to each of the plurality of predetermined inspection points; and store the respective 2D image model and the respective corresponding photographing pose along with a relationship thereof in a database.
  13. A non-transitory computer-readable storage medium storing computer instructions, when being executed by a processor, the computer instructions causing the processor to: cause a robot to move to a to-be-inspected point according to a simultaneous location and mapping (SLAM) map, the SLAM map including a predetermined movement track of the robot and position information corresponding to a plurality of predetermined inspection points on the predetermined movement track, the to-be-inspected point corresponding to a predetermined inspection point of the plurality of predetermined inspection points; determine that an offset exists between a position of the to-be-inspected point and a position of the predetermined inspection point; obtain a simulated three-dimensional (3D) object model by scanning a surrounding environment of the robot at the to-be-inspected point, the scanning the surrounding environment of the robot including scanning at least one marker in the surrounding environment, wherein the simulated 3D object model is obtained based at least in part on the at least one marker; compare the simulated 3D object model and a pre-stored 3D object model to obtain adjustment information, the pre-stored 3D object model being 3D object information obtained by scanning the surrounding environment of the robot at the predetermined inspection point; based on the adjustment information, adjust a pose of the robot, such that the robot moves to an adjusted to-be-inspected point after the adjustment of the pose of the robot; and capture a first two-dimensional (2D) image of an inspection target by the robot after the adjustment of the pose of the robot.
  14. The non-transitory computer-readable storage medium according to claim 13, wherein when determining that the offset exists between the position of the to-be-inspected point and the position of the predetermined inspection point, the processor is further configured to: after the robot moves to the to-be-inspected point according to the SLAM map, obtain a photographing pose of the predetermined inspection point corresponding to the to-be-inspected point, and a 2D image model of the inspection target corresponding to the predetermined inspection point; obtain a second 2D image of the inspection target captured by the robot with the photographing pose at the to-be-inspected point; use the 2D image model to perform an image matching on the obtained second 2D image to obtain a first match result; and when the first match result indicates that an image offset between the 2D image model and the obtained second 2D image satisfies a predetermined threshold, determine that the offset exists between the position of the to-be-inspected point and the position of the predetermined inspection point.
  15. The non-transitory computer-readable storage medium according to claim 14, wherein the processor is further configured to: use the 2D image model to perform the image matching on the first 2D image captured by the robot after the adjustment of the pose of the robot to obtain a second match result; when the second match result indicates that a first image offset between the 2D image model and the obtained first 2D image satisfies the predetermined threshold, determine that the offset exists between the position of the adjusted to-be-inspected point and the position of the predetermined inspection point; and when the second match result indicates that the first image offset between the 2D image model and the obtained first 2D image does not satisfy the predetermined threshold, determine that no offset exists between the position of the adjusted to-be-inspected point and the position of the predetermined inspection point.
  16. The non-transitory computer-readable storage medium according to claim 13, wherein the pre-stored 3D object model is obtained by: obtaining a 3D image model by scanning the surrounding environment of the robot at each of the plurality of predetermined inspection points on the SLAM map; at each of the plurality of predetermined inspection points, associating coordinates of the respective predetermined inspection point according to the SLAM map with the respective obtained 3D image model to obtain a respective registered 3D image model, such that a plurality of registered 3D image models are obtained for the plurality of predetermined inspection points; selecting a first registered 3D image model for the predetermined inspection point corresponding to the to-be-inspected point from the plurality of registered 3D image models based on the SLAM map; and determining the selected first registered 3D image model as the pre-stored 3D object model.
  17. The non-transitory computer-readable storage medium according to claim 13, wherein when comparing the simulated 3D object model and the pre-stored 3D object model to obtain the adjustment information, the processor is further configured to: compare the simulated 3D object model and the pre-stored 3D object model to obtain a differential rotation parameter and a horizontal shift parameter; and determine the differential rotation parameter and the horizontal shift parameter as the adjustment information of the robot.
  18. The non-transitory computer-readable storage medium according to claim 13, wherein the processor is further configured to: obtain a respective third 2D image of a respective inspection target captured by the robot at each of the plurality of predetermined inspection points and a respective photographing pose corresponding to the respective predetermined inspection point; based on the respective third 2D image, obtain a respective 2D image model corresponding to each of the plurality of predetermined inspection points; and store the respective 2D image model and the respective corresponding photographing pose along with a relationship thereof in a database.
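The offset test in claims 2, 8, and 14 — match a freshly captured 2D image against the stored 2D image model and compare the resulting displacement with a predetermined threshold — can be sketched with a brute-force template match. The sum-of-squared-differences search and the pixel-shift threshold below are stand-ins chosen for illustration; the claims do not name a particular matching algorithm.

```python
def best_match_offset(image, template):
    """Locate `template` inside `image` (both 2D lists of gray values) by
    exhaustive sum-of-squared-differences (SSD) search; return the (dx, dy)
    column/row position of the best match."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best = None
    for dy in range(ih - th + 1):
        for dx in range(iw - tw + 1):
            # SSD between the template and the image window at (dx, dy).
            ssd = sum(
                (image[dy + r][dx + c] - template[r][c]) ** 2
                for r in range(th) for c in range(tw)
            )
            if best is None or ssd < best[0]:
                best = (ssd, dx, dy)
    return best[1], best[2]

def offset_exists(image, template, expected=(0, 0), max_shift=1):
    """True when the matched position drifts more than `max_shift` pixels
    from where the stored 2D image model says the target should appear."""
    dx, dy = best_match_offset(image, template)
    return abs(dx - expected[0]) > max_shift or abs(dy - expected[1]) > max_shift
```

A deployed system would typically use a more robust matcher (normalized cross-correlation or feature matching) rather than raw SSD, which is sensitive to lighting changes between the stored model and the new capture.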
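Claims 5, 11, and 17 reduce the model comparison to two outputs: a differential rotation parameter and a horizontal shift parameter. Assuming corresponding landmark points (for instance, the scanned markers of claim 1) can be extracted from the pre-stored and simulated models, a closed-form planar Procrustes/Kabsch fit recovers exactly those two quantities. The fit below is an illustrative stand-in, not the algorithm claimed in the patent.

```python
import math

def estimate_adjustment(pre_stored, simulated):
    """Estimate the differential rotation (radians) and horizontal shift
    (x, y) that map the simulated scan onto the pre-stored model.

    Both inputs are lists of corresponding (x, y) points. Requires known
    point correspondences; a minimal 2D Kabsch-style least-squares fit."""
    n = len(pre_stored)
    # Centroids of the two point sets.
    cx_a = sum(p[0] for p in pre_stored) / n
    cy_a = sum(p[1] for p in pre_stored) / n
    cx_b = sum(p[0] for p in simulated) / n
    cy_b = sum(p[1] for p in simulated) / n
    # Cross-covariance terms between the centered point sets.
    sxx = sxy = syx = syy = 0.0
    for (ax, ay), (bx, by) in zip(pre_stored, simulated):
        ax, ay = ax - cx_a, ay - cy_a
        bx, by = bx - cx_b, by - cy_b
        sxx += bx * ax
        sxy += bx * ay
        syx += by * ax
        syy += by * ay
    # Optimal rotation aligning the simulated set with the pre-stored set.
    theta = math.atan2(sxy - syx, sxx + syy)
    # Shift that moves the rotated simulated centroid onto the pre-stored one.
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    shift_x = cx_a - (cos_t * cx_b - sin_t * cy_b)
    shift_y = cy_a - (sin_t * cx_b + cos_t * cy_b)
    return theta, (shift_x, shift_y)
```

For raw, unordered scan points one would instead use an iterative method such as ICP, since the closed form above assumes the correspondences are already known.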

Description

CROSS-REFERENCE TO RELATED APPLICATION

This application claims priority to Chinese Patent Application No. 202210319828.8, filed on Mar. 29, 2022, the entire content of which is incorporated herein by reference.

TECHNICAL FIELD

The present disclosure relates to the field of image processing technologies and, more particularly, to a patrol inspection method, a patrol inspection device, and a computer-readable storage medium.

BACKGROUND

In a patrol inspection task, a simultaneous location and mapping (SLAM) algorithm may be used to determine the location of an inspection robot after an inspection route is formulated. Photos of inspection targets may be taken at predetermined inspection locations to determine whether any of the inspection targets appears abnormal. However, due to factors such as weather and environmental changes, the locations determined by SLAM may deviate, preventing the inspection robot from stopping at the predetermined inspection locations. The inspection targets may then fall outside the shooting range of the inspection robot, resulting in false detections. Existing remedies include improving SLAM positioning accuracy or adding markers in the inspection region; due to resource constraints, these approaches may be infeasible or have little effect.

SUMMARY

One aspect of the present disclosure provides a patrol inspection method. The patrol inspection method includes: determining that an offset exists between a position of a to-be-inspected point and a position of a predetermined inspection point; obtaining a simulated three-dimensional (3D) object model by scanning a surrounding environment of a robot at the to-be-inspected point; comparing the simulated 3D object model and a pre-stored 3D object model to obtain adjustment information, the pre-stored 3D object model being 3D object information obtained by scanning the surrounding environment of the robot at the predetermined inspection point; based on the adjustment information, adjusting a pose of the robot; and capturing a two-dimensional (2D) image of an inspection target by the robot after adjustment.

Another aspect of the present disclosure provides a patrol inspection device. The patrol inspection device includes a memory storing computer instructions and a processor coupled to the memory. When being executed by the processor, the computer instructions cause the processor to: determine that an offset exists between a position of a to-be-inspected point and a position of a predetermined inspection point; obtain a simulated three-dimensional (3D) object model by scanning a surrounding environment of a robot at the to-be-inspected point; compare the simulated 3D object model and a pre-stored 3D object model to obtain adjustment information, the pre-stored 3D object model being 3D object information obtained by scanning the surrounding environment of the robot at the predetermined inspection point; based on the adjustment information, adjust a pose of the robot; and capture a two-dimensional (2D) image of an inspection target by the robot after adjustment.

Another aspect of the present disclosure provides a computer-readable storage medium storing computer instructions. When being executed by a processor, the computer instructions cause the processor to: determine that an offset exists between a position of a to-be-inspected point and a position of a predetermined inspection point; obtain a simulated three-dimensional (3D) object model by scanning a surrounding environment of a robot at the to-be-inspected point; compare the simulated 3D object model and a pre-stored 3D object model to obtain adjustment information, the pre-stored 3D object model being 3D object information obtained by scanning the surrounding environment of the robot at the predetermined inspection point; based on the adjustment information, adjust a pose of the robot; and capture a two-dimensional (2D) image of an inspection target by the robot after adjustment.

BRIEF DESCRIPTION OF THE DRAWINGS

To more clearly illustrate the technical solution of the present disclosure, the accompanying drawings used in the description of the disclosed embodiments are briefly described below. The drawings described below are merely some embodiments of the present disclosure. Other drawings may be derived from such drawings by a person of ordinary skill in the art without creative efforts and may be encompassed in the present disclosure.

FIG. 1 is a schematic diagram showing an inspection robot taking photos of inspection targets along a predetermined inspection route according to some embodiments of the present disclosure;

FIG. 1A is a schematic diagram of an image of an inspection target when SLAM determines robot positions correctly;

FIG. 1B is a schematic diagram of an image of another inspection target when SLAM determines robot positions incorrectly;

FIG. 2 is a flowchart of an exemplary patrol inspection method according to some embodiments of the