
CN-116009543-B - Method for extricating a trapped robot

CN116009543B

Abstract

The application discloses a method for extricating a trapped robot, and relates to the technical field of smart household appliances. The method is applied to a robot equipped with a laser sensor that detects distances to obstacles around the robot. The method comprises: determining position information of a plurality of effective obstacles based on point cloud information detected by the laser sensor; determining a first distance between every two adjacent effective obstacles according to the position information; screening out, from the plurality of first distances, the first distances exceeding a preset escape size, and marking the two adjacent effective obstacles corresponding to each screened first distance as a designated obstacle combination; and generating an escape path for the robot according to the robot's current position information and the position information of the escape area between the two adjacent effective obstacles in the designated obstacle combination. The application thereby improves working efficiency and intelligence, and improves the user experience.

Inventors

  • ZHU ZECHUN
  • TIAN DAJI

Assignees

  • 尚科宁家(中国)科技有限公司 (SharkNinja (China) Technology Co., Ltd.)

Dates

Publication Date
2026-05-05
Application Date
2022-12-29

Claims (9)

  1. A method for extricating a robot from entrapment, the method being applied to a robot equipped with a laser sensor for detecting distances to obstacles around the robot, the method comprising: determining position information of a plurality of effective obstacles based on point cloud information detected by the laser sensor; marking an intermediate area between every two adjacent effective obstacles, and judging whether the intermediate included angle corresponding to the intermediate area exceeds a second preset angle threshold, wherein the intermediate included angle is the angle formed at the robot's current position by the two outermost endpoints of the intermediate area relative to the robot; screening out intermediate areas whose intermediate included angle exceeds the second preset angle threshold as target intermediate areas; for the two adjacent effective obstacles corresponding to each target intermediate area, determining a first distance between the two adjacent effective obstacles according to the position information; screening out, from the plurality of first distances, the first distances exceeding a preset escape size, and marking the two adjacent effective obstacles corresponding to each screened first distance as a designated obstacle combination; and generating an escape path for the robot according to the robot's current position information and the position information of the escape area between the two adjacent effective obstacles in the designated obstacle combination.
  2. The method of claim 1, wherein determining position information of a plurality of effective obstacles based on the point cloud information detected by the laser sensor comprises: determining position information of each obstacle based on the point cloud information; determining the obstacle size and/or the obstacle boundary angle of each obstacle according to the position information of that obstacle, wherein the obstacle boundary angle is the angle formed at the robot's current position by the obstacle's two outermost endpoints relative to the robot; and screening the effective obstacles out of all obstacles based on the obstacle sizes and/or the obstacle boundary angles, and marking the position information of the effective obstacles.
  3. The method of claim 2, wherein screening the effective obstacles out of all obstacles based on the obstacle sizes and/or the obstacle boundary angles comprises: screening out, as effective obstacles, obstacles whose obstacle size exceeds a preset size threshold and/or whose obstacle boundary angle exceeds a first preset angle threshold.
  4. The method of claim 2, wherein determining the obstacle size and/or the obstacle boundary angle of each obstacle based on the position information of each obstacle comprises: determining, for the obstacle's two outermost endpoints relative to the robot, the two endpoint angle values that the endpoints form with the robot's current position in a preset angular coordinate system; judging whether the difference between the two endpoint angle values exceeds 180 degrees; if so, recalculating the obstacle boundary angle based on the difference between the endpoint angle values such that the obstacle boundary angle does not exceed 180 degrees; and if not, taking the difference between the two endpoint angle values as the obstacle boundary angle.
  5. The method of claim 2, wherein determining the position information of each obstacle based on the point cloud information comprises: clustering the point cloud information to obtain a plurality of point cloud clusters, wherein each point cloud cluster indicates the position information of one obstacle.
  6. The method of claim 1, wherein determining a first distance between every two adjacent effective obstacles based on the position information comprises: based on the position information of the effective obstacles, calculating a second distance between the adjacent endpoints of the effective obstacles located on either side of an intermediate area, or calculating the shortest distance between the effective obstacles located on either side of the intermediate area, wherein the second distance or the shortest distance serves as the first distance, and the intermediate area lies between the two adjacent effective obstacles.
  7. The method of claim 1, wherein screening out the first distances exceeding the preset escape size from the plurality of first distances, and marking the two adjacent effective obstacles corresponding to each screened first distance as a designated obstacle combination, comprises: screening the first distances exceeding the preset escape size out of the plurality of first distances; marking the two adjacent effective obstacles corresponding to each such first distance as a candidate obstacle combination, and judging whether the number of candidate obstacle combinations is less than one; and if the number is not less than one, marking the candidate obstacle combination corresponding to the largest first distance as the designated obstacle combination.
  8. The method of claim 1, further comprising, after generating the escape path of the robot: after the robot has escaped, determining a joint motion path between a path adjustment position and the robot's initial motion path based on new point cloud information detected by the laser sensor; and controlling the robot to move to the path adjustment position and then continue along the joint motion path back onto the initial motion path.
  9. The method of claim 8, wherein determining the joint motion path between the path adjustment position and the robot's initial motion path based on the new point cloud information detected by the laser sensor comprises: after the robot has escaped, determining the robot's real-time position and movement direction based on the new point cloud information detected by the laser sensor, wherein the movement direction points away from the designated obstacle combination; determining the path adjustment position of the escaped robot based on the real-time position, the movement direction and a preset movement distance; and determining a tangent line between the initial motion path and the path adjustment position, with the real-time position as the circle center and the path adjustment position as the tangent point, and marking the tangent line as the joint motion path.
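The 180-degree wraparound check described in claim 4 can be illustrated with a small sketch. This is not the patent's implementation; the function name, coordinate convention (angles measured counterclockwise from the positive x-axis and normalized to [0, 360)), and use of Python are illustrative assumptions.

```python
import math

def obstacle_boundary_angle(robot_xy, endpoint_a, endpoint_b):
    """Angle subtended at the robot by an obstacle's two outermost
    endpoints, wrapped so it never exceeds 180 degrees (per claim 4).

    Illustrative sketch: assumes 2-D points and angles normalized
    to the range [0, 360) in a fixed angular coordinate system."""
    ang_a = math.degrees(math.atan2(endpoint_a[1] - robot_xy[1],
                                    endpoint_a[0] - robot_xy[0])) % 360.0
    ang_b = math.degrees(math.atan2(endpoint_b[1] - robot_xy[1],
                                    endpoint_b[0] - robot_xy[0])) % 360.0
    diff = abs(ang_a - ang_b)
    # If the raw difference exceeds 180 degrees, the endpoints straddle
    # the 0/360 seam of the coordinate system; recalculate so the
    # boundary angle stays at most 180 degrees.
    if diff > 180.0:
        diff = 360.0 - diff
    return diff
```

For example, endpoints at (1, 0.1) and (1, -0.1) seen from the origin have raw endpoint angles of roughly 5.7 and 354.3 degrees; the raw difference of about 348.6 degrees exceeds 180 and is recalculated to the true subtended angle of about 11.4 degrees.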

Description

Method for extricating a trapped robot

Technical Field

The application relates to the technical field of smart household appliances, and in particular to a method for extricating a trapped robot.

Background

In the prior art, a mobile robot acquires information about its surroundings and draws an environment map using the distance detection function of a laser sensor, providing data support for real-time positioning and navigation planning. With the rapid development of laser ranging technology, the cost of laser sensors keeps falling. At present, however, low-cost laser sensors generally suffer from low measurement precision, and their low scanning frequency and low angular resolution cause the robot's ranging accuracy to decay rapidly as the measurement distance grows. When a robot fitted with such a low-cost laser sensor becomes trapped in a narrow area or a large enclosed area during actual operation, the approach of calculating an escape angle from measured obstacle distances and then directly judging from that angle whether the robot can escape is inaccurate.

Disclosure of the Invention

The application aims to provide a method for extricating a trapped robot that achieves automatic escape based on data detected by a low-cost laser sensor, improves the intelligence and accuracy of the robot's automatic escape navigation, and improves the robot's working efficiency.
Embodiments of the present application are implemented as follows. A first aspect of the embodiments provides a method for extricating a trapped robot, applied to a robot equipped with a laser sensor for detecting distances to obstacles around the robot. The method comprises: determining position information of a plurality of effective obstacles based on point cloud information detected by the laser sensor; determining a first distance between every two adjacent effective obstacles according to the position information; screening out, from the plurality of first distances, the first distances exceeding a preset escape size, and marking the two adjacent effective obstacles corresponding to each screened first distance as a designated obstacle combination; and generating an escape path for the robot according to the robot's current position information and the position information of the escape area between the two adjacent effective obstacles in the designated obstacle combination. In one embodiment, determining position information of a plurality of effective obstacles based on the point cloud information detected by the laser sensor comprises: determining position information of each obstacle based on the point cloud information; determining the obstacle size and/or the obstacle boundary angle of each obstacle according to its position information, wherein the obstacle boundary angle is the angle formed at the robot's current position by the obstacle's two outermost endpoints relative to the robot; and screening the effective obstacles out of all obstacles based on the obstacle sizes and/or the obstacle boundary angles, and marking the position information of the effective obstacles.
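The overall pipeline described above (cluster the point cloud into obstacles, measure the gap between adjacent obstacles, and keep only gaps wide enough to pass through) can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the greedy distance-based clustering, the `ESCAPE_SIZE` and `gap` values, and the use of the endpoint-to-endpoint ("second") distance as the first distance are all assumptions for demonstration.

```python
import math

ESCAPE_SIZE = 0.40  # preset escape size in metres (assumed value)

def cluster_points(points, gap=0.15):
    """Greedy clustering of an angle-ordered 2-D point cloud: a new
    cluster starts whenever consecutive points are farther apart than
    `gap`. Each cluster stands in for one obstacle. Assumes a
    non-empty, scan-ordered list of (x, y) points."""
    if not points:
        return []
    clusters = [[points[0]]]
    for p, q in zip(points, points[1:]):
        if math.dist(p, q) > gap:
            clusters.append([])
        clusters[-1].append(q)
    return clusters

def first_distances(clusters):
    """Distance between the adjacent endpoints of neighbouring
    obstacles (the "second distance"), used here as the first
    distance between two adjacent effective obstacles."""
    return [math.dist(a[-1], b[0]) for a, b in zip(clusters, clusters[1:])]

def escape_gaps(clusters):
    """Gaps wider than the preset escape size, each reported with the
    midpoint of the escape area between the two obstacles, from which
    an escape path could be generated."""
    gaps = []
    for a, b in zip(clusters, clusters[1:]):
        d = math.dist(a[-1], b[0])
        if d > ESCAPE_SIZE:
            mid = ((a[-1][0] + b[0][0]) / 2, (a[-1][1] + b[0][1]) / 2)
            gaps.append((d, mid))
    return gaps
```

A scan with two tight groups of points separated by a 0.8 m gap would yield two clusters, one first distance of 0.8 m, and one escape gap centered between the clusters.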
In one embodiment, screening the effective obstacles out of all obstacles based on the obstacle sizes and/or the obstacle boundary angles includes selecting as an effective obstacle any obstacle whose size exceeds a preset size threshold and/or whose boundary angle exceeds a first preset angle threshold. In an embodiment, determining the obstacle size and/or the obstacle boundary angle of each obstacle according to its position information comprises: determining the two endpoint angle values formed, in a preset angular coordinate system, by the obstacle's two outermost endpoints relative to the robot and the robot's current position; judging whether the difference between the two endpoint angle values exceeds 180 degrees; if so, recalculating the obstacle boundary angle based on the difference between the endpoint angle values so that it does not exceed 180 degrees; and if not, taking the difference between the two endpoint angle values as the obstacle boundary angle. In one embodiment, determining the position information of each obstacle based on the point cloud information includes clustering the point cloud information to obtain a plurality of point cloud clusters, wherein each point cloud cluster indicates the position information of one obstacle. In an embodiment, before determining the first distance between every two ad