CN-121747080-B - Cleaning robot obstacle recognition method based on visual saliency detection

CN121747080B

Abstract

The invention belongs to the technical field of image processing, and in particular relates to a cleaning robot obstacle recognition method based on visual saliency detection. The method comprises: acquiring a front-view image of the robot, dividing the front-view image into superpixel regions, and calculating the vertical texture correlation length of each region; performing physical-conflict verification between the vertical texture correlation length and a ground perspective scale limit to obtain a perspective mismatch difference; converting the perspective mismatch difference into an obstacle saliency weight by means of a Gaussian attenuation model; and then generating an obstacle avoidance instruction. The invention can distinguish complex ground textures, such as carpets and floor seams, from physical obstacles with vertical height, eliminates the influence of viewing-distance variation on detection stability, and improves the robustness and accuracy of obstacle recognition for cleaning robots.

Inventors

  • WANG FAN
  • LU CUNHAO

Assignees

  • 星逻智能科技(苏州)有限公司

Dates

Publication Date
2026-05-08
Application Date
2026-02-27

Claims (6)

  1. A cleaning robot obstacle recognition method based on visual saliency detection, characterized by comprising: acquiring a front-view image of the cleaning robot, dividing the front-view image into a plurality of superpixel regions, and obtaining the vertical texture correlation length of each superpixel region from the gray-level differences, in the vertical direction, of all pixels in that region, satisfying a formula (not reproduced in the source text) in terms of the vertical texture correlation length and the number of pixels of the i-th superpixel region, the first-order vertical difference of each pixel of the i-th superpixel region, an absolute-value function, and a small constant; acquiring the installation pitch angle and calibration constant of the cleaning robot camera, extracting the geometric-center ordinate of each superpixel region, and calculating the ground perspective scale limit of each superpixel region according to the geometric attenuation law of optical perspective projection combined with the geometric-center ordinate, satisfying a formula (not reproduced in the source text) in terms of the ground perspective scale limit and geometric-center ordinate of the i-th superpixel region, the installation pitch angle of the camera, the camera calibration constant, and a physical roughness reference constant inherent to the carpet material; performing physical-conflict verification between the vertical texture correlation length of each superpixel region and its ground perspective scale limit to obtain a perspective mismatch difference characterizing the degree of texture abnormality, satisfying a formula (not reproduced in the source text) in terms of the perspective mismatch difference of the i-th superpixel region and a maximum function; combining a preset conflict sensitivity factor, converting the perspective mismatch difference into an obstacle saliency weight by means of a Gaussian attenuation model, satisfying a formula (not reproduced in the source text) in terms of the obstacle saliency weight of the i-th superpixel region and the conflict sensitivity factor, the conflict sensitivity factor being a preset value; and constructing an obstacle probability distribution map from the obstacle saliency weights of the superpixel regions, dividing the front-view image, by means of a recognition threshold, into a binary mask comprising suspected-obstacle regions and background regions, performing morphological filtering and connectivity analysis on the binary mask, acquiring the obstacles of the front-view image and the center coordinates of each obstacle, and generating an obstacle avoidance instruction for the cleaning robot according to the center coordinates.
  2. The cleaning robot obstacle recognition method based on visual saliency detection according to claim 1, wherein dividing the front-view image into a plurality of superpixel regions comprises: dividing the front-view image using the SLIC algorithm to obtain the plurality of superpixel regions of the front-view image.
  3. The cleaning robot obstacle recognition method based on visual saliency detection according to claim 1, wherein constructing the obstacle probability distribution map comprises: backfilling the obstacle saliency weight of each superpixel region into all pixels contained in that region to generate an obstacle probability distribution map with the same resolution as the front-view image.
  4. The cleaning robot obstacle recognition method based on visual saliency detection according to claim 1, wherein dividing the front-view image into a binary mask comprising suspected-obstacle regions and background regions using a recognition threshold comprises: marking regions of the obstacle probability distribution map whose pixel values are greater than the recognition threshold as suspected-obstacle regions, and regions whose pixel values are less than or equal to the recognition threshold as background regions, to generate the binary mask.
  5. The cleaning robot obstacle recognition method based on visual saliency detection according to claim 1, wherein acquiring the obstacles of the front-view image and the center coordinates of each obstacle comprises: performing a morphological opening operation on the binary mask to obtain a morphologically processed binary mask; and calculating the bounding rectangle of each obstacle and taking the center coordinate of the bounding rectangle as the center coordinate of the corresponding obstacle.
  6. The cleaning robot obstacle recognition method based on visual saliency detection according to claim 1, wherein generating the obstacle avoidance instruction of the cleaning robot comprises: setting a safety threshold; for the v-th obstacle, converting its center coordinates, using the calibration parameters of the camera, into a distance and an azimuth angle in the robot coordinate system; if the distance is less than the safety threshold, generating a guiding obstacle avoidance instruction according to the azimuth angle: if the azimuth angle indicates that the obstacle is at the front-left, generating a right-turn instruction; if it indicates that the obstacle is at the front-right, generating a left-turn instruction; and if it indicates that the obstacle is directly ahead, generating a backward or in-place rotation instruction; and if the distance is greater than or equal to the safety threshold, maintaining the current cleaning path.
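The per-superpixel computation of claim 1 can be sketched in Python. The patent's formulas are not reproduced in this text, so the concrete forms below are assumptions inferred from the symbol descriptions: a mean absolute first-order vertical gray-level difference for the texture correlation length (with a small constant `eps`, the claim's "minute value"), a perspective limit that decays with the geometric-center ordinate and depends on the pitch angle, calibration constant, and carpet roughness reference, a max(·, 0) mismatch, and one plausible reading of the "Gaussian attenuation model" mapping mismatch to a weight via the conflict sensitivity factor `sigma`. Sign and ordinate conventions are likewise illustrative.

```python
import numpy as np

def vertical_texture_length(region_gray: np.ndarray, eps: float = 1e-6) -> float:
    """Assumed form: mean absolute first-order vertical gray-level
    difference over the superpixel; eps is the small 'minute value'."""
    d = np.abs(np.diff(region_gray.astype(float), axis=0))
    return float(d.sum() / (d.size + eps))

def ground_perspective_limit(y_center: float, pitch_rad: float,
                             calib_const: float, carpet_rough: float) -> float:
    """Assumed form of the ground perspective scale limit: decays with the
    geometric-center ordinate per perspective projection, offset by the
    carpet-material roughness reference constant."""
    return carpet_rough + calib_const * np.cos(pitch_rad) / (y_center + 1.0)

def saliency_weight(texture_len: float, limit: float, sigma: float) -> float:
    """Physical-conflict check: mismatch = max(texture_len - limit, 0), then
    a Gaussian-shaped model (sigma = conflict sensitivity factor) maps the
    mismatch to an obstacle saliency weight in [0, 1)."""
    mismatch = max(texture_len - limit, 0.0)
    return 1.0 - float(np.exp(-(mismatch ** 2) / (2.0 * sigma ** 2)))
```

Under this reading, a region whose observed vertical texture stays within the perspective limit gets weight 0, while a region whose texture is far longer than flat ground could produce at that ordinate gets a weight approaching 1.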

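The decision rule of claim 6 can likewise be sketched. The exact azimuth conditions are formula images not reproduced in this text, so the sign convention (positive azimuth meaning front-left) and the dead-zone width marking "directly ahead" are illustrative assumptions:

```python
def avoidance_instruction(distance: float, azimuth_deg: float,
                          safety_threshold: float,
                          dead_zone_deg: float = 10.0) -> str:
    """Claim 6 decision rule: distance and azimuth are the obstacle's polar
    coordinates in the robot frame (from the camera calibration parameters);
    the positive-means-left convention and the dead zone are assumptions."""
    if distance >= safety_threshold:
        return "keep_path"            # obstacle far enough: keep cleaning path
    if azimuth_deg > dead_zone_deg:   # obstacle at front-left -> turn right
        return "turn_right"
    if azimuth_deg < -dead_zone_deg:  # obstacle at front-right -> turn left
        return "turn_left"
    return "reverse_or_rotate"        # directly ahead: back up or rotate in place
```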
Description

Cleaning robot obstacle recognition method based on visual saliency detection

Technical Field

The invention relates to the technical field of image processing, and more particularly to a cleaning robot obstacle recognition method based on visual saliency detection.

Background

The cleaning robot is an important service device in the smart-home field, and its autonomous cleaning capability depends heavily on the environment sensing and path planning system. With the rapid development of computer vision technology, cleaning robots equipped with monocular or multiple cameras have become the market mainstream; they collect environmental images in front of the machine body and use image processing algorithms to identify and avoid obstacles on the path, ensuring the continuity and safety of the cleaning operation. In an actual home cleaning scenario, the robot's operating environment is typically unstructured and complex. The floor is not only scattered with slippers, pet toys, power cords, and other solid obstacles of a certain height, but is also covered with varied planar backgrounds, such as wooden or tiled floors with geometric patterns, or plush carpets. While traveling, the cleaning robot must quickly and accurately separate actual obstacles from this complex background in order to make correct movement decisions. However, existing visual obstacle recognition techniques often face significant challenges on such complex textured floors. Conventional recognition methods mostly rely on edge detection, texture gradients, or simple color differences to identify obstacles. Under this mechanism, the rich patterns of a carpet or dark floor seams also appear in the image as high-frequency gray-level jumps and dense edge features, which are easily confused with the visual features of physical obstacles. Lacking an effective analysis of the three-dimensional spatial structure of the scene, the prior art has difficulty distinguishing two-dimensional textures on the ground from three-dimensional objects with vertical height. In practical application, the robot is prone to false-obstacle misjudgments, for example treating a normal carpet area as an impassable obstacle area, triggering obstacle avoidance on open ground and degrading the cleaning efficiency and intelligence of the cleaning robot.

Disclosure of Invention

To solve the technical problem that falsely triggered obstacle avoidance degrades the cleaning efficiency and intelligence of the cleaning robot, the invention provides a cleaning robot obstacle recognition method based on visual saliency detection, comprising the following steps: acquiring a front-view image of the cleaning robot, dividing the front-view image into a plurality of superpixel regions, and obtaining the vertical texture correlation length of each superpixel region from the vertical-direction gray-level differences of all pixels in each region; acquiring the installation pitch angle and calibration constant of the cleaning robot camera, extracting the geometric-center ordinate of each superpixel region, and calculating the ground perspective scale limit of each superpixel region; performing physical-conflict verification between the vertical texture correlation length of each superpixel region and its ground perspective scale limit to obtain a perspective mismatch difference characterizing the degree of texture abnormality; converting the perspective mismatch difference into an obstacle saliency weight by means of a Gaussian attenuation model; and constructing an obstacle probability distribution map from the obstacle saliency weights of the superpixel regions, dividing the front-view image, by means of a recognition threshold, into a binary mask comprising suspected-obstacle regions and background regions, performing morphological filtering and connectivity analysis on the binary mask, acquiring the obstacles of the front-view image and the center coordinates of each obstacle, and generating an obstacle avoidance instruction for the cleaning robot according to the center coordinates. By constructing a ground perspective scale limit that varies dynamically with the viewing distance and checking the actually observed vertical texture length against this theoretical limit, the invention distinguishes the patterns of a flat floor from upright obstacles on a physical basis. The invention does not rely on a depth sensor: using only a single camera, it achieves highly robust obstacle recognition and avoidance in complex-texture environments, reducing hardware cost and improving environmental adaptability. Preferably, dividing the front-view image into a plurality of superpixel regions includes: a