
CN-122023878-A - Cleaning effect detection method and device for robot, computer equipment and storage medium

CN122023878A

Abstract

The application relates to a method, a device, computer equipment, and a storage medium for detecting the cleaning effect of a robot. The method comprises: determining the sub-area cleaned at the current moment and acquiring first boundary position information of the cleaned sub-area; acquiring an original image at the current moment and converting it into a first image; cropping, from the first image and according to the first boundary position information, a target sub-image corresponding to the cleaned sub-area; and detecting the target sub-image to obtain a cleaning classification result corresponding to it. Because only the target sub-image cropped from the first image undergoes cleaning-effect detection, the number of features to be detected is reduced and the computing power requirement is lowered, so that low-latency, high-efficiency real-time data processing can be realized on a general-purpose central processing unit without sacrificing detection accuracy.
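The cost argument in the abstract can be made concrete with a minimal sketch: crop only the just-cleaned sub-area out of the full first image, so the classifier processes far fewer pixels. All sizes and helper names below are hypothetical, chosen for illustration; the patent does not fix them.

```python
def crop_sub_image(image, top, left, height, width):
    """Cut a rectangular target sub-image out of a row-major image."""
    return [row[left:left + width] for row in image[top:top + height]]

# A stand-in 480x640 bird's-eye view (grayscale, all zeros).
full = [[0] * 640 for _ in range(480)]

# Suppose the cleaned sub-area maps to a 120x200 rectangle in the image.
sub = crop_sub_image(full, top=300, left=220, height=120, width=200)

full_pixels = len(full) * len(full[0])
sub_pixels = len(sub) * len(sub[0])
print(full_pixels, sub_pixels)  # the crop holds roughly 13x fewer pixels
```

Running the classifier on `sub` instead of `full` is what reduces the number of features to be detected.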

Inventors

  • GAN LEI

Assignees

  • 深圳市普渡科技股份有限公司 (Shenzhen Pudu Technology Co., Ltd.)

Dates

Publication Date
2026-05-12
Application Date
2025-12-30

Claims (14)

  1. A cleaning effect detection method of a robot, the method comprising: determining a cleaned sub-area at the current moment, and acquiring first boundary position information of the cleaned sub-area; acquiring an original image at the current moment, and converting the original image into a first image; cropping a target sub-image corresponding to the cleaned sub-area from the first image according to the first boundary position information; and detecting the target sub-image to obtain a cleaning classification result corresponding to the target sub-image.
  2. The method of claim 1, wherein determining the cleaned sub-area at the current moment comprises: acquiring a cleaning width of a cleaning mechanism and a preset detection length; and determining, as the cleaned sub-area, the region directly behind the cleaning mechanism whose length and width equal the preset detection length and the cleaning width.
  3. The method of claim 1, wherein determining the cleaned sub-area at the current moment comprises: taking, as the cleaned sub-area at the current moment, the area covered by the cleaning path within a preset time window ending at the current moment.
  4. The method according to claim 3, wherein taking the area covered by the cleaning path within the preset time window ending at the current moment as the cleaned sub-area at the current moment comprises: acquiring cleaning path points corresponding to the preset time window ending at the current moment, and obtaining a cleaning path segment based on the cleaning path points; and acquiring the cleaning width of the cleaning mechanism, and determining the cleaned sub-area at the current moment based on the cleaning path segment and the cleaning width.
  5. The method of claim 1, wherein acquiring the original image at the current moment and converting the original image into the first image comprises: acquiring the original image captured by a rear-view camera at the current moment; and converting the original image into a bird's-eye view based on the extrinsic parameters of the rear-view camera, and taking the bird's-eye view as the first image at the current moment.
  6. The method of claim 1, wherein cropping the target sub-image corresponding to the cleaned sub-area from the first image according to the first boundary position information comprises: determining, based on the first boundary position information, second boundary position information corresponding to a target rectangular area containing the cleaned sub-area; and cropping the target sub-image corresponding to the target rectangular area from the first image according to the second boundary position information.
  7. The method of claim 6, wherein determining the second boundary position information corresponding to the target rectangular area containing the cleaned sub-area based on the first boundary position information comprises: determining the circumscribed rectangle of the cleaned sub-area according to the first boundary position information to obtain the target rectangular area; and selecting the coordinates of preset boundary points of the target rectangular area, and taking the coordinates of the preset boundary points as the second boundary position information.
  8. The method of claim 1, wherein detecting the target sub-image to obtain the cleaning classification result corresponding to the target sub-image comprises: inputting the target sub-image into a trained cleaning effect classification model, and outputting the cleaning classification result corresponding to the target sub-image through the cleaning effect classification model, wherein the cleaning classification result is either dirty or not dirty.
  9. The method of claim 1, wherein detecting the target sub-image to obtain the cleaning classification result corresponding to the target sub-image comprises: padding the target sub-image into the minimum rectangular image containing the target sub-image; and inputting the minimum rectangular image into a trained cleaning effect classification model to obtain the cleaning classification result corresponding to the target sub-image.
  10. The method of claim 1, further comprising, after detecting the target sub-image and obtaining the cleaning classification result corresponding to the target sub-image: acquiring the cleaning classification result corresponding to the target sub-image at each historical moment within a preset time window ending at the current moment; and fusing the cleaning classification results corresponding to the target sub-image at the current moment and at each historical moment to obtain the final detection result corresponding to the target sub-image at the current moment.
  11. The method of claim 10, wherein fusing the cleaning classification results corresponding to the target sub-image at the current moment and at each historical moment to obtain the final detection result corresponding to the target sub-image at the current moment comprises: counting the number of occurrences of each class among the cleaning classification results at the current moment and at each historical moment, and taking the class with the largest count as the final detection result corresponding to the target sub-image at the current moment.
  12. A cleaning effect detection device of a robot, the device comprising: a position acquisition module for determining a cleaned sub-area at the current moment and acquiring first boundary position information of the cleaned sub-area; an image conversion module for acquiring an original image at the current moment and converting the original image into a first image; an image cropping module for cropping a target sub-image corresponding to the cleaned sub-area from the first image according to the first boundary position information; and a detection module for detecting the target sub-image to obtain a cleaning classification result corresponding to the target sub-image.
  13. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 11 when executing the computer program.
  14. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program executable to implement the steps of the cleaning effect detection method of a robot according to any one of claims 1 to 11.
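The temporal fusion in claims 10 and 11 is a majority vote over the classification results inside the time window. A minimal sketch of that rule, with hypothetical label strings and function names (the claims do not specify how ties are broken; `Counter.most_common` resolves them by first occurrence):

```python
from collections import Counter

def fuse_results(results):
    """Majority vote over cleaning classification results within the
    time window (current moment plus historical moments): return the
    class with the largest count, per claim 11."""
    return Counter(results).most_common(1)[0][0]

# Results for the current moment and four historical moments (toy data).
window = ["dirty", "clean", "dirty", "dirty", "clean"]
print(fuse_results(window))  # -> "dirty"
```

Fusing over several frames smooths out single-frame misclassifications before the final detection result is reported.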

Description

Cleaning effect detection method and device for robot, computer equipment and storage medium

Technical Field

The invention relates to the field of automatic cleaning, and in particular to a cleaning effect detection method and device for a robot, computer equipment, and a storage medium.

Background

In the field of automatic cleaning, real-time evaluation of the cleaning effect on a working surface is the core link in achieving intelligent closed-loop control. By dynamically sensing cleaning quality, a cleaning device can autonomously adjust its cleaning strategy to improve the cleaning effect. In the prior art, real-time dirt detection is often achieved by deep-learning-based image classification or segmentation. For example, there are methods that identify dirty regions by deep learning, which can learn high-order semantic features of dirt and of the ground background and can significantly improve detection accuracy. However, such methods generally require substantial computing power, rely on dedicated compute units (such as a GPU, NPU, or DSP), and struggle to meet real-time requirements on resource-constrained embedded CPU platforms.

Disclosure of Invention

The application provides a cleaning effect detection method, a cleaning effect detection device, computer equipment, and a storage medium for a robot, which solve the technical problem in the related art that real-time requirements cannot be met with limited computing power when detecting the cleaning effect.
In order to achieve the above purpose, the present application adopts the following technical scheme. A cleaning effect detection method of a robot comprises: determining a cleaned sub-area at the current moment, and acquiring first boundary position information of the cleaned sub-area; acquiring an original image at the current moment, and converting the original image into a first image; cropping a target sub-image corresponding to the cleaned sub-area from the first image according to the first boundary position information; and detecting the target sub-image to obtain a cleaning classification result corresponding to the target sub-image. The application also provides a cleaning effect detection device of a robot, the device comprising: a position acquisition module for determining the cleaned sub-area at the current moment and acquiring the first boundary position information of the cleaned sub-area; an image conversion module for acquiring the original image at the current moment and converting the original image into the first image; an image cropping module for cropping the target sub-image corresponding to the cleaned sub-area from the first image according to the first boundary position information; and a detection module for detecting the target sub-image to obtain the cleaning classification result corresponding to the target sub-image. The application also provides a computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, carries out the steps of the cleaning effect detection method of a robot described above. The present application also provides a computer-readable storage medium storing a computer program executable to implement the steps of the cleaning effect detection method of a robot as described above.
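The cropping step relies on the circumscribed rectangle of claims 6 and 7: the first boundary position information of the cleaned sub-area is reduced to an axis-aligned bounding rectangle, whose corner coordinates serve as the second boundary position information. A minimal sketch under that reading, with hypothetical coordinates and function names:

```python
def circumscribed_rect(boundary_points):
    """Axis-aligned bounding rectangle of the cleaned sub-area
    (the 'circumscribed rectangle' of claim 7), returned as corner
    coordinates usable as second boundary position information."""
    xs = [p[0] for p in boundary_points]
    ys = [p[1] for p in boundary_points]
    return min(xs), min(ys), max(xs), max(ys)

# First boundary position information of a cleaned sub-area shaped per
# claim 2: a detection-length x cleaning-width rectangle behind the
# cleaning mechanism (toy bird's-eye-view coordinates, in meters).
pts = [(2.0, 0.0), (2.0, 0.6), (2.5, 0.6), (2.5, 0.0)]
print(circumscribed_rect(pts))  # -> (2.0, 0.0, 2.5, 0.6)
```

For an axis-aligned cleaned sub-area the rectangle coincides with it; for the path-swept region of claims 3 and 4 the same function yields the enclosing rectangle to crop.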
The beneficial effects are as follows. The method determines the cleaned sub-area at the current moment, acquires the first boundary position information of the cleaned sub-area, acquires the original image at the current moment, converts the original image into the first image, crops the target sub-image corresponding to the cleaned sub-area from the first image according to the first boundary position information, and detects the target sub-image to obtain the cleaning classification result corresponding to the target sub-image. Because only the target sub-image cropped from the first image undergoes cleaning-effect detection, the number of features to be detected is reduced and the computing power requirement is lowered, so that low-latency, high-efficiency real-time data processing can be realized on a general-purpose central processing unit while detection accuracy is preserved.

Drawings

Fig. 1 is a flow chart of a cleaning effect detection method of a robot in one embodiment.
Fig. 2 is a schematic diagram of the cleaned sub-area determination process of a robot in one embodiment.
Fig. 3 is a flow chart of a cleaning effect detection method in one embodiment.
Fig. 4 is a block diagram of a cleaning effect detection device of a robot in one embodiment.
Fig. 5 is a functional block diagram of a device in one embodiment.