EP-4738249-A1 - PROCESSING DEVICE, ROBOT CONTROL DEVICE, ROBOT SYSTEM, AND PROGRAM
Abstract
A processing device includes a detector. The detector detects a partial surface included in a surface of an object based on surface information and color information. The surface information is surface information about the surface of the object. The surface information is obtained based on distance information indicating a distance to the object. The color information is color information of a color image or a grayscale image including the object and an area adjacent to the object.
Inventors
- SASABE AKIHIRO
Assignees
- Kyocera Corporation
Dates
- Publication Date: 2026-05-06
- Application Date: 2024-06-26
Claims (19)
- A processing device, comprising: a detector configured to detect a partial surface included in a surface of an object based on surface information and color information, the surface information being surface information about the surface of the object, the surface information being obtained based on distance information indicating a distance to the object, the color information being color information of a color image or a grayscale image including the object and an area adjacent to the object.
- The processing device according to claim 1, wherein the color information includes first color information of the partial surface and second color information of an area adjacent to the partial surface.
- The processing device according to claim 2, wherein the first color information includes a color tone difference within the partial surface.
- The processing device according to any one of claims 1 to 3, wherein the color information includes a color tone difference of the object.
- The processing device according to any one of claims 1 to 4, wherein the detector includes an estimator configured to obtain the surface information based on the distance information and estimate the partial surface based on the obtained surface information, and a corrector configured to correct, based on the color information, an estimation result obtained by the estimator, to provide the corrected estimation result as a detection result of the partial surface.
- The processing device according to claim 5, wherein the corrector is configured to divide, based on the color information, a region of the object included in the color image or the grayscale image into a plurality of segments having boundaries aligned with edges of the object, and correct the estimation result based on the plurality of segments.
- The processing device according to claim 6, wherein the estimator is configured to generate a mask image of the partial surface as the estimation result, and the corrector is configured to superimpose an outline of each of the plurality of segments on the mask image, calculate an occupancy rate of a region of the partial surface inside the outline of each of the plurality of segments, and reshape the region of the partial surface based on the occupancy rate for each of the plurality of segments.
- The processing device according to claim 7, wherein the corrector is configured to determine whether a region inside the outline of each of the plurality of segments in the mask image corresponds to the partial surface based on the occupancy rate for the segment.
- The processing device according to any one of claims 5 to 8, wherein the estimator is configured to generate, based on a depth image representing the distance information, a point cloud representing the surface of the object as the surface information, define a plurality of candidate surfaces based on the point cloud, estimate, of the plurality of candidate surfaces, a candidate surface in a same plane as the target surface based on a result of comparison between a first threshold and a distance from each of the plurality of candidate surfaces to a point included in the point cloud, and estimate a plurality of points representing the partial surface included in the point cloud based on a result of comparison between a second threshold and a distance from the candidate surface estimated to be in the same plane as the target surface to a point included in the point cloud.
- The processing device according to claim 9, wherein the second threshold is higher than or equal to the first threshold.
- The processing device according to claim 9 or claim 10, wherein the first threshold and the second threshold are based on different setting criteria.
- The processing device according to claim 11, wherein the first threshold is based on a distribution of errors in a distance represented by the depth image.
- The processing device according to claim 11 or claim 12, wherein the second threshold is based on a maximum error in a distance represented by the depth image.
- The processing device according to any one of claims 1 to 13, further comprising: a determiner configured to determine, based on a detection result of the partial surface obtained by the detector, a suction position on the partial surface to be sucked by a suction portion.
- The processing device according to claim 14, further comprising: a controller configured to control a position of the suction portion based on the suction position determined by the determiner.
- A robot control device, comprising: a controller configured to control a robot based on a detection result of the partial surface obtained by the detector included in the processing device according to any one of claims 1 to 13.
- A robotic system, comprising: the robot control device according to claim 16; and a robot controllable by the controller included in the robot control device.
- A program for causing a computer to function as the processing device according to any one of claims 1 to 15.
- A program for causing a computer to function as the robot control device according to claim 16.
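Claims 9 to 13 describe a two-threshold scheme: candidate planes are fitted to a point cloud derived from the depth image, the candidate best supported within a first threshold is taken as the target plane, and the partial-surface points are then gathered within a second, looser threshold. The sketch below is a minimal illustration of that flow, not the patent's implementation; it assumes a pinhole camera model and a precomputed list of candidate planes, and all function names and parameters are hypothetical.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image into a 3-D point cloud (camera frame),
    assuming a pinhole model with focal lengths fx, fy and center cx, cy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

def plane_distances(points, plane):
    """Unsigned distance from each point to a plane (nx, ny, nz, d),
    where (nx, ny, nz) is a unit normal."""
    return np.abs(points @ plane[:3] + plane[3])

def estimate_partial_surface(points, candidate_planes, t1, t2):
    """Choose the candidate plane supported by the most points within the
    first threshold t1, then gather the partial-surface points within the
    second threshold t2 (t2 >= t1, as in claim 10)."""
    support = [np.sum(plane_distances(points, p) < t1) for p in candidate_planes]
    best = candidate_planes[int(np.argmax(support))]
    surface_points = points[plane_distances(points, best) < t2]
    return best, surface_points
```

Per claims 12 and 13, `t1` would be set from the distribution of depth errors and `t2` from the maximum depth error, which is why `t2 >= t1`.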
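Claims 6 to 8 correct the estimator's mask image using color-based segments: the segment outlines are superimposed on the mask, an occupancy rate is computed per segment, and the surface region is reshaped segment by segment. A minimal NumPy sketch of that correction, assuming a precomputed integer label map from a color segmentation; the function name and the 0.5 default threshold are illustrative assumptions, not from the patent:

```python
import numpy as np

def reshape_mask_by_segments(surface_mask, segment_labels, occ_thresh=0.5):
    """Correct an estimated target-surface mask using color-based segments.

    surface_mask   : bool HxW mask image from the estimator (claim 7)
    segment_labels : int  HxW label map from color segmentation (claim 6)

    For each segment, compute the occupancy rate of surface pixels inside
    its outline; segments whose rate meets occ_thresh are judged to belong
    to the partial surface and are filled in whole, the rest are cleared
    (claims 7 and 8)."""
    corrected = np.zeros_like(surface_mask, dtype=bool)
    for label in np.unique(segment_labels):
        seg = segment_labels == label
        occupancy = surface_mask[seg].mean()  # fraction of surface pixels
        if occupancy >= occ_thresh:
            corrected |= seg                  # keep the whole segment
    return corrected
```

Snapping the mask to segments whose boundaries follow object edges is what lets the color information sharpen a surface estimate blurred by depth noise.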
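Claims 14 and 15 determine a suction position on the detected partial surface, and FIG. 16 mentions a distance conversion image. One common way to realize this, sketched below under the assumption that the suction point should have maximum clearance from the surface boundary, is to pick the mask pixel farthest from any non-surface pixel; the brute-force distance computation keeps the sketch dependency-free (a real system would use a distance transform such as scipy.ndimage.distance_transform_edt), and the function name is hypothetical.

```python
import numpy as np

def suction_position(surface_mask):
    """Pick the surface pixel farthest from any non-surface pixel, i.e. the
    interior point with the most clearance for a suction pad.

    Assumes surface_mask is a bool HxW array containing both True and
    False pixels. Brute-force squared Euclidean distances via
    broadcasting; returns (row, col)."""
    ys, xs = np.nonzero(surface_mask)     # surface pixels
    bys, bxs = np.nonzero(~surface_mask)  # background pixels
    d2 = (ys[:, None] - bys[None, :]) ** 2 + (xs[:, None] - bxs[None, :]) ** 2
    best = int(np.argmax(d2.min(axis=1)))  # farthest-from-boundary pixel
    return int(ys[best]), int(xs[best])
```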
Description
TECHNICAL FIELD

The present disclosure relates to estimating a surface of an object.

BACKGROUND OF INVENTION

Patent Literature 1 describes a technique for detecting a plane.

CITATION LIST

PATENT LITERATURE

Patent Literature 1: Japanese Unexamined Patent Application Publication No. 2014-85940

SUMMARY

One or more aspects of the present disclosure are directed to a processing device, a robot control device, a robotic system, and a program. In one embodiment, a processing device includes a detector. The detector detects a partial surface included in a surface of an object based on surface information and color information. The surface information is surface information about the surface of the object. The surface information is obtained based on distance information indicating a distance to the object. The color information is color information of a color image or a grayscale image including the object and an area adjacent to the object. In one embodiment, a robot control device includes a controller that controls a robot based on a detection result of the partial surface obtained by the detector included in the above processing device. In one embodiment, a robotic system includes the above robot control device and a robot. The robot is controllable by the controller included in the robot control device. In one embodiment, a program is a program for causing a computer to function as the above processing device. In one embodiment, a program is a program for causing a computer to function as the above robot control device.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of a processing device in an example.
FIG. 2 is a block diagram of a processing system in an example.
FIG. 3 is a schematic diagram of an example captured image.
FIG. 4 is a schematic diagram of an example depth image.
FIG. 5 is a flowchart of an example operation performed by the processing device.
FIG. 6 is a schematic diagram of an example object mask image.
FIG. 7 is a schematic diagram of an example target surface mask image.
FIG. 8 is a flowchart of an example operation performed by a corrector.
FIG. 9 is a schematic diagram of a captured image divided into multiple segments in an example.
FIG. 10 is a schematic diagram illustrating an example operation performed by the corrector.
FIG. 11 is a schematic diagram illustrating an example operation performed by the corrector.
FIG. 12 is a schematic diagram of an example target surface mask image including a reshaped target surface region.
FIG. 13 is a schematic diagram of a robotic system in an example.
FIG. 14 is a block diagram of a robot control device in an example.
FIG. 15 is a block diagram of a processing device in an example.
FIG. 16 is a schematic diagram of an example distance conversion image.
FIG. 17 is a schematic diagram illustrating an example suction position to be sucked by a suction portion.
FIG. 18 is a schematic diagram illustrating an example suction position to be sucked by the suction portion.
FIG. 19 is a block diagram of a robot control device in an example.
FIG. 20 is a block diagram of a processing device in an example.
FIG. 21 is a schematic diagram of an example captured image.
FIG. 22 is a schematic diagram of an example target surface mask image.
FIG. 23 is a schematic diagram of an example target surface mask image.
FIG. 24 is a schematic diagram illustrating an example operation performed by an estimator.
FIG. 25 is a schematic diagram illustrating an example operation performed by the estimator.
FIG. 26 is a schematic diagram illustrating an example operation performed by the estimator.
FIG. 27 is a schematic diagram of an example target surface mask image.
FIG. 28 is a schematic diagram illustrating an example operation performed by the estimator.

DESCRIPTION OF EMBODIMENTS

FIG. 1 is a block diagram of a processing device 1 in an example. FIG. 2 is a block diagram of a processing system 100 including the processing device 1 in an example. As illustrated in FIG. 2, the processing system 100 includes, for example, the processing device 1 and a sensor device 10. The sensor device 10 is, for example, a three-dimensional (3D) camera. The sensor device 10 can capture an image of, for example, a measurement space 50 in which an object 15 is located. The sensor device 10 can generate, for example, a depth image 11 representing the distance in the measurement space 50 and a captured image 12 including the object 15. The captured image 12 is, for example, a color image. The depth image 11 is generated by, for example, a stereo camera included in the sensor device 10. The captured image 12 is generated by, for example, a color camera included in the sensor device 10. The captured image 12 as a color image may be hereafter referred to as a color image 12. The depth image 11 is also referred to as a range image. The depth image 11 is, for example, a grayscale image. The multiple pixels included in the depth image 11 correspond to the respective multiple measurement points included in the