JP-7857418-B2 - Object recognition device, control device, and object recognition method
Inventors
- Shigenobu Asada (浅田 繁伸)
- Hironobu Fujiyoshi (藤吉 弘亘)
Assignees
- Mitsubishi Electric Corporation (三菱電機株式会社)
- Chubu University (学校法人中部大学)
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2023-08-01
Claims (19)
- An object recognition device comprising: an image acquisition unit that acquires an image of a plurality of objects; an image conversion unit that converts the image into a converted image in which the edge of each of the plurality of objects is replaced with an edge enhanced to highlight a feature common to the edges of the objects; and a recognition unit that recognizes the object based on the converted image, the object recognition device being characterized in that the image conversion unit determines, as the feature common to the edges of the plurality of objects, the shape obtained by removing the random elements actually present at each edge of the plurality of objects, and converts the image into the converted image by replacing the shape of each object with the shape determined as the feature.
- The object recognition device according to claim 1, characterized in that the image conversion unit converts the image into the converted image based on the result of learning on the image and the converted image.
- The object recognition device according to claim 2, characterized in that the image conversion unit converts the image into the converted image based on image conversion parameters, which are the result of learning on the image and the converted image.
- The object recognition device according to claim 3, further comprising a learning unit that learns the image conversion parameters used for converting the image into the converted image, characterized in that the image conversion unit converts the image into the converted image based on the image conversion parameters that are the result of the learning by the learning unit.
- The object recognition device according to claim 4, further comprising an image conversion evaluation unit that evaluates the performance of image conversion using the image conversion parameters.
- The object recognition device according to claim 5, characterized in that the learning unit learns the image conversion parameters based on the evaluation results of the image conversion evaluation unit.
- The object recognition device according to any one of claims 4 to 6, further comprising: a simulation condition setting unit that sets simulation conditions for simulating the imaging of the object, the simulation conditions including at least one of imaging equipment information, which is information about the equipment used to capture the image, and object information, which is information about the object; a scene generation unit that generates, by simulation, a scene in which the object is photographed based on the simulation conditions; and a dataset generation unit that generates a dataset used for learning the image conversion parameters based on the scene generated by the scene generation unit.
- The object recognition device according to claim 7, characterized in that the simulation condition setting unit comprises: an imaging equipment information setting unit that sets the imaging equipment information, which includes at least one of information regarding the specifications of the equipment and information regarding the installation of the equipment; an object information setting unit that sets the object information; and an environmental information setting unit that sets environmental information, which is information about the environment at the time of imaging in the simulation, and that the simulation condition setting unit sets the simulation conditions to include the imaging equipment information, the object information, and the environmental information.
- The object recognition device according to claim 7, characterized in that the dataset generation unit comprises: an image data generation unit that generates image data, which is at least one of two-dimensional data representing the scene generated by the scene generation unit and three-dimensional data representing that scene; an annotation data generation unit that generates annotation data to be attached to the image data; and a target conversion image generation unit that generates, from the image data, a target conversion image that is the target image of the image conversion.
- The object recognition device according to claim 9, characterized in that the learning unit learns the image conversion parameters based on the target conversion image and the annotation data.
- A control device comprising: an image acquisition unit that acquires an image of a plurality of objects; an image conversion unit that converts the image into a converted image in which the edge of each of the plurality of objects is replaced with an edge enhanced to highlight a feature common to the edges of the objects; a recognition unit that recognizes the object based on the converted image; and a robot control unit that controls a robot body that grasps the object recognized by the recognition unit, the control device being characterized in that the image conversion unit determines, as the feature common to the edges of the plurality of objects, the shape obtained by removing the random elements actually present at each edge of the plurality of objects, and converts the image into the converted image by replacing the shape of each object with the shape determined as the feature.
- The control device according to claim 11, further comprising: a simulation condition setting unit that sets simulation conditions for simulating the imaging of the object, the simulation conditions including at least one of imaging equipment information, which is information about the equipment used to capture the image, and object information, which is information about the object; a scene generation unit that generates, by simulation, a scene in which the object is photographed based on the simulation conditions; and a dataset generation unit that generates, based on the scene generated by the scene generation unit, a dataset used for learning the image conversion parameters used for converting the image into the converted image.
- The control device according to claim 12, further comprising a gripping parameter adjustment unit that adjusts gripping parameters, which are parameters of the hand of the robot body that grips the object.
- The control device according to claim 13, characterized in that the gripping parameter adjustment unit comprises: a gripping parameter adjustment range determination unit that determines an adjustment range for each of a plurality of gripping parameters; a gripping parameter changing unit that changes the gripping parameter to be adjusted among the plurality of gripping parameters; a model rotation unit that rotates a model of the object; a gripping evaluation unit that evaluates the quality of gripping when the hand grips the object, assuming that the object takes the same posture as the model rotated by the model rotation unit; a gripping parameter value determination unit that determines the value of each of the plurality of gripping parameters based on the evaluation results of the gripping evaluation unit; and a gripping parameter adjustment completion determination unit that determines whether adjustment has been completed for all of the plurality of gripping parameters, the gripping parameter adjustment unit adjusting each of the plurality of gripping parameters by determining its value.
- The control device according to claim 12, further comprising a recognition parameter adjustment unit that adjusts recognition parameters, which are parameters used in the recognition of the object by the recognition unit.
- The control device according to claim 13 or 14, further comprising a recognition parameter adjustment unit that adjusts recognition parameters, which are parameters used in the recognition of the object by the recognition unit, characterized in that the recognition parameter adjustment unit comprises: a recognition parameter adjustment range determination unit that determines an adjustment range for each of a plurality of recognition parameters; a recognition parameter changing unit that changes the recognition parameter to be adjusted among the plurality of recognition parameters; a recognition trial unit that performs a trial of the recognition processing using the gripping parameters adjusted by the gripping parameter adjustment unit; a recognition evaluation unit that evaluates the quality of the results of the recognition processing; a recognition parameter value determination unit that determines the value of each of the plurality of recognition parameters based on the evaluation results of the recognition evaluation unit; and a recognition parameter adjustment completion determination unit that determines whether adjustment has been completed for all of the plurality of recognition parameters, the recognition parameter adjustment unit adjusting each of the plurality of recognition parameters by determining its value.
- The control device according to any one of claims 12 to 15, further comprising an image conversion evaluation unit that evaluates the performance of image conversion using the image conversion parameters based on the operation results of the robot body.
- The control device according to claim 17, characterized in that the operation results include at least one of: information on the probability that the robot body succeeded in grasping the object; information on the operation time during which the robot body performed the operation of grasping the object; and information indicating the cause of a failure of the robot body to grasp the object.
- An object recognition method for recognizing an object using a computer, comprising the steps of: acquiring an image of a plurality of objects; converting the image into a converted image in which the edge of each of the plurality of objects is replaced with an edge enhanced to highlight a feature common to the edges of the objects; and recognizing the object based on the converted image, the object recognition method being characterized in that, in the step of converting the image into the converted image, the shape obtained by removing the random elements actually present at each edge of the plurality of objects is determined as the feature common to the edges of the plurality of objects, and the image is converted into the converted image by replacing the shape of each object with the shape determined as the feature.
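Claims 1, 11, and 19 describe replacing each object's edge with the shape obtained by removing the random elements present at the individual edges. The following minimal Python sketch illustrates one way such a conversion could behave, under the assumption that the common feature can be estimated by averaging corresponding edge points; all function names and data representations here are illustrative and are not taken from the patent.

```python
# Illustrative sketch only: the names and the averaging-based estimate are
# assumptions of this note, not the patent's implementation.

def common_edge_shape(edges):
    """Estimate the feature common to all edges by averaging corresponding
    edge points; zero-mean random deviations of each instance cancel out."""
    n = len(edges)
    return [sum(points) / n for points in zip(*edges)]

def convert(edges):
    """Image conversion unit (toy): replace every instance's edge with the
    common shape, i.e. the shape with the random elements removed."""
    ideal = common_edge_shape(edges)
    return [ideal[:] for _ in edges]

def recognize(converted_edges, template):
    """Recognition unit (toy): return the index of the edge that best
    matches a known template shape."""
    def distance(edge):
        return sum((a - b) ** 2 for a, b in zip(edge, template))
    return min(range(len(converted_edges)),
               key=lambda i: distance(converted_edges[i]))

# Three noisy observations of the same ideal edge [1.0, 2.0, 3.0]:
noisy = [[1.1, 2.0, 2.9], [0.9, 2.1, 3.1], [1.0, 1.9, 3.0]]
cleaned = convert(noisy)
```

After conversion the three edges are identical, so a recognizer that matches edges against a template no longer has to cope with per-instance noise, which is the stated motivation for the conversion in the claims.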
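Claims 7 to 9 describe a simulation condition setting unit, a scene generation unit, and a dataset generation unit that together produce training data (image data, annotation data, and a target conversion image) for learning the image conversion parameters. The sketch below shows one plausible shape for that pipeline; the dataclass fields, function names, and the noise model are assumptions of this note, not the patent's API.

```python
# Illustrative sketch only: names and the noise model are assumptions.
import random
from dataclasses import dataclass

@dataclass
class SimulationConditions:
    camera_noise: float    # imaging equipment information (sensor noise level)
    object_edge: list      # object information (ideal edge shape)
    seed: int = 0

def generate_scene(cond):
    """Scene generation unit (toy): simulate photographing the object as the
    ideal edge plus bounded random sensor noise."""
    rng = random.Random(cond.seed)
    return [p + rng.uniform(-cond.camera_noise, cond.camera_noise)
            for p in cond.object_edge]

def generate_dataset(cond, n_samples):
    """Dataset generation unit (toy): pair each simulated image with
    annotation data and the noise-free target conversion image used as the
    learning target for the image conversion parameters."""
    dataset = []
    for i in range(n_samples):
        scene_cond = SimulationConditions(cond.camera_noise, cond.object_edge,
                                          cond.seed + i)
        dataset.append({
            "image_data": generate_scene(scene_cond),  # simulated 2-D data
            "annotation": {"object_id": 0},            # annotation data
            "target_image": list(cond.object_edge),    # ideal-edge target
        })
    return dataset
```

Because the scene is simulated, the noise-free target image is known exactly, which is what makes supervised learning of the conversion parameters possible without hand-labeled real images.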
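Claim 14 describes a loop over gripping parameters: determine an adjustment range for each parameter, change one parameter at a time, evaluate grip quality against rotated poses of the object model, fix the best value, and repeat until all parameters are adjusted. The following is a minimal sketch of that loop, assuming a one-parameter-at-a-time search; the stand-in quality function and all names are hypothetical.

```python
# Illustrative sketch only: a coordinate-wise tuning loop standing in for the
# claimed gripping parameter adjustment unit. Names are assumptions.

def adjust_parameters(param_ranges, rotations, evaluate):
    """param_ranges: {name: candidate values} (adjustment range determination);
    rotations: object postures to test (model rotation unit);
    evaluate(params, rotation): grip-quality score (gripping evaluation unit)."""
    # Start from the first candidate of each range.
    params = {name: values[0] for name, values in param_ranges.items()}
    for name, values in param_ranges.items():    # parameter changing unit
        def score(v):
            trial = dict(params, **{name: v})
            # Evaluate grips with the object posed at each rotated model pose.
            return sum(evaluate(trial, r) for r in rotations)
        params[name] = max(values, key=score)    # value determination unit
    return params  # completion determination: every parameter was adjusted
```

With a toy quality function that peaks at a hand width of 3 and a force of 2, the loop recovers those values; in the claimed device the evaluation would instead come from simulated grasps of the rotated object model.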
Description
This disclosure relates to an object recognition device, a control device, and an object recognition method for recognizing objects.

Industrial robots that pick up objects one by one from a group of objects are known as a means of automating tasks at production sites and in other environments. When an industrial robot picks an object, recognition processing is performed to identify the object to be picked, thereby detecting the gripping position at which the object is to be picked up by the industrial robot.

Patent Document 1 discloses, in relation to recognition processing that recognizes an object from an image of the object, an information processing device for training a recognition device that performs the recognition processing. The information processing device disclosed in Patent Document 1 acquires training data having characteristics equivalent to those of the recognition target data by converting an image captured by a camera used when collecting training data so that it has characteristics equivalent to those of the recognition target data input to the recognition device. Such characteristics are image quality characteristics such as noise, blur, color tone, or white balance. According to the technology of Patent Document 1, even if the camera used when collecting training data differs from the camera used when acquiring the recognition target data, a decrease in recognition accuracy can be prevented. In other words, according to the technology of Patent Document 1, a decrease in recognition accuracy due to individual differences between cameras can be prevented.

Patent Document 1: Japanese Patent Publication No. 2021-82068

Brief description of the drawings:
- A diagram showing the functional configuration of the object recognition device according to Embodiment 1.
- A flowchart showing the procedure of processing performed by the object recognition device according to Embodiment 1.
- A diagram showing the functional configuration of the object recognition device according to Embodiment 2.
- A flowchart showing the procedure of processing performed by the object recognition device according to Embodiment 2.
- A diagram illustrating an example of an image conversion method using the object recognition device according to Embodiment 2.
- A diagram showing the functional configuration of the object recognition device according to Embodiment 3.
- A flowchart showing the procedure of processing performed by the object recognition device according to Embodiment 3.
- A diagram showing an example of the configuration of the simulation condition setting unit of the object recognition device according to Embodiment 3.
- A diagram showing an example of the configuration of the dataset generation unit of the object recognition device according to Embodiment 3.
- A diagram showing the functional configuration of the control device according to Embodiment 4.
- A diagram showing an example of the configuration of the parameter adjustment unit of the control device according to Embodiment 4.
- A diagram showing an example of the configuration of the gripping parameter adjustment unit of the control device according to Embodiment 4.
- A flowchart showing the procedure of processing performed by the gripping parameter adjustment unit of the control device according to Embodiment 4.
- A diagram showing an example of the configuration of the recognition parameter adjustment unit of the control device according to Embodiment 4.
- A flowchart showing the procedure of processing performed by the recognition parameter adjustment unit of the control device according to Embodiment 4.
- A diagram showing a first example of a hardware configuration for realizing the object recognition device according to Embodiments 1 to 4.
- A diagram showing a second example of a hardware configuration for realizing the object recognition device according to Embodiments 1 to 4.

The object recognition device, control device, and object recognition method according to the embodiments will be described in detail below with reference to the drawings.

Embodiment 1. Figure 1 shows the functional configuration of an object recognition device 10A according to Embodiment 1. The object recognition device 10A recognizes an object from an image of the object. The object recognition device 10A comprises an image acquisition unit 11, an image conversion unit 12, and a recognition unit 13. The objects recognized by the object recognition device 10A may be objects to be grasped by an industrial robot. The industrial robot may grasp the object recognized by the object recognition device 10A from among multiple objects and pick it up. The industrial robot picks up objects one by one by repeating this operation. The multiple objects may all have the same shape, or they may have different shapes. In the following explanation, each object in the case where the multiple objects all have the same shape will be referred to as a standard-shaped object, and each object in the case where the multiple objects have different shapes will be referred to as an irregular-shaped object.