KR-20260064252-A - METHOD AND EDGE DEVICE FOR DETECTING PROCESS RISKS BASED ON THREE-DIMENSIONAL OBJECT RECOGNITION
Abstract
According to an embodiment of the present application, a process risk detection method based on three-dimensional object recognition of an edge device and an edge device for the same are provided. The method may include the steps of: acquiring at least one two-dimensional image of a process environment including at least one target object captured at different points in time - wherein the two-dimensional image includes information regarding the position and orientation of the camera that captured each one -; extracting the target object from at least one of the two-dimensional images through a first network function and determining whether the target object is defective; generating three-dimensional rendering information of the process environment based on a plurality of the two-dimensional images through a second network function; and generating volume information and location information of the target object determined to be defective based on the three-dimensional rendering information.
Inventors
- 박준영
- 김지혜
Assignees
- 주식회사 유엑스팩토리
Dates
- Publication Date
- 2026-05-07
- Application Date
- 2024-10-31
Claims (8)
- A process risk detection method based on three-dimensional object recognition, performed by an edge device, the method comprising: acquiring at least one two-dimensional image of a process environment including at least one target object, captured at different points in time, wherein each two-dimensional image includes information regarding the position and orientation of the camera that captured it; extracting the target object from at least one of the two-dimensional images through a first network function and determining whether the target object is defective; generating three-dimensional rendering information of the process environment based on a plurality of the two-dimensional images through a second network function; and generating volume information and location information of the target object determined to be defective based on the three-dimensional rendering information.
- The method of claim 1, wherein determining whether the target object is defective comprises: extracting at least one bounding box corresponding to the target object from at least one of the two-dimensional images through the first network function, wherein the bounding box includes information regarding its location, size, and predicted class; and determining whether the target object is defective by comparing a normal object image corresponding to the predicted class of the bounding box with the target object within the bounding box.
- The method of claim 1, wherein generating the three-dimensional rendering information of the process environment comprises: generating information regarding three-dimensional coordinates and viewing directions for a plurality of three-dimensional points, based on the information regarding the position and orientation of the camera corresponding to each of the plurality of two-dimensional images; inputting the information regarding the three-dimensional coordinates and the viewing directions into the second network function to output an opacity for each three-dimensional point; and synthesizing the opacities of the plurality of three-dimensional points to generate the three-dimensional rendering information corresponding to the process environment.
- The method of claim 3, wherein the second network function is composed of a multi-layer perceptron network.
- The method of claim 1, further comprising transmitting a real-time processing signal for the target object determined to be defective to a working device, based on the volume information and the location information.
- The method of claim 1, wherein the plurality of two-dimensional images include high-resolution images captured by a global-shutter operation.
- A computer program stored on a recording medium for executing the method according to any one of claims 1 to 6.
- An edge device that performs process risk detection based on three-dimensional object recognition, the edge device comprising: at least one processor; and a memory storing a program executable by the processor, wherein the processor, by executing the program, acquires at least one two-dimensional image of a process environment including at least one target object captured at different points in time, extracts the target object from at least one of the two-dimensional images through a first network function, determines whether the target object is defective, generates three-dimensional rendering information of the process environment based on a plurality of the two-dimensional images through a second network function, and generates volume information and location information of the target object determined to be defective based on the three-dimensional rendering information, and wherein each of the two-dimensional images includes information about the position and orientation of the camera that captured it.
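Claim 2's defect check pairs a detector (the "first network function", treated here as a black box that has already produced a bounding box with a predicted class) with a comparison against a stored normal-object image for that class. The sketch below is a minimal, hypothetical illustration of that comparison step only; the function names, the dictionary-based bounding-box format, and the mean-absolute-difference threshold are all assumptions, not the patented implementation.

```python
import numpy as np

def is_defective(image, bbox, normal_templates, threshold=0.15):
    """Compare a detected object crop against the normal-object image
    for its predicted class (a stand-in for claim 2's comparison step).

    bbox: dict with 'x', 'y', 'w', 'h' (location/size) and 'cls' (predicted class).
    normal_templates: maps class name -> reference image of a normal object,
                      assumed to be the same size as the crop.
    threshold: hypothetical mean-absolute-difference cutoff in [0, 1].
    """
    x, y, w, h = bbox["x"], bbox["y"], bbox["w"], bbox["h"]
    crop = image[y:y + h, x:x + w].astype(np.float64)
    ref = normal_templates[bbox["cls"]].astype(np.float64)
    # Naive comparison: normalized mean absolute pixel difference.
    diff = np.abs(crop - ref).mean() / 255.0
    return diff > threshold
```

In practice the comparison would likely be learned rather than a raw pixel difference, but the structure (crop by bounding box, look up the normal image by predicted class, compare) follows the claim.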
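Claims 3 and 4 describe a neural-radiance-field-style step: 3D coordinates and viewing directions are fed to a multi-layer perceptron (the "second network function") that outputs per-point opacities, which are then composited. The sketch below is a toy, untrained stand-in: the positional encoding, layer sizes, and softplus activation are hypothetical choices, and the compositing uses the standard volume-rendering weights rather than anything specific to the patent.

```python
import numpy as np

def positional_encoding(x, num_freqs=4):
    # Encode coordinates with sin/cos at increasing frequencies (a common
    # NeRF-style trick; the frequency count here is an arbitrary choice).
    out = [x]
    for i in range(num_freqs):
        out.append(np.sin((2.0 ** i) * x))
        out.append(np.cos((2.0 ** i) * x))
    return np.concatenate(out, axis=-1)

class OpacityMLP:
    """Toy multi-layer perceptron mapping (3D point, view direction) -> opacity,
    standing in for the 'second network function' of claims 3-4."""

    def __init__(self, in_dim, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, pts, dirs):
        feat = np.concatenate(
            [positional_encoding(pts), positional_encoding(dirs)], axis=-1)
        h = np.maximum(feat @ self.w1 + self.b1, 0.0)    # ReLU hidden layer
        sigma = np.log1p(np.exp(h @ self.w2 + self.b2))  # softplus -> opacity >= 0
        return sigma.squeeze(-1)

def composite_weights(sigmas, deltas):
    # Classic volume-rendering compositing along one ray:
    # alpha_i = 1 - exp(-sigma_i * delta_i), weighted by accumulated transmittance.
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    return alphas * trans  # per-sample weights; their sum is at most 1
```

With `num_freqs=4`, each 3D input expands to 27 features, so the combined point-plus-direction input is 54-dimensional. Summing `composite_weights` over many rays is one plausible route to the volume estimate of a defective object that claim 1's final step calls for.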
Description
Method and Edge Device for Detecting Process Risks Based on Three-Dimensional Object Recognition

The present application relates to a process risk detection method based on three-dimensional object recognition and an edge device for the same.

A smart factory is an intelligent plant designed to collect data throughout the entire manufacturing process based on the latest manufacturing automation technologies, enabling real-time monitoring and control. This system aims to maximize not only process efficiency but also safety by integrating various technologies, such as IoT sensors, cameras, and AI-based analytics. With the introduction of smart factories, plants can monitor the status of each process in real time through data; this allows for improved product quality and increased production speed, while also providing advantages in terms of maintenance and risk management. In particular, data at each manufacturing stage supports production optimization, enabling the maintenance of safe and consistent operations while minimizing human intervention.

However, existing technologies have several limitations in promptly recognizing and responding to potential hazards that may occur during the process. For example, there are still many technical challenges in enabling robots to autonomously recognize and rapidly respond to defective products or obstacles that arise during the process. While existing systems allow robots to perceive the location or shape of obstacles, they face limitations in accurately determining their volume or structure. This makes immediate response particularly difficult in dynamic environments. For instance, defective products or unexpected obstacles that suddenly appear on a conveyor belt during the production process must be recognized quickly, but existing technologies are limited in their ability to determine the size and shape of such obstacles in real time.
These limitations act as a significant constraint on taking efficient and safe measures when robots perform obstacle-removal tasks. Meanwhile, existing systems for collecting and analyzing process data within smart factories adopt a method of transmitting large-scale video to a central server for processing. However, this centralized approach is constrained by network bandwidth and causes various problems, such as data transmission delays and reduced processing speeds. Accordingly, there is a growing demand for new technologies capable of rapidly recognizing and responding to potential risks in smart factories.

A brief description of each drawing is provided to help better understand the drawings cited in this application.

FIG. 1 is a drawing for explaining a process risk detection system based on three-dimensional object recognition according to an embodiment of the present application.
FIG. 2 is a block diagram illustrating the configuration of an edge device that performs process risk detection based on three-dimensional object recognition according to an embodiment of the present application.
FIG. 3 is a functional block diagram for explaining the operation of a processor of an edge device according to an embodiment of the present application.
FIG. 4 is a flowchart of a process risk detection method based on three-dimensional object recognition according to an embodiment of the present application.
FIG. 5 is a flowchart illustrating an example of step S420 of FIG. 4.
FIG. 6 is a flowchart illustrating an example of step S430 of FIG. 4.
FIG. 7 is a diagram illustrating a three-dimensional object recognition process according to an embodiment of the present application.
FIG. 8 is a diagram illustrating, by way of example, process risk detection based on three-dimensional object recognition and the subsequent processing thereof according to the present application.
The technical concept of the present application is subject to various modifications and may have various embodiments, and specific embodiments are illustrated in the drawings and described in detail. However, this is not intended to limit the technical concept of the present application to specific embodiments, and it should be understood that it includes all modifications, equivalents, and substitutions that fall within the scope of the technical concept of the present application. In explaining the technical concept of the present application, detailed descriptions of related prior art are omitted if it is determined that such descriptions may unnecessarily obscure the essence of the present application. The terms used herein are for describing embodiments and are not intended to limit and/or restrict the present application. Singular expressions include plural expressions unless the context clearly indicates otherwise. Additionally, numbers used herein (e.g., first, second, etc.) are merely identifiers to distinguish one component from another. In this specification, when it is stated that a part is connected to another part, this includes not only cases where the