KR-20260062619-A - A VISION SYSTEM FOR DETERMINING THE POSITION OF AN OBJECT USING MULTIPLE CAPTURED IMAGE SYNTHESIS
Abstract
The present invention relates to a vision system for determining the position of an object through the synthesis of multiple captured images, comprising: a lighting unit positioned above an object to be inspected, which flashes sequentially to irradiate light onto the object; a camera device positioned above the lighting unit, which captures the object below to acquire captured images; and a control unit that controls the flashing operation of the lighting unit, synthesizes the captured images acquired through the camera device during the flashing operation into a single image, and identifies the outline of the object based on the difference in brightness at the same location.
Inventors
- 이구열
Assignees
- 주식회사 엠비젼
Dates
- Publication Date: 2026-05-07
- Application Date: 2024-10-29
Claims (5)
- A vision system for determining the position of an object through the synthesis of multiple captured images, comprising: a lighting unit (110) positioned above an object to be inspected and flashing sequentially to irradiate light onto the object; a camera device (120) positioned above the lighting unit (110) and capturing the object below to acquire captured images; and a control unit (130) that controls the flashing operation of the lighting unit (110) and, during the flashing operation, synthesizes the captured images acquired through the camera device (120) into a single image to identify the outline of the object based on the difference in brightness at the same location.
- The vision system of claim 1, wherein the lighting unit (110) comprises: a lighting cover (111) covering the upper part of the object to be inspected; and a pair of lights (112) positioned inside the lighting cover (111) and inclined symmetrically with respect to the lens optical axis of the camera device (120).
- The vision system of claim 2, wherein the pair of lights (112) flash sequentially, alternating with each other, under the control of the control unit (130), and the camera device (120) captures the object twice in succession as the pair of lights (112) flash sequentially, thereby acquiring two captured images that differ in brightness at the same location.
- The vision system of claim 3, wherein a heat dissipation fin having an uneven structure is formed to protrude on the outside of the camera device (120).
- The vision system of claim 4, wherein the lighting unit (110), the camera device (120), and the control unit (130) are formed as a single unit.
Description
The present invention relates to a vision system for determining the position of an object through the synthesis of multiple captured images. More specifically, it relates to a vision system that detects the accurate size and position of an object by illuminating it with side lighting and synthesizing the captured images obtained through multiple captures.

Machine vision systems are generally used in production processes for the precise alignment of products. However, when the alignment mark lies beneath a protective layer (cover) made of a material such as glass, ceramic, film, or plastic, accurately determining the position of the alignment mark becomes difficult. For example, if the upper protective layer of the object is highly reflective, translucent, or opaque and the alignment reference point lies beneath it, light must pass through the protective layer to photograph the reference point. Because the optical properties of the protective layer's material allow only light of a specific wavelength arriving at a specific inclination to pass through, the light source is positioned at an angle.

Referring to Fig. 1, when the alignment reference point is formed as an intaglio (recessed) as in Fig. 1(a) and light is incident from a diagonal direction at an angle θ, area a appears relatively dark and area b appears relatively bright. Conversely, when the alignment reference point is formed in relief (raised) as in Fig. 1(b) and light is incident from a diagonal direction at an angle θ, area a appears relatively bright and area b appears relatively dark.
Therefore, it is difficult to find the exact position and size of the alignment reference point. When imaging through the various materials of the covering protective layer, an accurate image of the alignment reference point cannot be detected, and precise alignment of the object is therefore not achieved.

Fig. 1 is a diagram showing the concept of a conventional machine vision system for determining the position of the alignment reference point of an object to be inspected. Fig. 2 is a schematic diagram showing the shape of a vision system (100) for determining the position of an object through the synthesis of multiple captured images according to an embodiment of the present invention. Fig. 3 is a cross-sectional view of the vision system (100) according to an embodiment of the present invention. Fig. 4 is a drawing showing how a difference in brightness arises from the sequential flashing of a pair of lights (112) set at a certain angle, according to one embodiment.

Hereinafter, specific details for implementing the present invention will be described with reference to the attached drawings. In the following description, detailed explanations of widely known functions or configurations are omitted where they might unnecessarily obscure the gist of the invention. In the attached drawings, identical or corresponding components are given the same reference numerals, and their descriptions may be omitted in the description of the following embodiments; such omission does not mean that the component is absent from any embodiment.
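The synthesis step described above (two captures taken under opposite oblique lights, with the outline recovered from the difference in brightness at the same location) can be sketched as follows. This is a minimal illustration of the principle, not the patented implementation: the function name `locate_outline`, the threshold value, and the toy test frames are all assumptions introduced for demonstration.

```python
import numpy as np

def locate_outline(img_a: np.ndarray, img_b: np.ndarray,
                   threshold: float = 30.0) -> np.ndarray:
    """Combine two captures taken under opposite oblique lights.

    Pixels where the two lights produce opposite shading (bright under
    one flash, dark under the other) mark the edges of the alignment
    reference point; flat areas reflect both lights similarly, so their
    per-pixel difference is small and they drop out.
    """
    a = img_a.astype(np.float32)
    b = img_b.astype(np.float32)
    # Per-pixel brightness difference between the two captures.
    diff = np.abs(a - b)
    # Pixels whose brightness changes strongly between flashes
    # are taken to lie on the outline.
    return diff > threshold

# Hypothetical 8x8 test frames: one edge of a raised mark, lit in turn
# from opposite sides so it flips from bright to dark.
left = np.full((8, 8), 100, dtype=np.uint8)
right = np.full((8, 8), 100, dtype=np.uint8)
left[3:5, 2] = 200   # edge appears bright under the left-side light
right[3:5, 2] = 40   # same edge appears dark under the right-side light
mask = locate_outline(left, right)
print(int(mask.sum()))  # → 2 outline pixels detected
```

In a real system the two frames would come from the camera device (120) as the pair of lights (112) flash in turn, and the binary mask would feed a downstream step that fits the mark's position and size.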
The advantages and features of the disclosed embodiments, and the methods for achieving them, will become clear by referring to the embodiments described below together with the accompanying drawings. However, the present invention is not limited to the embodiments described below; it can be implemented in various different forms, and these embodiments are provided merely to make the disclosure complete and to fully inform a person skilled in the art of the scope of the invention.

The terms used in this specification have been selected, as far as possible, from terms in general use, taking their functions in the present invention into account; however, these terms may vary depending on the intent of those skilled in the relevant field, case law, the emergence of new technologies, and the like. In specific cases, terms have been arbitrarily selected by the applicant, and in such cases their meanings are described in detail in the relevant part of the description. Therefore, the terms used in this invention should be defined not merely by t