KR-20260063053-A - 3D SHAPE MEASURING APPARATUS IN REAL TIME
Abstract
A real-time 3D shape measuring device is provided that performs deep-learning image recognition, through an instance segmentation model optimized for the application site, on images of an arbitrary space captured by two cameras, extracts the pixel coordinates of each object recognized in the images, and calculates the actual 3D coordinate values of the object from the extracted pixel coordinates. The real-time 3D shape measuring device includes: first and second cameras, arranged at a regular interval, that capture an arbitrary space in which a spatial coordinate system is set; and a data processing unit that performs deep-learning image recognition, through an instance segmentation model optimized for the application site, on the arbitrary space captured by the first and second cameras, extracts the pixel coordinates of each object recognized in the images, and calculates the actual 3D coordinate values of the object from the extracted pixel coordinates.
Inventors
- 이재목
Assignees
- 유징테크주식회사
- 유징솔루션주식회사
Dates
- Publication Date
- 20260507
- Application Date
- 20241030
Claims (5)
- A real-time 3D shape measuring device for unmanned operation of a mobile device, comprising: first and second cameras arranged at a regular interval and capturing an arbitrary space in which a spatial coordinate system is set; and a data processing unit that, based on the shooting information of the first and second cameras, derives the first and second straight-line distances from the origin O(0,0,0) of the arbitrary space to the lens coordinates A(x_A, y_A, z_A) of the first camera and the lens coordinates B(x_B, y_B, z_B) of the second camera, sets the focal lengths (δ_A and δ_B) of the first and second cameras, and calculates the vector distances between the image and the first and second cameras through the following Mathematical Formulas 1 and 2, respectively: [Mathematical Formula 1] vector distance between the image and the first camera = [Mathematical Formula 2] vector distance between the image and the second camera = ; wherein the data processing unit moves the spatial coordinate system to the position formed in the image so as to convert the pixels within the images of the first and second cameras into photographic coordinates, and calculates the photographic coordinates on the x, y, and z axes, expressed relative to the spatial coordinate system, using the following Mathematical Formulas 3 to 5: [Mathematical Formula 3] [Mathematical Formula 4] [Mathematical Formula 5] ; wherein, based on the calculated photographic coordinates on the x, y, and z axes, the data processing unit derives the photographic coordinates of the spatial coordinate system in the first and second cameras, and obtains the first and second camera image coordinates from the pixel coordinates on the first and second camera images using a transformation matrix M through the following Mathematical Formulas 6 and 7: [Mathematical Formula 6] [Mathematical Formula 7] ; wherein, using the obtained camera coordinates and the first and second photographic coordinates formed on the cameras, the data processing unit derives the first and second straight lines formed in a direction perpendicular to the image plane from the first and second cameras using the following Mathematical Formulas 8 and 9: [Mathematical Formula 8] [Mathematical Formula 9] ; and wherein, using the fact that the first and second straight lines have the same value at the intersection of the corresponding straight-line equations, the data processing unit solves for X, Y, and Z to derive the 3D coordinates C(X, Y, Z) of the actual target point.
- The real-time 3D shape measuring device of claim 1, wherein, in order to establish the positional relationship between the first and second cameras and the image, since the first and second cameras and the image plane are parallel to each other, the directions from the image plane to the first and second cameras are expressed as a first normal vector and a second normal vector, respectively.
- The real-time 3D shape measuring device of claim 1, wherein a constant transformation matrix M, expressed using the properties of the identity matrix, is represented by the following Mathematical Formula 10: [Mathematical Formula 10] ; and wherein the data processing unit obtains the transformation matrix M, used to convert the pixel coordinates on the first and second camera images into the first and second camera image coordinates, using the following Mathematical Formulas 11 to 14: [Mathematical Formula 11] [Mathematical Formula 12] [Mathematical Formula 13] [Mathematical Formula 14]
- The real-time 3D shape measuring device of claim 1, wherein the data processing unit calculates the 3D coordinates C(X, Y, Z) of a given point within the set space by finding the intersection point between the first straight line and the second straight line.
- The real-time 3D shape measuring device of claim 1, wherein the data processing unit performs deep-learning image recognition, through an instance segmentation model optimized for the application site, on the arbitrary space captured by the first and second cameras, extracts the pixel coordinates of each object recognized and expressed in the image, and calculates the actual 3D coordinate values of the object using the extracted pixel coordinates.
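The triangulation described in claims 1 and 4 — back-projecting a pixel from each camera into a straight line and intersecting the two lines to obtain C(X, Y, Z) — can be sketched as follows. This is a hedged illustration of the general stereo-triangulation technique, not the patent's exact method (which depends on the unpublished Mathematical Formulas 1 to 9): the simplified pinhole model, the `pixel_to_ray` helper, and the least-squares midpoint are all assumptions introduced here for illustration.

```python
import numpy as np

def pixel_to_ray(pixel, cam_pos, focal_len, img_center):
    """Back-project a pixel into a 3D viewing ray from a camera.

    Assumes a simplified pinhole model with both image planes parallel
    to each other (as in the patent's parallel-camera arrangement) and
    aligned with the spatial coordinate axes -- a hypothetical setup.
    Returns the ray origin and a unit direction vector.
    """
    u, v = pixel
    cx, cy = img_center
    # Direction from the camera through the pixel on the image plane.
    direction = np.array([u - cx, v - cy, focal_len], dtype=float)
    return np.asarray(cam_pos, dtype=float), direction / np.linalg.norm(direction)

def triangulate(p1, d1, p2, d2):
    """Closest point between two 3D rays (least-squares 'intersection').

    Corresponds to solving the first and second straight-line equations
    simultaneously for C(X, Y, Z). The midpoint of the shortest segment
    between the rays is returned, since in practice the two rays rarely
    intersect exactly.
    """
    # Solve [d1, -d2] @ [t1, t2]^T = p2 - p1 in the least-squares sense.
    A = np.stack([d1, -d2], axis=1)  # 3x2 system matrix
    t, *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
    c1 = p1 + t[0] * d1              # closest point on the first line
    c2 = p2 + t[1] * d2              # closest point on the second line
    return (c1 + c2) / 2.0
```

For example, with cameras at A(-0.5, 0, 0) and B(0.5, 0, 0), focal length 1.0, and a target at (1, 2, 10), the pixels (0.15, 0.2) and (0.05, 0.2) back-project to rays whose intersection recovers the target point.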
Description
Real-Time 3D Shape Measuring Apparatus

The present invention relates to a real-time three-dimensional shape measuring device and, more specifically, to a real-time three-dimensional shape measuring device for the unmanned operation of a mobile device.

For objects of regular and irregular shape stored in industrial warehouses and yards, their size and irregularity prevent the measurement and control required for operating moving or transport equipment, and for the safety of the surrounding environment, from being carried out efficiently. Existing measurement methods primarily rely on aerial photography, load cells mounted on mobile devices, or visual estimation by operators; however, because these approaches carry the potential for operational errors, safety accidents, and quality-assurance and productivity issues, as well as cost and accuracy problems and difficulty in responding in real time, improvements that can replace the existing measurement systems are in demand. In addition, conventional laser scanning methods have the disadvantages of requiring that measurement conditions (number of coils, surface condition, etc.) be satisfied, requiring periodic calibration because they measure a specific object (a coil) at a fixed location, and requiring significant operator intervention.

FIG. 1 is a block diagram showing the configuration of a real-time three-dimensional shape measuring device according to an embodiment of the present invention. FIG. 2 is an example of the application of the real-time three-dimensional shape measuring device shown in FIG. 1. FIGS. 3 and 4 are drawings illustrating a method for measuring a three-dimensional shape using the real-time three-dimensional shape measuring device shown in FIG. 1.

Hereinafter, embodiments of the present invention will be described with reference to the attached drawings, in sufficient detail for a person skilled in the art to easily implement the technical concept of the present invention.

Referring to FIG. 1, a real-time three-dimensional shape measuring device for unmanned operation of a mobile device according to an embodiment of the present invention includes a first camera (100), a second camera (200), and a data processing unit (300). The first and second cameras (100, 200) are arranged at a predetermined interval and photograph an arbitrary space in which a spatial coordinate system is set in advance. The data processing unit (300) performs deep-learning image recognition, through an instance segmentation model optimized for the application site, on the arbitrary space captured by the first and second cameras (100, 200), extracts the pixel coordinates of each object recognized and expressed in the image, and calculates the actual 3D coordinate values of the object using the extracted pixel coordinates.

The data processing unit (300) sets nine points of the spatial coordinate system, with the distance between the points set arbitrarily. The first and second cameras (100, 200) are focused on the center point (0,0,0) of the coordinate system. The data processing unit (300) measures the positions of the first and second cameras relative to (0,0,0), that is, derives the first and second straight-line distances from the origin O(0,0,0) to the lens coordinates A(x_A, y_A, z_A) of the first camera and the lens coordinates B(x_B, y_B, z_B) of the second camera. The data processing unit (300) sets the focal lengths (δ_A and δ_B) of the first and second cameras (100, 200). The data processing unit (300) then establishes the positional relationship between the first and second cameras (100, 200) and the camera image.
Since the first and second cameras (100, 200) and the camera image are parallel to each other, the direction from the image plane to each camera is a normal vector: the direction from the image plane to the first camera (100) is expressed as the first normal vector, and the direction from the image plane to the second camera (200) as the second normal vector. Accordingly, the data processing unit (300) calculates the vector distances between the image and the first and second cameras (100, 200) based on the following Mathematical Formulas 1 and 2, using the first and second straight-line distances from the origin O(0,0,0) to the lens coordinates A(x_A, y_A, z_A) of the first camera (100) and the lens coordinates B(x_B, y_B, z_B) of the second camera (200), the focal lengths (δ_A and δ_B) of the first and second cameras (100, 200), and the first and second normal vectors. [Mathematical Formula 1] vector distance between the image and the first camera (100) = [Mathematical Formula 2] vector distance between the image and the second camera (200) = The following formula is then used to translate the spatial coordinate system of the nine points in the
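Before the triangulation steps above, the data processing unit (300) extracts the pixel coordinates of an object recognized by instance segmentation. A minimal sketch of that extraction step is shown below, assuming the segmentation model has already produced a boolean mask for the object; the model itself, and the choice of the mask centroid as the representative pixel, are assumptions not specified by the patent.

```python
import numpy as np

def object_pixel_coords(mask):
    """Representative pixel coordinates (u, v) of one segmented object.

    `mask` is a boolean H x W instance-segmentation mask for a single
    object, as produced by any instance segmentation model (the model
    itself is outside this sketch). The mask centroid is used as the
    object's pixel coordinate in the camera image -- an assumed,
    commonly used choice of representative point.
    """
    ys, xs = np.nonzero(mask)          # row/column indices of object pixels
    if xs.size == 0:
        raise ValueError("empty mask: no object pixels")
    return float(xs.mean()), float(ys.mean())
```

The (u, v) pair obtained from each camera image would then feed the straight-line intersection step to yield the object's 3D coordinates C(X, Y, Z).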