KR-102962132-B1 - Detector for determining the position of at least one object
Abstract
A detector (110) for determining the position of at least one object (112) is proposed. The detector (110) comprises at least one sensor element (130) having a matrix (132) of optical sensors (134), each of which includes a light-sensitive area (136), the sensor element (130) being configured to determine at least one reflection image (142), and at least one evaluation device (146). The evaluation device (146) is configured to select at least one reflection feature of the reflection image (142) at at least one first image position (148) within the reflection image (142) and to determine at least one longitudinal coordinate z of the selected reflection feature by optimizing at least one blurring function f_a. The evaluation device (146) is further configured to determine, at at least one second image position (154) in at least one reference image (168), at least one reference feature corresponding to the at least one reflection feature, wherein the reference image (168) and the reflection image (142) are determined in two different spatial configurations, the spatial configurations differing from each other by a relative spatial constellation. The evaluation device (146) is configured to determine the relative spatial constellation from the longitudinal coordinate z, the first image position (148), and the second image position (154).
Inventors
- Michael Eberspach
- Peter Sillen
- Patrick Schindler
- Robert Send
- Christian Lennartz
- Ingmar Bruder
Assignees
- trinamiX GmbH
Dates
- Publication Date
- 20260512
- Application Date
- 20200108
- Priority Date
- 20190109
Claims (20)
- A detector (110) for determining the position of at least one object (112), comprising: at least one sensor element (130) having a matrix (132) of optical sensors (134), each of the optical sensors (134) including a light-sensitive area (136), the sensor element (130) being configured to determine at least one reflection image (142); and at least one evaluation device (146), wherein the evaluation device (146) is configured to select at least one reflection feature of the reflection image (142) at at least one first image position (148) within the reflection image (142), to determine at least one longitudinal coordinate z of the selected reflection feature by optimizing at least one blurring function f_a, and to determine at least one reference feature within at least one reference image (168) at at least one second image position (154) in the reference image (168) corresponding to the at least one reflection feature, wherein the reference image (168) and the reflection image (142) are determined in two different spatial configurations, the spatial configurations differing from each other by a relative spatial constellation, and wherein the evaluation device (146) is configured to determine the relative spatial constellation from the longitudinal coordinate z, the first image position (148), and the second image position (154).
- The detector (110) according to claim 1, wherein the longitudinal coordinate z is determined by using a convolution-based algorithm comprising a depth-from-defocus algorithm.
- The detector (110) according to claim 1, wherein the blurring function is optimized by varying at least one parameter of the blurring function.
- The detector (110) according to claim 3, wherein the reflection image (142) is a blurred image i_b, and the evaluation device (146) is configured to reconstruct the longitudinal coordinate z from the blurred image i_b and the blurring function f_a.
- The detector (110) according to claim 4, wherein the longitudinal coordinate z is determined by minimizing the difference between the blurred image i_b and the convolution (*) of a further image i'_b with the blurring function f_a, by varying the parameter σ of the blurring function, i.e. min_σ ‖ i_b − i'_b * f_a(σ) ‖.
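The minimization described in this claim can be illustrated with a short sketch. This is not the patented implementation: the choice of a Gaussian kernel for f_a, the image sizes, the grid search over σ, and all variable names are illustrative assumptions.

```python
# Illustrative depth-from-defocus sketch: find the blur parameter sigma that
# best explains a blurred image i_b as the convolution of a sharper further
# image i_prime_b with a Gaussian blurring function f_a(sigma).
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_sigma(i_b, i_prime_b, sigmas):
    """Return the sigma minimizing || i_b - i_prime_b * f_a(sigma) ||."""
    costs = [np.linalg.norm(i_b - gaussian_filter(i_prime_b, s)) for s in sigmas]
    return sigmas[int(np.argmin(costs))]

# Synthetic example: blur a random "sharp" image with a known sigma,
# then recover that sigma by grid search.
rng = np.random.default_rng(0)
i_prime_b = rng.random((64, 64))
true_sigma = 2.0
i_b = gaussian_filter(i_prime_b, true_sigma)
sigma_hat = estimate_sigma(i_b, i_prime_b, np.linspace(0.5, 4.0, 36))
```

The longitudinal coordinate z would then follow from a calibrated, monotone relation σ(z), which depends on the optics and is not reproduced here.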
- The detector (110) according to claim 1, wherein the at least one blurring function f_a is a single function, or a composite function composed of at least one function from the group consisting of a Gaussian, a sinc function, a pillbox function, a square function, a Lorentzian function, a radial function, a polynomial, a Hermite polynomial, a Zernike polynomial, and a Legendre polynomial.
- The detector (110) according to claim 1, wherein the relative spatial constellation is at least one constellation selected from the group consisting of a relative spatial orientation, a relative angular position, a relative distance, a relative displacement, and a relative movement.
- The detector (110) according to claim 1, comprising at least two sensor elements (130) separated by the relative spatial constellation, wherein at least one first sensor element (150) is configured to record the reference image (168) and at least one second sensor element (152) is configured to record the reflection image (142).
- The detector (110) according to claim 1, configured to record the reflection image (142) and the reference image (168) at different times using the same matrix (132) of optical sensors (134).
- The detector (110) according to claim 9, wherein the evaluation device (146) is configured to determine at least one scaling factor for the relative spatial constellation.
- The detector (110) according to claim 1, wherein the evaluation device (146) is configured to determine a displacement of the reference feature and the reflection feature, to determine at least one triangulation longitudinal coordinate z_triang of the object using a predetermined relationship between the triangulation longitudinal coordinate z_triang of the object and the displacement, to determine an actual relationship between the longitudinal coordinate z and the displacement, taking into account the determined relative spatial constellation, and to adjust the predetermined relationship in accordance with the actual relationship.
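The recalibration idea in this claim can be sketched with the standard pinhole-stereo triangulation relationship z_triang = f·b/d. The formula, the numeric values, and all names below are illustrative assumptions, not values from the patent.

```python
# Sketch: the predetermined relationship z_triang = f * b / d is adjusted so
# that it matches the actual relationship observed between the longitudinal
# coordinate z (from defocus) and the measured displacement d on the sensor.
def z_from_displacement(d, f, b):
    """Predetermined triangulation relationship z_triang = f * b / d."""
    return f * b / d

def recalibrated_baseline(z_dfd, d, f):
    """Actual relationship: the baseline consistent with z from defocus."""
    return z_dfd * d / f

f = 0.008          # focal length in metres (assumed)
b_nominal = 0.050  # nominal baseline in metres (assumed)
d = 0.0002         # measured displacement on the sensor in metres (assumed)
z_dfd = 2.2        # longitudinal coordinate z from defocus, metres (assumed)

z_triang = z_from_displacement(d, f, b_nominal)   # depth using nominal b
b_actual = recalibrated_baseline(z_dfd, d, f)     # baseline explaining z_dfd
z_adjusted = z_from_displacement(d, f, b_actual)  # now consistent with z_dfd
```

Here the "adjustment" simply replaces the nominal baseline by the one implied by the defocus measurement; the claim leaves the adjustment mechanism more general.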
- The detector (110) according to claim 11, wherein the evaluation device (146) is configured to replace the predetermined relationship with the actual relationship, or to determine a moving average and to replace the predetermined relationship with the moving average.
- The detector (110) according to claim 11, wherein the evaluation device (146) is configured to determine a difference between the longitudinal coordinate z and the triangulation longitudinal coordinate z_triang, to compare the determined difference with at least one threshold, and to adjust the predetermined relationship if the determined difference is greater than or equal to the threshold.
- The detector (110) according to claim 11, wherein the evaluation device (146) is configured to determine an estimate of a corrected relative spatial constellation using a mathematical model having parameters including various sensor signals, the spatial position of the sensor element, the image position, system characteristics, a displacement d on the sensor, a focal length f of an optical transfer device, a temperature, z_triang, a baseline b, an angle β between the illumination source and the baseline, and the longitudinal coordinate z, wherein the mathematical model is at least one mathematical model selected from the group consisting of a Kalman filter, a linear quadratic estimator, a Kalman-Bucy filter, a Stratonovich-Kalman-Bucy filter, a Kalman-Bucy-Stratonovich filter, a minimum variance estimator, a Bayesian estimator, a best linear unbiased estimator (BLUE), an invariant estimator, and a Wiener filter.
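Of the estimator family named in this claim, the simplest member is a scalar Kalman filter. The sketch below smooths a sequence of noisy depth readings under a constant-state model; the process variance q, the measurement variance r, and the readings themselves are illustrative assumptions.

```python
# Minimal scalar Kalman filter: fuse noisy depth readings into a smoothed
# estimate. Constant-state model (predict step only inflates the variance).
def kalman_1d(measurements, q=1e-4, r=0.04, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for zk in measurements:
        p = p + q              # predict: state assumed constant, variance grows
        k = p / (p + r)        # Kalman gain
        x = x + k * (zk - x)   # update with measurement zk
        p = (1.0 - k) * p      # updated estimate variance
        estimates.append(x)
    return estimates

# Noisy readings scattered around a true depth of 2.0 m (assumed):
readings = [2.1, 1.9, 2.05, 1.95, 2.02, 1.98, 2.01, 1.99]
est = kalman_1d(readings, x0=readings[0])
```

The same recursion generalises to a state vector holding the relative spatial constellation parameters (baseline, angle β, and so on), with matrix-valued gain and covariances.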
- The detector (110) according to claim 1, wherein the evaluation device (146) is configured to determine at least one longitudinal region, the longitudinal region being given by the longitudinal coordinate z and an error interval ±ε, to determine at least one displacement region within the reference image (168) corresponding to the longitudinal region, to determine an epipolar line of the reference image (168), wherein the displacement region extends along the epipolar line, and to determine the reference feature along the epipolar line corresponding to the longitudinal coordinate z and to determine an extent of the displacement region along the epipolar line corresponding to the error interval ±ε.
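The displacement region of this claim can be sketched as follows: the longitudinal region z ± ε is converted, via the triangulation relationship d = f·b/z (pinhole model), into an interval of admissible displacements along the epipolar line, and only reference-feature candidates inside that interval are considered. All numeric values and names are illustrative assumptions.

```python
# Sketch: restrict the correspondence search to the displacement region
# along the epipolar line that is consistent with z +/- eps.
def displacement_region(z, eps, f, b):
    """Admissible displacement interval for the longitudinal region z +/- eps."""
    return (f * b / (z + eps), f * b / (z - eps))

def candidates_in_region(displacements, region):
    """Keep only candidate displacements inside the admissible interval."""
    lo, hi = region
    return [d for d in displacements if lo <= d <= hi]

f, b = 0.008, 0.050                          # focal length, baseline (assumed)
z, eps = 2.0, 0.2                            # depth and error interval (assumed)
region = displacement_region(z, eps, f, b)   # roughly (1.82e-4, 2.22e-4) metres
measured = [1.0e-4, 1.9e-4, 2.1e-4, 3.0e-4]  # candidate displacements (assumed)
matches = candidates_in_region(measured, region)
```

Shrinking the error interval ε narrows the region and thereby reduces the number of reference features that must be tested, which is the computational benefit the claim exploits.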
- The detector (110) according to claim 15, wherein the evaluation device (146) is configured to determine the displacement region for the second image position (154) of each reflection feature, to assign an epipolar line to the displacement region of each reflection feature by performing at least one of: assigning the epipolar line closest to the displacement region, assigning an epipolar line within the displacement region, and assigning the epipolar line closest to the displacement region along a direction orthogonal to the epipolar line, and to assign and determine at least one reference feature for each reflection feature by performing at least one of: assigning the reference feature closest to the assigned displacement region, assigning a reference feature within the assigned displacement region, assigning the reference feature closest to the assigned displacement region along the assigned epipolar line, and assigning a reference feature within the assigned displacement region along the assigned epipolar line.
- The detector (110) according to claim 15, wherein the evaluation device (146) is configured to match the reflection feature within the displacement region with the at least one reference feature.
- A detector system (116) for determining the position of at least one object (112), comprising at least one detector (110) according to any one of claims 1 to 17 and at least one beacon device (118) configured to direct at least one light beam toward the detector (110), wherein the beacon device (118) is at least one of attachable to the object (112), holdable by the object (112), and integratable into the object (112).
- A human-machine interface (120) for exchanging at least one item of information between a user (113) and a machine, comprising at least one detector system (116) according to claim 18, the human-machine interface (120) being designed to determine at least one position of the user (113) by means of the detector system (116) and to assign at least one item of information to the position, wherein the at least one beacon device (118) is at least one of directly or indirectly attachable to the user (113) and holdable by the user (113).
- An entertainment device (122) for performing at least one entertainment function, comprising at least one human-machine interface (120) according to claim 19, the entertainment device (122) being designed to allow a player to input at least one item of information by means of the human-machine interface (120) and to vary the entertainment function in accordance with the information.
Description
Detector for determining the position of at least one object

The present invention relates to a detector for determining the position of at least one object, to a method for determining a relative spatial constellation using at least one detector for determining the position of at least one object, and to a method for calibrating at least one detector. The present invention further relates to a human-machine interface for exchanging at least one item of information between a user and a machine, an entertainment device, a tracking system, a camera, a scanning system, and various uses of the detector device. The devices, methods, and uses according to the present invention may specifically be employed, for example, in various areas of daily life, gaming, traffic technology, production technology, security technology, photography such as digital photography or video photography for artistic, documentary, or technical purposes, medical technology, or science. Additionally, the present invention may be used for scanning one or more objects and/or landscapes, such as for generating a depth profile of an object or landscape, in fields such as architecture, surveying, archaeology, art, medicine, engineering, or manufacturing. However, other applications are also possible.

Optical 3D sensing methods can generally yield unreliable results in environments with multiple reflections, for example when bias light sources or reflective measurement targets are used. Furthermore, 3D sensing methods such as stereo cameras with imaging capability or triangulation using structured light often require high computational power to solve the correspondence problem. The required computational power, particularly in mobile devices, may necessitate costly processors or field-programmable gate arrays (FPGAs), and may entail heat dissipation with its attendant ventilation requirements and the resulting difficulty of waterproofing the housing, high costs due to power consumption, and additional measurement uncertainty.
The demand for high computational power may also render real-time applications, high frame rates, or even standard video frame rates of 25 frames per second impossible to realize. Many optical devices using triangulation imaging methods are disclosed in the prior art, for example structured-light methods or stereo methods. Examples include passive stereo methods using two cameras in a fixed relative orientation, active stereo methods using an additional light projector, and structured-light approaches using one light projector and one camera in a fixed relative orientation. To determine a depth image through triangulation, the correspondence problem must first be solved. Therefore, in passive stereo camera techniques, corresponding feature points must be reliably identified in the fields of view of both cameras. In the structured-light approach, the correspondence between a pre-stored pseudo-random light pattern and the projected pseudo-random light pattern must be determined. To solve these correspondence problems unambiguously, computationally demanding imaging algorithms must be used, such as algorithms that scale approximately quadratically with the number of points in the projected point pattern. For example, in a structured-light method using a stereo system comprising two detectors at a fixed relative distance, the light source projects a pattern, such as a pseudo-random, random, aperiodic, or irregular point pattern. Each detector generates an image of the reflection pattern, and the task of the image analysis is to identify corresponding features in the two images. Owing to the fixed relative positions, a corresponding feature point selected in one of the two images lies along an epipolar line in the other image. However, solving the so-called correspondence problem can be difficult.
In stereo and triangulation systems, the distances of all feature points along an epipolar line must be reasonably consistent with one another. Correspondence decisions cannot be made sequentially: if one correspondence is incorrect, it affects other feature points, for example by rendering them invisible. This usually leads to non-linear, e.g. quadratically scaling, evaluation algorithms. For example, US 2008/0240502 A1 and US 2010/0118123 A1 describe an apparatus for mapping an object comprising an illumination assembly, wherein the illumination assembly comprises a single transparency containing a fixed spot pattern. A light source transilluminates the single transparency with optical radiation so as to project the pattern onto the object. An image capture assembly captures an image of the pattern projected onto the object by means of the single transparency. A processor processes the image captured by the image capture assembly so as to reconstruct a three-dimensional map of the object. In addition, 3D sensing methods are known that use so-called structure from motion or pose estimation from motion, or shape from motion, for distance determination; for example, see "Pose Estimation using Both