CN-116631044-B - Feature point position detection method and electronic device
Abstract
The invention provides a feature point position detection method and an electronic device. The method includes obtaining a plurality of first relative positions of a plurality of feature points on a specific object relative to a first image capturing component, obtaining a plurality of second relative positions of the plurality of feature points on the specific object relative to a second image capturing component, and estimating a current three-dimensional position of each feature point based on a historical three-dimensional position of each feature point and the plurality of second relative positions in response to determining that the first image capturing component is unreliable. As a result, a user positioned in front of the 3D display will not see a three-dimensional image with severe 3D crosstalk when certain image capturing components become unreliable.
Inventors
- LI YANXIAN
- HUANG SHITING
- HUANG ZHAOSHI
Assignees
- Acer Incorporated (宏碁股份有限公司)
Dates
- Publication Date
- 20260508
- Application Date
- 20220211
Claims (10)
- 1. A feature point position detection method, adapted to an electronic device including a first image capturing component and a second image capturing component, the method comprising: acquiring a plurality of first relative positions of a plurality of feature points on a specific object relative to the first image capturing component; acquiring a plurality of second relative positions of the plurality of feature points on the specific object relative to the second image capturing component; and in response to determining that the first image capturing component is unreliable, estimating a current three-dimensional position of each of the feature points based on a historical three-dimensional position of each of the feature points and the plurality of second relative positions, wherein the plurality of second relative positions include a unit vector corresponding to each of the feature points, and estimating the current three-dimensional position of each of the feature points based on the historical three-dimensional position of each of the feature points and the plurality of second relative positions comprises: obtaining first distances between the feature points based on the historical three-dimensional position of each of the feature points; estimating a second distance between the second image capturing component and each of the feature points based on the unit vector corresponding to each of the feature points and the first distances between the feature points; and estimating the current three-dimensional position of each of the feature points based on a three-dimensional position of the second image capturing component and the second distance corresponding to each of the feature points.
- 2. The method of claim 1, wherein obtaining the plurality of first relative positions of the plurality of feature points on the specific object relative to the first image capturing component comprises: capturing, by the first image capturing component, a first image of the specific object; and identifying the plurality of feature points in the first image and determining the plurality of first relative positions of the plurality of feature points relative to the first image capturing component accordingly.
- 3. The method of claim 2, further comprising: in response to determining that a number of the plurality of feature points in the first image is lower than a preset threshold, determining that the first image capturing component is unreliable; and in response to determining that the number of the plurality of feature points in the first image is not lower than the preset threshold, determining that the first image capturing component is reliable.
- 4. The method of claim 1, further comprising: in response to determining that the first image capturing component and the second image capturing component are both reliable, estimating the current three-dimensional position of each of the feature points based on the plurality of first relative positions and the plurality of second relative positions.
- 5. The method of claim 1, wherein the plurality of feature points comprise a first feature point, a second feature point, and a third feature point, the second image capturing component has a first unit vector, a second unit vector, and a third unit vector corresponding to the first feature point, the second feature point, and the third feature point, respectively, and estimating the second distance between the second image capturing component and each of the feature points based on the unit vector corresponding to each of the feature points and the first distances between the feature points comprises: establishing a plurality of relational expressions based on the first unit vector, the second unit vector, the third unit vector, the first distance between the first feature point and the second feature point, the first distance between the second feature point and the third feature point, the first distance between the first feature point and the third feature point, the second distance between the second image capturing component and the first feature point, the second distance between the second image capturing component and the second feature point, and the second distance between the second image capturing component and the third feature point; and estimating the second distance between the second image capturing component and the first feature point, the second distance between the second image capturing component and the second feature point, and the second distance between the second image capturing component and the third feature point based on the plurality of relational expressions.
- 6. The method of claim 5, wherein the plurality of relational expressions comprise: a² = y² + z² − 2yz(e₂·e₃); b² = x² + z² − 2xz(e₁·e₃); and c² = x² + y² − 2xy(e₁·e₂), wherein e₁ is the first unit vector corresponding to the first feature point, e₂ is the second unit vector corresponding to the second feature point, e₃ is the third unit vector corresponding to the third feature point, a is the first distance between the second feature point and the third feature point, b is the first distance between the first feature point and the third feature point, c is the first distance between the first feature point and the second feature point, x is the second distance between the second image capturing component and the first feature point, y is the second distance between the second image capturing component and the second feature point, and z is the second distance between the second image capturing component and the third feature point.
- 7. The method of claim 5, wherein the plurality of second relative positions are obtained at a t-th time point, the historical three-dimensional position of each of the feature points is obtained at a (t−k)-th time point, t is an index value, k is a positive integer, and the relative positions of the first feature point, the second feature point, and the third feature point with respect to one another are constant between the t-th time point and the (t−k)-th time point.
- 8. The method of claim 1, wherein the electronic device is a three-dimensional display, and the first image capturing component and the second image capturing component belong to a two-pupil camera on the three-dimensional display.
- 9. The method of claim 8, wherein the specific object is a human face, and after estimating the current three-dimensional position of each of the feature points based on the historical three-dimensional position of each of the feature points and the plurality of second relative positions, the method further comprises: obtaining a plurality of eye feature points corresponding to the two eyes on the human face; and determining three-dimensional display content of the three-dimensional display based on the plurality of eye feature points.
- 10. An electronic device, comprising: a first image capturing component; a second image capturing component; and a processor coupled to the first image capturing component and the second image capturing component and configured to: acquire a plurality of first relative positions of a plurality of feature points on a specific object relative to the first image capturing component; acquire a plurality of second relative positions of the plurality of feature points on the specific object relative to the second image capturing component; and in response to determining that the first image capturing component is unreliable, estimate a current three-dimensional position of each of the feature points based on a historical three-dimensional position of each of the feature points and the plurality of second relative positions, wherein the plurality of second relative positions include a unit vector corresponding to each of the feature points, and estimating the current three-dimensional position of each of the feature points based on the historical three-dimensional position of each of the feature points and the plurality of second relative positions comprises: obtaining first distances between the feature points based on the historical three-dimensional position of each of the feature points; estimating a second distance between the second image capturing component and each of the feature points based on the unit vector corresponding to each of the feature points and the first distances between the feature points; and estimating the current three-dimensional position of each of the feature points based on a three-dimensional position of the second image capturing component and the second distance corresponding to each of the feature points.
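Claims 5 to 7 describe recovering the camera-to-feature distances x, y, z from the unit view vectors of one reliable camera and the inter-point distances a, b, c known from the historical three-dimensional positions; geometrically this is the classical three-point resection (P3P) setup built on the law of cosines. The following sketch shows how such a system could be solved numerically. The Newton iteration, the synthetic scene, and the use of a historical depth as the initial guess are illustrative assumptions, not the implementation prescribed by the patent.

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def replace_col(m, j, col):
    """Copy of m with column j replaced by col (for Cramer's rule)."""
    return [[col[i] if k == j else m[i][k] for k in range(3)] for i in range(3)]

def solve_depths(e1, e2, e3, a, b, c, init, iters=50):
    """Solve the law-of-cosines relations
        a^2 = y^2 + z^2 - 2yz(e2.e3)
        b^2 = x^2 + z^2 - 2xz(e1.e3)
        c^2 = x^2 + y^2 - 2xy(e1.e2)
    for the second distances (x, y, z) by Newton's method. `init` is a
    starting depth, e.g. taken from the historical three-dimensional
    positions; the system has multiple roots, so a good initial guess
    selects the physically meaningful one."""
    dot = lambda u, v: sum(p * q for p, q in zip(u, v))
    d12, d13, d23 = dot(e1, e2), dot(e1, e3), dot(e2, e3)
    x, y, z = init, init, init
    for _ in range(iters):
        f = [y*y + z*z - 2*y*z*d23 - a*a,
             x*x + z*z - 2*x*z*d13 - b*b,
             x*x + y*y - 2*x*y*d12 - c*c]
        J = [[0.0,           2*y - 2*z*d23, 2*z - 2*y*d23],
             [2*x - 2*z*d13, 0.0,           2*z - 2*x*d13],
             [2*x - 2*y*d12, 2*y - 2*x*d12, 0.0]]
        dJ = det3(J)
        if abs(dJ) < 1e-12:
            break
        dx = det3(replace_col(J, 0, f)) / dJ
        dy = det3(replace_col(J, 1, f)) / dJ
        dz = det3(replace_col(J, 2, f)) / dJ
        x, y, z = x - dx, y - dy, z - dz
        if max(abs(dx), abs(dy), abs(dz)) < 1e-12:
            break
    return x, y, z

# Hypothetical scene: camera at the origin, three face feature points.
P1, P2, P3 = (0.0, 0.0, 2.0), (0.5, 0.0, 2.1), (0.2, 0.6, 1.9)
norm = lambda p: math.sqrt(sum(v * v for v in p))
unit = lambda p: tuple(v / norm(p) for v in p)
dist = lambda p, q: norm(tuple(pi - qi for pi, qi in zip(p, q)))
e1, e2, e3 = unit(P1), unit(P2), unit(P3)
a, b, c = dist(P2, P3), dist(P1, P3), dist(P1, P2)
x, y, z = solve_depths(e1, e2, e3, a, b, c, init=1.8)
# The current 3D position of each feature point is then
# (second distance) * (unit vector), offset by the camera position.
P1_est = tuple(x * v for v in e1)
```

Once x, y, z are found, each current three-dimensional position follows directly from the camera position and the corresponding unit vector, as in the last step of claim 1.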
Description
Feature point position detection method and electronic device

Technical Field

The present invention relates to an image processing mechanism, and more particularly to a feature point position detection method and an electronic device.

Background

A current naked-eye 3D display first places the left-eye and right-eye pixels at the corresponding pixel positions on the display panel, and the liquid crystal in the 3D lens then steers the light paths so that the left-eye and right-eye images are projected into the corresponding eyes. To focus the images toward the left and right eyes, 3D lenses typically have an arcuate design so that the left-eye (right-eye) image can be focused and projected into the left (right) eye. However, due to the refraction path, some light may be projected into the wrong eye; that is, the left-eye (right-eye) image is misdirected into the right (left) eye. This phenomenon is called 3D crosstalk (cross talk). Naked-eye 3D displays are therefore often configured with eye tracking systems that provide the corresponding images to the two eyes once the positions of the user's eyes are obtained. Currently, most commonly used eye tracking methods use a two-pupil camera to perform face recognition and use triangulation to obtain the two eye positions. However, in some cases, the face recognition performed by the two-pupil camera may fail to accurately measure the eye positions because too few facial feature points are acquired, which may degrade the quality of the subsequent three-dimensional image presentation.

Disclosure of Invention

In view of the above, the present invention provides a feature point position detection method and an electronic device, which can be used to solve the above-mentioned technical problems.
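The background notes that the two eye positions are normally obtained by triangulation from a two-pupil camera. As an illustrative sketch (the 6 cm baseline, the midpoint-of-closest-approach formulation, and all coordinates are assumptions, not details from the patent), two-view triangulation of a feature point can be done as follows:

```python
import math

def triangulate(c1, d1, c2, d2):
    """Return the midpoint of closest approach between the rays
    c1 + s*d1 and c2 + t*d2 (camera centers c1, c2 and unit view
    directions d1, d2): a standard two-view triangulation."""
    dot = lambda u, v: sum(p * q for p, q in zip(u, v))
    r = tuple(p - q for p, q in zip(c2, c1))
    a11, a12, a22 = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    # Normal equations for minimizing |(c1 + s*d1) - (c2 + t*d2)|^2:
    #   a11*s - a12*t = r.d1
    #   a12*s - a22*t = r.d2
    det = -a11 * a22 + a12 * a12
    s = (-a22 * dot(r, d1) + a12 * dot(r, d2)) / det
    t = (-a12 * dot(r, d1) + a11 * dot(r, d2)) / det
    p1 = tuple(ci + s * di for ci, di in zip(c1, d1))
    p2 = tuple(ci + t * di for ci, di in zip(c2, d2))
    return tuple((u + v) / 2 for u, v in zip(p1, p2))

# Hypothetical eye feature point seen by a two-pupil camera with an
# assumed 6 cm baseline.
P = (0.1, 0.2, 1.5)
c1, c2 = (0.0, 0.0, 0.0), (0.06, 0.0, 0.0)
unit = lambda v: tuple(x / math.sqrt(sum(w * w for w in v)) for x in v)
d1 = unit(P)
d2 = unit(tuple(p - c for p, c in zip(P, c2)))
eye = triangulate(c1, d1, c2, d2)
```

When one of the two cameras becomes unreliable, this triangulation is no longer possible, which is exactly the situation the claimed single-camera fallback addresses.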
The invention provides a feature point position detection method adapted to an electronic device that includes a first image capturing component and a second image capturing component. The method includes obtaining a plurality of first relative positions of a plurality of feature points on a specific object relative to the first image capturing component, obtaining a plurality of second relative positions of the plurality of feature points on the specific object relative to the second image capturing component, and estimating a current three-dimensional position of each feature point based on a historical three-dimensional position of each feature point and the plurality of second relative positions in response to determining that the first image capturing component is unreliable. The invention also provides an electronic device, which includes a first image capturing component, a second image capturing component, and a processor. The processor is coupled to the first image capturing component and the second image capturing component and is configured to obtain a plurality of first relative positions of a plurality of feature points on a specific object relative to the first image capturing component, obtain a plurality of second relative positions of the plurality of feature points on the specific object relative to the second image capturing component, and estimate a current three-dimensional position of each feature point based on a historical three-dimensional position of each feature point and the plurality of second relative positions in response to determining that the first image capturing component is unreliable.

Drawings

The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention. FIG.
1 is a schematic diagram of an electronic device according to an embodiment of the invention; FIG. 2 is a flow chart of a feature point position detection method according to an embodiment of the invention; FIG. 3 is a schematic diagram of facial feature points according to an embodiment of the invention; FIG. 4 is a schematic diagram illustrating estimation of the current three-dimensional position of feature points according to an embodiment of the invention; FIG. 5 is a diagram illustrating an application scenario for determining the current three-dimensional position of each feature point according to an embodiment of the invention.

Detailed Description

Reference will now be made in detail to the exemplary embodiments of the present invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers are used throughout the drawings and the description to refer to the same or like parts. Referring to FIG. 1, a schematic diagram of an electronic device according to an embodiment of the invention is shown. In various embodiments, the electronic device 100 may be implemented as a variety of smart devices and/or computer devices. In some embo