EP-4737985-A1 - CALIBRATING HEADS-UP DISPLAY USING INFRARED-RESPONSIVE MARKERS
Abstract
Disclosed is a system comprising: infrared (IR) light source(s) (104); IR camera(s) (106); an optical combiner (108, 208), wherein a set of IR-responsive markers (128) are located in at least one of: (i) within the optical combiner, (ii) on a semi-reflective surface (114) of the optical combiner; and processor(s) configured to: control the IR camera(s) to capture IR image(s) of the optical combiner, whilst controlling the IR light source(s) to emit IR light (112) towards the optical combiner; detect at least a subset of the set of IR-responsive markers in the IR image(s); for a given IR-responsive marker detected in the IR image(s), determine a deformation in a shape of the given IR-responsive marker with respect to a reference shape; and determine a curvature of the optical combiner, based on respective deformations in shapes of IR-responsive markers in at least said subset.
Inventors
- CARLSSON, THOMAS
- VEHKAPERÄ, VILLE
Assignees
- Distance Technologies Oy
Dates
- Publication Date: 2026-05-06
- Application Date: 2025-10-03
Claims (15)
- A system (100) comprising: at least one infrared (IR) light source (104); at least one IR camera (106); an optical combiner (108, 208) arranged on an optical path of a light field display unit (122, 204) and on an optical path of a real-world light field (206) of a real-world environment (210), wherein a semi-reflective surface (114) of the optical combiner faces the at least one IR camera, and wherein a set of IR-responsive markers (128) are located in at least one of: (i) within the optical combiner, (ii) on the semi-reflective surface of the optical combiner; and at least one processor (110, 214) configured to: control the at least one IR camera to capture at least one IR image of the optical combiner, whilst controlling the at least one IR light source to emit IR light (112) towards the optical combiner; detect at least a subset of the set of IR-responsive markers in the at least one IR image; for a given IR-responsive marker detected in the at least one IR image, determine a deformation in a shape of the given IR-responsive marker as captured in the at least one IR image with respect to a reference shape of the given IR-responsive marker; and determine a curvature of the optical combiner, based on respective deformations in shapes of IR-responsive markers in at least said subset of the set.
- The system (100) of claim 1, wherein the at least one IR camera (106) comprises at least a first IR camera (126a) and a second IR camera (126b) whose fields of view overlap at least partially, wherein the at least one processor (110, 214) is configured to: detect at least one IR-responsive marker in a first IR image and a second IR image that are captured simultaneously via the first IR camera and the second IR camera, respectively; determine a position of the at least one IR-responsive marker, based on a first pose and a second pose of the first IR camera and the second IR camera from which the first IR image and the second IR image are captured, respectively; and determine the curvature of the optical combiner (108, 208), based further on a change in the position of the at least one IR-responsive marker.
- The system (100) of claim 2, wherein the at least one processor (110, 214) is configured to: detect, in the first IR image and the second IR image, at least one object that is in a proximity of the optical combiner (108, 208); determine a position of the at least one object, based on the first pose of the first IR camera (126a) and the second pose of the second IR camera (126b); determine a relative position of the at least one IR-responsive marker with respect to the at least one object, based on the position of the at least one IR-responsive marker and the position of the at least one object; and determine the curvature of the optical combiner, based further on the relative position of the at least one IR-responsive marker with respect to the at least one object.
- The system (100) of claim 3, wherein the at least one processor (110, 214) is configured to determine a change in at least one of: a position, an orientation, of the optical combiner (108, 208), based on a change in the relative position of the at least one IR-responsive marker with respect to the at least one object.
- The system (100) of any of the preceding claims, further comprising at least one additional camera (116), wherein the at least one IR camera (106) lies in a field of view of the at least one additional camera, and wherein the at least one processor (110, 214) is configured to: control the at least one additional camera to capture an image of the at least one IR camera; detect the at least one IR camera in the image; determine a change in at least one of: a position, an orientation, of the at least one IR camera, based on a current location of the at least one IR camera in the image, and a reference location of the at least one IR camera in a reference image; and determine the curvature of the optical combiner (108, 208), based further on the change in the at least one of: the position, the orientation, of the at least one IR camera.
- The system (100) of any of the preceding claims, further comprising tracking means (120, 212) and the light field display unit (122, 204), wherein the at least one processor (110, 214) is configured to: determine a relative position of a first eye (216a) and of a second eye (216b) of at least one user (218) with respect to the optical combiner (108, 208), by utilising the tracking means; generate an input to be employed at the light field display unit for producing a synthetic light field (202), based on the relative position of the first eye and of the second eye of the at least one user with respect to the optical combiner, and the curvature of the optical combiner; and employ the input at the light field display unit to produce the synthetic light field, wherein the optical combiner is employed to reflect a first part and a second part of the synthetic light field towards the first eye and the second eye of the at least one user, respectively, whilst optically combining the first part and the second part of the synthetic light field with the real-world light field (206).
- The system (100) of claim 6, wherein the tracking means (120, 212) is implemented at least partially using the at least one IR camera (106).
- The system (100) of claim 6 or 7, further comprising: at least one additional IR camera (118) facing the light field display unit (122, 204); and optionally, at least one additional IR light source (124), wherein an additional set of IR-responsive markers are located on at least one optical layer of the light field display unit, and wherein the at least one processor (110, 214) is configured to: control the at least one additional IR camera to capture at least one additional IR image of the light field display unit, whilst controlling the at least one IR light source (104) or the at least one additional IR light source to emit IR light (112) towards the light field display unit; detect at least a subset of the additional set of IR-responsive markers in the at least one additional IR image; for a given IR-responsive marker detected in the at least one additional IR image, determine a deformation in a shape of the given IR-responsive marker as captured in the at least one additional IR image with respect to a reference shape of the given IR-responsive marker; and determine a curvature of the at least one optical layer of the light field display unit, based on respective deformations in shapes of IR-responsive markers in at least said subset of the additional set, wherein the input to be employed at the light field display unit is generated based further on the curvature of the at least one optical layer of the light field display unit.
- The system (100) of any of the preceding claims, wherein the set of IR-responsive markers (128) are implemented as absorptive IR-blocking markers, and wherein the semi-reflective surface (114) is IR reflecting, wherein the at least one processor (110, 214) is configured to: detect a face of at least one user (218) in the at least one IR image; and determine a relative position of a head or eyes of the at least one user with respect to the optical combiner (108, 208), based on a location of the face in the at least one IR image.
- A method comprising: controlling at least one infrared (IR) camera (106) to capture at least one IR image of an optical combiner (108, 208), whilst controlling at least one IR light source (104) to emit IR light (112) towards the optical combiner, wherein the optical combiner is arranged on an optical path of a light field display unit (122, 204) and on an optical path of a real-world light field (206) of a real-world environment (210), wherein a semi-reflective surface (114) of the optical combiner faces the at least one IR camera, and wherein a set of IR-responsive markers (128) are located in at least one of: (i) within the optical combiner, (ii) on the semi-reflective surface of the optical combiner; detecting at least a subset of the set of IR-responsive markers in the at least one IR image; for a given IR-responsive marker detected in the at least one IR image, determining a deformation in a shape of the given IR-responsive marker as captured in the at least one IR image with respect to a reference shape of the given IR-responsive marker; and determining a curvature of the optical combiner, based on respective deformations in shapes of IR-responsive markers in at least said subset of the set.
- The method of claim 10, wherein the at least one IR camera (106) comprises at least a first IR camera (126a) and a second IR camera (126b) whose fields of view overlap at least partially, wherein the method further comprises: detecting at least one IR-responsive marker in a first IR image and a second IR image that are captured simultaneously via the first IR camera and the second IR camera, respectively; determining a position of the at least one IR-responsive marker, based on a first pose and a second pose of the first IR camera and the second IR camera from which the first IR image and the second IR image are captured, respectively; and determining the curvature of the optical combiner (108, 208), based further on a change in the position of the at least one IR-responsive marker.
- The method of claim 11, further comprising: detecting, in the first IR image and the second IR image, at least one object that is in a proximity of the optical combiner (108, 208); determining a position of the at least one object, based on the first pose of the first IR camera (126a) and the second pose of the second IR camera (126b); determining a relative position of the at least one IR-responsive marker with respect to the at least one object, based on the position of the at least one IR-responsive marker and the position of the at least one object; and determining the curvature of the optical combiner, based further on the relative position of the at least one IR-responsive marker with respect to the at least one object.
- The method of claim 12, further comprising determining a change in at least one of: a position, an orientation, of the optical combiner (108, 208), based on a change in the relative position of the at least one IR-responsive marker with respect to the at least one object.
- The method of any of claims 10-13, further comprising: controlling at least one additional camera (116) to capture an image of the at least one IR camera (106), wherein the at least one IR camera lies in a field of view of the at least one additional camera; detecting the at least one IR camera in the image; determining a change in at least one of: a position, an orientation, of the at least one IR camera, based on a current location of the at least one IR camera in the image, and a reference location of the at least one IR camera in a reference image; and determining the curvature of the optical combiner (108, 208), based further on the change in the at least one of: the position, the orientation, of the at least one IR camera.
- The method of any of claims 10-14, wherein the set of IR-responsive markers (128) are implemented as absorptive IR-blocking markers, and wherein the semi-reflective surface (114) is IR reflecting, wherein the method further comprises: detecting a face of at least one user (218) in the at least one IR image; and determining a relative position of a head or eyes of the at least one user with respect to the optical combiner (108, 208), based on a location of the face in the at least one IR image.
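For illustration only (this sketch is not part of the claimed subject-matter), the deformation-based curvature determination of claims 1 and 10 can be outlined in code. All names, the circular reference shape, the ellipse-fit marker representation, and the tilt-gradient curvature proxy are assumptions made for this sketch; an actual implementation would fit a full surface model to the marker deformations.

```python
import math
from dataclasses import dataclass

@dataclass
class Marker:
    """An IR-responsive marker as detected in an IR image (assumed representation)."""
    marker_id: int
    x: float          # marker centre in the IR image (pixels)
    y: float
    major_px: float   # major axis of the ellipse fitted to the marker blob (pixels)
    minor_px: float   # minor axis of the fitted ellipse (pixels)

def local_tilt_deg(m: Marker) -> float:
    """A circular reference marker viewed at tilt theta foreshortens into an
    ellipse whose minor/major axis ratio is cos(theta); invert to recover theta."""
    ratio = max(0.0, min(1.0, m.minor_px / m.major_px))
    return math.degrees(math.acos(ratio))

def curvature_estimate(markers: list[Marker]) -> float:
    """Crude curvature proxy: mean change of local surface tilt per unit of
    image distance between successive markers (degrees per pixel).
    A flat combiner yields ~0; larger values indicate stronger bending."""
    if len(markers) < 2:
        return 0.0
    markers = sorted(markers, key=lambda m: (m.x, m.y))
    total, n = 0.0, 0
    for a, b in zip(markers, markers[1:]):
        d = math.hypot(b.x - a.x, b.y - a.y)
        if d > 0.0:
            total += abs(local_tilt_deg(b) - local_tilt_deg(a)) / d
            n += 1
    return total / n if n else 0.0
```

For example, markers whose blobs remain circular everywhere yield a curvature estimate of zero (flat combiner), while a marker foreshortened to half its reference width implies a local tilt of 60 degrees, and a tilt gradient across markers implies a bent combiner.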
Description
TECHNICAL FIELD
The present disclosure relates to systems for calibrating heads-up displays using infrared-responsive markers. The present disclosure also relates to methods for calibrating heads-up displays using infrared-responsive markers.

BACKGROUND
Glasses-free augmented-reality (AR) systems (such as automotive head-up displays (HUDs) or similar) have emerged as a significant advancement for presenting visual information to users without diverting their attention from their primary tasks, for example, driving a vehicle. Some HUDs utilise an optical combiner (for example, in a form of a windshield of the vehicle) which typically reflects a corresponding part of light emanating from a display towards a given eye of a user, in order to display the visual information to the user. Furthermore, said HUDs are typically designed for single-user scenarios, primarily due to their limited fields of view, and consequently have small eye boxes (namely, viewing areas). In such a case, either a curvature of the optical combiner does not pose any significant issues, or, even where said curvature may impact an overall image quality, static and pre-defined curvature compensation techniques are employed during AR rendering. However, when a field of view of the display and a viewing area increase in a HUD, the curvature of the optical combiner becomes highly significant, as it directly influences an accuracy of AR rendering. Further, a long-term reliability of the optical combiner faces significant challenges in real-world applications, particularly in dynamic environments such as moving vehicles. Over an operational lifetime of a vehicle, various types of stresses (for example, thermal expansion and contraction, tensile stress-induced deformations, mechanical wear and tear, and the like) are exerted on the vehicle (particularly, on the optical combiner).
This results in subtle yet consequential alterations in at least one of: a position, an orientation, the curvature, of the optical combiner. Some of these alterations also occur when the windshield of the vehicle (namely, the optical combiner) needs to be replaced or repaired. Thus, in such a case, employment of said static and pre-defined curvature compensation techniques is inefficient and unreliable. As a result, spatial reconstruction of 3D scenes and objects is compromised (namely, becomes inaccurate), and an overall viewing experience of the user is adversely affected, becoming unrealistic and non-immersive. Therefore, in light of the foregoing discussion, there exists a need to overcome the aforementioned drawbacks.

SUMMARY
The present disclosure seeks to provide a system and a method which facilitate a simple, yet accurate and reliable way to determine a curvature of the optical combiner, by way of determining deformations in shapes of IR-responsive markers as captured in infrared (IR) images with respect to respective reference shapes of said IR-responsive markers. The aim of the present disclosure is achieved by a system and a method which incorporate calibration of a heads-up display using infrared-responsive markers, as defined in the appended independent claims to which reference is made. Advantageous features are set out in the appended dependent claims.

Throughout the description and claims of this specification, the words "comprise", "include", "have", and "contain" and variations of these words, for example "comprising" and "comprises", mean "including but not limited to", and do not exclude other components, items, integers or steps not explicitly disclosed also to be present. Moreover, the singular encompasses the plural unless the context otherwise requires. In particular, where the indefinite article is used, the specification is to be understood as contemplating plurality as well as singularity, unless the context requires otherwise.
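Claims 2 and 11 additionally recite determining a marker's position from a first pose and a second pose of two IR cameras. As an illustrative, non-limiting sketch (the coordinate conventions, the ray parameterisation, and all names here are assumptions of this sketch, not features of the claims), the position can be recovered as the midpoint of the shortest segment between the two viewing rays back-projected from the cameras:

```python
def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _point_on_ray(p, t, d):
    return tuple(pi + t * di for pi, di in zip(p, d))

def triangulate(p1, d1, p2, d2):
    """Return the midpoint of the shortest segment between ray p1 + t1*d1
    (from the first IR camera) and ray p2 + t2*d2 (from the second IR camera).
    Solves the 2x2 normal equations of min |p1 + t1*d1 - (p2 + t2*d2)|^2."""
    r = _sub(p1, p2)
    a, b, c = _dot(d1, d1), _dot(d1, d2), _dot(d2, d2)
    d, e = _dot(d1, r), _dot(d2, r)
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        raise ValueError("viewing rays are parallel; no unique intersection")
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = _point_on_ray(p1, t1, d1)
    q2 = _point_on_ray(p2, t2, d2)
    return tuple((x + y) / 2.0 for x, y in zip(q1, q2))
```

Repeating this per frame for each marker gives the change in marker position on which the curvature determination of claims 2 and 11 is further based; the midpoint form also tolerates slightly skew rays arising from calibration noise.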
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1A and FIG. 1B illustrate an exemplary scenario of implementing a system for calibrating a heads-up display using infrared-responsive markers, in accordance with an embodiment of the present disclosure; FIG. 2 illustrates an exemplary scenario in which a synthetic light field is produced using a light field display unit, and is optically combined with a real-world light field using an optical combiner, in accordance with an embodiment of the present disclosure; and FIG. 3 illustrates steps of a method for calibrating a heads-up display using infrared-responsive markers, in accordance with an embodiment of the present disclosure.

DETAILED DESCRIPTION OF EMBODIMENTS
The following detailed description illustrates embodiments of the present disclosure and ways in which they can be implemented. Although some modes of carrying out the present disclosure have been disclosed, those skilled in the art would recognize that other embodiments for carrying out or practising the present disclosure are also possible. In a first aspect, an embodiment of the present disclosure