US-12617341-B1 - Systems and methods for context-aware visibility improvements in low-visibility conditions
Abstract
A system for improving visibility in low-visibility conditions is implemented in a vehicle and includes at least one visible-light camera; at least one depth camera; a display unit; an optical combiner; and at least one processor. The processor is configured to: obtain information indicative of a relative position of a head or eyes of at least one user; and control the visible-light and depth cameras to capture, contemporaneously, visible-light and depth images of a region of a real-world environment in front of the vehicle. For a given visible-light image, the processor identifies, in a corresponding depth image, at least one image segment that represents at least one object present in the region. The processor then identifies at least one corresponding image segment in the given visible-light image whose visual information is occluded or degraded; generates an image by digitally superimposing at least one virtual element on the corresponding image segment; and displays the image via the display unit to produce a synthetic light field, which the optical combiner reflects towards the eyes of the user.
Inventors
- Joona Petrell
- Thomas Carlsson
Assignees
- DISTANCE TECHNOLOGIES Oy
Dates
- Publication Date
- 20260505
- Application Date
- 20250528
Claims (15)
- 1 . A system for improving visibility in low-visibility conditions, the system being implemented in a vehicle, the system comprising: at least one visible-light camera; at least one depth camera; a display unit; an optical combiner arranged on an optical path of the display unit and on an optical path of a real-world light field of a real-world environment, wherein the real-world light field is incoming via a windshield of the vehicle; and at least one processor configured to: obtain information indicative of a relative position of a head or eyes of at least one user with respect to a semi-reflective surface of the optical combiner; control the at least one visible-light camera and the at least one depth camera to capture, contemporaneously, a plurality of visible-light images and a plurality of depth images, of a region of the real-world environment that is in front of the vehicle, respectively; for a given visible-light image from amongst the plurality of visible-light images, identify, in a corresponding depth image from amongst the plurality of depth images, at least one image segment that represents at least one object present in the region of the real-world environment; identify at least one corresponding image segment in the given visible-light image whose visual information is any one of: occluded, degraded, based on: (i) the at least one image segment that is identified in the corresponding depth image, and (ii) the relative position of the head or the eyes of the at least one user with respect to the semi-reflective surface of the optical combiner; and generate an image by digitally superimposing at least one virtual element on the at least one corresponding image segment of the given visible-light image; and display the image via the display unit to produce a synthetic light field, wherein the optical combiner is employed to reflect the synthetic light field towards the eyes of the at least one user, whilst optically combining the synthetic light field
with the real-world light field; wherein the image is a light field image, and the display unit is a light field display unit comprising a multiscopic optical element configured to direct a first part and a second part of the synthetic light field towards a first eye and a second eye of an individual one of the at least one user, via the optical combiner, presenting a first image and a second image to the first eye and the second eye, respectively, wherein the at least one processor is configured to generate the light field image by utilising the first image and the second image.
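The multiscopic arrangement in claim 1 presents distinct first and second images to the user's two eyes. As an illustrative sketch only (the claim does not specify a pixel layout), a column-interleaved composition, as commonly used with lenticular-type multiscopic elements, can be modelled as:

```python
def interleave_views(first_image, second_image):
    """Build a light-field image by column-interleaving two per-eye views.

    Column interleaving is one common scheme for multiscopic optical
    elements (e.g. lenticular arrays); the patent does not mandate this
    particular layout -- it is an illustrative assumption. Images are
    lists of rows; rows are lists of pixels.
    """
    assert len(first_image) == len(second_image)
    out = []
    for row_a, row_b in zip(first_image, second_image):
        merged = []
        for x, (pa, pb) in enumerate(zip(row_a, row_b)):
            # even columns steer towards the first eye, odd columns
            # towards the second eye
            merged.append(pa if x % 2 == 0 else pb)
        out.append(merged)
    return out
```

The multiscopic element then directs even and odd columns to the first and second eyes respectively, so each eye perceives only its own view.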
- 2 . The system of claim 1 , further comprising at least one thermal imaging camera controlled to capture a plurality of thermal images of the region of the real-world environment that is in front of the vehicle, wherein the at least one processor is configured to: for the given visible-light image, identify, in a corresponding thermal image from amongst the plurality of thermal images, at least one image segment that represents the at least one object present in the region of the real-world environment; and identify the at least one corresponding image segment in the given visible-light image, based further on the at least one image segment that is identified in the corresponding thermal image.
- 3 . The system of claim 1 , further comprising at least one light sensor configured to detect illuminances in sub-regions of the region in the real-world environment, when capturing the given visible-light image and the corresponding depth image, wherein the at least one processor is further configured to identify the at least one corresponding image segment in the given visible-light image, based further on the detected illuminances in the sub-regions of the region in the real-world environment.
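Claim 3's use of per-sub-region illuminances can be sketched as a simple thresholding step. The threshold value and function name below are illustrative assumptions, not values taken from the patent:

```python
LOW_LUX = 40.0  # illustrative threshold (lux); the patent does not fix a value

def low_light_subregions(illuminances, threshold=LOW_LUX):
    """Return indices of sub-regions whose detected illuminance suggests
    that visual information in the visible-light image is likely degraded
    there (illustrative sketch of claim 3's criterion)."""
    return [i for i, lux in enumerate(illuminances) if lux < threshold]
```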
- 4 . The system of claim 1 , further comprising at least one sensor configured to detect the presence of a substance on at least one portion of a surface of the windshield, wherein the at least one processor is further configured to identify the at least one corresponding image segment in the given visible-light image, based on the location of the at least one portion of the surface of the windshield.
- 5 . The system of claim 1 , wherein the at least one processor is further configured to: obtain a three-dimensional (3D) model of the real-world environment, from a data repository; and generate the at least one virtual element, based on the 3D model of the real-world environment and a current location of the vehicle.
- 6 . The system of claim 1 , further comprising a geospatial navigation device configured to generate real-time navigation information for the vehicle, wherein the at least one processor is further configured to generate the at least one virtual element, based on the real-time navigation information.
- 7 . The system of claim 1 , wherein the at least one processor is further configured to: obtain information indicative of a current weather condition in the real-world environment; and perform the steps of obtaining the information indicative of the relative position, controlling the at least one visible-light camera and the at least one depth camera, identifying the at least one image segment, identifying the at least one corresponding image segment, generating the image, and displaying the image, based on the weather condition in the real-world environment.
- 8 . The system of claim 7 , further comprising at least one weather sensor, wherein the at least one processor is further configured to determine the current weather condition in the real-world environment, based on current sensor data collected by the at least one weather sensor and at least visual information represented in the given visible-light image, by utilising at least one neural network.
- 9 . The system of claim 1 , wherein the at least one processor is further configured to dynamically adjust at least one of: a brightness, a colour, a contrast, an illuminance, a sharpness, a transparency, of the at least one virtual element, based on at least one of: (i) a current weather condition in the real-world environment, (ii) an optical depth of the at least one object from the eyes of the at least one user.
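A minimal sketch of the dynamic adjustment in claim 9, here for transparency only; the linear blend, the 200 m range constant, and the function name are illustrative assumptions rather than anything the claim fixes:

```python
def element_transparency(visibility_m, depth_m, max_range_m=200.0):
    """Blend virtual-element transparency from weather visibility and
    object depth: a nearby object in clear weather needs only a faint
    overlay, while a distant object in fog needs an opaque one. The
    linear blend and constants are illustrative assumptions."""
    weather_factor = min(visibility_m / max_range_m, 1.0)   # 0 = dense fog
    depth_factor = min(depth_m / max_range_m, 1.0)          # 0 = very close
    # transparency in [0, 1]: lower (more opaque) when weather is bad
    # or the object is far from the user's eyes
    return round(0.5 * (weather_factor + (1.0 - depth_factor)), 3)
```

The same pattern extends to brightness, colour, contrast, illuminance, and sharpness by substituting the quantity being blended.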
- 10 . The system of claim 1 , wherein the at least one processor is further configured to identify the at least one corresponding image segment whose visual information is degraded, based further on at least one of: a contrast level, a blur level, a brightness level, a colour desaturation level, of the at least one corresponding image segment.
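The degradation cues recited in claim 10 can be sketched as scalar measurements on a grayscale patch. The thresholds below are illustrative assumptions, and the blur and colour-desaturation checks are only indicated in a comment:

```python
def is_degraded(patch, min_contrast=0.15, min_brightness=0.1):
    """Classify an image segment (a grayscale patch with values in
    [0, 1]) as degraded using simple contrast and brightness levels.
    Thresholds are illustrative; blur and colour-desaturation checks
    would follow the same pattern, applied to gradient energy and to
    per-channel spread respectively."""
    pixels = [p for row in patch for p in row]
    contrast = max(pixels) - min(pixels)          # low in fog or haze
    brightness = sum(pixels) / len(pixels)        # low in shadow or at night
    return contrast < min_contrast or brightness < min_brightness
```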
- 11 . A system for improving visibility in low-visibility conditions, the system being implemented in a vehicle, the system comprising: at least one visible-light camera; at least one depth camera; a display unit; an optical combiner arranged on an optical path of the display unit and on an optical path of a real-world light field of a real-world environment, wherein the real-world light field is incoming via a windshield of the vehicle; and at least one processor configured to: obtain information indicative of a relative position of a head or eyes of at least one user with respect to a semi-reflective surface of the optical combiner; control the at least one visible-light camera and the at least one depth camera to capture, contemporaneously, a plurality of visible-light images and a plurality of depth images, of a region of the real-world environment that is in front of the vehicle, respectively; for a given visible-light image from amongst the plurality of visible-light images, identify, in a corresponding depth image from amongst the plurality of depth images, at least one image segment that represents at least one object present in the region of the real-world environment; identify at least one corresponding image segment in the given visible-light image whose visual information is any one of: occluded, degraded, based on: (i) the at least one image segment that is identified in the corresponding depth image, and (ii) the relative position of the head or the eyes of the at least one user with respect to the semi-reflective surface of the optical combiner; generate an image by digitally superimposing at least one virtual element on the at least one corresponding image segment of the given visible-light image; and display the image via the display unit to produce a synthetic light field, wherein the optical combiner is employed to reflect the synthetic light field towards the eyes of the at least one user, whilst optically combining the synthetic light field with
the real-world light field; wherein when identifying the at least one image segment, the at least one processor is further configured to: process, by utilising a movement filtering technique, the corresponding depth image; and identify the at least one object as a moving object present in the region of the real-world environment, wherein when generating the image, the at least one processor is configured to generate a virtual element corresponding to the moving object.
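A movement filtering technique over consecutive depth images, as recited in claim 11, can be sketched as naive frame differencing; a real system would first compensate for the vehicle's ego-motion, which this illustrative snippet omits:

```python
def moving_pixels(depth_prev, depth_curr, min_delta=0.5):
    """Naive movement filter over two consecutive depth frames: mark
    pixels whose depth changed by more than min_delta metres, so that
    connected regions of marked pixels can be treated as moving objects.
    Ego-motion compensation for the vehicle itself is omitted in this
    sketch; min_delta is an illustrative value."""
    mask = []
    for row_p, row_c in zip(depth_prev, depth_curr):
        mask.append([abs(c - p) > min_delta for p, c in zip(row_p, row_c)])
    return mask
```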
- 12 . A system for improving visibility in low-visibility conditions, the system being implemented in a vehicle, the system comprising: at least one visible-light camera; at least one depth camera; a display unit; an optical combiner arranged on an optical path of the display unit and on an optical path of a real-world light field of a real-world environment, wherein the real-world light field is incoming via a windshield of the vehicle; and at least one processor configured to: obtain information indicative of a relative position of a head or eyes of at least one user with respect to a semi-reflective surface of the optical combiner; control the at least one visible-light camera and the at least one depth camera to capture, contemporaneously, a plurality of visible-light images and a plurality of depth images, of a region of the real-world environment that is in front of the vehicle, respectively; for a given visible-light image from amongst the plurality of visible-light images, identify, in a corresponding depth image from amongst the plurality of depth images, at least one image segment that represents at least one object present in the region of the real-world environment; identify at least one corresponding image segment in the given visible-light image whose visual information is any one of: occluded, degraded, based on: (i) the at least one image segment that is identified in the corresponding depth image, and (ii) the relative position of the head or the eyes of the at least one user with respect to the semi-reflective surface of the optical combiner; and generate an image by digitally superimposing at least one virtual element on the at least one corresponding image segment of the given visible-light image; and display the image via the display unit to produce a synthetic light field, wherein the optical combiner is employed to reflect the synthetic light field towards the eyes of the at least one user, whilst optically combining the synthetic light field
with the real-world light field; the system further comprising a tracker, wherein the at least one processor is further configured to: determine gaze directions of the eyes of the at least one user, by utilising the tracker; determine a focus depth at which the at least one user is gazing, based on the gaze directions of the eyes; detect when a difference between the focus depth and an optical depth at which the at least one virtual element is being presented via the synthetic light field is greater than a predefined threshold difference; and when it is detected that said difference is greater than the predefined threshold difference, skip generating the image and displaying the image.
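The skip condition of claim 12 reduces to a single comparison between the vergence-derived focus depth and the element's optical depth; the 2 m threshold below is an illustrative value, not one fixed by the claim:

```python
def should_skip_overlay(focus_depth_m, element_depth_m, threshold_m=2.0):
    """Claim 12's guard: if the user's focus depth (determined from
    tracked gaze directions) is far from the optical depth at which the
    virtual element would be presented, skip generating and displaying
    the image, so the overlay does not conflict with where the user is
    actually looking. The threshold is an illustrative value."""
    return abs(focus_depth_m - element_depth_m) > threshold_m
```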
- 13 . A system for improving visibility in low-visibility conditions, the system being implemented in a vehicle, the system comprising: at least one visible-light camera; at least one depth camera; a display unit; an optical combiner arranged on an optical path of the display unit and on an optical path of a real-world light field of a real-world environment, wherein the real-world light field is incoming via a windshield of the vehicle; and at least one processor configured to: obtain information indicative of a relative position of a head or eyes of at least one user with respect to a semi-reflective surface of the optical combiner; control the at least one visible-light camera and the at least one depth camera to capture, contemporaneously, a plurality of visible-light images and a plurality of depth images, of a region of the real-world environment that is in front of the vehicle, respectively; for a given visible-light image from amongst the plurality of visible-light images, identify, in a corresponding depth image from amongst the plurality of depth images, at least one image segment that represents at least one object present in the region of the real-world environment; identify at least one corresponding image segment in the given visible-light image whose visual information is any one of: occluded, degraded, based on: (i) the at least one image segment that is identified in the corresponding depth image, and (ii) the relative position of the head or the eyes of the at least one user with respect to the semi-reflective surface of the optical combiner; generate an image by digitally superimposing at least one virtual element on the at least one corresponding image segment of the given visible-light image; and display the image via the display unit to produce a synthetic light field, wherein the optical combiner is employed to reflect the synthetic light field towards the eyes of the at least one user, whilst optically combining the synthetic light field with
the real-world light field; the system further comprising a tracker, wherein the at least one virtual element comprises a plurality of virtual elements, and wherein the at least one processor is further configured to: determine gaze directions of the eyes of the at least one user, by utilising the tracker; determine a focus depth at which the at least one user is gazing, based on the gaze directions of the eyes; detect a difference between the focus depth and an optical depth at which respective ones of the plurality of virtual elements are to be presented via the synthetic light field; and perform digital superimposition of only those virtual elements on the at least one corresponding image segment for which said difference is less than a predefined threshold difference, when generating the image.
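Claim 13's per-element variant keeps only the virtual elements whose optical depth lies near the user's focus depth, rather than skipping the whole image; the element names and the threshold value in this sketch are illustrative:

```python
def elements_to_render(elements, focus_depth_m, threshold_m=2.0):
    """Claim 13 variant: from a list of (name, optical_depth_m) pairs,
    keep only those virtual elements whose depth difference from the
    user's focus depth is below the threshold; only these are digitally
    superimposed when generating the image. Names and the threshold are
    illustrative."""
    return [name for name, depth in elements
            if abs(depth - focus_depth_m) < threshold_m]
```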
- 14 . A system for improving visibility in low-visibility conditions, the system being implemented in a vehicle, the system comprising: at least one visible-light camera; at least one depth camera; a display unit; an optical combiner arranged on an optical path of the display unit and on an optical path of a real-world light field of a real-world environment, wherein the real-world light field is incoming via a windshield of the vehicle; and at least one processor configured to: obtain information indicative of a relative position of a head or eyes of at least one user with respect to a semi-reflective surface of the optical combiner; control the at least one visible-light camera and the at least one depth camera to capture, contemporaneously, a plurality of visible-light images and a plurality of depth images, of a region of the real-world environment that is in front of the vehicle, respectively; for a given visible-light image from amongst the plurality of visible-light images, identify, in a corresponding depth image from amongst the plurality of depth images, at least one image segment that represents at least one object present in the region of the real-world environment; identify at least one corresponding image segment in the given visible-light image whose visual information is any one of: occluded, degraded, based on: (i) the at least one image segment that is identified in the corresponding depth image, and (ii) the relative position of the head or the eyes of the at least one user with respect to the semi-reflective surface of the optical combiner; generate an image by digitally superimposing at least one virtual element on the at least one corresponding image segment of the given visible-light image; and display the image via the display unit to produce a synthetic light field, wherein the optical combiner is employed to reflect the synthetic light field towards the eyes of the at least one user, whilst optically combining the synthetic light field with
the real-world light field; wherein prior to identifying the at least one image segment in the corresponding depth image, and identifying the at least one corresponding image segment in the given visible-light image, the at least one processor is further configured to: reproject the corresponding depth image, from a camera pose of the at least one depth camera to a head pose of the at least one user or an eye pose of a given eye of the at least one user; and reproject the given visible-light image, from a camera pose of the at least one visible-light camera to the head pose of the at least one user or the eye pose of the given eye of the at least one user, based on depth information in the corresponding depth image.
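The reprojection in claim 14 can be sketched for a single depth-image pixel under a pinhole camera model; the model, the intrinsics (fx, fy, cx, cy), and the function names are assumptions for illustration, since the claim does not fix a camera model:

```python
def reproject_point(u, v, depth, fx, fy, cx, cy, cam_to_eye):
    """Reproject one depth-image pixel from the depth camera's pose to
    the user's eye pose (the per-pixel core of claim 14). `cam_to_eye`
    is a callable applying the rigid transform between the two poses;
    the pinhole intrinsics are an illustrative assumption."""
    # unproject pixel (u, v) at `depth` into camera-space 3D coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    # move the 3D point into the eye's coordinate frame
    X, Y, Z = cam_to_eye((x, y, depth))
    # project back onto the eye's image plane (same intrinsics assumed)
    return (fx * X / Z + cx, fy * Y / Z + cy, Z)
```

The visible-light image is reprojected the same way, using the depth information to unproject each of its pixels before projecting into the eye's frame.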
- 15 . A method for improving visibility in low-visibility conditions, the method being implemented in a vehicle, the method comprising: obtaining information indicative of a relative position of a head or eyes of at least one user with respect to a semi-reflective surface of an optical combiner, the optical combiner being arranged on an optical path of a display unit and on an optical path of a real-world light field of a real-world environment, the real-world light field incoming via a windshield of the vehicle; controlling at least one visible-light camera and at least one depth camera to capture, contemporaneously, a plurality of visible-light images and a plurality of depth images, of a region of the real-world environment that is in front of the vehicle, respectively; for a given visible-light image from amongst the plurality of visible-light images, identifying, in a corresponding depth image from amongst the plurality of depth images, at least one image segment that represents at least one object present in the region of the real-world environment; identifying at least one corresponding image segment in the given visible-light image whose visual information is any one of: occluded, degraded, based on: (i) the at least one image segment that is identified in the corresponding depth image, and (ii) the relative position of the head or the eyes of the at least one user with respect to the semi-reflective surface of the optical combiner; and generating an image by digitally superimposing at least one virtual element on the at least one corresponding image segment of the given visible-light image; and displaying the image via the display unit to produce a synthetic light field, wherein the optical combiner is employed to reflect the synthetic light field towards the eyes of the at least one user, whilst optically combining the synthetic light field with the real-world light field; wherein the image is a light field image, and the display unit is a light field
display unit comprising a multiscopic optical element configured to direct a first part and a second part of the synthetic light field towards a first eye and a second eye of an individual one of the at least one user, via the optical combiner, presenting a first image and a second image to the first eye and the second eye, respectively, and the light field image is generated by utilising the first image and the second image.
Description
TECHNICAL FIELD

The present disclosure relates to systems for improving visibility in low-visibility conditions. Moreover, the present disclosure relates to methods for improving visibility in low-visibility conditions.

BACKGROUND

Visibility plays an essential role in ensuring safety and situational awareness in various environments, particularly in transportation, navigation, and the like. Clear visibility is essential for a user (for example, a driver, a machine operator, and the like) to perceive their surroundings accurately and respond to potential hazards. However, environmental factors such as snow, frost, rain, fog, haze, mist, smoke, dust, pollen, air pollution, windshield condensation, and other forms of aerosols in the air scatter light, making it difficult for the user to detect obstacles, road markings, pedestrians, and the like. Additionally, sudden transitions between bright and dark environments, such as entering a tunnel or moving from daylight into shadowed areas, can cause temporary vision impairment owing to the time required for the human eye to adjust to changing light conditions. Such factors significantly increase the risk of accidents and reduce the user's ability to operate vehicles or machinery safely.

Conventionally, to address these visibility challenges, several methods have been introduced to enhance the user's ability to see in difficult conditions. For example, vehicles are commonly equipped with headlights and fog lights to improve visibility in low-light environments, while defoggers, wipers, and anti-fog coatings help to maintain a clear windshield by preventing condensation and frost buildup. Polarized sunglasses and visors reduce glare from direct sunlight, thereby improving visual comfort.
Additionally, modern automotive systems incorporate infrared and Light Detection and Ranging (LIDAR)-based night-vision technologies, which detect obstacles and display them on dashboard screens, providing additional awareness in poor visibility conditions. However, the aforesaid existing solutions face significant limitations. Firstly, lighting-based technologies become ineffective in extreme weather conditions where light is scattered or reflected, reducing contrast instead of improving visibility. Secondly, windshield treatments only address surface obstructions and do not help to detect hidden objects (such as pedestrians, stationary hazards, and the like) beyond a given field of view. Thirdly, night-vision technologies provide useful information but require the user to look away from the road to check a separate display, which can be distracting and impractical in fast-moving situations. Furthermore, these systems do not adapt dynamically to different types of visibility impairments, such as physical obstructions or sudden light changes. Therefore, in light of the foregoing discussion, there exists a need to overcome the aforementioned drawbacks.

SUMMARY

The aim of the present disclosure is to provide a system and a method for improving visibility for a user in low-visibility conditions of a real-world environment by digitally superimposing virtual elements derived from depth data. This aim is achieved by a system and a method for improving visibility in low-visibility conditions, as defined in the appended independent claims, to which reference is made. Advantageous features are set out in the appended dependent claims.
Throughout the description and claims of this specification, the words “comprise”, “include”, “have”, and “contain” and variations of these words, for example “comprising” and “comprises”, mean “including but not limited to”, and do not exclude other components, items, integers or steps not explicitly disclosed from also being present. Moreover, the singular encompasses the plural unless the context otherwise requires. In particular, where the indefinite article is used, the specification is to be understood as contemplating plurality as well as singularity, unless the context requires otherwise.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates a simplified exemplary implementation of a system for improving visibility in low-visibility conditions, in accordance with an embodiment of the present disclosure; FIG. 2A illustrates an exemplary representation of a region of a real-world environment that is in front of a vehicle, displayed on a windshield of said vehicle; FIG. 2B illustrates an exemplary representation of a given visible-light image of said region of the real-world environment; FIGS. 2C-E illustrate different scenarios of the region of the real-world environment in which at least one virtual element is digitally superimposed, in accordance with an embodiment of the present disclosure; and FIG. 3 illustrates steps of a method for improving visibility in low-visibility conditions, in accordance with an embodiment of the present disclosure.

DETAILED DESCRIPTION OF EMBODIMENTS

The following detail