US-12627889-B2 - Method and display apparatus incorporating gaze-based motion blur compensation
Abstract
Disclosed is a method that includes detecting a beginning of a movement of a user's gaze by processing gaze-tracking data, collected by a gaze-tracking means; predicting a motion blur in an image which is to be captured by at least one camera during the movement of the user's gaze, using a portion of the gaze-tracking data that corresponds to the beginning of the movement of the user's gaze; and compensating for the predicted motion blur while capturing the image by controlling the at least one camera.
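As a rough, non-authoritative sketch of the three steps in the abstract (detect the beginning of a gaze movement, predict the motion blur, compensate during capture), the following Python fragment shows one way the logic could fit together. All function names, units, and the threshold value are illustrative assumptions, not details taken from the patent:

```python
# Illustrative sketch only: detect gaze-movement onset, predict blur,
# and derive a counter-shift. Names, units, and thresholds are assumed.
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float  # timestamp in seconds
    x: float  # gaze point, normalized display coordinates
    y: float

def gaze_velocity(a: GazeSample, b: GazeSample) -> tuple:
    """Finite-difference gaze velocity between two consecutive samples."""
    dt = b.t - a.t
    return ((b.x - a.x) / dt, (b.y - a.y) / dt)

def movement_began(v, threshold=0.5) -> bool:
    """Detect the beginning of a gaze movement: speed exceeds a threshold."""
    return (v[0] ** 2 + v[1] ** 2) ** 0.5 > threshold

def predict_blur(v, shutter_s: float):
    """Predicted blur vector: gaze velocity integrated over the exposure."""
    return (v[0] * shutter_s, v[1] * shutter_s)

def compensation_shift(blur):
    """Counter-motion: opposite in direction, equal in magnitude."""
    return (-blur[0], -blur[1])
```

The key property, per the abstract, is that the induced image motion exactly opposes the predicted blur vector.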
Inventors
- Ville Timonen
- Kalle Karhu
Assignees
- Varjo Technologies Oy
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2023-12-29
Claims (15)
- 1 . A method comprising: detecting a beginning of a movement of a user's gaze by processing gaze-tracking data, collected by a gaze-tracking means; predicting a motion blur in an image to be captured by at least one camera during the movement of the user's gaze, using a portion of the gaze-tracking data corresponding to the beginning of the movement of the user's gaze; and compensating for the predicted motion blur while capturing the image by controlling at least one component of the at least one camera during an exposure time for capturing the image such that motion of the image on an image sensor of the at least one camera is induced in a direction opposite to, and with a magnitude corresponding to, a gaze delta associated with a gaze point of the user.
- 2 . The method of claim 1 , wherein compensating for the predicted motion blur while capturing the image comprises: determining a gaze delta between two preceding consecutive images that were captured by the at least one camera previously, based on the gaze-tracking data and an image capture frame rate; controlling an image sensor of the at least one camera to capture N sub-images during an exposure time for capturing the image, wherein an offset between any two consecutive sub-images depends on N and the gaze delta; and combining the N sub-images for generating the image, wherein the image has a combined gaze-based offset with respect to its preceding image.
- 3 . The method of claim 1 , wherein compensating for the motion blur while capturing the image comprises: determining a gaze delta between two preceding consecutive images captured by the at least one camera, based on the gaze-tracking data and an image capture frame rate; and controlling at least one actuator to change a pose of one of: the at least one camera, an image sensor of the at least one camera, or a lens of the at least one camera, during an exposure time for capturing the image, in a continuous manner such that a movement of the image on the image sensor of the at least one camera matches a direction and a magnitude of the gaze delta, wherein the image has a continuous exposure and a constant gaze-based offset with respect to its preceding image.
- 4 . The method of claim 1 , further comprising post-processing the image using at least one image de-blurring algorithm employing deconvolution.
- 5 . The method of claim 1 , wherein the step of predicting the motion blur in the image which is to be captured by the at least one camera during the movement of the user's gaze comprises determining an amount and a direction of the motion blur, based on a shutter speed of the at least one camera and at least one of: a gaze velocity, a gaze acceleration, at the beginning of the movement of the user's gaze.
- 6 . The method of claim 1 , wherein the step of predicting the motion blur in the image which is to be captured by the at least one camera during the movement of the user's gaze comprises: processing head-tracking data, collected by a head-tracking means, for determining at least one of: a head velocity, a head acceleration, at the beginning of the movement of the user's gaze; and determining an amount and a direction of a global motion blur for an entirety of the image, based on a shutter speed of the at least one camera and the at least one of: the head velocity, the head acceleration, at the beginning of the movement of the user's gaze, wherein the motion blur comprises the global motion blur.
- 7 . The method of claim 6 , further comprising: receiving, from at least one depth sensor, a depth map indicative of optical depths of objects in a field of view of the at least one camera; and adjusting the at least one of: the head velocity, the head acceleration, at the beginning of the movement of the user's gaze, based on the depth map, wherein said adjustment is made prior to the step of determining the amount and the direction of the global motion blur for the entirety of the image.
- 8 . The method of claim 5 , further comprising determining the at least one of: the gaze velocity, the gaze acceleration, at the beginning of the movement of the user's gaze, based on an optical flow of at least one moving object that is present in a field of view of the at least one camera and that is to be captured in the image.
- 9 . The method of claim 1 , wherein processing the gaze-tracking data comprises: determining a gaze point in a field of view of the at least one camera; and detecting a change in the gaze point and determining at least one of: a gaze velocity, a gaze acceleration, based on the change in the gaze point; wherein the beginning of the movement of the user's gaze is detected when at least one of the following is true: a magnitude of the at least one of: the gaze velocity, the gaze acceleration, exceeds its corresponding predefined magnitude threshold; a direction of the at least one of: the gaze velocity, the gaze acceleration, exceeds its corresponding predefined angular threshold.
- 10 . A display apparatus comprising: at least one camera; a gaze-tracking means; at least one processor configured to: detect a beginning of a movement of a user's gaze by processing gaze-tracking data, collected by the gaze-tracking means; predict a motion blur in an image to be captured by the at least one camera during the movement of the user's gaze using a portion of the gaze-tracking data corresponding to the beginning of the movement of the user's gaze; and compensate for the predicted motion blur while capturing the image by controlling at least one component of the at least one camera during an exposure time for capturing the image, such that motion of the image on an image sensor of the at least one camera is induced in a direction opposite to, and with a magnitude corresponding to, a gaze delta associated with a gaze point of the user.
- 11 . The display apparatus of claim 10 , wherein when compensating for the predicted motion blur while capturing the image, the at least one processor is configured to: determine a gaze delta between two preceding consecutive images that were captured by the at least one camera previously, based on the gaze-tracking data and an image capture frame rate; control an image sensor of the at least one camera to capture N sub-images during an exposure time for capturing the image, wherein an offset between any two consecutive sub-images depends on N and the gaze delta; and combine the N sub-images for generating the image, wherein the image has a combined gaze-based offset with respect to its preceding image.
- 12 . The display apparatus of claim 10 , wherein when compensating for the motion blur while capturing the image, the at least one processor is configured to: determine a gaze delta between two preceding consecutive images captured by the at least one camera, based on the gaze-tracking data and an image capture frame rate; and control at least one actuator to change a pose of one of: the at least one camera, an image sensor of the at least one camera, or a lens of the at least one camera, during an exposure time for capturing the image, in a continuous manner such that a movement of the image on the image sensor of the at least one camera matches a direction and a magnitude of the gaze delta, wherein the image has a continuous exposure and a constant gaze-based offset with respect to its preceding image.
- 13 . The display apparatus of claim 10 , wherein when predicting the motion blur in the image which is to be captured by the at least one camera during the movement of the user's gaze, the at least one processor is configured to determine an amount and a direction of the motion blur, based on a shutter speed of the at least one camera and at least one of: a gaze velocity, a gaze acceleration, at the beginning of the movement of the user's gaze.
- 14 . The display apparatus of claim 10 , further comprising a head-tracking means, wherein when predicting the motion blur in the image which is to be captured by the at least one camera during the movement of the user's gaze, the at least one processor is configured to: process head-tracking data, collected by the head-tracking means, to determine at least one of: a head velocity, a head acceleration, at the beginning of the movement of the user's gaze; and determine an amount and a direction of a global motion blur for an entirety of the image, based on a shutter speed of the at least one camera and the at least one of: the head velocity, the head acceleration, at the beginning of the movement of the user's gaze, wherein the motion blur comprises the global motion blur.
- 15 . The display apparatus of claim 14 , further comprising at least one depth sensor, wherein the at least one processor is further configured to: receive, from the at least one depth sensor, a depth map indicative of optical depths of objects in a field of view of the at least one camera; and adjust the at least one of: the head velocity, the head acceleration, at the beginning of the movement of the user's gaze, based on the depth map, wherein said adjustment is made prior to the step of determining the amount and the direction of the global motion blur for the entirety of the image.
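The sub-image scheme of claims 2 and 11 above (capture N offset sub-images during one exposure and combine them so the resulting image carries a gaze-based offset) can be sketched as follows. The camera read-out callback, the equal spacing of offsets, and the averaging combine step are assumptions of this illustration, not details fixed by the claims:

```python
# Illustrative sketch of claims 2 and 11: accumulate N sub-images captured
# during one exposure, stepping the read-out offset by gaze_delta / N.
import numpy as np

def gaze_delta(gaze_prev, gaze_curr):
    """Gaze displacement between two consecutive captured frames.

    Per claims 2 and 11, this delta is derived from the gaze-tracking
    data sampled at the image-capture frame rate.
    """
    return np.subtract(gaze_curr, gaze_prev).astype(float)

def capture_compensated(read_sub_image, n, delta):
    """Combine n sub-images whose offsets advance by delta / n each step.

    `read_sub_image(offset)` stands in for a hypothetical camera API that
    returns one sub-exposure shifted by the given offset.
    """
    step = delta / n
    acc = None
    for i in range(n):
        sub = read_sub_image(i * step)
        acc = sub if acc is None else acc + sub
    return acc / n
```

In this sketch the combine step is a simple average, so the output retains unit energy while its content is spread along the gaze delta, matching the "combined gaze-based offset" the claims describe.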
Description
TECHNICAL FIELD

The present disclosure relates to methods incorporating gaze-based motion blur compensation. Moreover, the present disclosure also relates to display apparatuses incorporating gaze-based motion blur compensation.

BACKGROUND

Nowadays, mixed-reality (MR) devices are increasingly being used for entertainment, training and simulation, engineering, healthcare, and the like. Such MR devices present interactive MR environments to users, and the users often interact in such MR environments by moving their gaze to look at different objects, following moving objects, moving their heads to change their viewing perspectives, selecting objects, manipulating objects, and the like. To enable presentation of the MR environments, cameras arranged on the MR devices typically capture gaze-contingent images of the real-world environment in which the MR devices are being used. However, images captured during a movement of a user's gaze (and, additionally, of a user's head) are quite blurry, meaning that visual details (such as text, designs, textures, and the like) in such images are incomprehensible. In other words, such images exhibit motion blur, which is undesirable and degrades the user's visual experience.

Presently, some techniques are employed to minimize motion blur in such images. As an example, short exposure times are used when capturing images; but in such cases the images become very noisy (due to the reduced amount of light captured during the short exposure), and the noise corrupts visual features in the images. Noisy images are detrimental to the user's visual experience. As another example, the images could be post-processed by employing optical flow estimation to reduce motion blur. However, optical flows have inherent uncertainty and may mis-estimate motion, causing insufficient motion blur reduction or, worse, introducing even larger motion blur in the images through overcompensation.
Moreover, removing motion blur as a post-processing step involves deconvolution, but motion blur is not an invertible convolution operation. So motion blur removal via deconvolution is approximate at best, and is not effective enough. Therefore, in light of the foregoing discussion, there exists a need to overcome the aforementioned drawbacks.

SUMMARY

The aim of the present disclosure is to provide a method and a display apparatus that capture images while accurately and efficiently compensating for a motion blur which is likely to be introduced in said images due to movement of a user's gaze during image capturing. The aim of the present disclosure is achieved by a method and a display apparatus which incorporate gaze-tracking and estimation of motion blur prior to image capturing, for capturing blur-compensated images, as defined in the appended independent claims to which reference is made. Advantageous features are set out in the appended dependent claims.

Throughout the description and claims of this specification, the words “comprise”, “include”, “have”, and “contain” and variations of these words, for example “comprising” and “comprises”, mean “including but not limited to”, and do not exclude other components, items, integers or steps not explicitly disclosed also to be present. Moreover, the singular encompasses the plural unless the context otherwise requires. In particular, where the indefinite article is used, the specification is to be understood as contemplating plurality as well as singularity, unless the context requires otherwise.

BRIEF DESCRIPTION OF THE DRAWINGS

- FIG. 1A is a schematic illustration of a movement of a user's gaze when a display apparatus is in use; FIG. 1B illustrates a motion blur that is predicted in an image, while FIG. 1C illustrates the image that is captured while compensating for the predicted motion blur of FIG. 1B, in accordance with an embodiment of the present disclosure;
- FIG. 2 illustrates an exemplary manner of compensating for a predicted motion blur when capturing an image, in accordance with an embodiment of the present disclosure;
- FIG. 3 illustrates another exemplary manner of compensating for a predicted motion blur when capturing an image, in accordance with a different embodiment of the present disclosure;
- FIG. 4 illustrates steps of a method incorporating gaze-based motion blur compensation, in accordance with an embodiment of the present disclosure; and
- FIG. 5 illustrates a block diagram of an architecture of a display apparatus incorporating gaze-based motion blur compensation, in accordance with an embodiment of the present disclosure.

DETAILED DESCRIPTION OF EMBODIMENTS

The following detailed description illustrates embodiments of the present disclosure and ways in which they can be implemented. Although some modes of carrying out the present disclosure have been disclosed, those skilled in the art would recognize that other embodiments for carrying out or practising the present disclosure are also
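The background's point that deconvolution-based deblurring (as in the post-processing of claim 4) is approximate at best can be seen in a minimal 1-D Wiener-deconvolution sketch. The box blur kernel, the signal sizes, and the regularization constant below are illustrative assumptions, not part of the disclosure:

```python
# Minimal 1-D Wiener deconvolution sketch: the blur kernel's frequency
# response has near-zero components, so the inversion must be regularized
# and is therefore only approximate.
import numpy as np

def motion_blur_kernel(length: int, size: int) -> np.ndarray:
    """Box (linear motion) kernel of `length` taps, zero-padded to `size`."""
    k = np.zeros(size)
    k[:length] = 1.0 / length
    return k

def wiener_deconvolve(blurred, kernel, noise_power=1e-3):
    """Wiener filter: F_hat = G * conj(H) / (|H|^2 + noise_power)."""
    H = np.fft.fft(kernel)
    G = np.fft.fft(blurred)
    F = G * np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft(F))
```

The `noise_power` term prevents division by near-zero frequency components of the kernel, at the cost of leaving some residual blur — which is precisely why the disclosure compensates at capture time instead of relying on deconvolution alone.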