
US-12621427-B2 - Methods and systems for visual field testing using dynamic light point grids in virtual reality

US12621427B2

Abstract

A virtual eye test can be performed for visual field testing using a dynamic grid of light points in a virtual reality (VR) environment. The test can be conducted using an electronic device with a head-mounted display (HMD) and a camera. The device can generate and render a VR user interface in a three-dimensional virtual environment, simulating test scenarios with a dynamic grid of light points. The device can continuously track eye movements in response to one or more visual stimuli and analyze the detection and identification of light points to assess visual detection across the visual field.
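The test flow described in the abstract — present light points across a grid, detect each via gaze tracking, and tally detections per grid position — can be sketched as below. This is a minimal illustration, not the patented implementation: `run_visual_field_test`, `detect_fn`, and the simulated observer are hypothetical names, and the camera-based gaze detection is stubbed out as a callback.

```python
import random

# Hypothetical sketch of the test loop from the abstract: present light
# points on a grid, record whether gaze reached each one, and tally
# detection rates per grid position. All names are illustrative.

def run_visual_field_test(grid_positions, detect_fn, trials_per_point=3):
    """Present each grid position several times and record detection rates.

    detect_fn is a stand-in for camera-based gaze detection: it receives a
    grid position and returns True if the light point there was detected.
    """
    results = {}
    for pos in grid_positions:
        hits = 0
        for _ in range(trials_per_point):
            if detect_fn(pos):
                hits += 1
        results[pos] = hits / trials_per_point
    return results

# Example with a simulated observer whose far-left periphery is impaired.
grid = [(x, y) for x in (-60, -30, 0, 30, 60) for y in (-30, 0, 30)]
simulated_observer = lambda pos: random.random() < (0.2 if pos[0] <= -60 else 0.95)
field_map = run_visual_field_test(grid, simulated_observer)
```

Positions with detection rates well below their neighbors would correspond to the reduced-sensitivity areas or blind spots the claims describe mapping.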

Inventors

  • Steven Lee
  • Julia Zhen
  • ChyrSong Ting
  • Matthew James Golino
  • Justin Paul Dempsey
  • Jeffrey Joseph Fillingham

Assignees

  • Zenni Optical, Inc.

Dates

Publication Date
2026-05-05
Application Date
2024-08-29

Claims (20)

  1. A method of implementing a virtual eye test for visual field testing, comprising: at an electronic device including a head-mounted display (HMD) and a camera: generating a virtual reality (VR) user interface corresponding to a three-dimensional virtual environment; rendering the VR user interface on the HMD; simulating one or more test scenarios with a dynamic grid of light points in the VR user interface; and while simulating the one or more test scenarios, in real time: continuously tracking, using the camera, eye movements in response to one or more visual stimuli presented in the one or more test scenarios; and analyzing detection and identification of light points for assessing visual detection across a visual field, based on the eye movements; wherein simulating the one or more test scenarios comprises calibrating using a control group of users with predetermined visual field profiles to establish baseline performance metrics and validating accuracy of visual field assessment, prior to assessing the visual detection across the visual field.
  2. The method of claim 1, wherein the one or more visual stimuli comprise light points that appear randomly across a defined grid.
  3. The method of claim 1, wherein the one or more test scenarios comprise scenarios where light points of different intensities and sizes appear at random locations within a grid.
  4. The method of claim 1, wherein the one or more test scenarios comprise one or more scenarios to test different aspects of the visual field, including central and peripheral vision, under a plurality of lighting conditions.
  5. The method of claim 1, wherein the one or more test scenarios require identification of light points within milliseconds to a few seconds.
  6. The method of claim 1, wherein simulating the one or more test scenarios comprises generating an interactive grid of light points by calibrating to a user's visual field, presenting a dynamic grid, and displaying one or more light points randomly in the dynamic grid.
  7. The method of claim 1, wherein the dynamic grid comprises uniform and random grids.
  8. The method of claim 1, wherein the one or more test scenarios comprise generating and displaying a plurality of light points having varying intensities and sizes.
  9. The method of claim 1, wherein continuously tracking the eye movements comprises continuously monitoring gaze direction and fixation points, while recording responses to each light point.
  10. The method of claim 1, wherein continuously tracking the eye movements comprises mapping gaze direction and fixation points via high-precision eye-tracking sensors that capture accuracy and speed of visual detection.
  11. The method of claim 1, further comprising continuously tracking a predetermined gesture or pressing a button when detecting a light point and recording reaction time and accuracy, in addition to tracking gaze direction and fixation points.
  12. The method of claim 1, wherein assessing visual detection across the visual field comprises analyzing accuracy and speed of visual detection across the visual field, in real time.
  13. The method of claim 1, wherein assessing visual detection across the visual field comprises mapping out the visual field, identifying any areas with reduced sensitivity or blind spots.
  14. The method of claim 1, wherein assessing visual detection across the visual field comprises mapping out the visual field for glaucoma and retinal detachment.
  15. The method of claim 1, wherein the dynamic grid of light points covers a visual field range of up to 180 degrees horizontally and 135 degrees vertically, with light points appearing for durations between 200 milliseconds and 5 seconds.
  16. The method of claim 1, wherein the light points have intensities ranging from 10 cd/m² to 1000 cd/m² and sizes ranging from 0.1° to 2° in visual angle, allowing assessment of sensitivity to both small, dim objects and larger, brighter ones across the visual field.
  17. The method of claim 1, further comprising generating a color-coded visual field map in real-time, representing detection accuracy and speed across different areas of the visual field, and comparing the results to baseline metrics for immediate performance assessment.
  18. A method of implementing a virtual eye test for visual field testing, comprising: at an electronic device including a head-mounted display (HMD) and a camera: generating a virtual reality (VR) user interface corresponding to a three-dimensional virtual environment; rendering the VR user interface on the HMD; simulating one or more test scenarios with a dynamic grid of light points in the VR user interface, the dynamic grid of light points covering a visual field range of up to 180 degrees horizontally and 135 degrees vertically, with light points appearing for durations between 200 milliseconds and 5 seconds; and while simulating the one or more test scenarios, in real time: continuously tracking, using the camera, eye movements in response to one or more visual stimuli presented in the one or more test scenarios; analyzing detection and identification of light points for assessing visual detection across a visual field, based on the eye movements.
  19. The method of claim 18, wherein the light points have intensities ranging from 10 cd/m² to 1000 cd/m² and sizes ranging from 0.1° to 2° in visual angle, allowing assessment of sensitivity to both small, dim objects and larger, brighter ones across the visual field.
  20. A method of implementing a virtual eye test for visual field testing, comprising: at an electronic device including a head-mounted display (HMD) and a camera: generating a virtual reality (VR) user interface corresponding to a three-dimensional virtual environment; rendering the VR user interface on the HMD; simulating one or more test scenarios with a dynamic grid of light points in the VR user interface; and while simulating the one or more test scenarios, in real time: continuously tracking, using the camera, eye movements in response to one or more visual stimuli presented in the one or more test scenarios; analyzing detection and identification of light points for assessing visual detection across a visual field, based on the eye movements; and generating a color-coded visual field map in real-time, representing detection accuracy and speed across different areas of the visual field, and comparing the results to baseline metrics for immediate performance assessment.
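The stimulus parameters recited in claims 15, 16, 18, and 19 bound a concrete sampling space: a field of up to 180° horizontal and 135° vertical, durations of 200 ms to 5 s, intensities of 10 to 1000 cd/m², and sizes of 0.1° to 2° of visual angle. A generator that stays inside those ranges can be sketched as follows; the dataclass and function names are illustrative, not from the patent.

```python
import random
from dataclasses import dataclass

# Sketch of a light-point generator constrained to the parameter ranges
# recited in claims 15-16 and 18-19. Names are illustrative.

@dataclass
class LightPoint:
    azimuth_deg: float      # horizontal eccentricity, -90 to +90 (180 deg span)
    elevation_deg: float    # vertical eccentricity, -67.5 to +67.5 (135 deg span)
    duration_s: float       # on-screen duration, 0.2 to 5.0 s
    intensity_cd_m2: float  # luminance, 10 to 1000 cd/m^2
    size_deg: float         # angular size, 0.1 to 2.0 deg of visual angle

def random_light_point(rng=random):
    """Sample one stimulus uniformly within the claimed parameter ranges."""
    return LightPoint(
        azimuth_deg=rng.uniform(-90.0, 90.0),
        elevation_deg=rng.uniform(-67.5, 67.5),
        duration_s=rng.uniform(0.2, 5.0),
        intensity_cd_m2=rng.uniform(10.0, 1000.0),
        size_deg=rng.uniform(0.1, 2.0),
    )
```

Sampling intensity and size uniformly is an assumption for illustration; a real perimetry protocol would typically stage them (e.g., a staircase procedure) rather than draw them independently at random.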

Description

TECHNICAL FIELD

The present inventions relate to vision test technology. More specifically, methods, systems, devices, and non-transitory computer-readable storage media are applied to implement vision testing in an extended reality environment.

BACKGROUND

Traditional visual assessment methods have been the cornerstone of evaluating eye health and vision for many years. These methods are typically conducted in clinical environments, where specialized equipment and standardized procedures are used to ensure accurate and reliable results. The parameters for these assessments are generally fixed, reflecting the controlled nature of the clinical setting. Over time, these techniques have become the accepted standard for diagnosing and monitoring visual conditions, forming the basis of routine eye care practices in medical offices, hospitals, and specialized eye care facilities. Despite their widespread use, these methods have traditionally been limited to professional settings, where they can be conducted under the supervision of trained healthcare providers using dedicated equipment.

SUMMARY

The present disclosure relates to innovative methods and systems that can revolutionize vision care, making vision testing and other exams more accessible and affordable for patients. Additionally, it is contemplated that the principles and features of the present disclosure can be implemented in numerous other applications of display technology, including headsets, heads-up displays, and other micro-displays (e.g., microLED and microOLED), to address challenges and limitations inherent in such products and their uses. In accordance with at least some embodiments disclosed herein is the realization that traditional methods for visual assessment do not allow for dynamic adjustment of test parameters, leading to less accurate assessments, nor can they be implemented to test eyes and vision at home using household devices in a consistent and environment-locked manner.
Some embodiments are directed to a method of implementing a virtual vision test at an electronic device including a head-mounted display (HMD) and a camera. The method includes executing a user application configured to enable the virtual vision test; generating a virtual reality (VR) user interface corresponding to a three-dimensional (3D) virtual environment; focusing the camera on an eye area of a user wearing the electronic device; displaying, on the user interface, a visual stimulus corresponding to the virtual vision test; while displaying the visual stimulus, in real time, capturing a sequence of eye images using the camera of the electronic device; determining eye movement information including a temporal sequence of eyeball positions based on the sequence of eye images; and comparing the visual stimulus and the eye movement information to determine an eye health condition.

In some embodiments, a user application can be implemented by a head-mounted display configured to create a customized extended reality (XR) environment for a user engaged on an XR information platform. Products may be rendered for the user in a three-dimensional format in the XR environment, thereby facilitating eyewear selection and fitting. XR is an umbrella term encapsulating Augmented Reality (AR), Virtual Reality (VR), Mixed Reality (MR), and everything in between. In this application, any embodiments that apply a VR system can be implemented using an AR or MR system as well.

Some embodiments are directed to a method of implementing a virtual eye test for peripheral vision. The method is performed at an electronic device including a head-mounted display and a camera. The method includes generating a virtual reality (VR) user interface corresponding to a three-dimensional virtual environment. The method also includes rendering the VR user interface on the HMD. The method also includes simulating one or more spatial task scenarios in the VR user interface.
The method also includes, while simulating the one or more spatial task scenarios, in real time: continuously tracking, using the camera, gaze direction and peripheral responses in response to one or more visual stimuli presented in the one or more spatial task scenarios; and evaluating the gaze direction and peripheral responses for peripheral vision performance.

Some embodiments are directed to a method of implementing a virtual eye test for assessing visual field loss with interactive visual maps. The method is performed at an electronic device including a head-mounted display and a camera. The method includes generating a virtual reality (VR) user interface corresponding to a three-dimensional virtual environment. The method also includes rendering the VR user interface on the HMD. The method also includes simulating one or more interactive visual map scenarios in the VR user interface. The method also includes, while simulating the one or more interactive visual map scenarios, in real time: continuously tracking, using t
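The summary's core comparison step — matching a visual stimulus against a temporal sequence of eyeball positions to decide whether it was detected, and how quickly — can be sketched as follows. The tolerance and response-window values are assumptions for illustration, not figures from the patent, and the function name is hypothetical.

```python
import math

# Illustrative sketch of comparing a stimulus location against a temporal
# sequence of gaze samples: a light point counts as detected if gaze lands
# within an angular tolerance of it inside a response window. The 3-degree
# tolerance and 1-second window below are assumed values, not from the patent.

def detect_stimulus(stimulus_pos, gaze_samples, tolerance_deg=3.0, window_s=1.0):
    """Return (detected, reaction_time_s) for one stimulus presentation.

    stimulus_pos: (azimuth_deg, elevation_deg) of the light point.
    gaze_samples: list of (timestamp_s, azimuth_deg, elevation_deg),
    with t = 0 at stimulus onset.
    """
    for t, az, el in gaze_samples:
        if t > window_s:
            break
        if math.hypot(az - stimulus_pos[0], el - stimulus_pos[1]) <= tolerance_deg:
            return True, t  # gaze reached the stimulus: record reaction time
    return False, None      # no fixation near the stimulus within the window

# Example: gaze converges on a point at (20, 10) degrees after 0.35 s.
samples = [(0.0, 0.0, 0.0), (0.2, 12.0, 6.0), (0.35, 19.5, 9.8), (0.5, 20.1, 10.0)]
detected, rt = detect_stimulus((20.0, 10.0), samples)
# detected is True, rt == 0.35
```

Aggregating `(detected, reaction_time)` results per grid position would yield the accuracy-and-speed data that the described color-coded visual field map represents.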