US-12626343-B2 - Enhanced detection or repair of unwanted reflection artifacts in digital images

US 12626343 B2

Abstract

Devices, methods, and non-transitory computer readable media are disclosed herein to repair or mitigate the appearance of unwanted reflection artifacts in captured video image streams. These unwanted reflection artifacts often present themselves as brightly-colored spots that reflect the shape of a bright light source in the captured image. These artifacts, also referred to herein as “green ghosts” (due to often having a greenish tint), are typically located in regions of the captured images where there is not actually a bright light source located in the image. In fact, such unwanted reflection artifacts often present themselves on the image sensor across the principal point of the lens from where the actual bright light source in the captured image is located. Such devices, methods and computer readable media may be configured to detect, track, and repair such unwanted reflection artifacts in an intelligent fashion, e.g., leveraging images captured with varying exposure settings.

Inventors

  • Brian P. McCall
  • Todd G. Bell
  • Andrew K. McMahon

Assignees

  • APPLE INC.

Dates

Publication Date
2026-05-12
Application Date
2023-09-28

Claims (20)

  1. A method of repairing unwanted reflections in digital images, comprising: obtaining a first image captured by a first image capture device of an electronic device, wherein the first image is captured at a first exposure level; obtaining a second image captured by the first image capture device, wherein the second image is captured at a second exposure level, and wherein the second exposure level is underexposed relative to the first exposure level; detecting, in the second image, an estimated location of one or more light sources, wherein each of the one or more light sources meets a set of one or more light source criteria; for each of the one or more light sources detected in the second image: determining a candidate location in the first image of an unwanted reflection of the respective light source detected in the second image, wherein the candidate location is determined based, at least in part, on the estimated location of the respective light source detected in the second image; and repairing at least one unwanted reflection in the first image by modifying pixels in the first image at the determined candidate location of the respective at least one unwanted reflection.
  2. The method of claim 1, wherein determining a candidate location in the first image of an unwanted reflection of the respective light source detected in the second image further comprises: mirroring the estimated location of the respective light source detected in the second image across a principal point of a lens of the first image capture device.
  3. The method of claim 1, wherein the first exposure level comprises an EV0 exposure level and the second exposure level comprises an EV- exposure level.
  4. The method of claim 3, wherein the second exposure level is in the range of EV-10 to EV-5.
  5. The method of claim 1, wherein the first image and the second image are captured concurrently on a first image sensor of the first image capture device by using interleaved exposure levels across the first image sensor.
  6. The method of claim 1, wherein the second image is captured in immediate succession after the first image.
  7. The method of claim 1, wherein determining a candidate location in the first image of an unwanted reflection of a respective light source detected in the second image further comprises: determining one or more characteristics of the respective light source detected in the second image; and identifying the one or more characteristics of the respective light source at the candidate location in the first image.
  8. The method of claim 1, wherein determining a candidate location in the first image of an unwanted reflection of a respective light source detected in the second image further comprises: evaluating an amount of contrast enhancement that has been applied to the candidate location in the first image.
  9. The method of claim 1, wherein repairing a first one of the at least one unwanted reflection in the first image comprises: modifying color values of one or more pixels in the first image at the determined candidate location of the first unwanted reflection.
  10. The method of claim 1, wherein repairing a first one of the at least one unwanted reflection in the first image comprises: using a trained neural network to determine modifications to color values of one or more pixels in the first image at the determined candidate location of the first unwanted reflection.
  11. The method of claim 1, further comprising: detecting a presence of light flicker in the second image.
  12. The method of claim 11, wherein, in response to detecting a presence of light flicker in the second image, the method further comprises: determining a candidate location in the first image of an unwanted reflection of a light source based, at least in part, on a saturation level of pixels in the first image.
  13. The method of claim 11, wherein, in response to detecting a presence of light flicker in the second image, the method further comprises: obtaining a third image captured by the first image capture device, wherein at least a portion of the third image is captured at a third exposure level, and wherein the third exposure level is underexposed relative to the first exposure level and overexposed relative to the second exposure level; detecting, in the third image, an estimated location of one or more light sources, wherein each of the one or more light sources meets a set of one or more light source criteria; and for each of the one or more light sources detected in the third image: determining a candidate location in the first image of an unwanted reflection of the respective light source detected in the third image, wherein the candidate location is determined based, at least in part, on the estimated location of the respective light source detected in the third image.
  14. The method of claim 11, wherein, in response to detecting a presence of light flicker in the second image, the method further comprises: obtaining a third image captured by the first image capture device, wherein the third image is captured at the second exposure level, and wherein the first image capture device introduces a random amount of jitter into a timing of the capture of the third image; detecting, in the third image, an estimated location of one or more light sources, wherein each of the one or more light sources meets a set of one or more light source criteria; and for each of the one or more light sources detected in the third image: determining a candidate location in the first image of an unwanted reflection of the respective light source detected in the third image, wherein the candidate location is determined based, at least in part, on the estimated location of the respective light source detected in the third image.
  15. The method of claim 11, wherein detecting the presence of light flicker in the second image comprises at least one of the following: utilizing a flicker detector that is communicatively coupled to the first image capture device; performing a luminance analysis on the second image; performing a luminance analysis on the first image relative to the second image; or performing a luminance analysis on one or more images captured prior to the first image.
  16. A method of repairing unwanted reflections in digital images, comprising: obtaining a first image captured by a first image capture device of an electronic device, wherein the first image is captured at a first exposure level and has a first field of view (FOV); obtaining a second image captured by a second image capture device, wherein the second image is captured at a second exposure level and has a second FOV, wherein the second exposure level is underexposed relative to the first exposure level, and wherein the second FOV contains the first FOV; detecting, in the second image, an estimated location of one or more light sources, wherein each of the one or more light sources meets a set of one or more light source criteria; for each of the one or more light sources detected in the second image: determining a candidate location in the first image of an unwanted reflection of the respective light source detected in the second image, wherein the candidate location is determined based, at least in part, on the estimated location of the respective light source detected in the second image; and repairing at least one unwanted reflection in the first image by modifying pixels in the first image at the determined candidate location of the respective at least one unwanted reflection.
  17. The method of claim 16, wherein the first image and the second image are captured at a same moment in time.
  18. The method of claim 16, wherein determining a candidate location in the first image of an unwanted reflection of the respective light source detected in the second image further comprises: mirroring the estimated location of the respective light source detected in the second image across a principal point of a lens of the second image capture device; and mapping the mirrored estimated location of the respective light source detected in the second image into the first FOV.
  19. The method of claim 16, wherein the first exposure level comprises an EV0 exposure level.
  20. The method of claim 19, wherein the second exposure level comprises an EV- exposure level.
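The detect-mirror-repair pipeline of claim 1 can be sketched in a few lines of Python. This is an illustrative reconstruction, not the patented implementation: the single-centroid detector, the median-fill repair, and all thresholds are simplifying assumptions (a production system would label connected components and, per claim 10, might use a trained network for the repair step).

```python
import numpy as np

def detect_light_sources(underexposed, threshold=0.9):
    """Find bright regions that survive heavy underexposure (a stand-in for
    the patent's 'light source criteria'). Returns a single centroid for
    simplicity; a real detector would label connected components."""
    mask = underexposed >= threshold
    if not mask.any():
        return []
    rows, cols = np.nonzero(mask)
    return [(rows.mean(), cols.mean())]

def mirror_across_principal_point(location, principal_point):
    """Claim 2: the ghost's candidate location is the source's location
    reflected through the lens's principal point."""
    return (2 * principal_point[0] - location[0],
            2 * principal_point[1] - location[1])

def repair_ghost(image, candidate, radius=2):
    """Naive repair: overwrite a small patch at the candidate location with
    the image median (claim 10 suggests a trained network instead)."""
    out = image.copy()
    r0, c0 = int(round(candidate[0])), int(round(candidate[1]))
    h, w = out.shape
    rs = slice(max(r0 - radius, 0), min(r0 + radius + 1, h))
    cs = slice(max(c0 - radius, 0), min(c0 + radius + 1, w))
    out[rs, cs] = np.median(image)
    return out

def repair_unwanted_reflections(ev0_image, ev_minus_image, principal_point):
    """End-to-end sketch of claim 1: detect light sources in the underexposed
    frame, mirror each across the principal point, repair the EV0 frame."""
    out = ev0_image
    for source in detect_light_sources(ev_minus_image):
        candidate = mirror_across_principal_point(source, principal_point)
        out = repair_ghost(out, candidate)
    return out
```

Note how the underexposed (EV-) frame does the detection while the normally exposed (EV0) frame receives the repair, which is the central division of labor in claims 1 and 16.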

Description

TECHNICAL FIELD

This disclosure relates generally to the field of digital image processing. More particularly, but not by way of limitation, it relates to techniques for efficiently mitigating the appearance of unwanted reflection artifacts in video image streams.

BACKGROUND

The advent of portable integrated computing devices has caused a wide proliferation of cameras and other video capture-capable devices. These integrated computing devices commonly take the form of smartphones, tablets, or laptop computers, and typically include general purpose computers, cameras, sophisticated user interfaces including touch-sensitive screens, and wireless communications abilities through Wi-Fi, Bluetooth, LTE, HSDPA, New Radio (NR), and other cellular-based or wireless technologies. The wide proliferation of these integrated devices provides opportunities to use the devices' capabilities to perform tasks that would otherwise require dedicated hardware and software. For example, integrated computing devices, such as smartphones, tablets, and laptop computers typically have one or more embedded cameras. These cameras generally amount to lens/camera hardware modules that may be controlled through the use of a general-purpose computer using firmware and/or software (e.g., “Apps”) and a user interface, including touch-screen buttons, fixed buttons, and/or touchless controls, such as voice control. The integration of high-quality cameras into these integrated communication devices, such as smartphones, tablets, and laptop computers, has enabled users to capture and share images and videos in ways never before possible. It is now common for users' smartphones to be their primary image and video capture device of choice.
Cameras that are optimized for inclusion into integrated computing devices, and, particularly, into small or portable integrated computing devices, such as smartphones or tablets, may often face various constraints, e.g., processing power constraints, thermal constraints—and even physical size constraints—that cause manufacturers to make tradeoffs between using cameras with optimal image capture capabilities and those that will meet the constraints of the computing devices into which they are being integrated. In particular, unwanted artifacts may often appear in digital images captured by such integrated camera devices, e.g., due to the optics of the lenses used, sensor characteristics, and/or the aforementioned constraints faced by integrated image capture devices. One type of artifact that will be discussed in greater detail herein is referred to as an unwanted reflection artifact. These unwanted reflection artifacts often present themselves as brightly-colored spots, circles, rings, or halos that reflect the shape of a bright light source in the captured image. These artifacts, also referred to herein as “ghosts” or “green ghosts” (due to often having a greenish tint), are typically located in regions of the captured images where there is not actually a bright light source located in the image. In fact, such unwanted reflection artifacts often present themselves on the image sensor at a location mirrored across the principal point of the lens from where the actual bright light source in the captured image is located. Moreover, the position of these artifacts can change rapidly and unexpectedly during the capture of a video image stream, e.g., due to user hand shake, intentional movement of the camera to capture different parts of a scene over time, changes to the camera's focus and/or zoom levels, and the like. 
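The mirrored-position geometry just described is simple to state: if the lens's principal point projects to pixel coordinates (px, py), a light source imaged at (sx, sy) produces a ghost candidate at the point reflected through the principal point. A minimal sketch (the coordinate convention and function name are illustrative assumptions, not taken from the patent):

```python
def mirrored_ghost_location(source_xy, principal_point_xy):
    """Reflect the light source's pixel location through the lens's
    principal point to predict where its ghost artifact will appear."""
    sx, sy = source_xy
    px, py = principal_point_xy
    return (2 * px - sx, 2 * py - sy)
```

For example, a source at (100, 200) in a frame whose principal point sits at (960, 540) predicts a ghost near (1820, 880). Because the prediction is tied to the source position, it moves with the source under hand shake, panning, or zoom, which is why the artifact's location can change rapidly during video capture.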
Thus, in order to repair or mitigate the appearance of these unwanted reflection artifacts in captured video image streams, it would be desirable to have methods and systems that detect, track, and repair such unwanted reflection artifacts in an intelligent and efficient fashion, e.g., using one or more cameras whose exposure levels are manipulated in order to more efficiently and accurately detect unwanted reflection artifacts.

SUMMARY

Devices, methods, and non-transitory computer readable media are disclosed herein to repair or mitigate the appearance of unwanted reflection artifacts (also referred to herein as “ghosts” or “green ghosts”) in captured video image streams. Such devices, methods, and computer readable media may be configured to detect, track, and repair such unwanted reflection artifacts in an intelligent (e.g., machine learning-enabled) and efficient fashion. In particular, according to some embodiments, rather than obtaining saturated images of the light sources producing the so-called green ghosts, as is typically the case in today's camera devices, the light sources that produce green ghosts in digital images and videos may intentionally be captured with exposure settings that render images of such light sources more clearly, thereby providing algorithms that detect green ghosts in digital images with better information about the sources of light that would produce such green ghosts.
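The EV0/EV- terminology above maps directly onto exposure arithmetic: each negative EV step halves the light gathered, so a capture in the EV-5 to EV-10 range recited in claim 4 admits roughly 1/32 to 1/1024 of the EV0 exposure, dim enough that only genuinely bright light sources remain visible. A small helper illustrating the relationship (the function name is ours, not the patent's):

```python
def exposure_scale(ev_offset):
    """Multiplicative exposure factor for a given EV offset relative to EV0.

    Each EV step doubles (positive offset) or halves (negative offset) the
    exposure, so an EV-5 capture gathers 1/32 of the light of an EV0 capture.
    """
    return 2.0 ** ev_offset
```

This is why saturated highlights in the EV0 frame resolve into well-defined, unsaturated light-source shapes in the underexposed frame, giving the ghost detector far better information about the sources it must mirror and repair.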