US-20260127710-A1 - ADAPTIVE COLOR MATCHING AND ENHANCEMENT OF IMAGE FRAMES IN VIDEO SEE-THROUGH (VST) EXTENDED REALITY (XR) OR OTHER APPLICATIONS
Abstract
An apparatus includes at least one imaging sensor configured to capture multiple image frames of a scene. The apparatus also includes at least one processing device configured to obtain at least two of the captured image frames, generate at least one color correction model based on the at least two captured image frames, and apply the at least one color correction model to one or more of the at least two captured image frames in order to obtain color-matched image frames. The color-matched image frames have colors that are more similar to each other compared to colors of the at least two captured image frames.
Inventors
- Yingen Xiong
Assignees
- SAMSUNG ELECTRONICS CO., LTD.
Dates
- Publication Date
- 20260507
- Application Date
- 20250311
Claims (20)
- 1 . An apparatus comprising: at least one imaging sensor configured to capture multiple image frames of a scene; and at least one processing device configured to: obtain at least two of the captured image frames; generate at least one color correction model based on the at least two captured image frames; and apply the at least one color correction model to one or more of the at least two captured image frames in order to obtain color-matched image frames, the color-matched image frames having colors that are more similar to each other compared to colors of the at least two captured image frames.
- 2 . The apparatus of claim 1 , wherein the at least one processing device is further configured to: determine color histograms and signal-to-noise ratios associated with the at least two captured image frames; and determine whether to apply color correction based on the color histograms and the signal-to-noise ratios.
- 3 . The apparatus of claim 1 , wherein, to generate the at least one color correction model, the at least one processing device is configured to: determine how to adjust luminance values in foveation or overlapping regions of the at least two captured image frames using gamma correction; determine how to adjust chrominance values in the foveation or overlapping regions of the at least two captured image frames using linear correction; and identify parameters of the at least one color correction model based on the determinations.
- 4 . The apparatus of claim 1 , wherein, to apply the at least one color correction model, the at least one processing device is configured to one of: apply the at least one color correction model to foveation regions of the at least two captured image frames, the foveation regions representing areas of the scene on which a user's eyes are focused; or apply the at least one color correction model to overlapping regions of the at least two captured image frames, the overlapping regions including and larger than the foveation regions.
- 5 . The apparatus of claim 1 , wherein, to apply the at least one color correction model, the at least one processing device is configured to: apply the at least one color correction model to pixel data in portions of the at least two captured image frames; and propagate corrections made to the pixel data in the portions of the at least two captured image frames to pixel data in other portions of the at least two captured image frames.
- 6 . The apparatus of claim 1 , wherein the at least two captured image frames comprise at least one of: at least one image frame associated with a user's right eye and at least one image frame associated with a user's left eye; or at least one sequence of image frames.
- 7 . The apparatus of claim 1 , wherein the at least one processing device is further configured to: prior to generation of the at least one color correction model, convert the at least two captured image frames from a first image format that lacks luminance data to a second image format that includes luminance data; and after application of the at least one color correction model, convert the color-matched image frames from the second image format to the first image format or a third image format.
- 8 . The apparatus of claim 1 , wherein the at least one processing device is further configured to: apply a passthrough transformation to each of the color-matched image frames in order to generate transformed image frames; and render the transformed image frames for display.
- 9 . A method comprising: obtaining, using at least one imaging sensor of an electronic device, at least two of multiple captured image frames of a scene; generating, using at least one processing device of the electronic device, at least one color correction model based on the at least two captured image frames; and applying, using the at least one processing device, the at least one color correction model to one or more of the at least two captured image frames in order to obtain color-matched image frames, the color-matched image frames having colors that are more similar to each other compared to colors of the at least two captured image frames.
- 10 . The method of claim 9 , further comprising: determining color histograms and signal-to-noise ratios associated with the at least two captured image frames; and determining whether to apply color correction based on the color histograms and the signal-to-noise ratios.
- 11 . The method of claim 9 , wherein generating the at least one color correction model comprises: determining how to adjust luminance values in foveation or overlapping regions of the at least two captured image frames using gamma correction; determining how to adjust chrominance values in the foveation or overlapping regions of the at least two captured image frames using linear correction; and identifying parameters of the at least one color correction model based on the determinations.
- 12 . The method of claim 9 , wherein applying the at least one color correction model comprises one of: applying the at least one color correction model to foveation regions of the at least two captured image frames, the foveation regions representing areas of the scene on which a user's eyes are focused; or applying the at least one color correction model to overlapping regions of the at least two captured image frames, the overlapping regions including and larger than the foveation regions.
- 13 . The method of claim 9 , wherein applying the at least one color correction model comprises: applying the at least one color correction model to pixel data in portions of the at least two captured image frames; and propagating corrections made to the pixel data in the portions of the at least two captured image frames to pixel data in other portions of the at least two captured image frames.
- 14 . The method of claim 9 , wherein the at least two captured image frames comprise at least one of: at least one image frame associated with a user's right eye and at least one image frame associated with a user's left eye; or at least one sequence of image frames.
- 15 . The method of claim 9 , further comprising: prior to generation of the at least one color correction model, converting the at least two captured image frames from a first image format that lacks luminance data to a second image format that includes luminance data; and after application of the at least one color correction model, converting the color-matched image frames from the second image format to the first image format or a third image format.
- 16 . The method of claim 9 , further comprising: applying a passthrough transformation to each of the color-matched image frames in order to generate transformed image frames; and rendering the transformed image frames for display.
- 17 . A non-transitory machine readable medium containing instructions that when executed cause at least one processor of an electronic device to: obtain, using at least one imaging sensor of the electronic device, at least two of multiple captured image frames of a scene; generate at least one color correction model based on the at least two captured image frames; and apply the at least one color correction model to one or more of the at least two captured image frames in order to obtain color-matched image frames, the color-matched image frames having colors that are more similar to each other compared to colors of the at least two captured image frames.
- 18 . The non-transitory machine readable medium of claim 17 , further containing instructions that when executed cause the at least one processor to: determine color histograms and signal-to-noise ratios associated with the at least two captured image frames; and determine whether to apply color correction based on the color histograms and the signal-to-noise ratios.
- 19 . The non-transitory machine readable medium of claim 17 , wherein the instructions that when executed cause the at least one processor to generate the at least one color correction model comprise: instructions that when executed cause the at least one processor to: determine how to adjust luminance values in foveation or overlapping regions of the at least two captured image frames using gamma correction; determine how to adjust chrominance values in the foveation or overlapping regions of the at least two captured image frames using linear correction; and identify parameters of the at least one color correction model based on the determinations.
- 20 . The non-transitory machine readable medium of claim 17 , wherein the instructions that when executed cause the at least one processor to apply the at least one color correction model comprise: instructions that when executed cause the at least one processor to: apply the at least one color correction model to pixel data in portions of the at least two captured image frames; and propagate corrections made to the pixel data in the portions of the at least two captured image frames to pixel data in other portions of the at least two captured image frames.
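The model-generation steps recited in claims 3, 11, and 19 pair gamma correction for luminance with linear correction for chrominance. Below is a minimal sketch of one way such parameters could be fitted between a reference frame and a target frame, assuming (H, W, 3) luma/chroma arrays with luminance in (0, 1]. The least-squares fitting procedure and the function names are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def fit_color_correction(ref, tgt, eps=1e-6):
    """Fit parameters mapping `tgt` toward `ref`.

    `ref` and `tgt` are (H, W, 3) luma/chroma arrays with Y in (0, 1].
    Returns a gamma exponent for luminance and (gain, offset) pairs for
    the two chrominance channels.  Illustrative only; the patent does
    not specify a fitting procedure.
    """
    # Gamma correction for luminance: y_ref ~= y_tgt ** gamma, which is
    # linear in log space, so least squares reduces to a dot-product ratio.
    lt = np.log(np.clip(tgt[..., 0], eps, 1.0)).ravel()
    lr = np.log(np.clip(ref[..., 0], eps, 1.0)).ravel()
    gamma = float(lt @ lr) / float(lt @ lt)

    # Linear correction per chrominance channel: c_ref ~= gain * c_tgt + offset.
    chroma = []
    for c in (1, 2):
        gain, offset = np.polyfit(tgt[..., c].ravel(), ref[..., c].ravel(), 1)
        chroma.append((float(gain), float(offset)))
    return gamma, chroma

def apply_color_correction(tgt, gamma, chroma, eps=1e-6):
    """Apply fitted parameters, producing a frame color-matched to the reference."""
    out = tgt.astype(float).copy()
    out[..., 0] = np.clip(tgt[..., 0], eps, 1.0) ** gamma
    for c, (gain, offset) in zip((1, 2), chroma):
        out[..., c] = gain * tgt[..., c] + offset
    return out
```

Fitting in log space turns the gamma relationship into a linear one, so a single closed-form ratio recovers the exponent without iterative optimization.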
Description
CROSS-REFERENCE TO RELATED APPLICATION AND PRIORITY CLAIM

This application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application No. 63/716,072 filed on Nov. 4, 2024. This provisional patent application is hereby incorporated by reference in its entirety.

TECHNICAL FIELD

This disclosure relates generally to image processing systems. More specifically, this disclosure relates to adaptive color matching and enhancement of image frames in video see-through (VST) extended reality (XR) or other applications.

BACKGROUND

Extended reality (XR) systems are becoming increasingly popular, and numerous applications have been and are being developed for XR systems. Some XR systems (such as augmented reality or "AR" systems and mixed reality or "MR" systems) can enhance a user's view of his or her current environment by overlaying digital content (such as information or virtual objects) over the user's view of the current environment. For example, some XR systems can seamlessly blend virtual objects generated by computer graphics with real-world scenes.

SUMMARY

This disclosure relates to adaptive color matching and enhancement of image frames in video see-through (VST) extended reality (XR) or other applications. In a first embodiment, an apparatus includes at least one imaging sensor configured to capture multiple image frames of a scene. The apparatus also includes at least one processing device configured to obtain at least two of the captured image frames, generate at least one color correction model based on the at least two captured image frames, and apply the at least one color correction model to one or more of the at least two captured image frames in order to obtain color-matched image frames. The color-matched image frames have colors that are more similar to each other compared to colors of the at least two captured image frames.
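Claims 7 and 15 describe converting frames from an image format that lacks luminance data (such as RGB) into one that includes it before correction, then converting back afterward. The sketch below uses full-range BT.601 RGB-to-YCbCr coefficients as an assumed example; the patent does not name a specific color space or conversion.

```python
import numpy as np

# Full-range BT.601 RGB -> YCbCr matrix (assumed; the patent does not
# specify a particular luminance-bearing format).
_RGB2YCC = np.array([[ 0.299,     0.587,     0.114   ],
                     [-0.168736, -0.331264,  0.5     ],
                     [ 0.5,      -0.418688, -0.081312]])

def rgb_to_ycbcr(rgb):
    """Convert (H, W, 3) RGB in [0, 1] to YCbCr with Cb/Cr centered at 0.5."""
    ycc = rgb @ _RGB2YCC.T
    ycc[..., 1:] += 0.5
    return ycc

def ycbcr_to_rgb(ycc):
    """Invert rgb_to_ycbcr, recovering RGB from YCbCr."""
    tmp = ycc.copy()
    tmp[..., 1:] -= 0.5
    return tmp @ np.linalg.inv(_RGB2YCC).T
```

With the matrix inverted numerically, the round trip is exact up to floating-point error, so correction applied in YCbCr can be carried back to the display format without visible loss.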
In a second embodiment, a method includes obtaining, using at least one imaging sensor of an electronic device, at least two of multiple captured image frames of a scene. The method also includes generating, using at least one processing device of the electronic device, at least one color correction model based on the at least two captured image frames. The method further includes applying, using the at least one processing device, the at least one color correction model to one or more of the at least two captured image frames in order to obtain color-matched image frames. The color-matched image frames have colors that are more similar to each other compared to colors of the at least two captured image frames.

In a third embodiment, a non-transitory machine readable medium contains instructions that when executed cause at least one processor of an electronic device to obtain, using at least one imaging sensor of the electronic device, at least two of multiple captured image frames of a scene. The non-transitory machine readable medium also contains instructions that when executed cause the at least one processor to generate at least one color correction model based on the at least two captured image frames. The non-transitory machine readable medium further contains instructions that when executed cause the at least one processor to apply the at least one color correction model to one or more of the at least two captured image frames in order to obtain color-matched image frames. The color-matched image frames have colors that are more similar to each other compared to colors of the at least two captured image frames.

Any one or any combination of the following features may be used with the first, second, or third embodiment. Color histograms and signal-to-noise ratios associated with the at least two captured image frames may be determined, and a determination whether to apply color correction may be based on the color histograms and the signal-to-noise ratios.
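The optional histogram/SNR gate described above (and in claims 2, 10, and 18) could take a form like the following. The histogram-intersection similarity measure, the shift-based noise estimate, and both thresholds are illustrative assumptions; the patent does not specify how this decision is made.

```python
import numpy as np

def should_color_correct(frame_a, frame_b, hist_thresh=0.9, snr_floor_db=20.0):
    """Heuristic gate for color correction (thresholds are illustrative).

    Correction is worthwhile when the luminance histograms of the two
    frames diverge (low intersection) while both frames are clean enough
    (SNR above a floor) for a reliable model fit.
    """
    def luma_hist(frame):
        h, _ = np.histogram(frame[..., 0], bins=64, range=(0.0, 1.0))
        return h / h.sum()

    # Histogram intersection in [0, 1]; 1.0 means identical distributions.
    similarity = np.minimum(luma_hist(frame_a), luma_hist(frame_b)).sum()

    def snr_db(frame):
        # Crude global SNR estimate: signal power over the power of
        # row-to-row luminance differences, used as a noise proxy.
        noise = frame[1:, :, 0] - frame[:-1, :, 0]
        return 10.0 * np.log10(np.mean(frame[..., 0] ** 2) /
                               max(np.mean(noise ** 2), 1e-12))

    clean = min(snr_db(frame_a), snr_db(frame_b)) >= snr_floor_db
    return bool(similarity < hist_thresh and clean)
```

Gating on both signals avoids wasted work when the frames already match and avoids fitting a correction model to noise-dominated data.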
The at least one color correction model may be generated by determining how to adjust luminance values in foveation or overlapping regions of the at least two captured image frames using gamma correction, determining how to adjust chrominance values in the foveation or overlapping regions of the at least two captured image frames using linear correction, and identifying parameters of the at least one color correction model based on the determinations.

The at least one color correction model may be applied by one of: (i) applying the at least one color correction model to foveation regions of the at least two captured image frames (where the foveation regions represent areas of the scene on which a user's eyes are focused) or (ii) applying the at least one color correction model to overlapping regions of the at least two captured image frames (where the overlapping regions include and are larger than the foveation regions).

The at least one color correction model may be applied by (i) applying the at least one color correction model to pixel data in portions of the at least two captu