US-12620072-B2 - Distortion combination and correction for final views in video see-through (VST) augmented reality (AR)
Abstract
A method includes obtaining an image captured by a see-through camera of a video see-through (VST) augmented reality (AR) device. The method also includes identifying a first distortion created by at least one see-through camera lens and identifying a second distortion created by at least one display lens of the VST AR device. The method further includes determining a combined distortion based on the first distortion and the second distortion. The method also includes pre-warping the image to offset the combined distortion such that the image is not distorted when the pre-warped image is viewed by a user through the at least one display lens of the VST AR device. In addition, the method includes presenting the pre-warped image to the user on at least one display of the VST AR device.
Inventors
- Yingen Xiong
- Christopher A. Peri
Assignees
- SAMSUNG ELECTRONICS CO., LTD.
Dates
- Publication Date
- 2026-05-05
- Application Date
- 2023-07-27
Claims (20)
- 1. A method comprising: obtaining an image captured by a see-through camera of a video see-through (VST) augmented reality (AR) device; identifying a first distortion created by at least one see-through camera lens; identifying a second distortion created by at least one display lens of the VST AR device; determining a combined distortion based on the first distortion and the second distortion; pre-warping the image to offset the combined distortion; and presenting the pre-warped image to a user on at least one display of the VST AR device; wherein pre-warping the image comprises: retrieving a specified distortion mesh configured to compensate for the combined distortion, wherein the specified distortion mesh is associated with a first color channel; applying the specified distortion mesh to pre-warp the first color channel of the image; accessing a first specified distortion difference between the first color channel and a second color channel; and applying the specified distortion mesh and the first specified distortion difference to pre-warp the second color channel of the image.
- 2. The method of claim 1, wherein pre-warping the image further comprises: accessing a second specified distortion difference between the first color channel and a third color channel; and applying the specified distortion mesh and the second specified distortion difference to pre-warp the third color channel of the image.
- 3. The method of claim 1, wherein the first specified distortion difference is passed to a shader instead of a distortion mesh for the second color channel.
- 4. The method of claim 1, wherein the combined distortion is based on one or more intrinsic properties of the at least one see-through camera lens.
- 5. The method of claim 1, wherein the combined distortion is based on one or more geometric distortion and chromatic aberration properties of the at least one display lens.
- 6. The method of claim 1, further comprising: performing a camera calibration operation to determine intrinsic and extrinsic parameters of the at least one see-through camera lens, the first distortion based on the intrinsic and extrinsic parameters of the at least one see-through camera lens; and performing a display lens calibration operation to determine intrinsic and extrinsic parameters of the at least one display lens, the second distortion based on the intrinsic and extrinsic parameters of the at least one display lens.
- 7. The method of claim 6, wherein: performing the camera calibration operation comprises generating at least one camera matrix and at least one distortion model; and performing the display lens calibration operation comprises generating reversed display lens distortions.
- 8. A video see-through (VST) augmented reality (AR) device comprising: at least one display comprising at least one display lens; at least one see-through camera comprising at least one see-through camera lens and configured to capture an image; and at least one processing device configured to: obtain the image captured by the at least one see-through camera; identify a first distortion created by the at least one see-through camera lens; identify a second distortion created by the at least one display lens; determine a combined distortion based on the first distortion and the second distortion; pre-warp the image to offset the combined distortion; and present the pre-warped image to a user on the at least one display; wherein, to pre-warp the image, the at least one processing device is configured to: retrieve a specified distortion mesh configured to compensate for the combined distortion, wherein the specified distortion mesh is associated with a first color channel; apply the specified distortion mesh to pre-warp the first color channel of the image; access a first specified distortion difference between the first color channel and a second color channel; and apply the specified distortion mesh and the first specified distortion difference to pre-warp the second color channel of the image.
- 9. The VST AR device of claim 8, wherein, to pre-warp the image, the at least one processing device is further configured to: access a second specified distortion difference between the first color channel and a third color channel; and apply the specified distortion mesh and the second specified distortion difference to pre-warp the third color channel of the image.
- 10. The VST AR device of claim 8, wherein the at least one processing device is configured to pass the first specified distortion difference to a shader instead of a distortion mesh for the second color channel.
- 11. The VST AR device of claim 8, wherein the combined distortion is based on one or more intrinsic properties of the at least one see-through camera lens.
- 12. The VST AR device of claim 8, wherein the combined distortion is based on one or more geometric distortion and chromatic aberration properties of the at least one display lens.
- 13. The VST AR device of claim 8, wherein the at least one processing device is further configured to: perform a camera calibration operation to determine intrinsic and extrinsic parameters of the at least one see-through camera lens, the first distortion based on the intrinsic and extrinsic parameters of the at least one see-through camera lens; and perform a display lens calibration operation to determine intrinsic and extrinsic parameters of the at least one display lens, the second distortion based on the intrinsic and extrinsic parameters of the at least one display lens.
- 14. The VST AR device of claim 13, wherein: to perform the camera calibration operation, the at least one processing device is configured to generate at least one camera matrix and at least one distortion model; and to perform the display lens calibration operation, the at least one processing device is configured to generate reversed display lens distortions.
- 15. A non-transitory machine readable medium containing instructions that when executed cause at least one processor to: obtain an image captured by a see-through camera of a video see-through (VST) augmented reality (AR) device; identify a first distortion created by at least one see-through camera lens; identify a second distortion created by at least one display lens of the VST AR device; determine a combined distortion based on the first distortion and the second distortion; pre-warp the image to offset the combined distortion; and present the pre-warped image to a user on at least one display of the VST AR device; wherein the instructions that when executed cause the at least one processor to pre-warp the image comprise instructions that when executed cause at least one processor to: retrieve a specified distortion mesh configured to compensate for the combined distortion, wherein the specified distortion mesh is associated with a first color channel; apply the specified distortion mesh to pre-warp the first color channel of the image; access a first specified distortion difference between the first color channel and a second color channel; and apply the specified distortion mesh and the first specified distortion difference to pre-warp the second color channel of the image.
- 16. The non-transitory machine readable medium of claim 15, wherein the instructions that when executed cause at least one processor to pre-warp the image further comprise instructions that when executed cause at least one processor to: access a second specified distortion difference between the first color channel and a third color channel; and apply the specified distortion mesh and the second specified distortion difference to pre-warp the third color channel of the image.
- 17. The non-transitory machine readable medium of claim 15, wherein the combined distortion is based on one or more intrinsic properties of the at least one see-through camera lens.
- 18. The non-transitory machine readable medium of claim 15, wherein the combined distortion is based on one or more geometric distortion and chromatic aberration properties of the at least one display lens.
- 19. The non-transitory machine readable medium of claim 15, wherein the instructions when executed cause the at least one processor to pass the first specified distortion difference to a shader instead of a distortion mesh for the second color channel.
- 20. The non-transitory machine readable medium of claim 15, further containing instructions that when executed cause the at least one processor to: perform a camera calibration operation to determine intrinsic and extrinsic parameters of the at least one see-through camera lens, the first distortion based on the intrinsic and extrinsic parameters of the at least one see-through camera lens; and perform a display lens calibration operation to determine intrinsic and extrinsic parameters of the at least one display lens, the second distortion based on the intrinsic and extrinsic parameters of the at least one display lens.
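As recited in claims 1-3, a single distortion mesh pre-warps a first color channel, while the remaining channels reuse that mesh plus small per-channel "distortion differences" (which can be passed to a shader in place of full extra meshes). The following is a minimal CPU-side sketch of that idea, assuming the mesh is stored as per-pixel source sampling coordinates and the differences as coordinate offsets; the function name, array layouts, and nearest-neighbor sampling are illustrative assumptions, not details from the patent.

```python
import numpy as np

def prewarp_channels(image, mesh, diffs):
    """Pre-warp an H x W x C image using one distortion mesh.

    mesh  : H x W x 2 array of (x, y) source sampling coordinates for the
            first color channel (hypothetical layout).
    diffs : dict mapping channel index -> H x W x 2 offset field, the
            per-channel "distortion difference" added to the base mesh.
    """
    h, w, _ = image.shape
    out = np.zeros_like(image)
    for c in range(image.shape[2]):
        coords = mesh + diffs.get(c, 0.0)          # base mesh (+ channel delta)
        xs = np.clip(np.round(coords[..., 0]).astype(int), 0, w - 1)
        ys = np.clip(np.round(coords[..., 1]).astype(int), 0, h - 1)
        out[..., c] = image[ys, xs, c]             # nearest-neighbor resample
    return out
```

Storing one full mesh plus two small difference fields, rather than three full meshes, reduces memory and bandwidth while still correcting per-channel chromatic aberration.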
Description
CROSS-REFERENCE TO RELATED APPLICATION AND PRIORITY CLAIM
This application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application No. 63/437,809 filed on Jan. 9, 2023, which is hereby incorporated by reference in its entirety.
TECHNICAL FIELD
This disclosure relates generally to augmented reality (AR) systems and processes. More specifically, this disclosure relates to distortion combination and correction for final views in video see-through (VST) AR.
BACKGROUND
Video see-through (VST) augmented reality (AR) systems can capture an environment around a user using see-through cameras installed on a VST headset. Through at least one display lens, a user can see a mixed reality view that combines image frames captured by the see-through cameras with virtual objects generated by a graphics pipeline. In a VST AR pipeline, the see-through camera lens or lenses can create distortions in the captured image frames, and the display lens or lenses can create distortions in the rendered view as the user sees it through the display lens(es).
SUMMARY
This disclosure provides distortion combination and correction for final views in video see-through (VST) augmented reality (AR).
In a first embodiment, a method includes obtaining an image captured by a see-through camera of a VST AR device. The method also includes identifying a first distortion created by at least one see-through camera lens and identifying a second distortion created by at least one display lens of the VST AR device. The method further includes determining a combined distortion based on the first distortion and the second distortion. The method also includes pre-warping the image to offset the combined distortion such that the image is not distorted when the pre-warped image is viewed by a user through the at least one display lens of the VST AR device. In addition, the method includes presenting the pre-warped image to the user on at least one display of the VST AR device.
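The combined pre-warp described above can be illustrated by composing the two lens mappings into one sampling mesh. The sketch below assumes a simple one-parameter radial model for both the camera lens and the reversed display-lens distortion; the model, function names, and parameterization are hypothetical simplifications of the calibrated distortion models the disclosure contemplates.

```python
import numpy as np

def radial_distort(coords, k1, center):
    # One-parameter radial model: r' = r * (1 + k1 * r^2).
    # An illustrative stand-in for the camera and display lens models.
    d = coords - center
    r2 = (d ** 2).sum(axis=-1, keepdims=True)
    return center + d * (1.0 + k1 * r2)

def combined_prewarp_mesh(h, w, k_camera, k_display_reversed):
    """Fold camera-lens correction and reversed display-lens distortion
    into a single sampling mesh, so one image resampling pass offsets
    both distortions."""
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.stack([xs, ys], axis=-1).astype(float)
    center = np.array([(w - 1) / 2.0, (h - 1) / 2.0])
    # Map display pixels through the reversed display-lens distortion,
    # then through the camera model to find where to sample the frame.
    return radial_distort(radial_distort(grid, k_display_reversed, center),
                          k_camera, center)
```

Because the two mappings are composed before any resampling, the image is warped only once, avoiding the quality loss of correcting each distortion in a separate pass.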
In a second embodiment, a VST AR device includes at least one display, at least one see-through camera, and at least one processing device. The at least one display includes at least one display lens. The at least one see-through camera includes at least one see-through camera lens and is configured to capture an image. The at least one processing device is configured to obtain the image captured by the at least one see-through camera, identify a first distortion created by the at least one see-through camera lens, and identify a second distortion created by the at least one display lens. The at least one processing device is also configured to determine a combined distortion based on the first distortion and the second distortion. The at least one processing device is further configured to pre-warp the image to offset the combined distortion such that the image is not distorted when the pre-warped image is viewed by a user through the at least one display lens. In addition, the at least one processing device is configured to present the pre-warped image to the user on the at least one display.
In a third embodiment, a non-transitory machine readable medium contains instructions that when executed cause at least one processor to obtain an image captured by a see-through camera of a VST AR device. The non-transitory machine readable medium also contains instructions that when executed cause the at least one processor to identify a first distortion created by at least one see-through camera lens and identify a second distortion created by at least one display lens of the VST AR device. The non-transitory machine readable medium further contains instructions that when executed cause the at least one processor to determine a combined distortion based on the first distortion and the second distortion.
The non-transitory machine readable medium also contains instructions that when executed cause the at least one processor to pre-warp the image to offset the combined distortion such that the image is not distorted when the pre-warped image is viewed by a user through the at least one display lens of the VST AR device. In addition, the non-transitory machine readable medium contains instructions that when executed cause the at least one processor to present the pre-warped image to the user on at least one display of the VST AR device.
Other technical features may be readily apparent to one skilled in the art from the following figures, descriptions, and claims.
Before undertaking the DETAILED DESCRIPTION below, it may be advantageous to set forth definitions of certain words and phrases used throughout this patent document. The terms “transmit,” “receive,” and “communicate,” as well as derivatives thereof, encompass both direct and indirect communication. The terms “include” and “comprise,” as well as derivatives thereof, mean inclusion without limitation. The term “or” is inclusive, meaning and/or. The phrase “associated with,” as well as derivatives thereof, mea