US-12619296-B2 - Sensor emulation
Abstract
Various implementations disclosed herein include devices, systems, and methods that are capable of executing an application on a head-mounted device (HMD) having a first image sensor in a first image sensor configuration. In some implementations, the application is configured for execution on a device including a second image sensor in a second image sensor configuration different than the first image sensor configuration. In some implementations, a request is received from the executing application for image data from the second image sensor. Responsive to the request at the HMD, a pose of a virtual image sensor is determined, image data is generated based on the pose of the virtual image sensor, and the generated image data is provided to the executing application.
Inventors
- Jeffrey S. Norris
- Bruno M. Sommer
- Olivier Gutknecht
Assignees
- APPLE INC.
Dates
- Publication Date: 2026-05-05
- Application Date: 2023-03-22
Claims (20)
- 1. A method comprising: at a processor: executing an application on a device having a first image sensor in a first image sensor configuration, the application configured for execution on a device comprising a second image sensor in a second image sensor configuration different than the first image sensor configuration; receiving a request from the executing application for image data from the second image sensor in the second image sensor configuration; and responsive to the request, determining a pose of a virtual image sensor; generating image data based on the pose of the virtual image sensor and image data from the first image sensor; and providing the generated image data to the executing application.
- 2. The method of claim 1, wherein generating image data comprises: modifying the image data from the first image sensor based on the pose of the virtual image sensor to provide the generated image data.
- 3. The method of claim 2, wherein modifying the obtained image data comprises performing point of view correction based on a pose of the first image sensor and the pose of the virtual image sensor.
- 4. The method of claim 1, wherein the generated image data simulates optical properties of the second image sensor.
- 5. The method of claim 1, wherein generating image data comprises generating an avatar based on the image data from the first image sensor.
- 6. The method of claim 5, further comprising sizing the avatar based on a size of a physical environment in which the device having the first image sensor is operating.
- 7. The method of claim 1, wherein the first image sensor comprises an inward facing image sensor or a downward facing image sensor.
- 8. The method of claim 1, further comprising: presenting user selectable input controls in a 3D representation of the second image sensor in the second image sensor configuration.
- 9. The method of claim 1, further comprising: presenting an operable 3D representation of an electronic device comprising the second image sensor in the second image sensor configuration.
- 10. The method of claim 9, further comprising: generating a preview image of the generated image data near the 3D representation of the electronic device.
- 11. The method of claim 9, further comprising: generating a preview image of the generated image data on the 3D representation of the electronic device.
- 12. The method of claim 1, wherein the second image sensor comprises a front-facing image sensor or a rear-facing image sensor.
- 13. The method of claim 1, wherein the second image sensor comprises a front-facing image sensor, and wherein generating image data comprises generating an avatar based on the image data from the first image sensor.
- 14. The method of claim 1, wherein the second image sensor comprises a rear-facing image sensor, and wherein generating image data comprises modifying the image data from the first image sensor based on the pose of the virtual image sensor to provide the generated image data.
- 15. The method of claim 1, wherein the application is executing in an extended reality (XR) environment.
- 16. The method of claim 15, wherein the generated image data comprises a virtual object from the XR environment as viewed by the virtual image sensor.
- 17. The method of claim 1, wherein executing the application comprises presenting a visual representation of the application, and wherein the pose of the virtual image sensor is based on a pose of the visual representation of the application.
- 18. The method of claim 1, wherein the request from the executing application for image data comprises a request for depth data.
- 19. The method of claim 1, further comprising receiving a request from the executing application for audio data.
- 20. The method of claim 1, wherein the executing application is an image processing application providing multiple segments of a communication session with a second device.
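The core flow of claim 1 (execute the application, receive a request for the absent second sensor, determine a virtual sensor pose, generate image data, provide it back) can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: all names here (`virtual_sensor_pose`, `handle_image_request`, the request dictionary keys) and the 0.5 m default distance are hypothetical.

```python
import numpy as np

def virtual_sensor_pose(user_position, user_forward, distance=0.5):
    """Place a virtual front-facing camera `distance` meters in front of
    the user, oriented back toward the user (a 'selfie' viewpoint)."""
    user_position = np.asarray(user_position, dtype=float)
    forward = np.asarray(user_forward, dtype=float)
    forward = forward / np.linalg.norm(forward)
    position = user_position + distance * forward
    facing = -forward  # the virtual sensor looks back at the user
    return position, facing

def handle_image_request(request, first_sensor_frame):
    """Respond to a request for a second image sensor that the device
    lacks by determining a virtual image sensor pose (claim 1)."""
    if request["sensor"] != "front_facing":
        # Sensors that physically exist are served directly.
        return {"frame": first_sensor_frame}
    position, facing = virtual_sensor_pose(
        request["user_position"], request["user_forward"])
    # A full implementation would render an avatar or reproject the first
    # sensor's imagery from this pose (claims 2-5); the pose is simply
    # returned alongside the frame to keep the sketch self-contained.
    return {"pose": (position, facing), "frame": first_sensor_frame}
```

In this sketch the pose determination is decoupled from image generation, mirroring the claim structure: claim 1 covers the pose-then-generate flow, while claims 2-5 cover particular generation strategies (reprojection, avatar rendering).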
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of International Application No. PCT/US2021/049207, filed on Sep. 7, 2021, which claims the benefit of U.S. Provisional Application No. 63/083,188, filed on Sep. 25, 2020, both entitled “SENSOR EMULATION,” each of which is incorporated herein by this reference in its entirety.

TECHNICAL FIELD

The present disclosure generally relates to providing image content on electronic devices and, in particular, to systems, methods, and devices that provide images based on image sensor emulation.

BACKGROUND

Electronic devices have different configurations of image sensors. For example, mobile devices generally intended for use from a few inches to a few feet in front of a user's face may have a front-facing camera that captures images of the user while the user is using the device. Other devices that are not intended for use in the same way, such as head-mounted devices (HMDs), may not have front-facing cameras that capture similar images of users. Applications designed for execution on a first type of device may be executed (e.g., via an emulator) on another type of device. However, the application's requests for, and other interactions with, image sensors may not provide desirable results because of differences in the image sensor configurations of the different device types. For example, a request for images from a front-facing camera expected to be facing the user from a few inches to a few feet away may not provide desirable results when the application is being emulated on a device, e.g., an HMD, that does not have a front-facing camera used in that way.

SUMMARY

Various implementations disclosed herein include devices, systems, and methods that execute (e.g., via an emulator) an application on a device having a first image sensor configuration, where the application is intended for a device having a second image sensor configuration.
For example, the application may be intended for execution on a mobile device having a front-facing camera that is generally used a few inches to a few feet in front of the user and facing the user, but may instead be used on a device with a different image sensor configuration, such as a device that lacks a front-facing camera used in that way. In some implementations, such execution involves responding to the application's requests for front-facing and rear-facing camera feeds by modifying the executing device's own image sensor data according to a virtual image sensor pose. For example, an application may request a front-facing camera feed of a mobile device, and a device having a different image sensor configuration may respond by emulating a front-facing image sensor feed. In one example, this involves providing a selfie view of a representation of the user from a viewpoint a few inches to a few feet in front of the user and facing the user. In another example, an application may request a mobile device's rear-facing camera feed, and a response may be provided by presenting a view of the environment from the position of a virtual device that is a few inches to a few feet in front of the user and facing away from the user.

Various implementations disclosed herein include devices, systems, and methods that implement a virtual second image sensor in a second, different image sensor configuration on a device having a first image sensor in a first image sensor configuration. In some implementations, an HMD that includes outward-, inward-, or downward-facing image sensors implements a virtual front-facing image sensor or a virtual rear-facing image sensor to generate front-facing or rear-facing image sensor data for an application being executed on the HMD.
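Modifying the executing device's own image sensor data according to a virtual image sensor pose amounts to point-of-view correction (claim 3): the same scene point projects to different pixels in the physical sensor and the virtual sensor, and the correction reprojects imagery between the two viewpoints. The pinhole-camera sketch below illustrates the pixel offset such a correction must account for; the intrinsics, poses, and distances are assumed example values, not values from this disclosure.

```python
import numpy as np

# Assumed pinhole intrinsics: 500 px focal length, principal point (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(point_world, R, t):
    """Project a 3D world point into a camera with world-to-camera
    rotation R and translation t; returns (u, v) pixel coordinates."""
    p_cam = R @ point_world + t
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

point = np.array([0.0, 0.0, 2.0])  # a scene point 2 m ahead of the user

# Physical (e.g., outward-facing HMD) sensor at the world origin.
R_phys, t_phys = np.eye(3), np.zeros(3)

# Virtual sensor displaced 0.5 m to the side with the same orientation;
# for a camera center C, the world-to-camera translation is t = -R @ C.
C_virt = np.array([0.5, 0.0, 0.0])
R_virt = np.eye(3)
t_virt = -R_virt @ C_virt

uv_phys = project(point, R_phys, t_phys)  # -> (320.0, 240.0)
uv_virt = project(point, R_virt, t_virt)  # -> (195.0, 240.0)
```

A full point-of-view correction warps every pixel of the physical sensor's image this way, using per-pixel depth (compare claim 18's depth data) so that the result appears to have been captured from the virtual pose.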
In some implementations, the HMD responds to requests from an executing application for front-facing and rear-facing camera feeds by modifying the HMD's image sensor data according to a virtual image sensor pose. For example, the HMD may emulate a front-facing device camera to provide a “selfie” view of a representation of the HMD user, e.g., a photo-realistic avatar. In another example, the HMD may emulate a rear-facing device camera to provide an image sensor feed of the physical environment or extended reality (XR) environment from the position of a virtual image sensor that is a few feet in front of the HMD user. In some implementations, the image sensor feed may be a still image, a series of images, video, etc.

In some implementations, the HMD executes an application that requests the front-facing camera feed that is generally available on a smartphone, a tablet, or the like. In this situation, the HMD may automatically create a virtual image sensor in an XR environment (e.g., MR, VR, etc.) that provides a selfie picture or streams a selfie view of the HMD user's avatar for that application. In some implementation