US-12626395-B2 - Location aware visual markers
Abstract
Various implementations disclosed herein include devices, systems, and methods that determine the relative positioning (e.g., offset) between a mobile electronic device and a visual marker. In some implementations, the determined relative positioning and a known position of the visual marker are used to determine a position (e.g., geo coordinates) of the mobile electronic device that is more accurate than positions provided by existing techniques. In some implementations, the determined relative positioning is used with a position of the mobile electronic device to crowd source the stored position of the visual marker. In some implementations, the determined relative positioning and a position of the visual marker are used to determine a position of an object detected in an image by the mobile electronic device. In some implementations, at an electronic device having a processor, locally-determined locations of a visual marker are received from mobile electronic devices that scan the visual marker.
Inventors
- Anselm Grundhoefer
- Jeffrey S. Norris
- Mohamed Selim Ben Himane
- Paul Ewers
- Scott G. Wade
- Shih-Sang (Carnaven) Chiu
- Thomas G. Salter
- Tom Sengelaub
- Viral N. Parekh
Assignees
- APPLE INC.
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2020-09-25
Claims (19)
- 1 . A method comprising: at a mobile electronic device having a processor: detecting a visual marker depicted in an image of a physical environment; determining, based on performance of visual inertial odometry using the image, a relative positioning between the mobile electronic device and the visual marker, wherein determining the relative positioning comprises determining a three-dimensional vector comprising both a distance and a direction from which the visual marker is disposed relative to the mobile electronic device; determining, based on data stored by a remote electronic device, a crowd-sourced location of the visual marker; determining a location of the mobile electronic device based on the relative positioning between the mobile electronic device and the visual marker and the crowd-sourced location of the visual marker; transmitting from the mobile electronic device to the remote electronic device, the location of the mobile electronic device; and receiving, from the remote electronic device: an indication of other visual markers within a threshold distance from the location of the mobile electronic device; and information related to virtual content associated with the other visual markers.
- 2 . The method of claim 1 , further comprising determining the distance using a stored size of the visual marker and a size of the visual marker depicted in the image.
- 3 . The method of claim 1 , wherein determining the relative positioning comprises: decoding a first size of the visual marker encoded in the visual marker; and determining the distance based on the first size of the visual marker encoded in the visual marker and a second size of the visual marker depicted in the image.
- 4 . The method of claim 1 , wherein determining the relative positioning comprises: determining a crowd-sourced size of the visual marker based on multiple sizes of the visual marker determined from multiple other images; and determining the distance based on the crowd-sourced size of the visual marker and a size of the visual marker depicted in the image.
- 5 . The method of claim 1 , further comprising tracking movement from the location of the mobile electronic device based on a sensor of the mobile electronic device.
- 6 . The method of claim 1 , further comprising determining the direction of the three-dimensional vector using a stored two-dimensional (2D) shape or a stored parametric description of the shape of the visual marker and a shape of the visual marker in the image.
- 7 . The method of claim 1 , wherein determining the crowd-sourced location of the visual marker comprises: requesting the data stored by the remote electronic device based on detecting the visual marker; and receiving three-dimensional (3D) coordinates identifying the crowd-sourced location of the visual marker from the remote electronic device.
- 8 . The method of claim 1 , further comprising providing virtual content in a computer-generated reality (CGR) environment based on the location of the mobile electronic device or the relative positioning between the mobile electronic device and the visual marker.
- 9 . The method of claim 1 , further comprising: determining that the visual marker was or will be positioned at the crowd-sourced location in the physical environment; and storing the crowd-sourced location of the visual marker on a separate device or encoded in the visual marker.
- 10 . The method of claim 1 , wherein the mobile electronic device stores a map identifying locations of a plurality of visual markers.
- 11 . The method of claim 1 , wherein metadata associated with the visual marker identifies the visual marker as a moving visual marker, wherein the crowd-sourced location of the visual marker is discarded after a first threshold amount of time, the method further comprising: receiving an updated crowd-sourced location of the visual marker, wherein the updated crowd-sourced location is a more recently crowd-sourced geolocation than the discarded crowd-sourced location; and determining an updated location of the mobile electronic device based on the relative positioning between the mobile electronic device and the visual marker and the updated crowd-sourced location.
- 12 . The method of claim 11 , wherein the metadata is stored on a second electronic device, wherein all requests initiated by decoding the visual marker are sent to the second electronic device.
- 13 . A mobile electronic device comprising: a non-transitory computer-readable storage medium; and one or more processors coupled to the non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium comprises program instructions that, when executed on the one or more processors, cause the mobile electronic device to perform operations comprising: detecting a visual marker depicted in an image of a physical environment; determining, based on performance of visual inertial odometry using the image, a relative positioning between the mobile electronic device and the visual marker, wherein determining the relative positioning comprises determining a three-dimensional vector comprising both a distance and a direction from which the visual marker is disposed relative to the mobile electronic device; determining, based on data stored by a remote electronic device, a crowd-sourced location of the visual marker; determining a location of the mobile electronic device based on the relative positioning between the mobile electronic device and the visual marker and the crowd-sourced location of the visual marker; transmitting from the mobile electronic device to the remote electronic device, the location of the mobile electronic device; and receiving, from the remote electronic device: an indication of other visual markers within a threshold distance from the location of the mobile electronic device; and information related to virtual content associated with the other visual markers.
- 14 . The mobile electronic device of claim 13 , further comprising determining the distance using a stored size of the visual marker and a size of the visual marker depicted in the image.
- 15 . The mobile electronic device of claim 13 , wherein determining the relative positioning comprises: decoding a first size of the visual marker encoded in the visual marker; and determining the distance based on the first size of the visual marker encoded in the visual marker and a second size of the visual marker depicted in the image.
- 16 . The mobile electronic device of claim 13 , wherein determining the relative positioning comprises: determining a first size of the visual marker based on respective sizes of the visual marker determined from multiple other images; and determining the distance based on the first size of the visual marker and a second size of the visual marker depicted in the image.
- 17 . A non-transitory computer-readable storage medium, storing program instructions computer-executable on a computer to perform operations comprising: at an electronic device having a processor: detecting a visual marker depicted in an image of a physical environment; determining, based on performance of visual inertial odometry using the image, a relative positioning between the electronic device and the visual marker, wherein determining the relative positioning comprises determining a three-dimensional vector comprising both a distance and a direction from which the visual marker is disposed relative to the electronic device; determining, based on data stored by a remote electronic device, a crowd-sourced location of the visual marker; determining a location of the mobile electronic device based on the relative positioning between the mobile electronic device and the visual marker and the crowd-sourced location of the visual marker; transmitting from the mobile electronic device to the remote electronic device, the location of the mobile electronic device; and receiving, from the remote electronic device: an indication of other visual markers within a threshold distance from the location of the mobile electronic device; and information related to virtual content associated with the other visual markers.
- 18 . The method of claim 1 , further comprising: determining an updated location of the visual marker based on an updated relative positioning between the mobile electronic device and the visual marker; and updating the crowd-sourced location of the visual marker based on the determined updated location of the visual marker.
- 19 . The method of claim 5 , further comprising determining a location of an object depicted in a second image based on the location of the mobile electronic device, the tracked movement, and a relative positioning between the mobile electronic device and the object depicted in the second image.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of U.S. Provisional Application Ser. No. 62/907,163, filed Sep. 27, 2019, which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
The present disclosure generally relates to electronic devices, and in particular, to systems, methods, and devices in which electronic devices capture images of visual markers to identify, share, or manage location information.
BACKGROUND
Visual markers exist today in the form of barcodes, Quick Response (QR) codes, and other proprietary code-based systems. QR codes encode binary data such as strings or other payloads used to initiate payments, link to websites, link to location-based or contextual-based experiences, or launch other web-based experiences.
SUMMARY
Various implementations disclosed herein include devices, systems, and methods that determine the relative positioning (e.g., distance and direction, or offset) between a mobile electronic device and a visual marker (e.g., a visual marker that includes a location service, or a “location aware” visual marker). In a first example, the determined relative positioning and a known or stored position of the visual marker are used to determine a position (e.g., geo coordinates, pose, etc.) of the mobile electronic device that is more accurate than a locally-determined position of the mobile electronic device (e.g., a standalone position determined using its own sensors or received Global Positioning System (GPS) data). In some implementations, at a mobile electronic device having a processor, a visual marker is detected in an image of a physical environment. In some implementations, a visual marker with a known location (e.g., having location data stored at an accessible network location) is detected in a 2D image or 3D image captured by the mobile electronic device.
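The first example above combines a marker's known geo coordinates with the measured device-to-marker offset to place the device. A minimal sketch of that combination follows; the local east/north tangent-plane offsets, the equirectangular small-offset conversion, and all names are illustrative assumptions, not part of the disclosure.

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius, metres

def device_location(marker_lat: float, marker_lon: float,
                    offset_east_m: float, offset_north_m: float):
    """Approximate the device's latitude/longitude from a marker's known
    geo coordinates and the device's offset from the marker (metres east
    and north in a local tangent plane). Valid for small offsets only."""
    dlat = math.degrees(offset_north_m / EARTH_RADIUS_M)
    dlon = math.degrees(
        offset_east_m / (EARTH_RADIUS_M * math.cos(math.radians(marker_lat))))
    return marker_lat + dlat, marker_lon + dlon

# A device standing 10 m east of a marker at (37.33, -122.01):
lat, lon = device_location(37.33, -122.01, 10.0, 0.0)
```

In practice the offset would come from the computer-vision pipeline described below; this sketch only shows the final geodetic adjustment.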
Then, a relative positioning between the mobile electronic device and the visual marker is determined based on the image. In some implementations, determining the relative positioning includes determining the relative orientation of the visual marker with respect to the mobile electronic device. In some implementations, the relative positioning is determined using computer vision techniques (e.g., Visual Inertial Odometry (VIO), Simultaneous Localization and Mapping (SLAM), or Perspective-N-Point (PNP) techniques). In some implementations, the relative positioning includes a distance or a direction from the mobile electronic device to the visual marker. Then, a real-world location of the mobile electronic device is determined based on the relative positioning between the mobile electronic device and the visual marker and a known location of the visual marker. The known location of the visual marker may be provided by a remote location service (e.g., in the cloud) accessed based on uniquely-identifying information captured in the image of the visual marker.
Various implementations disclosed herein include devices, systems, and methods that determine the relative positioning (e.g., distance and direction, or offset) between a mobile electronic device and a visual marker. In a second example, the determined relative positioning is used with a position of the mobile electronic device (e.g., GPS) to revise the stored location associated with the deployed visual marker (e.g., crowd-sourcing the stored location of the visual marker). In some implementations, a deployed visual marker is permanently mounted on or otherwise attached or affixed to a physical structure (e.g., a statue or a baseball stadium). In some implementations, when a visual marker is scanned by an electronic device, a new location of the visual marker (e.g., geo position) is determined. The new location may be determined by using data from the new scan with data from prior scans of the visual marker.
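One distance cue recited in the claims compares a stored physical size of the marker against its apparent size in the image. Under a pinhole-camera model this reduces to one ratio; the focal length and sizes below are illustrative assumptions, not values from the disclosure.

```python
def estimate_marker_distance(marker_size_m: float,
                             marker_size_px: float,
                             focal_length_px: float) -> float:
    """Estimate camera-to-marker distance with a pinhole-camera model:
    distance = focal_length (px) * physical size (m) / apparent size (px)."""
    if marker_size_px <= 0 or focal_length_px <= 0:
        raise ValueError("sizes and focal length must be positive")
    return focal_length_px * marker_size_m / marker_size_px

# A 0.20 m wide marker spanning 100 px under a 1000 px focal length
# is about 2.0 m from the camera.
distance = estimate_marker_distance(0.20, 100.0, 1000.0)
```

The stored size could come from a database lookup, be encoded in the marker itself, or be crowd-sourced from prior scans, matching the three variants in claims 2-4.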
For example, the new data may be combined or averaged with the prior data to increase the accuracy of the stored location of the visual marker. In some implementations, a crowd-sourced location of the visual marker is maintained by a remote location service accessible via the visual marker. In some implementations, at a mobile electronic device having a processor, a visual marker is detected in an image of a physical environment. In some implementations, a visual marker is detected in a 2D image or 3D image from the mobile electronic device. Then, a relative positioning between the mobile electronic device and the visual marker is determined based on the image. In some implementations, determining the relative positioning includes determining the relative orientation of the visual marker with respect to the mobile electronic device. In some implementations, the relative positioning is determined using computer vision techniques (e.g., VIO, SLAM, or PNP techniques). In some implementations, a location of the visual marker is determined based on a location of the mobile electronic device (e.g., locally determined via GPS, etc.). Then, the locally-determined location of the visual marker is transmitted to the remote location service, which combines it with locations reported by other mobile electronic devices to update the crowd-sourced location of the visual marker.
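The combining-or-averaging step above can be sketched as an incremental running mean over reported marker locations; the class and field names are illustrative assumptions, and a real service would also weight or reject outlier reports.

```python
from dataclasses import dataclass

@dataclass
class CrowdSourcedLocation:
    """Incrementally averaged marker location from crowd-sourced reports."""
    lat: float = 0.0
    lon: float = 0.0
    n: int = 0  # number of reports folded in so far

    def update(self, lat: float, lon: float) -> None:
        """Fold one newly reported location into the running average."""
        self.n += 1
        self.lat += (lat - self.lat) / self.n
        self.lon += (lon - self.lon) / self.n

# Three scans of the same deployed marker, each reporting a slightly
# different geo position; the stored location converges on their mean.
loc = CrowdSourcedLocation()
for report in [(37.001, -122.000), (37.003, -122.002), (36.999, -121.998)]:
    loc.update(*report)
```

This incremental form avoids storing every historical report, which suits a remote location service that receives locations from many devices over time.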