US-12626419-B2 - Geographic augmented reality design for low accuracy scenarios
Abstract
To present augmented reality features without localizing a user, a client device receives a request for presenting augmented reality features in a camera view of a computing device of the user. Prior to localizing the user, the client device obtains sensor data indicative of a pose of the user, and determines the pose of the user based on the sensor data with a confidence level that exceeds a confidence threshold which indicates a low accuracy state. Then the client device presents one or more augmented reality features in the camera view in accordance with the determined pose of the user while in the low accuracy state.
Inventors
- Mohamed Suhail Mohamed Yousuf Sait
- Matt Seegmiller
- Andre Le
- Juan David Hincapie
- Mirko Ranieri
- Marek Gorecki
- Wenli Zhao
- Tony Shih
- Bo Zhang
- Alan Sheridan
Assignees
- Google LLC
Dates
- Publication Date: 2026-05-12
- Application Date: 2024-02-06
Claims (20)
- 1 . A method for presenting augmented reality features, the method comprising: receiving, by one or more processors, a request for presenting augmented reality features in a camera view of a computing device of a user; obtaining, by the one or more processors, sensor data indicative of a location of the user; determining, by the one or more processors, the location of the user based on the sensor data with a confidence level that exceeds a confidence threshold which indicates a low accuracy state; presenting, by the one or more processors within a predetermined amount of time from receiving the request, one or more augmented reality features for a point of interest in the camera view in accordance with the determined location of the user while in the low accuracy state, wherein the one or more augmented reality features have a lower degree of precision than one or more additional or updated augmented reality features presented for the same point of interest while in a high accuracy state; and presenting, by the one or more processors while in the low accuracy state, localization instructions for improving the location accuracy to transition from the low accuracy state to the high accuracy state.
- 2 . The method of claim 1 , further comprising: after presenting the one or more augmented reality features while in the low accuracy state, localizing the user by determining an updated location of the user with a confidence level within the confidence threshold which indicates the high accuracy state; and presenting, by the one or more processors, the one or more additional or updated augmented reality features in the camera view in accordance with the updated location of the user while in the high accuracy state.
- 3 . The method of claim 2 , wherein determining the updated location of the user with the confidence level within the confidence threshold includes: providing, by the one or more processors, an image of the camera view and an indication of the determined location to a server device, wherein the server device compares the image of the camera view to street-level imagery at the determined location to localize the user; and receiving, by the one or more processors, the updated location of the user from the server device based on the comparison.
- 4 . The method of claim 1 , wherein determining the location of the user includes: generating, by the one or more processors, a geographic anchor; generating, by the one or more processors, a visual inertial odometry (VIO) anchor at an initial location of the geographic anchor; and determining, by the one or more processors, the location of the user based on the VIO anchor.
- 5 . The method of claim 1 , further comprising: receiving, by the one or more processors, a request for navigation directions to a destination location, wherein the one or more augmented reality features include an indicator of a direction to travel in to reach the destination location.
- 6 . The method of claim 1 , wherein the location of the user is determined based on the sensor data using a particle filter.
- 7 . The method of claim 1 , wherein obtaining the sensor data includes obtaining the sensor data from at least one of: an accelerometer, a positioning sensor, a transceiver, a gyroscope, a compass, or a magnetometer.
- 8 . The method of claim 1 , wherein the one or more augmented reality features includes an indication of an orientation of a landmark that is not visible within the camera view.
- 9 . A computing device for presenting augmented reality features, the computing device comprising: a camera; one or more processors; and a computer-readable memory coupled to the camera and the one or more processors and storing instructions thereon that, when executed by the one or more processors, cause the computing device to: receive a request for presenting augmented reality features in a camera view of the camera; obtain sensor data indicative of a location of a user; determine the location of the user based on the sensor data with a confidence level that exceeds a confidence threshold which indicates a low accuracy state; present, within a predetermined amount of time from receiving the request, one or more augmented reality features for a point of interest in the camera view in accordance with the determined location of the user while in the low accuracy state, wherein the one or more augmented reality features have a lower degree of precision than one or more additional or updated augmented reality features presented for the same point of interest while in a high accuracy state; and present, while in the low accuracy state, localization instructions for improving the location accuracy to transition from the low accuracy state to the high accuracy state.
- 10 . The computing device of claim 9 , wherein the instructions further cause the computing device to: after presenting the one or more augmented reality features while in the low accuracy state, localize the user by determining an updated location of the user with a confidence level within the confidence threshold which indicates the high accuracy state; and present the one or more additional or updated augmented reality features in the camera view in accordance with the updated location of the user while in the high accuracy state.
- 11 . The computing device of claim 10 , wherein to determine the updated location of the user with the confidence level within the confidence threshold, the instructions cause the computing device to: provide an image of the camera view and an indication of the determined location to a server device, wherein the server device compares the image of the camera view to street-level imagery at the determined location to localize the user; and receive the updated location of the user from the server device based on the comparison.
- 12 . The computing device of claim 9 , wherein to determine the location of the user, the instructions further cause the computing device to: generate a geographic anchor; generate a visual inertial odometry (VIO) anchor at an initial location of the geographic anchor; and determine the location of the user based on the VIO anchor.
- 13 . The computing device of claim 9 , wherein the instructions further cause the computing device to: receive a request for navigation directions to a destination location, wherein the one or more augmented reality features include an indicator of a direction to travel in to reach the destination location.
- 14 . The computing device of claim 9 , wherein the location of the user is determined based on the sensor data using a particle filter.
- 15 . The computing device of claim 9 , wherein the sensor data is obtained from at least one of: an accelerometer, a positioning sensor, a transceiver, a gyroscope, a compass, or a magnetometer.
- 16 . The computing device of claim 9 , wherein the one or more augmented reality features includes an indication of an orientation of a landmark that is not visible within the camera view.
- 17 . A non-transitory computer-readable medium storing instructions thereon that, when executed by one or more processors, cause the one or more processors to: receive a request for presenting augmented reality features in a camera view of a computing device of a user; obtain sensor data indicative of a location of the user; determine the location of the user based on the sensor data with a confidence level that exceeds a confidence threshold which indicates a low accuracy state; present, within a predetermined amount of time from receiving the request, one or more augmented reality features for a point of interest in the camera view in accordance with the determined location of the user while in the low accuracy state, wherein the one or more augmented reality features have a lower degree of precision than one or more additional or alternative augmented reality features presented for the same point of interest while in a high accuracy state; and present, while in the low accuracy state, localization instructions for improving the location accuracy to transition from the low accuracy state to the high accuracy state.
- 18 . The non-transitory computer-readable medium of claim 17 , wherein the instructions further cause the one or more processors to: after presenting the one or more augmented reality features while in the low accuracy state, localize the user by determining an updated location of the user with a confidence level within the confidence threshold which indicates the high accuracy state; and present the one or more additional or updated augmented reality features in the camera view in accordance with the updated location of the user while in the high accuracy state.
- 19 . The non-transitory computer-readable medium of claim 18 , wherein to determine the updated location of the user with the confidence level within the confidence threshold, the instructions cause the one or more processors to: provide an image of the camera view and an indication of the determined location to a server device, wherein the server device compares the image of the camera view to street-level imagery at the determined location to localize the user; and receive the updated location of the user from the server device based on the comparison.
- 20 . The non-transitory computer-readable medium of claim 17 , wherein the instructions further cause the one or more processors to: generate a geographic anchor; generate a visual inertial odometry (VIO) anchor at an initial location of the geographic anchor; and determine the location of the user based on the VIO anchor.
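The geographic-anchor/VIO-anchor location flow recited in claims 4, 12, and 20 can be sketched roughly as follows. This is a minimal illustration only, not the patented implementation: the anchor types, the east/north axis convention of the VIO tracking frame, and the flat-earth metres-per-degree conversion are all assumptions introduced here.

```python
import math
from dataclasses import dataclass

@dataclass
class GeoAnchor:
    lat: float  # degrees
    lon: float  # degrees

@dataclass
class VioPose:
    x: float  # metres east in the VIO tracking frame (assumed axis convention)
    y: float  # metres north in the VIO tracking frame

def user_location(geo: GeoAnchor, vio_at_anchor: VioPose, vio_now: VioPose) -> GeoAnchor:
    """Dead-reckon the user's geographic location from the VIO displacement
    accumulated since a VIO anchor was dropped at the geographic anchor's
    initial location."""
    dx = vio_now.x - vio_at_anchor.x
    dy = vio_now.y - vio_at_anchor.y
    dlat = dy / 111_320.0  # approx. metres per degree of latitude
    dlon = dx / (111_320.0 * math.cos(math.radians(geo.lat)))
    return GeoAnchor(geo.lat + dlat, geo.lon + dlon)
```

The key idea is that the VIO frame drifts slowly and tracks relative motion well, so pinning it once to a geographic anchor lets the device keep estimating a coarse location without re-localizing.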
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 17/482,303, entitled “Geographic Augmented Reality Design for Low Accuracy Scenarios,” filed on Sep. 22, 2021, the entire contents of which are hereby expressly incorporated herein by reference.

FIELD OF THE DISCLOSURE

The present disclosure relates to augmented reality systems and, more particularly, to providing augmented reality features when a user cannot be located with pinpoint accuracy.

BACKGROUND

The background description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.

Today, augmented reality applications require a user to be located with pinpoint accuracy. To locate the user with such high precision, these applications may require the user to perform certain steps, such as pointing their client device at neighboring buildings or geographic features, which the user may have difficulty performing and thus may be unable to use the augmented reality applications effectively. Additionally, these applications may take a long time to precisely locate the user and often time out without being able to present any augmented reality features.

SUMMARY

To reduce the amount of time it takes to present augmented reality (AR) content and to increase the number of instances in which augmented reality content may be presented to users, a geographic augmented reality system presents augmented reality content in two states: a low accuracy state (also referred to herein as an “instant mode”) and a high accuracy state (also referred to herein as a “high accuracy mode”).
The geographic augmented reality system determines the pose of the user based on sensor data from the user's client device. The sensor data may include sensor data from a positioning sensor such as a global positioning system (GPS) sensor, an accelerometer, a gyroscope, a compass, a magnetometer, a transceiver that receives wireless signals from nearby devices, a camera that captures an image of the current camera view, or any other suitable sensors within the client device. The geographic augmented reality system determines the pose of the user with a confidence level indicative of the accuracy of the pose determination. Then the geographic augmented reality system determines the accuracy state based on the confidence level. More specifically, when the confidence level for the pose is within a confidence threshold (e.g., 25 degrees), the geographic augmented reality system may determine that the client device is in the high accuracy state. On the other hand, when the confidence level exceeds the confidence threshold (e.g., 25 degrees), the geographic augmented reality system may determine that the client device is in the low accuracy state. In some implementations, the geographic augmented reality system determines that the client device is in the low accuracy state when the confidence level exceeds a first confidence threshold (e.g., 25 degrees) but is within a second confidence threshold (e.g., 55 degrees). When the confidence level exceeds the second confidence threshold, the geographic augmented reality system may not present any augmented reality content. In any event, the geographic augmented reality system presents different augmented reality content depending on the accuracy state. 
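The two-threshold state logic described above can be sketched as follows. The state names and the function signature are illustrative assumptions; only the example threshold values of 25 and 55 degrees come from the text, which expresses pose confidence as an angular error bound.

```python
from enum import Enum

class AccuracyState(Enum):
    HIGH = "high accuracy"   # confidence within the first threshold
    LOW = "low accuracy"     # "instant mode"
    NONE = "no AR content"   # confidence exceeds the second threshold

# Example thresholds from the text (angular error bounds, in degrees).
HIGH_ACCURACY_THRESHOLD_DEG = 25.0
LOW_ACCURACY_THRESHOLD_DEG = 55.0

def accuracy_state(pose_error_deg: float) -> AccuracyState:
    """Map a pose-confidence value (error bound, in degrees) to an accuracy state."""
    if pose_error_deg <= HIGH_ACCURACY_THRESHOLD_DEG:
        return AccuracyState.HIGH
    if pose_error_deg <= LOW_ACCURACY_THRESHOLD_DEG:
        return AccuracyState.LOW
    return AccuracyState.NONE
```

A renderer would then select between the precise AR features, the coarse “instant mode” features, or no AR content based on the returned state.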
For example, when the client device is in the low accuracy state, and the user requests navigation directions to a destination location, the geographic augmented reality system may present an augmented reality feature overlaying the user's camera view that indicates the direction of the destination location relative to the user. The geographic augmented reality system may also present an indicator of the low accuracy state, and the indicator may include a user control, which when selected, may provide instructions for entering the high accuracy state. In another example, when the client device is in the low accuracy state, the geographic augmented reality system may present an augmented reality feature overlaying the user's camera view that indicates the direction of a landmark which may not be visible in the camera view. The landmark may be a landmark which is familiar to the user, such as the user's home or a well-known building such as the Empire State Building. In this manner, the geographic augmented reality system may help orient the user while in the low accuracy state. When the client device is in the high accuracy state and the user requests navigation directions to a destination location, the geographic augmented reality system may present a different augmented reality feature than the augmented reality feature presented when the client device is in the low accuracy state.
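A coarse directional overlay of the kind described above only needs the bearing from the user's (possibly imprecise) location to the target and the camera's current heading. The sketch below uses the standard great-circle initial-bearing formula; the idea of subtracting the camera heading to get a screen-relative arrow angle is an assumption for illustration, not taken from the patent.

```python
import math

def initial_bearing_deg(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle initial bearing from point 1 to point 2, in degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def arrow_heading(user_lat: float, user_lon: float, camera_heading_deg: float,
                  dest_lat: float, dest_lon: float) -> float:
    """Angle for a 'head this way' overlay arrow relative to the camera view:
    bearing to the destination minus the direction the camera is facing."""
    bearing = initial_bearing_deg(user_lat, user_lon, dest_lat, dest_lon)
    return (bearing - camera_heading_deg) % 360.0
```

Because only a rough bearing is needed, this kind of overlay degrades gracefully when the location estimate is tens of metres off, which is what makes it suitable for the low accuracy state.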