US-12618678-B2 - Smartphone-based inertial odometry
Abstract
A computer-implemented system useful for determining turns in a user's trajectory, including one or more processors; one or more memories; and one or more programs stored in the one or more memories, wherein the one or more programs executed by the one or more processors (1) receive input data, comprising orientation data and acceleration from one or more sensors carried by a user taking steps along a trajectory; (2) analyze the input data using a first trained machine learning algorithm so as to detect a plurality of n straight sections when the user is walking along an approximately straight path; (3) comprise an orientation tracker tracking an orientation of the user in each of the n straight sections, wherein the orientation comprises an estimated orientation taking into account drift of the input data outputted from the one or more sensors; and (4) comprise a turn detector detecting each of the turns, wherein each of the turns is a change in the estimated orientation of the user in the nth straight section as compared to the estimated orientation in the (n−1)th straight section.
Inventors
- Peng Ren
- Fatemeh Elyasi
- Roberto Manduchi
Assignees
- THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Dates
- Publication Date
- 2026-05-05
- Application Date
- 2022-06-13
Claims (20)
- 1 . A navigation system determining turns in a user's trajectory, comprising: a smartphone comprising a display and one or more sensors and at least comprising or coupled to one or more processors; one or more memories; and one or more programs stored in the one or more memories, wherein the one or more programs executed by the one or more processors carry out the following acts: receiving input data, comprising orientation data and acceleration from the one or more sensors carried by a user taking steps along a trajectory; detecting a plurality of n straight sections in the trajectory, where n is an integer, each of the straight sections corresponding to the user walking along a substantially straight or linear path; generating and tracking an orientation of the user in each of the n straight sections, wherein the orientation comprises an estimated orientation taking into account drift of the input data outputted from the one or more sensors; detecting one or more turns in the trajectory, wherein each of the turns is a change in the estimated orientation of the user in the nth one of the straight sections as compared to the estimated orientation in the (n−1)th one of the straight sections; and providing navigation instructions from the smartphone to the user using the trajectory generated using the one or more turns; and wherein detecting the straight sections further comprises the one or more programs: storing the input data in a database; transforming the input data into trajectory detection data processable by a first machine learning module; classifying the trajectory detection data as representing motion in one of the straight sections or in a non-straight section using the first machine learning module; and labelling one or more values of the trajectory detection data as being associated with one of the straight sections if the one or more values are classified by the first machine learning module as being associated with the one of the straight sections.
- 2 . The system of claim 1 , wherein the first machine learning module is trained using training data comprising at least WeAllWalk™ data, or the acceleration and the orientation of pedestrians comprising blind or visually impaired persons walking using a walking aid.
- 3 . The system of claim 1 , wherein the first machine learning module comprises a GRU neural network.
- 4 . The system of claim 1 , wherein the first machine learning module comprises a recurrent neural network trained to identify, from the acceleration and the orientation data comprising an azimuthal angle, each of the straight sections comprising one or more time intervals during which the user walks regularly or substantially on a straight path.
- 5 . The system of claim 1 , wherein the first machine learning module is trained to disregard changes in the orientation resulting from the user comprising a visually impaired user stopping and rotating their body to re-orient or swerving to avoid a perceived obstacle.
- 6 . The system of claim 1 , further comprising detecting each of the straight sections after the one or more programs sample the orientation data for no more than 1 second.
- 7 . The system of claim 1 , wherein a turn by an angle of 90° is tracked as two consecutive turns by 45°.
- 8 . The system of claim 1 , wherein the trajectory is determined without reference to a map of an environment in which the user is moving.
- 9 . The system of claim 1 , wherein the one or more programs: receive a map of an environment in which the user is moving, the map identifying one or more impenetrable walls, and determine the trajectory by comparing the trajectory to the map and eliminating one or more paths in the trajectory that traverse the one or more impenetrable walls.
- 10 . The system of claim 9 , wherein the one or more programs: receive or obtain velocity vectors of the user from the input data or another source; generate posterior locations of the user from the velocity vectors using a particle filtering module; and generate a mean shift estimating locations of the user corresponding to highest modes of the posterior locations to obtain estimated locations; and generate the trajectory by linking pairs of the estimated locations that share the largest number of the highest modes.
- 11 . A navigation system determining turns in a user's trajectory, comprising: a smartphone comprising a display and one or more sensors and at least comprising or coupled to one or more processors; one or more memories; and one or more programs stored in the one or more memories, wherein the one or more programs executed by the one or more processors carry out the following acts: receiving input data, comprising orientation data and acceleration from the one or more sensors carried by a user taking steps along a trajectory; detecting a plurality of n straight sections in the trajectory, where n is an integer, each of the straight sections corresponding to the user walking along a substantially straight or linear path; generating and tracking an orientation of the user in each of the n straight sections, wherein the orientation comprises an estimated orientation taking into account drift of the input data outputted from the one or more sensors; detecting one or more turns in the trajectory, wherein each of the turns is a change in the estimated orientation of the user in the nth one of the straight sections as compared to the estimated orientation in the (n−1)th one of the straight sections; and providing navigation instructions from the smartphone to the user using the trajectory generated using the one or more turns, wherein the one or more programs: transform coordinates of the input data into a heading agnostic reference frame, to obtain heading agnostic data; detect the steps as detected steps by associating impulses in the heading agnostic data with heel strikes using a machine learning module; count a number of the detected steps by associating the steps with a stride length, so as to output step data; determine the trajectory using the detected steps and the turns; and the trajectory comprises: one or more step displacement vectors defined at each of the detected steps as having a first length equal to the stride length and a direction comprising an azimuthal angle obtained from the input data; and one or more turn displacement vectors defined as having a second length equal to the step length and with the direction determined from the turns detected.
- 12 . The system of claim 11 , further comprising the one or more programs: transforming the heading agnostic data into trajectory detection data processable by the machine learning module; and at least classifying or recognizing one or more values of the trajectory detection data as being associated with the steps using the machine learning module, or counting the steps using the machine learning module.
- 13 . The system of claim 12 , wherein the machine learning module is trained using reference trajectory data outputted from another machine learning module identifying the user's trajectory.
- 14 . The system of claim 12 , wherein the machine learning module comprises an LSTM neural network comprising no more than 2 layers and a hidden unit size of no more than 6.
- 15 . A method for determining turns in a user's trajectory, comprising, using a smartphone comprising a display and one or more sensors and at least comprising or coupled to a computer comprising one or more processors; one or more memories; and one or more programs stored in the one or more memories: capturing input data, comprising orientation data and acceleration data from the one or more sensors in the smartphone carried by a user taking steps along a trajectory; detecting a plurality of n straight sections in the trajectory, each of the straight sections corresponding to the user walking along a substantially straight path or a linear path; generating and tracking an orientation of the user in each of the n straight sections, wherein the orientation comprises an estimated orientation taking into account drift of the input data outputted from the one or more sensors; and detecting one or more turns in the trajectory, wherein each of the turns is a change in the estimated orientation of the user in the nth one of the straight sections as compared to the estimated orientation in the (n−1)th one of the straight sections; storing the input data in a database on the computer; transforming the input data into trajectory detection data processable by a first machine learning module; classifying the trajectory detection data as representing motion in one of the straight sections or in a non-straight section using the first machine learning module; labelling one or more values of the trajectory detection data as being associated with one of the straight sections if the one or more values are classified by the first machine learning module as being associated with the one of the straight sections; and providing navigation instructions from the smartphone to the user using the trajectory generated from the one or more turns.
- 16 . The method of claim 15 , further comprising training the first machine learning module for detecting at least one of the straight sections or the turns in the trajectory, comprising: collecting a set of first pedestrian data, the first pedestrian data comprising the orientation data and the acceleration data for one or more walking pedestrians; applying one or more transformations to the first pedestrian data including smoothing to create a modified set of pedestrian data; creating a first training set from the modified set, comprising labelled straight walking sections and labeled non-straight walking sections; and training the first machine learning module using the first training set to identify the straight sections in the trajectory detection data using the orientation data and the acceleration data, the straight sections each corresponding to the user walking along the linear path.
- 17 . The method of claim 16 , wherein the first training set comprises the modified set comprising data for the walking pedestrians comprising blind or visually impaired persons walking using a walking aid comprising at least one of a cane or a guide dog.
- 18 . The method of claim 17 , wherein the first pedestrian data comprises a WeAllWalk™ data set.
- 19 . The method of claim 16 , wherein: the applying of the transformations comprises removing data, from the first pedestrian data, associated with a single 45° turn or 90° turns associated with a 45° turn; and the training comprises training the first machine learning module, or another machine learning module, to detect, identify, or classify the turns in the trajectory comprising 90° turns.
- 20 . The method of claim 16 , further comprising: creating a second training set from the modified set, the second training set comprising orientation turns between adjacent ones of the straight sections; training the first machine learning module, or another machine learning module, to detect, classify, or identify the turns in the trajectory using the second training set.
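The pipeline of claims 1 and 15 detects straight walking sections, estimates a per-section orientation, and reports each turn as the orientation difference between consecutive straight sections. The minimal Python sketch below illustrates that idea only; it substitutes a sliding-window azimuth-variance test for the patent's trained GRU classifier, and all function names, window sizes, and thresholds are illustrative assumptions rather than values from the patent:

```python
import numpy as np

def detect_straight_sections(azimuth, win=50, var_thresh=0.01):
    """Label each sample as 'straight' when the azimuth varies little over a
    sliding window -- a hypothetical stand-in for the trained GRU classifier."""
    n = len(azimuth)
    straight = np.zeros(n, dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - win // 2), min(n, i + win // 2 + 1)
        straight[i] = np.var(azimuth[lo:hi]) < var_thresh
    return straight

def detect_turns(azimuth, straight):
    """Report each turn as the change in mean azimuth between consecutive
    straight sections; averaging within a section suppresses sensor jitter."""
    sections, start = [], None
    for i, s in enumerate(straight):
        if s and start is None:
            start = i
        elif not s and start is not None:
            sections.append((start, i))
            start = None
    if start is not None:
        sections.append((start, len(straight)))
    means = [float(np.mean(azimuth[a:b])) for a, b in sections]
    return [means[k] - means[k - 1] for k in range(1, len(means))]
```

For example, an azimuth trace that holds 0 rad, ramps up, and then holds π/2 rad yields two straight sections and a single detected turn of about π/2.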
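Claim 11 builds the trajectory from step displacement vectors, each with length equal to the stride length and direction given by the azimuthal angle at the detected step, after transforming sensor data into a heading-agnostic reference frame. The sketch below shows both operations under stated assumptions: the function names, the per-sample rotation-matrix representation of orientation, and the 0.7 m stride are illustrative, not from the patent:

```python
import numpy as np

def to_heading_agnostic(acc_body, R_world_from_body):
    """Rotate body-frame accelerometer samples into a common world frame
    using one orientation rotation matrix per sample, so the result does
    not depend on how the phone happens to be held or pocketed."""
    return np.einsum('nij,nj->ni', R_world_from_body, acc_body)

def trajectory_from_steps(step_azimuths, stride_length=0.7):
    """One displacement vector per detected step: length equal to the
    stride length, direction given by the azimuthal angle at that step.
    The trajectory is the cumulative sum of these vectors."""
    steps = np.stack([stride_length * np.cos(step_azimuths),
                      stride_length * np.sin(step_azimuths)], axis=1)
    return np.cumsum(steps, axis=0)
```

For instance, four steps heading east followed by four steps heading north with a 0.7 m stride end the trajectory near (2.8, 2.8) m.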
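Claims 9 and 10 constrain the trajectory with a map of impenetrable walls via particle filtering and then estimate the user's location from the highest modes of the posterior using mean shift. The following is a deliberately simplified sketch of those two ingredients; the noise level, bandwidth, resampling scheme, and `crosses_wall` interface are assumptions for illustration, not the patent's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, velocity, dt, crosses_wall, noise=0.05):
    """Propagate particles by the measured velocity vector plus noise, then
    discard (by resampling) particles whose motion traversed an impenetrable
    wall; crosses_wall(p, q) encodes the map constraint of claim 9."""
    moved = particles + velocity * dt + rng.normal(0, noise, particles.shape)
    ok = np.array([not crosses_wall(p, q) for p, q in zip(particles, moved)])
    if not ok.any():                      # degenerate case: keep the old cloud
        return particles
    survivors = moved[ok]
    idx = rng.integers(0, len(survivors), len(particles))
    return survivors[idx]

def mean_shift_mode(points, bandwidth=0.5, iters=20):
    """Crude mean shift: start at the centroid and repeatedly move to the
    Gaussian-weighted mean of nearby points, converging on the highest-
    density mode of the posterior location cloud (claim 10)."""
    x = points.mean(axis=0)
    for _ in range(iters):
        w = np.exp(-np.sum((points - x) ** 2, axis=1) / (2 * bandwidth ** 2))
        x = (points * w[:, None]).sum(axis=0) / w.sum()
    return x
```

A wall can be modeled as, e.g., `crosses_wall = lambda p, q: (p[0] - 5) * (q[0] - 5) < 0` for a vertical wall at x = 5; after a filter step, no surviving particle lies beyond it.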
Description
CROSS REFERENCE TO RELATED APPLICATIONS
This application claims the benefit under 35 U.S.C. Section 119(e) of commonly-assigned U.S. provisional patent application Ser. No. 63/209,853, filed Jun. 11, 2021, and U.S. provisional patent application Ser. No. 63/339,778, filed May 9, 2022, both applications by Peng Ren, Fatemeh Elyasi, and Roberto Manduchi, and both applications entitled “SMARTPHONE-BASED INERTIAL ODOMETRY,” client reference 2021-594, both of which applications are incorporated by reference herein.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH AND DEVELOPMENT
This invention was made with Government support under Grant No. R01 EY029260-01, awarded by the National Institutes of Health. The Government has certain rights in the invention.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to inertial odometry.
2. Description of Related Art
Smartphone-based odometry systems for pedestrian tracking in indoor, GPS-denied environments have received considerable attention in recent years. These systems may help a person reach a gate in an airport [1] or a shop in a mall [2], navigate a museum [3], or find one's car in a parking lot [4]. Among the various approaches considered in the literature, technology based on inertial sensors has a number of practical advantages. For example, inertial-based odometry does not require the installation of infrastructure such as Bluetooth low energy (BLE) beacons [5]. In addition, no prior calibration (“fingerprinting”) is necessary, unlike for systems based on Wi-Fi [6] or BLE beacons. Compared with systems that use a camera to determine the user's location (visual-based odometry [7]), and that thus require good un-occluded visibility of the scene, inertial systems are able to track the user even when they keep the phone in their pocket.
The downside of this modality is that the user's location is tracked by integrating inertial data, which leads to possibly large errors due to accumulated drift. A number of strategies to deal with drift have been proposed, including zero-velocity updates [8], spatial constraints (e.g., Bayes filtering using a map of the environment [9]), and machine learning [10]. Multiple well-calibrated inertial datasets (containing data from accelerometers and gyros) collected from regular smartphones carried by human walkers have been made available in recent years [10-12].
Pedestrian Dead Reckoning (PDR)
Perhaps the simplest method to track the location of a walker is to count steps while measuring the user's orientation at all times [22-28]. Step counting is traditionally performed by finding peaks or other features in acceleration or rotation rate signals (e.g., [29-31]). More recently, recurrent neural networks (RNNs) have been proposed as a robust alternative to “hand-crafted” algorithms [32-34]. The orientation of the phone can be obtained by proper integration of the data from the accelerometer and gyro [35, 36], but this typically results in accumulated drift. Turns can be detected, for example, by measuring short-time variations of the azimuth angle (or of the rotation rate from the gyro [37]), which are unaffected by slowly varying drift. Flores et al. [38] proposed a system based on dynamic programming to estimate the walker's discrete orientation along with drift. Although effective in the tests of Flores et al. [38] with sighted walkers, this algorithm gave poor results with blind walkers [16]. Another problem is how to decouple the orientation of the phone from the direction of walking. A number of algorithms for estimating the direction of walking, independently of the orientation of the phone, have been developed [39-42].
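The short-time azimuth-variation approach to turn detection mentioned above can be sketched in a few lines of Python. The sampling rate, window length, and rate threshold below are illustrative assumptions; the point is only that a slowly varying drift changes the azimuth far less over a one-second window than a genuine turn does:

```python
import numpy as np

def detect_turn_events(azimuth, fs=50.0, win_s=1.0, rate_thresh=0.5):
    """Flag sample indices where the short-time change in azimuth exceeds a
    rate threshold (rad/s). Slow drift (e.g., 0.01 rad/s) stays far below
    the threshold over the window, so it never triggers a false turn."""
    w = int(win_s * fs)
    delta = azimuth[w:] - azimuth[:-w]          # azimuth change over the window
    rate = delta / win_s                        # approximate turn rate in rad/s
    return np.flatnonzero(np.abs(rate) > rate_thresh) + w // 2
```

With a drift-only trace no events fire, while a 90° turn executed over one second is flagged.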
Another topic of interest is the robust detection of steps and of stride lengths, which are used as a proxy for the walker's velocity [29-31], [43-48]. When a map of the environment is available, it may provide a strong constraint on the space of possible trajectories. Bayes filtering (in particular, particle filtering [9]) is normally used in these situations [49-52].
Learning-Based Odometry
In recent years, a number of data-driven techniques for odometry, which rely less on models and more on machine learning, have emerged. For example, RIDI [11] regresses user velocity from the time series of linear accelerations and angular velocities. IONet [53] uses a deep neural network to compute user velocity and heading. RoNIN [10] processes inertial data in a heading-agnostic reference frame using a variety of deep network architectures. Importantly, RoNIN is able to decouple the phone's orientation from the user's orientation when walking. This means that tracking is unaffected by any possible repositioning of the phone (e.g., if the user moves the phone to a different pocket). A number of other learning-based algorithms for computing the walker's velocity, or for detecting steps and measuring stride lengths, have been recently proposed [33, 34, 54-61].
Inertial Navigation for