RIOT: Recursive Inertial Odometry Transformer for Localisation from
Low-Cost IMU Measurements
- URL: http://arxiv.org/abs/2303.01641v1
- Date: Fri, 3 Mar 2023 00:20:01 GMT
- Title: RIOT: Recursive Inertial Odometry Transformer for Localisation from
Low-Cost IMU Measurements
- Authors: James Brotchie, Wenchao Li, Andrew D. Greentree, Allison Kealy
- Abstract summary: We present two end-to-end frameworks for pose-invariant deep inertial odometry that utilise self-attention to capture both spatial features and long-range dependencies in inertial data.
We evaluate our approaches against a custom 2-layer Gated Recurrent Unit trained in the same manner on the same data, and test each approach on a number of different users, devices and activities.
- Score: 5.770538064283154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inertial localisation is an important technique as it enables ego-motion
estimation in conditions where external observers are unavailable. However,
low-cost inertial sensors are inherently corrupted by bias and noise, which
lead to unbounded errors, making direct integration for position intractable.
Traditional mathematical approaches rely on prior system knowledge and
geometric theories, and are constrained by predefined dynamics. Recent advances
in deep learning, which benefit from ever-increasing volumes of data and
computational power, allow for data-driven solutions that offer a more
comprehensive understanding. Existing deep inertial odometry solutions rely on
estimating latent states, such as velocity, or are dependent on fixed sensor
positions and periodic motion patterns. In this work we propose taking the
recursive methodology of traditional state estimation and applying it in the
deep learning domain. Our approach, which incorporates true position priors in
the training process, is trained on inertial measurements and ground-truth
displacement data, allowing recursion and the learning of both motion
characteristics and systemic error bias and drift. We present two end-to-end
frameworks for pose-invariant deep inertial odometry that utilise
self-attention to capture both spatial features and long-range dependencies in
inertial data. We evaluate our approaches against a custom 2-layer Gated
Recurrent Unit trained in the same manner on the same data, and test each
approach on a number of different users, devices and activities. Each network
achieved a sequence-length-weighted relative trajectory error mean of
$\leq0.4594$ m, highlighting the effectiveness of the learning process used in
the development of the models.
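To make the recursive, prior-conditioned idea concrete, the following is a minimal sketch of a self-attention displacement regressor that is conditioned on a position prior and unrolled recursively. It is an illustration only: the layer sizes, the conditioning scheme, and names such as RecursiveInertialOdometry and rollout are assumptions, not the published RIOT architecture.

```python
# Minimal sketch only; layer sizes, conditioning scheme and names are
# assumptions, not the published RIOT architecture.
import torch
import torch.nn as nn


class RecursiveInertialOdometry(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.embed = nn.Linear(6, d_model)     # 3-axis accelerometer + 3-axis gyroscope
        self.prior = nn.Linear(3, d_model)     # previous-position prior
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 3)      # per-window displacement

    def forward(self, imu_window, prev_pos):
        # imu_window: (batch, T, 6); prev_pos: (batch, 3)
        tokens = self.embed(imu_window) + self.prior(prev_pos).unsqueeze(1)
        feats = self.encoder(tokens)           # self-attention over the window
        return self.head(feats.mean(dim=1))    # displacement for this window


def rollout(model, windows, start_pos, true_priors=None):
    """Chain displacement estimates recursively. If ground-truth priors are
    supplied (training), each step is conditioned on the true position
    (teacher forcing); otherwise the model's own estimate is reused."""
    prior, estimates = start_pos, []
    for i, window in enumerate(windows):
        pos = prior + model(window, prior)     # prior + predicted displacement
        estimates.append(pos)
        prior = true_priors[i] if true_priors is not None else pos
    return torch.stack(estimates, dim=1)       # (batch, n_windows, 3)
```

Feeding the ground-truth prior back during training is one way of realising the "true position priors in the training process" described in the abstract; at inference time the model's own running estimate takes its place.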
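The reported figure is a sequence-length-weighted mean of per-sequence relative trajectory errors. One plausible reading of that statistic is sketched below; the exact weighting is defined in the paper, so treat this as an assumption.

```python
# Sketch of a sequence-length-weighted relative trajectory error; the exact
# definition used in the paper may differ, so treat this as an assumption.
import numpy as np


def relative_trajectory_error(est, gt):
    """Mean Euclidean position error for one sequence; est and gt are (T, 3)."""
    return float(np.linalg.norm(np.asarray(est) - np.asarray(gt), axis=1).mean())


def length_weighted_mean_rte(per_sequence_rte, sequence_lengths):
    """Average per-sequence RTEs, weighting each by its sequence length so
    longer recordings contribute proportionally more."""
    errors = np.asarray(per_sequence_rte, dtype=float)
    weights = np.asarray(sequence_lengths, dtype=float)
    return float(np.sum(errors * weights) / np.sum(weights))
```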
Related papers
- You are out of context! [0.0]
New data can act as forces stretching, compressing, or twisting the geometric relationships learned by a model.
We propose a novel drift detection methodology for machine learning (ML) models based on the concept of "deformation" in the vector space representation of data.
arXiv Detail & Related papers (2024-11-04T10:17:43Z)
- Localized Gaussians as Self-Attention Weights for Point Clouds Correspondence [92.07601770031236]
We investigate semantically meaningful patterns in the attention heads of an encoder-only Transformer architecture.
We find that fixing the attention weights not only accelerates the training process but also enhances the stability of the optimization.
arXiv Detail & Related papers (2024-09-20T07:41:47Z)
- DynST: Dynamic Sparse Training for Resource-Constrained Spatio-Temporal Forecasting [24.00162014044092]
Earth science systems rely heavily on the extensive deployment of sensors.
Traditional approaches to sensor deployment utilize specific algorithms to design and deploy sensors.
In this paper, we introduce for the first time the concept of dynamic sparse training, which adaptively and dynamically filters important sensor data.
arXiv Detail & Related papers (2024-03-05T12:31:24Z)
- Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
arXiv Detail & Related papers (2023-12-04T01:32:42Z)
- Large-Scale OD Matrix Estimation with A Deep Learning Method [70.78575952309023]
The proposed method integrates deep learning and numerical optimization algorithms to infer matrix structure and guide numerical optimization.
We conducted tests to demonstrate the good generalization performance of our method on a large-scale synthetic dataset.
arXiv Detail & Related papers (2023-10-09T14:30:06Z)
- Conditional Kernel Imitation Learning for Continuous State Environments [9.750698192309978]
We introduce a novel conditional kernel density estimation-based imitation learning framework.
We show consistently superior empirical performance over many state-of-the-art IL algorithms.
arXiv Detail & Related papers (2023-08-24T05:26:42Z)
- STGlow: A Flow-based Generative Framework with Dual Graphormer for Pedestrian Trajectory Prediction [22.553356096143734]
We propose STGlow, a novel generative flow-based framework with a dual Graphormer for pedestrian trajectory prediction.
Our method can more precisely model the underlying data distribution by optimizing the exact log-likelihood of motion behaviors.
Experimental results on several benchmarks demonstrate that our method achieves much better performance compared to previous state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-21T07:29:24Z)
- Transformer Inertial Poser: Attention-based Real-time Human Motion Reconstruction from Sparse IMUs [79.72586714047199]
We propose an attention-based deep learning method to reconstruct full-body motion from six IMU sensors in real-time.
Our method achieves new state-of-the-art results both quantitatively and qualitatively, while being simple to implement and smaller in size.
arXiv Detail & Related papers (2022-03-29T16:24:52Z)
- Locally Aware Piecewise Transformation Fields for 3D Human Mesh Registration [67.69257782645789]
We propose piecewise transformation fields that learn 3D translation vectors to map any query point in posed space to its corresponding position in rest-pose space.
We show that fitting parametric models with poses by our network results in much better registration quality, especially for extreme poses.
arXiv Detail & Related papers (2021-04-16T15:16:09Z)
- IDOL: Inertial Deep Orientation-Estimation and Localization [18.118289074111946]
Many smartphone applications use inertial measurement units (IMUs) to sense movement, but the use of these sensors for pedestrian localization can be challenging.
Recent data-driven inertial odometry approaches have demonstrated the increasing feasibility of inertial navigation.
We present a two-stage, data-driven pipeline using a commodity smartphone that first estimates device orientations and then estimates device position.
arXiv Detail & Related papers (2021-02-08T06:41:47Z)
- Understanding Self-Training for Gradual Domain Adaptation [107.37869221297687]
We consider gradual domain adaptation, where the goal is to adapt an initial classifier trained on a source domain given only unlabeled data that shifts gradually in distribution towards a target domain.
We prove the first non-vacuous upper bound on the error of self-training with gradual shifts, under settings where directly adapting to the target domain can result in unbounded error.
The theoretical analysis leads to algorithmic insights, highlighting that regularization and label sharpening are essential even when we have infinite data, and suggesting that self-training works particularly well for shifts with small Wasserstein-infinity distance.
arXiv Detail & Related papers (2020-02-26T08:59:40Z)