Towards End-to-End GPS Localization with Neural Pseudorange Correction
- URL: http://arxiv.org/abs/2401.10685v2
- Date: Wed, 21 Aug 2024 06:10:02 GMT
- Title: Towards End-to-End GPS Localization with Neural Pseudorange Correction
- Authors: Xu Weng, KV Ling, Haochen Liu, Kun Cao
- Abstract summary: We propose an end-to-end GPS localization framework, E2E-PrNet, to train a neural network for pseudorange correction (PrNet).
The feasibility of fusing the data-driven neural network and the model-based DNLS module is verified with GPS data collected by Android phones.
- Score: 6.524401246715823
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The pseudorange error is one of the root causes of localization inaccuracy in GPS. Previous data-driven methods regress and eliminate pseudorange errors using handcrafted intermediate labels. Unlike them, we propose an end-to-end GPS localization framework, E2E-PrNet, to train a neural network for pseudorange correction (PrNet) directly using the final task loss calculated with the ground truth of GPS receiver states. The gradients of the loss with respect to learnable parameters are backpropagated through a Differentiable Nonlinear Least Squares (DNLS) optimizer to PrNet. The feasibility of fusing the data-driven neural network and the model-based DNLS module is verified with GPS data collected by Android phones, showing that E2E-PrNet outperforms the baseline weighted least squares method and the state-of-the-art end-to-end data-driven approach. Finally, we discuss the explainability of E2E-PrNet.
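The baseline the paper compares against, weighted least squares over pseudoranges, is itself a Gauss-Newton solve for the receiver position and clock bias; E2E-PrNet backpropagates its task loss through a differentiable version of such a solver. The following is a minimal, self-contained sketch with synthetic satellites and error-free pseudoranges (no atmospheric or multipath terms), not the paper's implementation:

```python
import numpy as np

def wls_position(sat_pos, pr, w=None, iters=20):
    """Estimate receiver position and clock bias from pseudoranges via
    Gauss-Newton weighted least squares (the model-based solver that a
    DNLS layer would differentiate through)."""
    x = np.zeros(4)  # [x, y, z, clock bias], initial guess at Earth's center
    W = np.eye(len(pr)) if w is None else np.diag(w)
    for _ in range(iters):
        d = np.linalg.norm(sat_pos - x[:3], axis=1)   # geometric ranges
        r = pr - (d + x[3])                           # pseudorange residuals
        # Jacobian: unit line-of-sight vectors plus a clock-bias column
        H = np.hstack([-(sat_pos - x[:3]) / d[:, None], np.ones((len(pr), 1))])
        dx = np.linalg.solve(H.T @ W @ H, H.T @ W @ r)
        x += dx
    return x

# Synthetic scenario: 8 satellites at roughly GPS orbit radius
rng = np.random.default_rng(1)
dirs = rng.normal(size=(8, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
sat = dirs * 2.66e7                                  # satellite positions (m)
truth = np.array([1.2e6, -2.3e6, 5.9e6, 30.0])       # position (m) + clock bias
pr = np.linalg.norm(sat - truth[:3], axis=1) + truth[3]
est = wls_position(sat, pr)
```

In the end-to-end setting, `pr` would first be corrected by PrNet's output, and the loss on `est` would flow back through the solver's iterations to the network's parameters.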
Related papers
- Automatic Optimisation of Normalised Neural Networks [1.0334138809056097]
We propose automatic optimisation methods considering the geometry of matrix manifold for the normalised parameters of neural networks.
Our approach first initialises the network and normalises the data with respect to the $\ell_2$-$\ell_2$ gain of the initialised network.
arXiv Detail & Related papers (2023-12-17T10:13:42Z) - Object Location Prediction in Real-time using LSTM Neural Network and Polynomial Regression [0.0]
This paper details the design and implementation of a system for predicting and interpolating object location coordinates.
Our solution is based on processing inertial measurements and global positioning system data through a Long Short-Term Memory (LSTM) neural network and regression.
arXiv Detail & Related papers (2023-11-23T12:03:02Z) - PrNet: A Neural Network for Correcting Pseudoranges to Improve Positioning with Android Raw GNSS Measurements [7.909678289680922]
We present a neural network for mitigating biased errors in pseudoranges to improve localization performance with data collected from mobile phones.
A satellite-wise Multilayer Perceptron (MLP) is designed to regress the pseudorange bias from six satellite-, receiver-, and context-related features.
The corrected pseudoranges are then used by a model-based localization engine to compute locations.
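A satellite-wise correction network of this kind can be sketched as one shared MLP applied to each visible satellite's feature vector. The layer sizes, the example features, and the plain NumPy forward pass below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(42)

def init_mlp(sizes):
    # Small random weights, zero biases for each layer
    return [(rng.normal(0.0, 0.1, (a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)             # ReLU on hidden layers
    return x

params = init_mlp([6, 32, 32, 1])              # 6 features -> 1 bias estimate
features = rng.normal(size=(12, 6))            # 12 visible satellites
raw_pr = rng.normal(2.2e7, 1e5, size=12)       # synthetic pseudoranges (m)
bias_hat = mlp(params, features).squeeze()     # one correction per satellite
corrected_pr = raw_pr - bias_hat               # then fed to a WLS engine
```

Because the same weights are shared across satellites, the model handles a varying number of visible satellites per epoch.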
arXiv Detail & Related papers (2023-09-16T10:43:59Z) - Dual Accuracy-Quality-Driven Neural Network for Prediction Interval Generation [0.0]
We present a method to learn prediction intervals for regression-based neural networks automatically.
Our main contribution is the design of a novel loss function for the PI-generation network.
Experiments using a synthetic dataset, eight benchmark datasets, and a real-world crop yield prediction dataset showed that our method was able to maintain a nominal probability coverage.
arXiv Detail & Related papers (2022-12-13T05:03:16Z) - Enhanced Laser-Scan Matching with Online Error Estimation for Highway and Tunnel Driving [0.0]
Lidar data can be used to generate point clouds for navigation of autonomous vehicles or mobile robotics platforms.
We propose the Iterative Closest Ellipsoidal Transform (ICET), a scan matching algorithm which provides two novel improvements.
arXiv Detail & Related papers (2022-07-29T13:42:32Z) - Adaptive Self-supervision Algorithms for Physics-informed Neural Networks [59.822151945132525]
Physics-informed neural networks (PINNs) incorporate physical knowledge from the problem domain as a soft constraint on the loss function.
We study the impact of the location of the collocation points on the trainability of these models.
We propose a novel adaptive collocation scheme which progressively allocates more collocation points to areas where the model is making higher errors.
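Residual-driven allocation of collocation points can be sketched generically: sample candidate points, score them by the model's PDE residual, and keep the worst offenders. This is a simplified stand-in for the paper's adaptive scheme, assuming a 1-D domain on [0, 1]:

```python
import numpy as np

def adaptive_collocation(residual_fn, n_new, n_cand=1000, rng=None):
    """Pick new collocation points where |PDE residual| is largest:
    sample candidates uniformly, score them, keep the top n_new."""
    rng = rng or np.random.default_rng(0)
    cand = rng.uniform(0.0, 1.0, size=(n_cand, 1))
    scores = np.abs(residual_fn(cand)).ravel()
    return cand[np.argsort(scores)[-n_new:]]
```

With a residual that grows toward one end of the domain, the selected points concentrate there, which is the intended behavior.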
arXiv Detail & Related papers (2022-07-08T18:17:06Z) - Automatic Mapping of the Best-Suited DNN Pruning Schemes for Real-Time Mobile Acceleration [71.80326738527734]
We propose a general, fine-grained structured pruning scheme and corresponding compiler optimizations.
We show that our pruning scheme mapping methods, together with the general fine-grained structured pruning scheme, outperform the state-of-the-art DNN optimization framework.
arXiv Detail & Related papers (2021-11-22T23:53:14Z) - Adaptive Anomaly Detection for Internet of Things in Hierarchical Edge Computing: A Contextual-Bandit Approach [81.5261621619557]
We propose an adaptive anomaly detection scheme with hierarchical edge computing (HEC).
We first construct multiple anomaly detection DNN models with increasing complexity, and associate each of them to a corresponding HEC layer.
Then, we design an adaptive model selection scheme that is formulated as a contextual-bandit problem and solved by using a reinforcement learning policy network.
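Framing model selection as a contextual bandit means: observe a context, pick one model (arm), observe a reward, and update the policy. A simple linear epsilon-greedy learner below stands in for the paper's reinforcement-learning policy network; the arm/context/reward structure is the part that matches the formulation:

```python
import numpy as np

class LinearBandit:
    """Epsilon-greedy contextual bandit with one linear reward model
    per arm (each arm would correspond to one DNN in the HEC stack)."""
    def __init__(self, n_arms, dim, eps=0.1, lr=0.05, seed=0):
        self.W = np.zeros((n_arms, dim))
        self.eps, self.lr = eps, lr
        self.rng = np.random.default_rng(seed)

    def select(self, ctx):
        if self.rng.random() < self.eps:
            return int(self.rng.integers(len(self.W)))   # explore
        return int(np.argmax(self.W @ ctx))              # exploit

    def update(self, arm, ctx, reward):
        # SGD step toward the observed reward for the chosen arm
        err = reward - self.W[arm] @ ctx
        self.W[arm] += self.lr * err * ctx

# Toy loop: arm 0 always pays off, so the policy should learn to pick it
b = LinearBandit(n_arms=2, dim=2)
ctx = np.array([1.0, 0.0])
for _ in range(500):
    arm = b.select(ctx)
    b.update(arm, ctx, 1.0 if arm == 0 else 0.0)
```

In the paper's setting the reward would trade off detection accuracy against the cost of invoking a more complex model at a higher HEC layer.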
arXiv Detail & Related papers (2021-08-09T08:45:47Z) - Improving predictions of Bayesian neural nets via local linearization [79.21517734364093]
We argue that the Gauss-Newton approximation should be understood as a local linearization of the underlying Bayesian neural network (BNN).
Because we use this linearized model for posterior inference, we should also predict using this modified model instead of the original one.
We refer to this modified predictive as "GLM predictive" and show that it effectively resolves common underfitting problems of the Laplace approximation.
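In the notation commonly used for this construction, the linearization replaces the network $f$ by its first-order Taylor expansion around the MAP estimate $\theta_*$, and the "GLM predictive" evaluates this linearized model at prediction time:

```latex
f_{\mathrm{lin}}(x;\theta) = f(x;\theta_*) + J_{\theta_*}(x)\,(\theta-\theta_*),
\qquad
J_{\theta_*}(x) = \left.\frac{\partial f(x;\theta)}{\partial \theta}\right|_{\theta=\theta_*}
```

Since the Laplace posterior is built from this linearized model, predicting with $f_{\mathrm{lin}}$ rather than $f$ keeps inference and prediction consistent.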
arXiv Detail & Related papers (2020-08-19T12:35:55Z) - Learning to Optimize Non-Rigid Tracking [54.94145312763044]
We employ learnable optimizations to improve robustness and speed up solver convergence.
First, we upgrade the tracking objective by integrating an alignment data term on deep features which are learned end-to-end through CNN.
Second, we bridge the gap between the preconditioning technique and learning method by introducing a ConditionNet which is trained to generate a preconditioner.
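To show where a preconditioner enters the solver, here is a standard preconditioned conjugate gradient loop. A learned preconditioner such as ConditionNet would supply `M_inv`; a Jacobi (diagonal) preconditioner stands in here as an assumption-free substitute:

```python
import numpy as np

def pcg(A, b, M_inv, iters=50, tol=1e-10):
    """Preconditioned conjugate gradients for SPD systems A x = b.
    M_inv maps a residual to its preconditioned version."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = M_inv(r_new)
        beta = (r_new @ z_new) / (r @ z)   # Fletcher-Reeves-style update
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

# Small SPD test system with a Jacobi preconditioner
rng = np.random.default_rng(3)
B = rng.normal(size=(10, 10))
A = B @ B.T + 10.0 * np.eye(10)
b = rng.normal(size=10)
x = pcg(A, b, lambda r: r / np.diag(A))
```

A better preconditioner shrinks the effective condition number, which is exactly the quantity a learned ConditionNet is trained to improve.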
arXiv Detail & Related papers (2020-03-27T04:40:57Z) - A Privacy-Preserving-Oriented DNN Pruning and Mobile Acceleration Framework [56.57225686288006]
Weight pruning of deep neural networks (DNNs) has been proposed to satisfy the limited storage and computing capability of mobile edge devices.
Previous pruning methods mainly focus on reducing the model size and/or improving performance without considering the privacy of user data.
We propose a privacy-preserving-oriented pruning and mobile acceleration framework that does not require the private training dataset.
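As a baseline illustration of what pruning does to a weight matrix, a plain magnitude criterion (zero out the smallest-magnitude weights) can be sketched in a few lines. This generic unstructured criterion is only a stand-in; the paper's scheme is structured and avoids the private training set:

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Return a copy of W with the smallest-magnitude fraction
    `sparsity` of its entries set to zero."""
    k = int(W.size * sparsity)
    if k == 0:
        return W.copy()
    # k-th smallest absolute value becomes the pruning threshold
    thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    return np.where(np.abs(W) <= thresh, 0.0, W)
```

Structured variants instead remove whole rows, columns, or blocks so the resulting sparsity pattern maps efficiently onto mobile hardware.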
arXiv Detail & Related papers (2020-03-13T23:52:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers or their summaries (including all information) and is not responsible for any consequences of their use.