Deep GEM-Based Network for Weakly Supervised UWB Ranging Error
Mitigation
- URL: http://arxiv.org/abs/2305.13904v1
- Date: Tue, 23 May 2023 10:26:50 GMT
- Title: Deep GEM-Based Network for Weakly Supervised UWB Ranging Error
Mitigation
- Authors: Yuxiao Li, Santiago Mazuelas, Yuan Shen
- Abstract summary: We present a learning framework based on weak supervision for UWB ranging error mitigation.
Specifically, we propose a deep learning method based on the generalized expectation-maximization (GEM) algorithm for robust UWB ranging error mitigation.
- Score: 29.827191184889898
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ultra-wideband (UWB)-based techniques, while becoming mainstream approaches
for highly accurate positioning, tend to be challenged by ranging bias in harsh
environments. The emerging learning-based methods for error mitigation have
shown great performance improvement via exploiting high semantic features from
raw data. However, these methods rely heavily on fully labeled data, leading to
a high cost for data acquisition. We present a learning framework based on weak
supervision for UWB ranging error mitigation. Specifically, we propose a deep
learning method based on the generalized expectation-maximization (GEM)
algorithm for robust UWB ranging error mitigation under weak supervision. This
method integrates probabilistic modeling into the deep learning scheme and
adopts weakly supervised labels as prior information. Extensive experiments in
various supervision scenarios illustrate the superiority of the proposed
method.
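The abstract describes combining generalized EM with weak labels as priors. As a minimal illustrative sketch (not the paper's deep architecture), the idea can be shown on a toy two-component model of ranging errors, where the latent variable is the LoS/NLoS condition, noisy weak labels act as a prior on that latent variable in the E-step, and the M-step only takes a damped improvement step rather than a full maximization (the "generalized" in GEM). All names and parameters here are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ranging errors: LoS roughly unbiased, NLoS positively biased.
n = 400
z_true = rng.random(n) < 0.4                # 40% NLoS
errors = np.where(z_true,
                  rng.normal(0.5, 0.2, n),  # NLoS: biased measurements
                  rng.normal(0.0, 0.1, n))  # LoS: near-unbiased

# Weak labels: a noisy NLoS indicator, correct ~80% of the time,
# converted into a per-sample prior p(z = NLoS | weak label).
weak = np.where(rng.random(n) < 0.8, z_true, ~z_true)
prior = np.where(weak, 0.8, 0.2)

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Initial mixture parameters (means and std devs of the two components).
mu = np.array([0.0, 0.3])
sigma = np.array([0.2, 0.2])

for _ in range(50):
    # E-step: responsibilities, weighted by the weak-label prior.
    lik = np.stack([gaussian(errors, mu[0], sigma[0]) * (1 - prior),
                    gaussian(errors, mu[1], sigma[1]) * prior])
    resp = lik / lik.sum(axis=0)

    # Generalized M-step: a damped update that increases (but does not
    # fully maximize) the expected complete-data log-likelihood.
    step = 0.5
    for k in (0, 1):
        w = resp[k]
        mu[k] += step * (np.average(errors, weights=w) - mu[k])
        new_sigma = np.sqrt(np.average((errors - mu[k]) ** 2, weights=w))
        sigma[k] += step * (new_sigma - sigma[k])

print(mu)  # component means should approach the LoS (~0.0) and NLoS (~0.5) biases
```

In the paper's setting, a neural network would replace the fixed Gaussian components and the M-step would be a gradient step on the network weights; the sketch keeps only the weak-prior E-step and partial M-step structure.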
Related papers
- Removing the need for ground truth UWB data collection: self-supervised ranging error correction using deep reinforcement learning [1.4061979259370274]
Multipath effects and non-line-of-sight conditions cause ranging errors between anchors and tags.
Existing approaches for mitigating these ranging errors rely on collecting large labeled datasets.
This paper proposes a novel self-supervised deep reinforcement learning approach that does not require labeled ground truth data.
arXiv Detail & Related papers (2024-03-28T09:36:55Z)
- Beyond Deep Ensembles: A Large-Scale Evaluation of Bayesian Deep Learning under Distribution Shift [19.945634052291542]
We evaluate modern BDL algorithms on real-world datasets from the WILDS collection containing challenging classification and regression tasks.
We compare the algorithms on a wide range of large, convolutional and transformer-based neural network architectures.
We provide the first systematic evaluation of BDL for fine-tuning large pre-trained models.
arXiv Detail & Related papers (2023-06-21T14:36:03Z)
- A Semi-Supervised Learning Approach for Ranging Error Mitigation Based on UWB Waveform [29.827191184889898]
We propose a semi-supervised learning method based on variational Bayes for UWB ranging error mitigation.
Our method can efficiently accumulate knowledge from both labeled and unlabeled data samples.
arXiv Detail & Related papers (2023-05-23T10:08:42Z)
- Efficient Deep Reinforcement Learning Requires Regulating Overfitting [91.88004732618381]
We show that high temporal-difference (TD) error on the validation set of transitions is the main culprit that severely affects the performance of deep RL algorithms.
We show that a simple online model selection method that targets the validation TD error is effective across state-based DMC and Gym tasks.
arXiv Detail & Related papers (2023-04-20T17:11:05Z)
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce Fisher Information Matrix (FIM) to measure the informativeness of evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network more focused on the representation learning of uncertain classes.
arXiv Detail & Related papers (2023-03-03T16:12:59Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER)
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- Disambiguation of weak supervision with exponential convergence rates [88.99819200562784]
In supervised learning, data are annotated with incomplete yet discriminative information.
In this paper, we focus on partial labelling, an instance of weak supervision where, from a given input, we are given a set of potential targets.
We propose an empirical disambiguation algorithm to recover full supervision from weak supervision.
arXiv Detail & Related papers (2021-02-04T18:14:32Z)
- Robust Ultra-wideband Range Error Mitigation with Deep Learning at the Edge [0.0]
Multipath effects, reflections, refractions, and complexity of the indoor radio environment can introduce a positive bias in the ranging measurement.
This article proposes an efficient representation learning methodology that exploits the latest advancement in deep learning and graph optimization techniques.
Channel Impulse Response (CIR) signals are directly exploited to extract high semantic features to estimate corrections in either NLoS or LoS conditions.
arXiv Detail & Related papers (2020-11-30T10:52:21Z)
- Semi-Supervised Learning with Meta-Gradient [123.26748223837802]
We propose a simple yet effective meta-learning algorithm in semi-supervised learning.
We find that the proposed algorithm performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2020-07-08T08:48:56Z)
- Non-Negative Bregman Divergence Minimization for Deep Direct Density Ratio Estimation [18.782750537161615]
We propose a non-negative correction for empirical BD estimators to mitigate train-loss hacking.
We show that the proposed methods show a favorable performance in inlier-based outlier detection.
arXiv Detail & Related papers (2020-06-12T07:39:03Z)
- Learning the Truth From Only One Side of the Story [58.65439277460011]
We focus on generalized linear models and show that without adjusting for this sampling bias, the model may converge suboptimally or even fail to converge to the optimal solution.
We propose an adaptive approach that comes with theoretical guarantees and show that it outperforms several existing methods empirically.
arXiv Detail & Related papers (2020-06-08T18:20:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.