Deep Generative Model for Simultaneous Range Error Mitigation and
Environment Identification
- URL: http://arxiv.org/abs/2305.18206v1
- Date: Tue, 23 May 2023 10:16:22 GMT
- Title: Deep Generative Model for Simultaneous Range Error Mitigation and
Environment Identification
- Authors: Yuxiao Li, Santiago Mazuelas, Yuan Shen
- Abstract summary: This paper proposes a deep generative model (DGM) for simultaneous range error mitigation and environment identification.
Experiments on a general Ultra-wideband dataset demonstrate superior performance in range error mitigation, scalability to different environments, and a novel capability for simultaneous environment identification.
- Score: 29.827191184889898
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Received waveforms contain rich information about both range and
environment semantics. However, their full potential is hard to exploit under
multipath and non-line-of-sight conditions. This paper proposes a deep
generative model (DGM) for simultaneous range error mitigation and environment
identification. In particular, we present a Bayesian model for the generative
process of the received waveform, composed of latent variables for both
range-related features and environment semantics. Simultaneous range error
mitigation and environment identification is then interpreted as an inference
problem on the DGM and implemented in a unique end-to-end learning scheme.
Comprehensive experiments on a general Ultra-wideband dataset demonstrate
superior performance in range error mitigation, scalability to different
environments, and a novel capability for simultaneous environment
identification.
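As a concrete picture of the model described above, here is a minimal PyTorch sketch; it is not the authors' implementation, and all module names, dimensions, and the Gaussian/categorical factorization of the two latent branches are assumptions. An encoder maps the received waveform to a range-related latent and an environment latent, a decoder reconstructs the waveform, and two heads carry out the inference (range error and environment class) in one end-to-end pass.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WaveformDGM(nn.Module):
    """Hypothetical two-latent generative model: waveform -> (z_range, z_env)."""
    def __init__(self, wave_len=152, z_dim=16, n_envs=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(wave_len, 128), nn.ReLU())
        self.mu = nn.Linear(128, z_dim)           # Gaussian range-related latent
        self.logvar = nn.Linear(128, z_dim)
        self.env_logits = nn.Linear(128, n_envs)  # categorical environment latent
        self.dec = nn.Sequential(nn.Linear(z_dim + n_envs, 128), nn.ReLU(),
                                 nn.Linear(128, wave_len))
        self.range_head = nn.Linear(z_dim, 1)     # range-error estimate

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        env = F.softmax(self.env_logits(h), dim=-1)           # soft one-hot
        x_hat = self.dec(torch.cat([z, env], dim=-1))
        return x_hat, mu, logvar, env, self.range_head(z).squeeze(-1)

def elbo_loss(x, x_hat, mu, logvar, env):
    recon = F.mse_loss(x_hat, x)
    kl_gauss = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # KL of the categorical posterior against a uniform prior over environments
    kl_cat = torch.mean((env * (env * env.size(-1) + 1e-8).log()).sum(-1))
    return recon + kl_gauss + kl_cat  # plus supervised terms in practice
```

On this schematic reading, range error mitigation and environment identification share a single inference pass: the range head's output corrects the measured distance, while the environment posterior identifies the propagation condition.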
Related papers
- Adaptive Learning of the Latent Space of Wasserstein Generative Adversarial Networks [7.958528596692594]
We propose a novel framework called the latent Wasserstein GAN (LWGAN).
It fuses the Wasserstein auto-encoder and the Wasserstein GAN so that the intrinsic dimension of the data manifold can be adaptively learned.
We show that LWGAN is able to identify the correct intrinsic dimension under several scenarios (a sketch of the fused objective follows below).
arXiv Detail & Related papers (2024-09-27T01:25:22Z) - Caution for the Environment: Multimodal Agents are Susceptible to Environmental Distractions [68.92637077909693]
- Caution for the Environment: Multimodal Agents are Susceptible to Environmental Distractions [68.92637077909693]
This paper investigates the faithfulness of multimodal large language model (MLLM) agents in the graphical user interface (GUI) environment.
A general setting is proposed where both the user and the agent are benign, and the environment, while not malicious, contains unrelated content.
Experimental results reveal that even the most powerful models, whether generalist agents or specialist GUI agents, are susceptible to distractions.
arXiv Detail & Related papers (2024-08-05T15:16:22Z) - Mining Invariance from Nonlinear Multi-Environment Data: Binary Classification [2.0528878959274883]
This paper focuses on binary classification to shed light on general nonlinear data generation mechanisms.
We identify a unique form of invariance that exists solely in the binary setting and allows us to train models that are invariant across environments.
We propose a prediction method and conduct experiments using real and synthetic datasets (see the sketch below).
arXiv Detail & Related papers (2024-04-23T17:26:59Z) - LITE: Modeling Environmental Ecosystems with Multimodal Large Language Models [25.047123247476016]
- LITE: Modeling Environmental Ecosystems with Multimodal Large Language Models [25.047123247476016]
LITE is a large language model for modeling environmental ecosystems.
It unifies different environmental variables by transforming them into natural language descriptions and line graph images.
During this transformation, incomplete features are imputed by a sparse Mixture-of-Experts framework (sketched below).
arXiv Detail & Related papers (2024-04-01T15:14:07Z) - SpReME: Sparse Regression for Multi-Environment Dynamic Systems [6.7053978622785415]
- SpReME: Sparse Regression for Multi-Environment Dynamic Systems [6.7053978622785415]
We develop a sparse regression method dubbed SpReME to discover the major dynamics that underlie multiple environments.
We demonstrate that the proposed model captures the correct dynamics from multiple environments across four different dynamical systems, with improved prediction performance (see the sketch below).
arXiv Detail & Related papers (2023-02-12T15:45:50Z) - Differentiable Invariant Causal Discovery [106.87950048845308]
- Differentiable Invariant Causal Discovery [106.87950048845308]
Learning causal structure from observational data is a fundamental challenge in machine learning.
This paper proposes Differentiable Invariant Causal Discovery (DICD) to avoid learning spurious edges and wrong causal directions.
Extensive experiments on synthetic and real-world datasets verify that DICD outperforms state-of-the-art causal discovery methods by up to 36% in structural Hamming distance (SHD); a simplified objective is sketched below.
arXiv Detail & Related papers (2022-05-31T09:29:07Z) - Bridging the Gap Between Clean Data Training and Real-World Inference
for Spoken Language Understanding [76.89426311082927]
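One way to picture a differentiable, invariance-aware discovery objective: per-environment linear SEM fits, a penalty on disagreement between the per-environment weight matrices, and the NOTEARS trace-exponential term to keep the shared graph acyclic. This is an assumed NOTEARS-style surrogate, not DICD's actual objective:

```python
import torch

def dicd_like_loss(Ws, Xs, rho=1.0, lam_inv=1.0, lam_l1=0.01):
    """Ws: list of (d, d) weight matrices, one per environment (requires_grad).
    Xs: list of (n_e, d) data matrices."""
    d = Ws[0].shape[0]
    fit = sum(((X - X @ W) ** 2).mean() for W, X in zip(Ws, Xs))
    W_bar = torch.stack(Ws).mean(0)
    invariance = sum(((W - W_bar) ** 2).sum() for W in Ws)  # shared structure
    # NOTEARS acyclicity: h(W) = tr(exp(W*W)) - d equals 0 iff W is a DAG
    h = torch.matrix_exp(W_bar * W_bar).diagonal().sum() - d
    return fit + lam_inv * invariance + lam_l1 * W_bar.abs().sum() + rho * h ** 2
```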
- Bridging the Gap Between Clean Data Training and Real-World Inference for Spoken Language Understanding [76.89426311082927]
Existing models are trained on clean data, which causes a gap between clean-data training and real-world inference.
We propose a method from the perspective of domain adaptation, by which both high- and low-quality samples are embedded into a similar vector space.
Experiments on the widely used Snips dataset and a large-scale in-house dataset (10 million training examples) demonstrate that this method not only outperforms baseline models on a real-world (noisy) corpus but also enhances robustness, producing high-quality results in noisy environments (an alignment sketch follows below).
arXiv Detail & Related papers (2021-04-13T17:54:33Z) - Evidential Sparsification of Multimodal Latent Spaces in Conditional
Variational Autoencoders [63.46738617561255]
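The embedding-alignment idea can be sketched with a simple moment-matching penalty between clean and noisy encodings (a CORAL/MMD-flavored surrogate assumed here; the paper's adaptation objective may differ):

```python
import torch
import torch.nn.functional as F

def aligned_slu_loss(encoder, clf, clean, noisy, labels, lam=0.1):
    """Task loss on clean utterances plus a penalty that pulls the mean and
    spread of noisy-utterance embeddings toward the clean ones, so both
    sample qualities land in a similar vector space."""
    h_clean, h_noisy = encoder(clean), encoder(noisy)
    task = F.cross_entropy(clf(h_clean), labels)
    mean_gap = (h_clean.mean(0) - h_noisy.mean(0)).pow(2).sum()
    var_gap = (h_clean.var(0) - h_noisy.var(0)).pow(2).sum()
    return task + lam * (mean_gap + var_gap)
```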
- Evidential Sparsification of Multimodal Latent Spaces in Conditional Variational Autoencoders [63.46738617561255]
We consider the problem of sparsifying the discrete latent space of a trained conditional variational autoencoder.
We use evidential theory to identify the latent classes that receive direct evidence from a particular input condition and filter out those that do not.
Experiments on diverse tasks, such as image generation and human behavior prediction, demonstrate the effectiveness of our proposed technique (a simplified filtering rule is sketched below).
arXiv Detail & Related papers (2020-10-19T01:27:21Z) - GANs with Variational Entropy Regularizers: Applications in Mitigating
the Mode-Collapse Issue [95.23775347605923]
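A rough sketch of the filtering step: treat each discrete latent class's positive weighted contribution from the input condition as its evidence, and mask out classes that receive none before the softmax. The evidence rule below is a simplification of the paper's evidential-theoretic computation:

```python
import torch

def sparsify_discrete_latent(logits_layer, cond_features):
    """logits_layer: nn.Linear mapping condition features -> K class logits.
    cond_features: (d,) feature vector for one input condition. Keeps only
    latent classes with positive 'evidence' from this condition."""
    W, b = logits_layer.weight, logits_layer.bias      # (K, d), (K,)
    contrib = W * cond_features.unsqueeze(0)           # per-class contributions
    evidence = contrib.clamp(min=0).sum(dim=1)         # positive part only
    keep = evidence > 0
    logits = cond_features @ W.t() + b
    logits = logits.masked_fill(~keep, float("-inf"))  # filtered softmax
    return torch.softmax(logits, dim=-1)
```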
- GANs with Variational Entropy Regularizers: Applications in Mitigating the Mode-Collapse Issue [95.23775347605923]
Building on the success of deep learning, Generative Adversarial Networks (GANs) provide a modern approach to learn a probability distribution from observed samples.
GANs often suffer from the mode collapse issue where the generator fails to capture all existing modes of the input distribution.
We take an information-theoretic approach and maximize a variational lower bound on the entropy of the generated samples to increase their diversity (one such estimator is sketched below).
arXiv Detail & Related papers (2020-09-24T19:34:37Z) - Invariant Causal Prediction for Block MDPs [106.63346115341862]
- Invariant Causal Prediction for Block MDPs [106.63346115341862]
Generalization across environments is critical to the successful application of reinforcement learning algorithms to real-world challenges.
We propose a method of invariant prediction to learn model-irrelevance state abstractions (MISA) that generalize to novel observations in the multi-environment setting (a schematic sketch follows below).
arXiv Detail & Related papers (2020-03-12T21:03:01Z)