Learning Multiplicative Interactions with Bayesian Neural Networks for
Visual-Inertial Odometry
- URL: http://arxiv.org/abs/2007.07630v1
- Date: Wed, 15 Jul 2020 11:39:29 GMT
- Title: Learning Multiplicative Interactions with Bayesian Neural Networks for
Visual-Inertial Odometry
- Authors: Kashmira Shinde, Jongseok Lee, Matthias Humt, Aydin Sezgin, Rudolph
Triebel
- Abstract summary: This paper presents an end-to-end multi-modal learning approach for Visual-Inertial Odometry (VIO). It is specifically designed to exploit sensor complementarity in light of sensor degradation scenarios.
- Score: 44.209301916028124
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents an end-to-end multi-modal learning approach for monocular
Visual-Inertial Odometry (VIO), which is specifically designed to exploit
sensor complementarity in light of sensor degradation scenarios. The
proposed network makes use of a multi-head self-attention mechanism that learns
multiplicative interactions between multiple streams of information. Another
design feature of our approach is the incorporation of model uncertainty
using a scalable Laplace approximation. We evaluate the proposed approach
against end-to-end state-of-the-art methods on the KITTI dataset and show that
it achieves superior performance. Importantly, our work thereby provides
empirical evidence that learning multiplicative interactions can result in a
powerful inductive bias for increased robustness to sensor failures.
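To make the abstract's two design features concrete, below is a minimal PyTorch sketch of attention-gated multiplicative fusion of a visual and an inertial feature stream. It is an illustration only: the dimensions, module names, and the sigmoid-gating formulation are assumptions, not the paper's actual architecture.

```python
# Hedged sketch: multi-head self-attention over two modality tokens, followed
# by a multiplicative (gated) interaction. Dimensions and the gating scheme
# are assumptions, not the architecture from the paper.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, d_visual=512, d_inertial=128, d_model=256, n_heads=4):
        super().__init__()
        self.proj_v = nn.Linear(d_visual, d_model)
        self.proj_i = nn.Linear(d_inertial, d_model)
        # Multi-head self-attention across the two modality tokens.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Gate producing per-feature weights in (0, 1).
        self.gate = nn.Sequential(nn.Linear(2 * d_model, 2 * d_model), nn.Sigmoid())

    def forward(self, f_visual, f_inertial):
        # f_visual: (B, d_visual), f_inertial: (B, d_inertial)
        tokens = torch.stack(
            [self.proj_v(f_visual), self.proj_i(f_inertial)], dim=1)  # (B, 2, d_model)
        attended, _ = self.attn(tokens, tokens, tokens)
        flat = attended.flatten(1)  # (B, 2 * d_model)
        # Multiplicative interaction: attention-derived gates rescale the
        # fused features, so a degraded stream can be down-weighted.
        return self.gate(flat) * flat
```

Multiplying features by learned gates, rather than simply concatenating them, is one way the inductive bias described in the abstract can let the network suppress a degraded sensor stream. For the uncertainty component, one scalable variant of the Laplace approximation fits a diagonal Gaussian posterior over the last layer only; the paper's exact curvature factorization is not given in this summary, so the following is likewise a hedged sketch (model.head, the data loader, and the loss function are placeholders).

```python
import torch

def diag_laplace_posterior(model, loader, loss_fn, prior_prec=1.0):
    """Diagonal Laplace fit for the parameters of `model.head` (assumed name).

    The Hessian of the loss is approximated by summed squared gradients
    (a diagonal empirical-Fisher surrogate); the trained weights act as the
    posterior mean (MAP), and the returned tensors are per-parameter variances.
    """
    params = list(model.head.parameters())
    fisher = [torch.zeros_like(p) for p in params]
    for x, y in loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for f, p in zip(fisher, params):
            f += p.grad.detach() ** 2
    # Posterior precision = curvature + prior precision; variance is its inverse.
    return [1.0 / (f + prior_prec) for f in fisher]
```

At test time, sampling head weights from this Gaussian and averaging the resulting predictions yields an estimate of model uncertainty.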
Related papers
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of deep learning's surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z)
- Multimodal Information Bottleneck for Deep Reinforcement Learning with Multiple Sensors [10.454194186065195]
Reinforcement learning has achieved promising results on robotic control tasks but struggles to leverage multimodal information effectively.
Recent works construct auxiliary losses based on reconstruction or mutual information to extract joint representations from multiple sensory inputs.
We argue that compressing the information that the learned joint representations retain about the raw multimodal observations is helpful (a generic bottleneck loss is sketched after this list).
arXiv Detail & Related papers (2024-10-23T04:32:37Z)
- The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning risks skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robustness datasets.
Our approach markedly enhances robustness across diverse scenarios and various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- Multimodal Visual-Tactile Representation Learning through Self-Supervised Contrastive Pre-Training [0.850206009406913]
MViTac is a novel methodology that leverages contrastive learning to integrate vision and touch sensations in a self-supervised fashion.
By drawing on both sensory inputs, MViTac uses intra- and inter-modality losses to learn representations, resulting in enhanced material-property classification and more adept grasping prediction.
arXiv Detail & Related papers (2024-01-22T15:11:57Z)
- Understanding Data Augmentation from a Robustness Perspective [10.063624819905508]
Data augmentation stands out as a pivotal technique to amplify model robustness.
This manuscript takes both a theoretical and empirical approach to understanding the phenomenon.
Our empirical evaluations dissect the intricate mechanisms of emblematic data augmentation strategies.
These insights provide a novel lens through which we can re-evaluate model safety and robustness in visual recognition tasks.
arXiv Detail & Related papers (2023-09-07T10:54:56Z)
- Regularization Through Simultaneous Learning: A Case Study on Plant Classification [0.0]
This paper introduces Simultaneous Learning, a regularization approach drawing on principles of Transfer Learning and Multi-task Learning.
We leverage auxiliary datasets alongside the target dataset, the UFOP-HVD, to facilitate simultaneous classification guided by a customized loss function.
Remarkably, our approach demonstrates superior performance over models without regularization.
arXiv Detail & Related papers (2023-05-22T19:44:57Z)
- Task-Free Continual Learning via Online Discrepancy Distance Learning [11.540150938141034]
This paper develops a new theoretical analysis framework which provides generalization bounds based on the discrepancy distance between the visited samples and the entire information made available for training the model.
Inspired by this theoretical model, we propose a new approach enabled by a dynamic component expansion mechanism for a mixture model, namely Online Discrepancy Distance Learning (ODDL).
arXiv Detail & Related papers (2022-10-12T20:44:09Z)
- MMLatch: Bottom-up Top-down Fusion for Multimodal Sentiment Analysis [84.7287684402508]
Current deep learning approaches for multimodal fusion rely on bottom-up fusion of high- and mid-level latent modality representations.
Models of human perception highlight the importance of top-down fusion, where high-level representations affect the way sensory inputs are perceived.
We propose a neural architecture that captures top-down cross-modal interactions, using a feedback mechanism in the forward pass during network training (see the gating sketch after this list).
arXiv Detail & Related papers (2022-01-24T17:48:04Z)
- Variational Structured Attention Networks for Deep Visual Representation Learning [49.80498066480928]
We propose a unified deep framework to jointly learn both spatial attention maps and channel attention in a principled manner.
Specifically, we integrate the estimation and the interaction of the attentions within a probabilistic representation learning framework.
We implement the inference rules within the neural network, thus allowing for end-to-end learning of the probabilistic and the CNN front-end parameters.
arXiv Detail & Related papers (2021-03-05T07:37:24Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable sensors as teacher modalities and RGB videos as the student modality (a plain distillation loss is sketched after this list).
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
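As referenced in the multimodal information bottleneck entry above, compressing a learned joint representation can be implemented as a variational bottleneck: a KL penalty toward a fixed prior is added to the task loss. This is a generic illustration, not that paper's actual objective; the beta weight and Gaussian parameterization are assumptions.

```python
import torch

def vib_loss(task_loss, mu, logvar, beta=1e-3):
    """Generic variational-bottleneck objective (illustrative only).

    mu and logvar parameterize a diagonal Gaussian over the joint latent;
    the closed-form KL to a standard normal prior compresses the
    representation, and beta trades compression against task performance.
    """
    kl = 0.5 * torch.sum(mu ** 2 + logvar.exp() - logvar - 1, dim=-1).mean()
    return task_loss + beta * kl
```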
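The gating sketch referenced in the MMLatch entry: top-down feedback can be expressed as a learned sigmoid mask, computed from a high-level crossmodal summary, that rescales each modality's low-level features during the forward pass. Shapes and names are assumptions, not MMLatch's exact design.

```python
import torch.nn as nn

class TopDownFeedback(nn.Module):
    """Illustrative top-down feedback mask (not MMLatch's exact module)."""

    def __init__(self, d_low, d_high):
        super().__init__()
        # High-level summary -> per-feature mask in (0, 1).
        self.mask = nn.Sequential(nn.Linear(d_high, d_low), nn.Sigmoid())

    def forward(self, low_feats, high_summary):
        # low_feats: (B, T, d_low); high_summary: (B, d_high)
        # The mask modulates how the low-level sensory input is "perceived".
        return low_feats * self.mask(high_summary).unsqueeze(1)
```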
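The distillation loss referenced in the SAKDN entry: a plain knowledge-distillation objective in which a wearable-sensor teacher supplies soft targets for the RGB-video student. SAKDN's semantics-aware adaptive weighting is omitted; the temperature and mixing weight are assumptions.

```python
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Standard KD objective (illustrative; not SAKDN's full loss)."""
    # Soft targets from the sensor teacher, softened by temperature T.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard supervision from the ground-truth action labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```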
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.