Differentiable SLAM Helps Deep Learning-based LiDAR Perception Tasks
- URL: http://arxiv.org/abs/2309.09206v1
- Date: Sun, 17 Sep 2023 08:24:16 GMT
- Title: Differentiable SLAM Helps Deep Learning-based LiDAR Perception Tasks
- Authors: Prashant Kumar, Dheeraj Vattikonda, Vedang Bhupesh Shenvi Nadkarni,
Erqun Dong, Sabyasachi Sahoo
- Abstract summary: We investigate a new paradigm that uses differentiable SLAM architectures in a self-supervised manner to train end-to-end deep learning models in various LiDAR-based applications.
We demonstrate that this new paradigm of using a SLAM loss signal while training LiDAR-based models can be easily adopted by the community.
- Score: 2.753469462596694
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We investigate a new paradigm that uses differentiable SLAM architectures in
a self-supervised manner to train end-to-end deep learning models in various
LiDAR-based applications. To the best of our knowledge, no prior work leverages
SLAM as a training signal for deep learning-based models. We explore new ways to
improve the efficiency, robustness, and adaptability of LiDAR systems with deep
learning techniques. We focus on the potential benefits of differentiable SLAM
architectures for improving the performance of deep learning tasks such as
classification and regression, as well as SLAM itself. Our experimental results
demonstrate a non-trivial increase in the performance of two deep learning
applications, Ground Level Estimation and Dynamic to Static LiDAR Translation,
when used with differentiable SLAM architectures. Overall, our findings provide
important insights that enhance the performance of LiDAR-based navigation
systems. We demonstrate that this new paradigm of using a SLAM loss signal while
training LiDAR-based models can be easily adopted by the community.
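The core idea of the paradigm, a differentiable SLAM module contributing a loss term alongside the main task loss so that registration gradients also flow into the backbone, can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the 2D point-to-point alignment residual, the function names, and the weighting term `lam` are all assumptions standing in for a full differentiable SLAM front-end.

```python
import numpy as np

def task_loss(pred, target):
    # Illustrative task objective, e.g. a ground-level regression error.
    return float(np.mean((pred - target) ** 2))

def slam_loss(scan, ref, pose):
    # Stand-in for a differentiable SLAM residual: rigidly transform the
    # scan (N x 2 points) by an estimated 2D pose (theta, tx, ty) and
    # measure the point-to-point alignment error against a reference scan.
    theta, tx, ty = pose
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    aligned = scan @ R.T + np.array([tx, ty])
    return float(np.mean(np.sum((aligned - ref) ** 2, axis=1)))

def total_loss(pred, target, scan, ref, pose, lam=0.1):
    # Joint objective: the task loss plus a weighted SLAM loss, so that
    # in an autodiff framework gradients from the alignment residual
    # would also update the shared feature extractor.
    return task_loss(pred, target) + lam * slam_loss(scan, ref, pose)
```

In a real training loop the transform and residual would be implemented with differentiable operations in an autodiff framework, so minimizing `total_loss` trains the end-to-end model with both signals.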
Related papers
- Insights from the Inverse: Reconstructing LLM Training Goals Through Inverse RL [7.988692259455583]
Large language models (LLMs) trained with Reinforcement Learning from Human Feedback have demonstrated remarkable capabilities, but their underlying reward functions and decision-making processes remain opaque.
This paper introduces a novel approach to interpreting LLMs by applying inverse reinforcement learning (IRL) to recover their implicit reward functions.
We conduct experiments on toxicity-aligned LLMs of varying sizes, extracting reward models that achieve up to 80.40% accuracy in predicting human preferences.
arXiv Detail & Related papers (2024-10-16T12:14:25Z)
- Learn To Learn More Precisely [30.825058308218047]
"Learn to learn more precisely" aims to make the model learn precise target knowledge from data.
We propose a simple and effective meta-learning framework named Meta Self-Distillation (MSD) to maximize the consistency of learned knowledge.
MSD exhibits remarkable performance in few-shot classification tasks in both standard and augmented scenarios.
arXiv Detail & Related papers (2024-08-08T17:01:26Z)
- CoMMIT: Coordinated Instruction Tuning for Multimodal Large Language Models [68.64605538559312]
In this paper, we analyze the MLLM instruction tuning from both theoretical and empirical perspectives.
Inspired by our findings, we propose a measurement to quantitatively evaluate the learning balance.
In addition, we introduce an auxiliary loss regularization method to promote updating of the generation distribution of MLLMs.
arXiv Detail & Related papers (2024-07-29T23:18:55Z)
- LLMs-as-Instructors: Learning from Errors Toward Automating Model Improvement [93.38736019287224]
The "LLMs-as-Instructors" framework autonomously enhances the training of smaller target models.
Inspired by the theory of "Learning from Errors", this framework employs an instructor LLM to meticulously analyze the specific errors within a target model.
Within this framework, we implement two strategies: "Learning from Error", which focuses solely on incorrect responses to tailor training data, and "Learning from Error by Contrast", which uses contrastive learning to analyze both correct and incorrect responses for a deeper understanding of errors.
arXiv Detail & Related papers (2024-06-29T17:16:04Z)
- Automatic Curriculum Learning with Gradient Reward Signals [0.0]
We introduce a framework where the teacher model, utilizing the gradient norm information of a student model, dynamically adapts the learning curriculum.
We analyze how gradient norm rewards influence the teacher's ability to craft challenging yet achievable learning sequences, ultimately enhancing the student's performance.
arXiv Detail & Related papers (2023-12-21T04:19:43Z)
- Forgetting before Learning: Utilizing Parametric Arithmetic for Knowledge Updating in Large Language Models [53.52344131257681]
We propose a new paradigm for fine-tuning called F-Learning, which employs parametric arithmetic to facilitate the forgetting of old knowledge and learning of new knowledge.
Experimental results on two publicly available datasets demonstrate that our proposed F-Learning can obviously improve the knowledge updating performance of both full fine-tuning and LoRA fine-tuning.
arXiv Detail & Related papers (2023-11-14T09:12:40Z)
- Reinforcement Learning for Topic Models [3.42658286826597]
We apply reinforcement learning techniques to topic modeling by replacing the variational autoencoder in ProdLDA with a continuous action space reinforcement learning policy.
We introduce several modifications: modernize the neural network architecture, weight the ELBO loss, use contextual embeddings, and monitor the learning process via computing topic diversity and coherence.
arXiv Detail & Related papers (2023-05-08T16:41:08Z)
- TRAIL: Near-Optimal Imitation Learning with Suboptimal Data [100.83688818427915]
We present training objectives that use offline datasets to learn a factored transition model.
Our theoretical analysis shows that the learned latent action space can boost the sample-efficiency of downstream imitation learning.
To learn the latent action space in practice, we propose TRAIL (Transition-Reparametrized Actions for Imitation Learning), an algorithm that learns an energy-based transition model.
arXiv Detail & Related papers (2021-10-27T21:05:00Z)
- LIFT-SLAM: a deep-learning feature-based monocular visual SLAM method [0.0]
We propose to combine the potential of deep learning-based feature descriptors with traditional geometry-based VSLAM.
Experiments conducted on KITTI and Euroc datasets show that deep learning can be used to improve the performance of traditional VSLAM systems.
arXiv Detail & Related papers (2021-03-31T20:35:10Z)
- Towards Interpretable Deep Learning Models for Knowledge Tracing [62.75876617721375]
We propose to adopt the post-hoc method to tackle the interpretability issue for deep learning-based knowledge tracing (DLKT) models.
Specifically, we focus on applying the layer-wise relevance propagation (LRP) method to interpret RNN-based DLKT models.
Experiment results show the feasibility of using the LRP method for interpreting the DLKT model's predictions.
arXiv Detail & Related papers (2020-05-13T04:03:21Z)
- Learning to Explore using Active Neural SLAM [99.42064696897533]
This work presents a modular and hierarchical approach to learn policies for exploring 3D environments.
The proposed model can also be easily transferred to the PointGoal task and was the winning entry of the CVPR 2019 Habitat PointGoal Navigation Challenge.
arXiv Detail & Related papers (2020-04-10T17:57:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.