AirLoop: Lifelong Loop Closure Detection
- URL: http://arxiv.org/abs/2109.08975v1
- Date: Sat, 18 Sep 2021 17:28:47 GMT
- Title: AirLoop: Lifelong Loop Closure Detection
- Authors: Dasong Gao, Chen Wang, Sebastian Scherer
- Abstract summary: AirLoop is a method that leverages techniques from lifelong learning to minimize forgetting when training loop closure detection models incrementally.
We experimentally demonstrate the effectiveness of AirLoop on TartanAir, Nordland, and RobotCar datasets.
- Score: 5.3759730885842725
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Loop closure detection is an important building block that ensures the
accuracy and robustness of simultaneous localization and mapping (SLAM)
systems. Due to their generalization ability, CNN-based approaches have
received increasing attention. Although they normally benefit from training on
datasets that are diverse and reflective of the environments, new environments
often emerge after the model is deployed. It is therefore desirable to
incorporate the data newly collected during operation for incremental learning.
Nevertheless, simply finetuning the model on new data is infeasible since it
may cause the model's performance on previously learned data to degrade over
time, which is also known as the problem of catastrophic forgetting. In this
paper, we present AirLoop, a method that leverages techniques from lifelong
learning to minimize forgetting when training loop closure detection models
incrementally. We experimentally demonstrate the effectiveness of AirLoop on
TartanAir, Nordland, and RobotCar datasets. To the best of our knowledge,
AirLoop is one of the first works to achieve lifelong learning of deep loop
closure detectors.
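To make the idea concrete, below is a minimal sketch of one regularization-based lifelong learning scheme (in the style of memory-aware synapses): parameters that were important in previous environments are penalized for drifting when training on a new one. This illustrates the general technique, not AirLoop's exact loss; `model` and `loader` are assumed placeholders.
```python
# Minimal sketch of regularization-based lifelong learning (MAS-style).
# Illustrates penalizing changes to parameters that mattered for old
# environments; it is NOT AirLoop's exact algorithm.
import torch

def compute_importance(model, loader, device="cpu"):
    """Estimate per-parameter importance as the mean gradient magnitude
    of the squared output norm over data from the old environment."""
    importance = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for images in loader:
        model.zero_grad()
        out = model(images.to(device))
        out.pow(2).sum().backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                importance[n] += p.grad.abs()
        n_batches += 1
    return {n: v / max(n_batches, 1) for n, v in importance.items()}

def lifelong_loss(model, old_params, importance, task_loss, lam=1.0):
    """Task loss plus a penalty for drifting away from old parameters."""
    penalty = sum(
        (importance[n] * (p - old_params[n]).pow(2)).sum()
        for n, p in model.named_parameters()
    )
    return task_loss + lam * penalty
```
After finishing one environment, one would snapshot `old_params = {n: p.detach().clone() for n, p in model.named_parameters()}`, estimate `importance` on that environment's data, and then minimize `lifelong_loss` while training on the next.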
Related papers
- Diffusion-Driven Data Replay: A Novel Approach to Combat Forgetting in Federated Class Continual Learning [13.836798036474143]
A key challenge in Federated Class Continual Learning is catastrophic forgetting.
We propose a novel method of data replay based on diffusion models.
Our method significantly outperforms existing baselines.
arXiv Detail & Related papers (2024-09-02T10:07:24Z)
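As an illustration of the replay idea, a single training step might mix synthesized old-class samples with new data, as in this hedged sketch (the `generator.sample` interface is hypothetical, and the paper's diffusion-based pipeline is more involved):
```python
# Hedged sketch of generative data replay for class-continual learning.
# `generator.sample(...)` stands in for a trained diffusion model's
# sampler; the actual method in the paper may differ substantially.
import torch

def replay_training_step(model, optimizer, criterion,
                         new_x, new_y, generator, old_classes, n_replay=32):
    # Synthesize examples of previously seen classes to rehearse them.
    labels = old_classes[torch.randint(len(old_classes), (n_replay,))]
    replay_x = generator.sample(labels)  # hypothetical API
    x = torch.cat([new_x, replay_x])
    y = torch.cat([new_y, labels])
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```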
- AIR: Analytic Imbalance Rectifier for Continual Learning [16.917778190250353]
Continual learning enables AI models to learn new data sequentially without retraining in real-world scenarios.
Most existing methods assume the training data are balanced and aim to reduce the problem that models tend to forget previously learned data.
We propose an Analytic Imbalance Rectifier (AIR) algorithm to solve this problem.
arXiv Detail & Related papers (2024-08-19T18:42:00Z)
- Beyond Prompt Learning: Continual Adapter for Efficient Rehearsal-Free Continual Learning [22.13331870720021]
We propose an approach beyond prompt learning for the rehearsal-free continual learning (RFCL) task, called Continual Adapter (C-ADA).
C-ADA flexibly extends specific weights in its Continual Adapter Layer (CAL) to learn new knowledge for each task and freezes old weights to preserve prior knowledge.
Our approach achieves significantly improved performance and training speed, outperforming the current state-of-the-art (SOTA) method.
arXiv Detail & Related papers (2024-07-14T17:40:40Z)
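The "extend new weights, freeze old ones" idea can be sketched as below. This is only an illustration of the general adapter-growing pattern, not C-ADA's actual CAL design:
```python
# Hedged sketch of a growing adapter: append fresh weight slices per
# task and freeze earlier ones. Not C-ADA's actual architecture.
import torch
import torch.nn as nn

class GrowingAdapter(nn.Module):
    def __init__(self, dim, width_per_task):
        super().__init__()
        self.dim, self.width = dim, width_per_task
        self.down = nn.ParameterList()  # one slice per task
        self.up = nn.ParameterList()

    def add_task(self):
        # Freeze everything learned so far, then append fresh weights.
        for p in self.parameters():
            p.requires_grad_(False)
        self.down.append(nn.Parameter(torch.randn(self.dim, self.width) * 0.01))
        self.up.append(nn.Parameter(torch.zeros(self.width, self.dim)))

    def forward(self, x):
        # Old slices still contribute; only the newest slice is trainable.
        return x + sum(x @ d @ u for d, u in zip(self.down, self.up))
```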
- Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC).
ARC achieves average performance increases of 2.7% and 2.6% on the CIFAR-100 and ImageNet-R datasets, respectively.
arXiv Detail & Related papers (2024-05-23T08:43:09Z)
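Recency bias at the classification layer lends itself to post-hoc correction. The snippet below is a generic, hedged illustration of that idea (logit adjustment with an estimated class prior), not the actual ARC procedure:
```python
# Hedged sketch of post-hoc logit correction against recent-task bias.
# A generic calibration idea, not the ARC algorithm from the paper.
import torch

@torch.no_grad()
def corrected_predict(model, x, class_prior, tau=1.0):
    """Subtract log class priors so classes from recent tasks, which the
    head tends to over-score, do not dominate predictions."""
    logits = model(x)
    return (logits - tau * class_prior.log()).argmax(dim=1)
```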
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes to replay data from experienced tasks when learning new tasks.
However, storing raw data is often impractical due to memory constraints or data privacy issues.
As an alternative, data-free replay methods synthesize replay samples by inverting them from the classification model.
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
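A hedged sketch of the inversion idea: starting from noise, optimize inputs until the frozen old classifier confidently assigns them a chosen label. Real data-free replay methods add much stronger image and feature-statistics priors; `old_model` is a placeholder:
```python
# Hedged sketch of data-free replay by inverting a frozen classifier.
import torch
import torch.nn.functional as F

def invert_samples(old_model, target_labels, shape=(3, 32, 32),
                   steps=200, lr=0.1):
    old_model.eval()
    x = torch.randn(len(target_labels), *shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(old_model(x), target_labels)
        loss = loss + 1e-4 * x.pow(2).mean()  # simple energy prior
        loss.backward()
        opt.step()
    return x.detach()
```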
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge learned from the remaining dataset.
We adopt a projected-gradient-based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence that our unlearning method produces models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
arXiv Detail & Related papers (2023-12-07T07:17:24Z)
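A hedged sketch of the gradient-projection idea (not PGU's exact formulation): ascend on the forget-set loss, but first remove the component of the step that would interfere with gradients of the retained data:
```python
# Hedged sketch of gradient projection for unlearning. Not PGU's exact
# formulation; both losses are assumed to come from fresh forward passes.
import torch

def projected_unlearning_step(model, loss_forget, loss_retain, lr=1e-3):
    g_f = torch.autograd.grad(loss_forget, list(model.parameters()),
                              retain_graph=True)
    g_r = torch.autograd.grad(loss_retain, list(model.parameters()))
    flat_f = torch.cat([g.flatten() for g in g_f])
    flat_r = torch.cat([g.flatten() for g in g_r])
    # Project out the retain-gradient direction from the ascent direction.
    proj = flat_f - (flat_f @ flat_r) / (flat_r @ flat_r + 1e-12) * flat_r
    with torch.no_grad():
        idx = 0
        for p in model.parameters():
            n = p.numel()
            p += lr * proj[idx:idx + n].view_as(p)  # ascend on forget loss
            idx += n
```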
- LARA: A Light and Anti-overfitting Retraining Approach for Unsupervised Time Series Anomaly Detection [49.52429991848581]
We propose a Light and Anti-overfitting Retraining Approach (LARA) for deep variational auto-encoder based time series anomaly detection methods (VAEs).
This work makes three novel contributions: 1) the retraining process is formulated as a convex problem that converges quickly and prevents overfitting; 2) a ruminate block leverages historical data without the need to store it; and 3) it is proven mathematically that, when fine-tuning the latent vectors and reconstructed data, linear formulations achieve the least adjusting error between the ground truths and the fine-tuned ones.
arXiv Detail & Related papers (2023-10-09T12:36:16Z)
- KL-divergence Based Deep Learning for Discrete Time Model [12.165326681174408]
We develop a Kullback-Leibler-based (KL) deep learning procedure to integrate external survival prediction models with newly collected time-to-event data.
Time-dependent KL discrimination information is utilized to measure the discrepancy between the external and internal data.
arXiv Detail & Related papers (2022-08-10T01:46:26Z)
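A hedged sketch of the general recipe (the paper's time-dependent KL construction is more refined than this): fit the new time-to-event data by maximum likelihood while penalizing divergence from the external model's predicted interval probabilities. `external_probs` is an assumed per-sample probability table:
```python
# Hedged sketch of blending an external survival model with new data via
# a KL penalty on predicted discrete-time event probabilities.
import torch
import torch.nn.functional as F

def kl_integrated_loss(model, x, event_targets, external_probs, eta=0.5):
    """Negative log-likelihood on the new data plus KL between the
    external model's predicted distribution and ours."""
    logits = model(x)                              # (batch, n_intervals)
    nll = F.cross_entropy(logits, event_targets)   # NLL on observed intervals
    log_p = F.log_softmax(logits, dim=1)
    kl = F.kl_div(log_p, external_probs, reduction="batchmean")
    return nll + eta * kl
```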
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
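For intuition, one simple way to limit forgetting during local training is to anchor local weights to the global model with a proximal term, as sketched below. This FedProx-style penalty is only a generic illustration; FedReg's actual mechanism differs:
```python
# Hedged sketch of curbing forgetting in local FL training with a
# proximal term toward the global weights (FedProx-style, not FedReg).
import torch

def local_train(model, global_model, loader, criterion, mu=0.01,
                lr=0.01, epochs=1):
    global_params = [p.detach().clone() for p in global_model.parameters()]
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = criterion(model(x), y)
            # Penalize drifting from the global model to limit forgetting.
            prox = sum((p - g).pow(2).sum()
                       for p, g in zip(model.parameters(), global_params))
            (loss + 0.5 * mu * prox).backward()
            opt.step()
    return model.state_dict()
```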
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that the attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
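The classic gradient-matching formulation of such attacks can be sketched as follows (a minimal version of the "deep leakage from gradients" family, with the label assumed known; real attacks also recover labels and add priors):
```python
# Hedged sketch of a gradient-matching inversion attack: optimize a
# dummy input so its gradient matches the gradient shared by a client.
import torch
import torch.nn.functional as F

def invert_from_gradients(model, true_grads, label, shape=(1, 3, 32, 32),
                          steps=300, lr=0.1):
    dummy_x = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([dummy_x], lr=lr)
    params = list(model.parameters())
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(dummy_x), label)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        # Match the dummy gradients to the observed client gradients.
        dist = sum((g - t).pow(2).sum() for g, t in zip(grads, true_grads))
        dist.backward()
        opt.step()
    return dummy_x.detach()
```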
- An Incremental Clustering Method for Anomaly Detection in Flight Data [0.0]
We propose a novel incremental anomaly detection method based on the Gaussian Mixture Model (GMM).
It is a probabilistic clustering model of flight operations that can incrementally update its clusters based on new data.
Preliminary results indicate that the incremental learning scheme is effective in dealing with dynamically growing data in flight data analytics.
arXiv Detail & Related papers (2020-05-20T06:58:25Z)
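A hedged sketch of the incremental ingredient: a single Gaussian component whose mean and covariance are updated online from running sufficient statistics (a multivariate Welford update). The paper's full GMM scheme additionally handles responsibilities and cluster management:
```python
# Hedged sketch of incrementally updating one Gaussian component with
# new data; not the paper's complete incremental GMM algorithm.
import numpy as np

class IncrementalGaussian:
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros((dim, dim))  # scatter matrix about the mean

    def update(self, x):
        """Welford-style update of mean and covariance with one sample."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += np.outer(delta, x - self.mean)

    @property
    def cov(self):
        return self.m2 / max(self.n - 1, 1)
```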
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.