CCL: Continual Contrastive Learning for LiDAR Place Recognition
- URL: http://arxiv.org/abs/2303.13952v2
- Date: Wed, 14 Jun 2023 12:22:07 GMT
- Title: CCL: Continual Contrastive Learning for LiDAR Place Recognition
- Authors: Jiafeng Cui, Xieyuanli Chen
- Abstract summary: Current deep learning-based methods suffer from poor generalization ability and catastrophic forgetting.
We propose a continual contrastive learning method, named CCL, to tackle the catastrophic forgetting problem.
Our method consistently improves the performance of different LPR methods in different environments, outperforming the state-of-the-art continual learning method.
- Score: 5.025654873456756
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Place recognition is an essential and challenging task in loop closing and
global localization for robotics and autonomous driving applications.
Benefiting from the recent advances in deep learning techniques, the
performance of LiDAR place recognition (LPR) has been greatly improved.
However, current deep learning-based methods suffer from two major problems:
poor generalization ability and catastrophic forgetting. In this paper, we
propose a continual contrastive learning method, named CCL, to tackle the
catastrophic forgetting problem and generally improve the robustness of LPR
approaches. Our CCL constructs a contrastive feature pool and utilizes
contrastive loss to train more transferable representations of places. When
transferred into new environments, our CCL continuously reviews the contrastive
memory bank and applies a distribution-based knowledge distillation to maintain
the retrieval ability of the past data while continually learning to recognize
new places from the new data. We thoroughly evaluate our approach on Oxford,
MulRan, and PNV datasets using three different LPR methods. The experimental
results show that our CCL consistently improves the performance of different
methods in different environments, outperforming the state-of-the-art continual
learning method. The implementation of our method has been released at
https://github.com/cloudcjf/CCL.
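The abstract describes two ingredients: a contrastive loss trained against a feature pool, and a distribution-based knowledge distillation that preserves retrieval ability over a memory bank. A minimal NumPy sketch of both losses is below; the function names, shapes, and the specific InfoNCE/KL formulations are illustrative assumptions, not taken from the released CCL code.

```python
# Hypothetical sketch of the two losses described in the abstract:
# (1) an InfoNCE-style contrastive loss over place descriptors, and
# (2) a distribution-based distillation term matching the new model's
#     similarity distribution over a memory bank to the old model's.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def info_nce(query, positive, negatives, tau=0.07):
    """Pull the query toward its positive place; push it away from
    negatives drawn from the contrastive feature pool."""
    q = query / np.linalg.norm(query)
    p = positive / np.linalg.norm(positive)
    n = negatives / np.linalg.norm(negatives, axis=1, keepdims=True)
    logits = np.concatenate([[q @ p], n @ q]) / tau  # positive is index 0
    return -np.log(softmax(logits)[0])

def distribution_kd(old_desc, new_desc, bank, tau=1.0):
    """KL divergence between the similarity distributions of the frozen
    old model and the current model over the memory bank."""
    b = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    p_old = softmax((b @ (old_desc / np.linalg.norm(old_desc))) / tau)
    p_new = softmax((b @ (new_desc / np.linalg.norm(new_desc))) / tau)
    return float(np.sum(p_old * (np.log(p_old) - np.log(p_new))))

rng = np.random.default_rng(0)
q = rng.standard_normal(16)
loss = info_nce(q, q + 0.1 * rng.standard_normal(16),
                rng.standard_normal((8, 16)))
kd = distribution_kd(q, q, rng.standard_normal((32, 16)))
```

With identical old and new descriptors the distillation term is zero, which is the sanity check one would expect: distillation only penalizes the new model when its similarity distribution drifts from the old one.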
Related papers
- Temporal-Difference Variational Continual Learning [89.32940051152782]
A crucial capability of Machine Learning models in real-world applications is the ability to continuously learn new tasks.
In Continual Learning settings, models often struggle to balance learning new tasks with retaining previous knowledge.
We propose new learning objectives that integrate the regularization effects of multiple previous posterior estimations.
arXiv Detail & Related papers (2024-10-10T10:58:41Z)
- DELTA: Decoupling Long-Tailed Online Continual Learning [7.507868991415516]
Long-Tailed Online Continual Learning (LTOCL) aims to learn new tasks from sequentially arriving class-imbalanced data streams.
We present DELTA, a decoupled learning approach designed to enhance learning representations.
We demonstrate that DELTA improves the capacity for incremental learning, surpassing existing OCL methods.
arXiv Detail & Related papers (2024-04-06T02:33:04Z)
- RecDCL: Dual Contrastive Learning for Recommendation [65.6236784430981]
We propose a dual contrastive learning recommendation framework -- RecDCL.
In RecDCL, the FCL objective is designed to eliminate redundant solutions on user-item positive pairs.
The BCL objective is utilized to generate contrastive embeddings on output vectors for enhancing the robustness of the representations.
arXiv Detail & Related papers (2024-01-28T11:51:09Z)
- Continual Learning with Dirichlet Generative-based Rehearsal [22.314195832409755]
We present Dirichlet Continual Learning, a novel generative-based rehearsal strategy for task-oriented dialogue systems.
We also introduce Jensen-Shannon Knowledge Distillation (JSKD), a robust logit-based knowledge distillation method.
Our experiments confirm the efficacy of our approach in both intent detection and slot-filling tasks, outperforming state-of-the-art methods.
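The JSKD summary describes a logit-based distillation built on the symmetric Jensen-Shannon divergence rather than the usual one-sided KL. A hedged sketch of such a loss is below; the temperature, names, and exact formulation are illustrative assumptions, not taken from the paper.

```python
# Illustrative Jensen-Shannon distillation loss: the student's softened
# logits are matched to the teacher's via JS divergence, which is
# symmetric and bounded, unlike plain KL.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def js_kd_loss(teacher_logits, student_logits, tau=2.0):
    p = softmax(np.asarray(teacher_logits, dtype=float) / tau)
    q = softmax(np.asarray(student_logits, dtype=float) / tau)
    m = 0.5 * (p + q)  # mixture distribution
    kl = lambda a, b: float(np.sum(a * (np.log(a) - np.log(b))))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

t = [2.0, 0.5, -1.0]
zero = js_kd_loss(t, t)          # identical logits -> 0.0
pos = js_kd_loss(t, [0.0, 0.0, 0.0])
```

Symmetry means neither teacher nor student is privileged as the "reference" distribution, which is one plausible reason to prefer JS over KL for distillation.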
arXiv Detail & Related papers (2023-09-13T12:30:03Z)
- CCE: Sample Efficient Sparse Reward Policy Learning for Robotic Navigation via Confidence-Controlled Exploration [72.24964965882783]
Confidence-Controlled Exploration (CCE) is designed to enhance the training sample efficiency of reinforcement learning algorithms for sparse reward settings such as robot navigation.
CCE is based on a novel relationship we provide between gradient estimation and policy entropy.
We demonstrate through simulated and real-world experiments that CCE outperforms conventional methods that employ constant trajectory lengths and entropy regularization.
arXiv Detail & Related papers (2023-06-09T18:45:15Z)
- On the Effectiveness of Equivariant Regularization for Robust Online Continual Learning [17.995662644298974]
Continual Learning (CL) approaches seek to bridge this gap by facilitating the transfer of knowledge to both previous tasks and future ones.
Recent research has shown that self-supervision can produce versatile models that can generalize well to diverse downstream tasks.
We propose Continual Learning via Equivariant Regularization (CLER), an OCL approach that leverages equivariant tasks for self-supervision.
arXiv Detail & Related papers (2023-05-05T16:10:31Z)
- Beyond Supervised Continual Learning: a Review [69.9674326582747]
Continual Learning (CL) is a flavor of machine learning where the usual assumption of stationary data distribution is relaxed or omitted.
Changes in the data distribution can cause the so-called catastrophic forgetting (CF) effect: an abrupt loss of previous knowledge.
This article reviews literature that study CL in other settings, such as learning with reduced supervision, fully unsupervised learning, and reinforcement learning.
arXiv Detail & Related papers (2022-08-30T14:44:41Z)
- A Study of Continual Learning Methods for Q-Learning [78.6363825307044]
We present an empirical study on the use of continual learning (CL) methods in a reinforcement learning (RL) scenario.
Our results show that dedicated CL methods can significantly improve learning when compared to the baseline technique of "experience replay".
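The "experience replay" baseline mentioned above can be sketched as a fixed-capacity buffer filled by reservoir sampling, so that every transition seen so far has an equal chance of being retained. This is a generic illustration of the baseline technique, not code from the study.

```python
# Minimal experience-replay buffer using reservoir sampling.
import random

class ReplayBuffer:
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0  # total transitions offered so far
        self.rng = random.Random(seed)

    def add(self, transition):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            # Keep the new transition with probability capacity / seen,
            # overwriting a uniformly random slot.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = transition

    def sample(self, batch_size):
        return self.rng.sample(self.buffer, min(batch_size, len(self.buffer)))

buf = ReplayBuffer(capacity=100)
for t in range(1000):
    buf.add(t)
```

After 1000 insertions the buffer still holds exactly 100 transitions, each an unbiased sample of the stream; replaying such batches alongside new data is the baseline that dedicated CL methods are compared against.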
arXiv Detail & Related papers (2022-06-08T14:51:52Z)
- vCLIMB: A Novel Video Class Incremental Learning Benchmark [53.90485760679411]
We introduce vCLIMB, a novel video continual learning benchmark.
vCLIMB is a standardized test-bed to analyze catastrophic forgetting of deep models in video continual learning.
We propose a temporal consistency regularization that can be applied on top of memory-based continual learning methods.
arXiv Detail & Related papers (2022-01-23T22:14:17Z)
- Online Continual Learning under Extreme Memory Constraints [40.80045285324969]
We introduce the novel problem of Memory-Constrained Online Continual Learning (MC-OCL).
MC-OCL imposes strict constraints on the memory overhead that a possible algorithm can use to avoid catastrophic forgetting.
We propose an algorithmic solution to MC-OCL: Batch-level Distillation (BLD), a regularization-based CL approach.
arXiv Detail & Related papers (2020-08-04T13:25:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.