Ranking-aware Continual Learning for LiDAR Place Recognition
- URL: http://arxiv.org/abs/2505.07198v1
- Date: Mon, 12 May 2025 03:06:29 GMT
- Title: Ranking-aware Continual Learning for LiDAR Place Recognition
- Authors: Xufei Wang, Gengxuan Tian, Junqiao Zhao, Siyue Tao, Qiwen Gu, Qiankun Yu, Tiantian Feng
- Abstract summary: We introduce a continual learning framework for LPR via Knowledge Distillation and Fusion (KDF) to alleviate forgetting. Inspired by the ranking process of place recognition retrieval, we present a ranking-aware knowledge distillation loss. We also introduce a knowledge fusion module to integrate the knowledge of old and new models for LiDAR place recognition.
- Score: 7.769301524248828
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Place recognition plays a significant role in SLAM, robot navigation, and autonomous driving applications. Benefiting from deep learning, the performance of LiDAR place recognition (LPR) has been greatly improved. However, many existing learning-based LPR methods suffer from catastrophic forgetting, which severely harms the performance of LPR on previously trained places after training on a new environment. In this paper, we introduce a continual learning framework for LPR via Knowledge Distillation and Fusion (KDF) to alleviate forgetting. Inspired by the ranking process of place recognition retrieval, we present a ranking-aware knowledge distillation loss that encourages the network to preserve the high-level place recognition knowledge. We also introduce a knowledge fusion module to integrate the knowledge of old and new models for LiDAR place recognition. Our extensive experiments demonstrate that KDF can be applied to different networks to overcome catastrophic forgetting, surpassing the state-of-the-art methods in terms of mean Recall@1 and forgetting score.
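The abstract describes the ranking-aware distillation loss only at a high level. The sketch below is one plausible reading of that idea, not the paper's actual formulation: it assumes PyTorch, a frozen old model and a trainable new model producing place descriptors for the same batch, and a ListNet-style soft ranking built from pairwise descriptor distances. The function name, `temperature`, and `lambda_kd` are illustrative assumptions.

```python
# Illustrative sketch only (assumed PyTorch; not the paper's exact loss):
# a ranking-aware distillation loss that encourages the new model to preserve
# the old model's retrieval ranking over a batch of place descriptors.
import torch
import torch.nn.functional as F


def ranking_aware_kd_loss(new_desc: torch.Tensor,
                          old_desc: torch.Tensor,
                          temperature: float = 1.0) -> torch.Tensor:
    """new_desc, old_desc: (B, D) place descriptors for the same batch,
    from the trainable new model and the frozen old model respectively."""
    B = new_desc.size(0)
    off_diag = ~torch.eye(B, dtype=torch.bool, device=new_desc.device)

    # Pairwise Euclidean distances within the batch, diagonal removed,
    # so each row ranks the other B-1 samples as retrieval candidates.
    d_new = torch.cdist(new_desc, new_desc)[off_diag].view(B, B - 1)
    d_old = torch.cdist(old_desc, old_desc)[off_diag].view(B, B - 1)

    # Smaller distance = higher rank; softmax over negated distances turns
    # each row into a soft ranking distribution (ListNet-style).
    p_old = F.softmax(-d_old / temperature, dim=1)
    log_p_new = F.log_softmax(-d_new / temperature, dim=1)

    # KL divergence pulls the new model's ranking toward the old model's.
    return F.kl_div(log_p_new, p_old, reduction='batchmean')


# Hypothetical usage while training on a new environment:
#   loss = task_loss + lambda_kd * ranking_aware_kd_loss(new_desc, old_desc.detach())
```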
Related papers
- Do LLMs Really Forget? Evaluating Unlearning with Knowledge Correlation and Confidence Awareness [44.37155305736321]
Machine unlearning techniques aim to mitigate unintended memorization in large language models (LLMs). We propose a knowledge unlearning evaluation framework that more accurately captures the implicit structure of real-world knowledge. Our framework provides a more realistic and rigorous assessment of unlearning performance.
arXiv Detail & Related papers (2025-06-06T04:35:19Z)
- Unveiling Knowledge Utilization Mechanisms in LLM-based Retrieval-Augmented Generation [77.10390725623125]
Retrieval-augmented generation (RAG) is widely employed to expand the knowledge scope of LLMs. Since RAG has shown promise in knowledge-intensive tasks like open-domain question answering, its broader application to complex tasks and intelligent assistants has further advanced its utility. We present a systematic investigation of the intrinsic mechanisms by which RAG models integrate internal (parametric) and external (retrieved) knowledge.
arXiv Detail & Related papers (2025-05-17T13:13:13Z)
- Effective LLM Knowledge Learning via Model Generalization [73.16975077770765]
Large language models (LLMs) are trained on enormous collections of documents that contain extensive world knowledge. It is still not well understood how knowledge is acquired via autoregressive pre-training. In this paper, we focus on understanding and improving LLM knowledge learning.
arXiv Detail & Related papers (2025-03-05T17:56:20Z)
- How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training [92.88889953768455]
There is a critical gap in understanding how Large Language Models (LLMs) internalize new knowledge. We identify computational subgraphs that facilitate knowledge storage and processing.
arXiv Detail & Related papers (2025-02-16T16:55:43Z)
- Efficient Knowledge Injection in LLMs via Self-Distillation [50.24554628642021]
This paper proposes utilizing prompt distillation to internalize new factual knowledge from free-form documents. We show that prompt distillation outperforms standard supervised fine-tuning and can even surpass RAG.
arXiv Detail & Related papers (2024-12-19T15:44:01Z)
- Adaptive Rank, Reduced Forgetting: Knowledge Retention in Continual Learning Vision-Language Models with Dynamic Rank-Selective LoRA [19.982853959240497]
We investigate whether pre-trained knowledge in vision-language models (VLMs) can be retained -- or even enhanced -- in continual learning (CL). We propose a universal and efficient continual learning approach for VLMs based on Dynamic Rank-Selective LoRA (CoDyRA).
arXiv Detail & Related papers (2024-12-01T23:41:42Z)
- Supportiveness-based Knowledge Rewriting for Retrieval-augmented Language Modeling [65.72918416258219]
Supportiveness-based Knowledge Rewriting (SKR) is a robust and pluggable knowledge rewriter inherently optimized for LLM generation.
Based on knowledge supportiveness, we first design a training data curation strategy for our rewriter model.
We then introduce the direct preference optimization (DPO) algorithm to align the generated rewrites to optimal supportiveness.
arXiv Detail & Related papers (2024-06-12T11:52:35Z)
- ActiveRAG: Autonomously Knowledge Assimilation and Accommodation through Retrieval-Augmented Agents [49.30553350788524]
Retrieval-Augmented Generation (RAG) enables Large Language Models (LLMs) to leverage external knowledge.
Existing RAG models often treat LLMs as passive recipients of information.
We introduce ActiveRAG, a multi-agent framework that mimics human learning behavior.
arXiv Detail & Related papers (2024-02-21T06:04:53Z)
- Forgetting before Learning: Utilizing Parametric Arithmetic for Knowledge Updating in Large Language Models [53.52344131257681]
We propose a new paradigm for fine-tuning called F-Learning, which employs parametric arithmetic to facilitate the forgetting of old knowledge and learning of new knowledge.
Experimental results on two publicly available datasets demonstrate that our proposed F-Learning can noticeably improve the knowledge updating performance of both full fine-tuning and LoRA fine-tuning.
arXiv Detail & Related papers (2023-11-14T09:12:40Z)
- CCL: Continual Contrastive Learning for LiDAR Place Recognition [5.025654873456756]
Current deep learning-based methods suffer from poor generalization ability and catastrophic forgetting.
We propose a continual contrastive learning method, named CCL, to tackle the catastrophic forgetting problem.
Our method consistently improves the performance of different methods in different environments, outperforming the state-of-the-art continual learning method.
arXiv Detail & Related papers (2023-03-24T12:14:54Z)
- Learning with Recoverable Forgetting [77.56338597012927]
Learning wIth Recoverable Forgetting explicitly handles the task- or sample-specific knowledge removal and recovery.
Specifically, LIRF brings in two innovative schemes, namely knowledge deposit and withdrawal.
We conduct experiments on several datasets, and demonstrate that the proposed LIRF strategy yields encouraging results with gratifying generalization capability.
arXiv Detail & Related papers (2022-07-17T16:42:31Z)
- Class-Incremental Continual Learning into the eXtended DER-verse [17.90483695137098]
This work aims at assessing and overcoming the pitfalls of our previous proposal, Dark Experience Replay (DER).
Inspired by the way our minds constantly rewrite past recollections and set expectations for the future, we endow our model with the ability to revise its replay memory to welcome novel information regarding past data.
We show that the application of these strategies leads to remarkable improvements.
arXiv Detail & Related papers (2022-01-03T17:14:30Z)
- Place recognition survey: An update on deep learning approaches [0.6352264764099531]
This paper surveys recent approaches and methods used in place recognition, particularly those based on deep learning.
The contributions of this work are twofold: surveying recent sensors, such as 3D LiDARs and RADARs, applied in place recognition, and elaborating on the various DL-based works with a summary of each framework.
arXiv Detail & Related papers (2021-06-19T09:17:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.