Lifelong Person Re-Identification via Knowledge Refreshing and Consolidation
- URL: http://arxiv.org/abs/2211.16201v1
- Date: Tue, 29 Nov 2022 13:39:45 GMT
- Title: Lifelong Person Re-Identification via Knowledge Refreshing and Consolidation
- Authors: Chunlin Yu, Ye Shi, Zimo Liu, Shenghua Gao, Jingya Wang
- Abstract summary: A key challenge for lifelong person re-identification (LReID) is how to incrementally preserve old knowledge and gradually add new capabilities to the system.
Inspired by the biological process of human cognition, where the somatosensory neocortex and the hippocampus work together in memory consolidation, we formulated a model called Knowledge Refreshing and Consolidation (KRC).
KRC achieves both positive forward and backward transfer. More specifically, a knowledge refreshing scheme is incorporated with the knowledge rehearsal mechanism to enable bi-directional knowledge transfer.
- Score: 35.43406281230279
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lifelong person re-identification (LReID) is in significant demand for
real-world development, as a large amount of ReID data is captured from diverse
locations over time and inherently cannot be accessed all at once. However, a key
challenge for LReID is how to incrementally preserve old knowledge and
gradually add new capabilities to the system. Unlike most existing LReID
methods, which mainly focus on dealing with catastrophic forgetting, we target
a more challenging problem: not only reducing forgetting on old tasks, but also
improving model performance on both new and old tasks during the lifelong
learning process. Inspired by the
biological process of human cognition where the somatosensory neocortex and the
hippocampus work together in memory consolidation, we formulated a model called
Knowledge Refreshing and Consolidation (KRC) that achieves both positive
forward and backward transfer. More specifically, a knowledge refreshing scheme
is incorporated with the knowledge rehearsal mechanism to enable bi-directional
knowledge transfer by introducing a dynamic memory model and an adaptive
working model. Moreover, a knowledge consolidation scheme operating on the dual
space further improves model stability over the long term. Extensive
evaluations show KRC's superiority over the state-of-the-art LReID methods on
challenging pedestrian benchmarks.
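Since the abstract describes the mechanism only at a high level, the following is a minimal PyTorch-style sketch of how a rehearsal loop with a stable memory model and an adaptive working model could realize the bi-directional transfer described above. The replay buffer interface, loss weights, and the EMA-style consolidation rule are all illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch only: a fast "working" model learns the new task while
# a slow "memory" model anchors old knowledge (forward transfer via rehearsal)
# and is in turn refreshed from the working model (backward transfer).
# Loss weights, the buffer interface, and the EMA rule are assumptions.

def krc_style_step(working, memory, new_batch, replay_buffer, opt,
                   alpha=0.5, tau=0.999):
    x_new, y_new = new_batch
    x_old, y_old = replay_buffer.sample()       # hypothetical rehearsal buffer

    logits_new = working(x_new)                 # learn the new task
    logits_old = working(x_old)                 # rehearse old identities
    with torch.no_grad():
        mem_logits = memory(x_old)              # stable targets from memory

    loss = F.cross_entropy(logits_new, y_new)
    loss = loss + F.cross_entropy(logits_old, y_old)
    loss = loss + alpha * F.kl_div(             # keep working model near memory
        F.log_softmax(logits_old, dim=1),
        F.softmax(mem_logits, dim=1),
        reduction="batchmean",
    )

    opt.zero_grad()
    loss.backward()
    opt.step()

    # "Refreshing": slowly consolidate new knowledge back into the memory model.
    with torch.no_grad():
        for p_mem, p_work in zip(memory.parameters(), working.parameters()):
            p_mem.mul_(tau).add_(p_work, alpha=1.0 - tau)
    return loss.item()
```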
Related papers
- Temporal-Difference Variational Continual Learning [89.32940051152782]
A crucial capability of Machine Learning models in real-world applications is the ability to continuously learn new tasks.
In Continual Learning settings, models often struggle to balance learning new tasks with retaining previous knowledge.
We propose new learning objectives that integrate the regularization effects of multiple previous posterior estimations.
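Concretely, an objective in this spirit might extend the standard variational continual learning (VCL) bound with a weighted mixture of KL terms against several earlier posteriors; the horizon n and the weights lambda_j below are illustrative assumptions, not the paper's exact objective.

```latex
% Hedged sketch: VCL-style objective regularized against the n most recent
% posteriors q_{t-1}, ..., q_{t-n}; the weights \lambda_j are assumed to sum to 1.
\mathcal{L}_t(q_t)
  = \mathbb{E}_{\theta \sim q_t}\!\left[ \log p(\mathcal{D}_t \mid \theta) \right]
  - \sum_{j=1}^{n} \lambda_j \, \mathrm{KL}\!\left( q_t \,\middle\|\, q_{t-j} \right),
  \qquad \sum_{j=1}^{n} \lambda_j = 1 .
```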
arXiv Detail & Related papers (2024-10-10T10:58:41Z)
- Distribution Aligned Semantics Adaption for Lifelong Person Re-Identification [43.32960398077722]
Re-ID systems need to be adaptable to changes in space and time.
Lifelong person Re-IDentification (LReID) methods rely on replaying exemplars from old domains and applying knowledge distillation between the logits of the old and new models.
We argue that a Re-ID model trained on diverse and challenging pedestrian images at a large scale can acquire robust and general human semantic knowledge.
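For context, the replay-plus-logit-distillation baseline this entry refers to is commonly implemented as below; the temperature T and the model names are illustrative, and this is a generic sketch rather than the paper's specific variant.

```python
import torch
import torch.nn.functional as F

# Generic sketch of logit-level knowledge distillation on replayed exemplars:
# a frozen copy of the old model supplies soft targets for the new model.
# The temperature T and the T**2 gradient rescaling follow standard practice.

def logit_distillation_loss(new_model, old_model, x_replay, T=2.0):
    with torch.no_grad():
        teacher_logits = old_model(x_replay)   # old model is frozen
    student_logits = new_model(x_replay)
    return (T ** 2) * F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    )
```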
arXiv Detail & Related papers (2024-05-30T05:15:38Z)
- Auto-selected Knowledge Adapters for Lifelong Person Re-identification [54.42307214981537]
Lifelong Person Re-Identification requires systems to continually learn from non-overlapping datasets across different times and locations.
Existing approaches, either rehearsal-free or rehearsal-based, still suffer from the problem of catastrophic forgetting.
We introduce a novel framework, AdalReID, that adopts knowledge adapters and a parameter-free auto-selection mechanism for lifelong learning.
arXiv Detail & Related papers (2024-05-29T11:42:02Z)
- Brain-Inspired Continual Learning-Robust Feature Distillation and Re-Consolidation for Class Incremental Learning [0.0]
We introduce a novel framework comprising two core concepts: feature distillation and re-consolidation.
Our framework, named Robust Rehearsal, addresses the challenge of catastrophic forgetting inherent in continual learning systems.
Experiments conducted on CIFAR10, CIFAR100, and real-world helicopter attitude datasets showcase the superior performance of CL models trained with Robust Rehearsal.
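As a rough illustration of the feature-distillation half of this recipe (the re-consolidation step is not reproduced here), a minimal sketch might penalize drift of the current backbone's features on rehearsal data; all names are hypothetical.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of feature-level distillation: keep the current backbone's
# intermediate features close to a frozen reference model's features on
# rehearsal data. The plain MSE penalty is an illustrative choice.

def feature_distillation_loss(backbone, ref_backbone, x_rehearsal):
    with torch.no_grad():
        ref_feats = ref_backbone(x_rehearsal)  # features to preserve
    feats = backbone(x_rehearsal)
    return F.mse_loss(feats, ref_feats)
```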
arXiv Detail & Related papers (2024-04-22T21:30:11Z)
- Recall-Oriented Continual Learning with Generative Adversarial Meta-Model [5.710971447109951]
We propose a recall-oriented continual learning framework to address the stability-plasticity dilemma.
Inspired by the human brain's ability to separate the mechanisms responsible for stability and plasticity, our framework consists of a two-level architecture.
We show that our framework not only effectively learns new knowledge without any disruption but also achieves high stability of previous knowledge.
arXiv Detail & Related papers (2024-03-05T16:08:59Z)
- Continual Learning via Manifold Expansion Replay [36.27348867557826]
Catastrophic forgetting is a major challenge to continual learning.
We propose a novel replay strategy called Manifold Expansion Replay (MaER).
We show that the proposed method significantly improves accuracy in the continual learning setup, outperforming the state of the art.
arXiv Detail & Related papers (2023-10-12T05:09:27Z)
- Adaptively Integrated Knowledge Distillation and Prediction Uncertainty for Continual Learning [71.43841235954453]
Current deep learning models often suffer from catastrophic forgetting of old knowledge when continually learning new knowledge.
Existing strategies to alleviate this issue often fix the trade-off between keeping old knowledge (stability) and learning new knowledge (plasticity).
arXiv Detail & Related papers (2023-01-18T05:36:06Z)
- Beyond Not-Forgetting: Continual Learning with Backward Knowledge Transfer [39.99577526417276]
In continual learning (CL), an agent can improve the learning performance of both a new task and old tasks.
Most existing CL methods focus on addressing catastrophic forgetting in neural networks by minimizing the modification of the learnt model for old tasks.
We propose a new CL method with Backward knowlEdge tRansfer (CUBER) for a fixed capacity neural network without data replay.
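The paper's own criterion differs; as a generic illustration of how backward transfer can be pursued without replaying data, the sketch below applies a PCGrad-style projection to gradient directions cached at old-task boundaries (the caching scheme and names are assumptions).

```python
import torch

# Generic sketch: steer the new-task gradient away from directions that
# conflict with previously learned tasks, using gradient directions cached
# at each old task's boundary (no raw data is replayed). This is a
# PCGrad-style projection for illustration, not CUBER's actual criterion.

def project_gradient(g_new, cached_old_grads, eps=1e-12):
    """g_new and each cached gradient are flattened 1-D tensors."""
    g = g_new.clone()
    for g_old in cached_old_grads:
        dot = torch.dot(g, g_old)
        if dot < 0:  # conflicting direction: remove the interfering component
            g = g - (dot / (torch.dot(g_old, g_old) + eps)) * g_old
    return g
```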
arXiv Detail & Related papers (2022-11-01T23:55:51Z)
- Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and the associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
Theoretical analysis shows that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)