Lifelong Person Re-Identification via Adaptive Knowledge Accumulation
- URL: http://arxiv.org/abs/2103.12462v1
- Date: Tue, 23 Mar 2021 11:30:38 GMT
- Title: Lifelong Person Re-Identification via Adaptive Knowledge Accumulation
- Authors: Nan Pu, Wei Chen, Yu Liu, Erwin M. Bakker and Michael S. Lew
- Abstract summary: Lifelong person re-identification (LReID) enables learning continuously across multiple domains.
We design an Adaptive Knowledge Accumulation framework that is endowed with two crucial abilities: knowledge representation and knowledge operation.
Our method alleviates catastrophic forgetting on seen domains and demonstrates the ability to generalize to unseen domains.
- Score: 18.4671957106297
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Person ReID methods typically learn from a stationary domain that is fixed by
the choice of a given dataset. In many contexts (e.g., lifelong learning),
those methods are ineffective because the domain is continually changing, in
which case incremental learning over multiple domains is required.
In this work we explore a new and challenging ReID task, namely lifelong person
re-identification (LReID), which enables continual learning across multiple
domains and even generalisation to new and unseen domains. Following the cognitive
processes in the human brain, we design an Adaptive Knowledge Accumulation
(AKA) framework that is endowed with two crucial abilities: knowledge
representation and knowledge operation. Our method alleviates catastrophic
forgetting on seen domains and demonstrates the ability to generalize to unseen
domains. Correspondingly, we also provide a new and large-scale benchmark for
LReID. Extensive experiments demonstrate that our method outperforms other
competitors by a margin of 5.8% mAP in generalising evaluation.
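The mAP metric used in the generalising evaluation above can be sketched as follows (a minimal illustration of standard ReID evaluation, not code from the paper; the binary match lists are assumed to be pre-ranked by descending query-gallery similarity):

```python
import numpy as np

def average_precision(ranked_matches):
    """AP for one query: ranked_matches is a 0/1 sequence over the
    gallery, ordered by descending similarity to the query."""
    hits = np.asarray(ranked_matches, dtype=float)
    if hits.sum() == 0:
        return 0.0
    # Precision at each rank, averaged over the ranks of true matches.
    precision_at_k = np.cumsum(hits) / (np.arange(len(hits)) + 1)
    return float((precision_at_k * hits).sum() / hits.sum())

def mean_average_precision(all_ranked_matches):
    """mAP: mean of per-query average precisions."""
    return float(np.mean([average_precision(m) for m in all_ranked_matches]))
```

A "5.8% mAP margin" then means the mean of these per-query AP values is 5.8 percentage points higher than the best competitor's on the unseen-domain galleries.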
Related papers
- Auto-selected Knowledge Adapters for Lifelong Person Re-identification [54.42307214981537]
Lifelong Person Re-Identification requires systems to continually learn from non-overlapping datasets across different times and locations.
Existing approaches, either rehearsal-free or rehearsal-based, still suffer from the problem of catastrophic forgetting.
We introduce a novel framework AdalReID, that adopts knowledge adapters and a parameter-free auto-selection mechanism for lifelong learning.
arXiv Detail & Related papers (2024-05-29T11:42:02Z)
- Knowledge-augmented Deep Learning and Its Applications: A Survey [60.221292040710885]
Knowledge-augmented deep learning (KADL) aims to identify domain knowledge and integrate it into deep models for data-efficient, generalizable, and interpretable deep learning.
This survey subsumes existing works and offers a bird's-eye view of research in the general area of knowledge-augmented deep learning.
arXiv Detail & Related papers (2022-11-30T03:44:15Z)
- Learning with Style: Continual Semantic Segmentation Across Tasks and Domains [25.137859989323537]
Domain adaptation and class incremental learning deal with domain and task variability separately, whereas their unified solution is still an open problem.
We tackle both facets of the problem together, taking into account the semantic shift within both input and label spaces.
We show how the proposed method outperforms existing approaches, which prove to be ill-equipped to deal with continual semantic segmentation under both task and domain shift.
arXiv Detail & Related papers (2022-10-13T13:24:34Z)
- Forget Less, Count Better: A Domain-Incremental Self-Distillation Learning Benchmark for Lifelong Crowd Counting [51.44987756859706]
Off-the-shelf methods have drawbacks when handling multiple domains.
Lifelong Crowd Counting aims at alleviating the catastrophic forgetting and improving the generalization ability.
arXiv Detail & Related papers (2022-05-06T15:37:56Z)
- Unsupervised Lifelong Person Re-identification via Contrastive Rehearsal [7.983523975392535]
Unsupervised lifelong person ReID focuses on continuously conducting unsupervised domain adaptation on new domains.
We set an image-to-image similarity constraint between old and new models to regularize the model updates in a way that suits old knowledge.
Our proposed lifelong method achieves strong generalizability, which significantly outperforms previous lifelong methods on both seen and unseen domains.
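The image-to-image similarity constraint between old and new models described above can be sketched as a distillation-style regularizer (an illustrative assumption about the loss form, not the paper's exact formulation; the feature matrices stand in for old-model and new-model embeddings of the same image batch):

```python
import numpy as np

def cosine_similarity_matrix(feats):
    """Pairwise cosine similarities between row-wise feature vectors."""
    norms = np.linalg.norm(feats, axis=1, keepdims=True)
    unit = feats / np.clip(norms, 1e-12, None)
    return unit @ unit.T

def similarity_constraint_loss(old_feats, new_feats):
    """Penalize drift between the old and new models' image-to-image
    similarity structure (mean squared difference of the matrices)."""
    s_old = cosine_similarity_matrix(old_feats)
    s_new = cosine_similarity_matrix(new_feats)
    return float(np.mean((s_new - s_old) ** 2))
```

Minimizing such a term while training on the new domain keeps the new model's similarity structure close to the old model's, which is one way to regularize updates so they "suit old knowledge".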
arXiv Detail & Related papers (2022-03-12T15:44:08Z)
- On Generalizing Beyond Domains in Cross-Domain Continual Learning [91.56748415975683]
Deep neural networks often suffer from catastrophic forgetting of previously learned knowledge after learning a new task.
Our proposed approach learns new tasks under domain shift with accuracy boosts up to 10% on challenging datasets such as DomainNet and OfficeHome.
arXiv Detail & Related papers (2022-03-08T09:57:48Z)
- Transferability in Deep Learning: A Survey [80.67296873915176]
The ability to acquire and reuse knowledge is known as transferability in deep learning.
We present this survey to connect different isolated areas in deep learning with their relation to transferability.
We implement a benchmark and an open-source library, enabling a fair evaluation of deep learning methods in terms of transferability.
arXiv Detail & Related papers (2022-01-15T15:03:17Z)
- Multiple Domain Experts Collaborative Learning: Multi-Source Domain Generalization For Person Re-Identification [41.923753462539736]
We propose a novel training framework, named Multiple Domain Experts Collaborative Learning (MD-ExCo)
The MD-ExCo consists of a universal expert and several domain experts.
Experiments on DG-ReID benchmarks show that our MD-ExCo outperforms the state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2021-05-26T06:38:23Z)
- Domain Adaption for Knowledge Tracing [65.86619804954283]
We propose a novel adaptable framework, namely adaptable knowledge tracing (AKT), to address the domain adaptation for knowledge tracing (DAKT) problem.
For the first aspect, we incorporate educational characteristics (e.g., slip, guess, question texts) into deep knowledge tracing (DKT) to obtain a well-performing knowledge tracing model.
For the second aspect, we propose and adopt three domain adaptation processes. First, we pre-train an auto-encoder to select useful source instances for target model training.
arXiv Detail & Related papers (2020-01-14T15:04:48Z)
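The auto-encoder-based selection of useful source instances mentioned in the last entry can be sketched with reconstruction error as the usefulness signal (an illustrative assumption, not the paper's method; a rank-k linear autoencoder, equivalent to PCA, stands in for the pre-trained auto-encoder, and the threshold is hypothetical):

```python
import numpy as np

def fit_linear_autoencoder(target, k):
    """Fit a rank-k linear autoencoder (equivalent to PCA) on
    target-domain data; returns the mean and the top-k directions."""
    mean = target.mean(axis=0)
    _, _, vt = np.linalg.svd(target - mean, full_matrices=False)
    return mean, vt[:k]

def select_source_instances(source, mean, components, threshold):
    """Keep source instances the target autoencoder reconstructs well,
    i.e. instances that resemble target-domain data."""
    centered = source - mean
    recon = (centered @ components.T) @ components
    errors = np.linalg.norm(centered - recon, axis=1)
    return source[errors < threshold]
```

The intuition: instances the target-trained autoencoder reconstructs with low error lie near the target-domain manifold and are therefore more useful for training the target model.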
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.