Unsupervised Lifelong Person Re-identification via Contrastive Rehearsal
- URL: http://arxiv.org/abs/2203.06468v1
- Date: Sat, 12 Mar 2022 15:44:08 GMT
- Title: Unsupervised Lifelong Person Re-identification via Contrastive Rehearsal
- Authors: Hao Chen, Benoit Lagadec, Francois Bremond
- Abstract summary: Unsupervised lifelong person ReID focuses on continuously conducting unsupervised domain adaptation on new domains.
We set an image-to-image similarity constraint between old and new models to regularize the model updates in a way that suits old knowledge.
Our proposed lifelong method achieves strong generalizability and significantly outperforms previous lifelong methods on both seen and unseen domains.
- Score: 7.983523975392535
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Existing unsupervised person re-identification (ReID) methods focus on adapting a model trained on a source domain to a fixed target domain. However, an adapted ReID model usually works well only on a certain target domain; it can hardly retain the source-domain knowledge or generalize to upcoming unseen data. In this paper, we propose unsupervised lifelong person ReID, which focuses on continuously conducting unsupervised domain adaptation on new domains without forgetting the knowledge learnt from old domains. To tackle unsupervised lifelong ReID, we conduct contrastive rehearsal on a small number of stored old samples while sequentially adapting to new domains. We further set an image-to-image similarity constraint between the old and new models to regularize the model updates in a way that suits old knowledge. We sequentially train our model on several large-scale datasets in an unsupervised manner and test it on all seen domains as well as several unseen domains to validate the generalizability of our method. Our proposed unsupervised lifelong method achieves strong generalizability and significantly outperforms previous lifelong methods on both seen and unseen domains. Code will be made available at https://github.com/chenhao2345/UCR.
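The abstract describes two concrete training signals: a contrastive loss rehearsed over a small buffer of stored old-domain samples, and an image-to-image similarity constraint that ties the updated model to the previous one. Below is a minimal PyTorch sketch of both ideas; the interfaces and hyper-parameters (the buffer batch, `tau`, `lambda_sim`, the encoder signatures) are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (PyTorch), assuming encoders that map images to embedding
# vectors. All names and hyper-parameters here are illustrative only.
import torch
import torch.nn.functional as F

def contrastive_rehearsal_loss(features, labels, tau=0.07):
    """Supervised contrastive loss over a batch that mixes new-domain samples
    (with pseudo-labels) and a small number of stored old-domain samples."""
    f = F.normalize(features, dim=1)
    sim = f @ f.t() / tau                                    # pairwise cosine logits
    n = sim.size(0)
    diag = torch.eye(n, dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(diag, float("-inf"))               # drop self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~diag
    per_anchor = log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return -per_anchor.mean()

def image_to_image_similarity_loss(old_model, new_model, images):
    """Keep the new model's pairwise image-to-image similarities close to the
    frozen old model's, regularizing updates in a way that suits old knowledge."""
    with torch.no_grad():
        f_old = F.normalize(old_model(images), dim=1)
    f_new = F.normalize(new_model(images), dim=1)
    return F.mse_loss(f_new @ f_new.t(), f_old @ f_old.t())

def training_step(old_model, new_model, new_batch, buffer_batch, lambda_sim=1.0):
    """One adaptation step on a new domain, rehearsing buffered old samples."""
    images = torch.cat([new_batch["images"], buffer_batch["images"]])
    labels = torch.cat([new_batch["pseudo_labels"], buffer_batch["labels"]])
    loss = contrastive_rehearsal_loss(new_model(images), labels)
    return loss + lambda_sim * image_to_image_similarity_loss(old_model, new_model, images)
```

Matching pairwise similarity matrices rather than raw features is one way to read the "image-to-image similarity constraint": it lets the new model drift in absolute feature space while preserving the relational structure the old model learned.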
Related papers
- Anti-Forgetting Adaptation for Unsupervised Person Re-identification [87.0061997256388]
We propose a Dual-level Joint Adaptation and Anti-forgetting framework.
It incrementally adapts a model to new domains without forgetting the source domain or any previously adapted target domain.
Our proposed method significantly improves the anti-forgetting, generalization, and backward-compatibility of an unsupervised person ReID model.
arXiv Detail & Related papers (2024-11-22T03:05:06Z)
- Forget Less, Count Better: A Domain-Incremental Self-Distillation Learning Benchmark for Lifelong Crowd Counting [51.44987756859706]
Off-the-shelf methods have several drawbacks when handling multiple domains.
Lifelong Crowd Counting aims at alleviating catastrophic forgetting and improving generalization ability.
arXiv Detail & Related papers (2022-05-06T15:37:56Z)
- A Novel Mix-normalization Method for Generalizable Multi-source Person Re-identification [49.548815417844786]
Person re-identification (Re-ID) has achieved great success in the supervised scenario.
It is difficult to directly transfer a supervised model to arbitrary unseen domains because the model overfits the seen source domains.
We propose MixNorm, which consists of domain-aware mix-normalization (DMN) and domain-aware center regularization (DCR).
arXiv Detail & Related papers (2022-01-24T18:09:38Z)
- Lifelong Person Re-Identification via Adaptive Knowledge Accumulation [18.4671957106297]
Lifelong person re-identification (LReID) enables a model to learn continuously across multiple domains.
We design an Adaptive Knowledge Accumulation framework that is endowed with two crucial abilities: knowledge representation and knowledge operation.
Our method alleviates catastrophic forgetting on seen domains and demonstrates the ability to generalize to unseen domains.
arXiv Detail & Related papers (2021-03-23T11:30:38Z)
- Continual Adaptation of Visual Representations via Domain Randomization and Meta-learning [21.50683576864347]
Most standard learning approaches lead to fragile models which are prone to drift when sequentially trained on samples of a different nature.
We show that one way to learn models that are inherently more robust against forgetting is domain randomization.
We devise a meta-learning strategy where a regularizer explicitly penalizes any loss associated with transferring the model from the current domain to different "auxiliary" meta-domains (see the sketch after this list).
arXiv Detail & Related papers (2020-12-08T09:54:51Z)
- Learning to Generalize Unseen Domains via Memory-based Multi-Source Meta-Learning for Person Re-Identification [59.326456778057384]
We propose the Memory-based Multi-Source Meta-Learning framework to train a generalizable model for unseen domains.
We also present a meta batch normalization layer (MetaBN) to diversify meta-test features.
Experiments demonstrate that our M$^3$L can effectively enhance the generalization ability of the model for unseen domains.
arXiv Detail & Related papers (2020-12-01T11:38:16Z)
- Multi-Domain Adversarial Feature Generalization for Person Re-Identification [52.835955258959785]
We propose a multi-dataset feature generalization network (MMFA-AAE).
It is capable of learning a universal domain-invariant feature representation from multiple labeled datasets and generalizing it to 'unseen' camera systems.
It also surpasses many state-of-the-art supervised methods and unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2020-11-25T08:03:15Z)
- A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)
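For the meta-learning regularizer mentioned in the Continual Adaptation entry above, here is a rough first-order sketch under stated assumptions: `loss_fn`, `inner_lr`, `beta`, and the batch interfaces are all hypothetical, and the actual method may structure its inner and outer loops differently.

```python
# Rough first-order sketch (PyTorch): penalize the forgetting a model suffers
# after being transferred to auxiliary meta-domains. Names are hypothetical.
import copy
import torch

def meta_regularized_backward(model, loss_fn, cur_batch, aux_batches,
                              inner_lr=0.01, beta=0.5):
    """Accumulate .grad on `model`: current-domain loss plus a first-order
    penalty on loss induced by transfer to auxiliary meta-domains."""
    loss_fn(model, cur_batch).backward()                     # base gradient
    scale = beta / len(aux_batches)
    for aux_batch in aux_batches:
        # Simulate transferring a detached copy to one auxiliary meta-domain.
        adapted = copy.deepcopy(model)
        grads = torch.autograd.grad(loss_fn(adapted, aux_batch),
                                    tuple(adapted.parameters()))
        with torch.no_grad():
            for p, g in zip(adapted.parameters(), grads):
                p -= inner_lr * g
        # Penalize the loss the transferred copy suffers back on the current
        # domain; first-order trick: evaluate that gradient at the adapted
        # weights and fold it into the original model's gradients.
        meta_grads = torch.autograd.grad(loss_fn(adapted, cur_batch),
                                         tuple(adapted.parameters()))
        with torch.no_grad():
            for p, g in zip(model.parameters(), meta_grads):
                p.grad = (p.grad if p.grad is not None
                          else torch.zeros_like(p)) + scale * g
```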