Forget Less, Count Better: A Domain-Incremental Self-Distillation
Learning Benchmark for Lifelong Crowd Counting
- URL: http://arxiv.org/abs/2205.03307v1
- Date: Fri, 6 May 2022 15:37:56 GMT
- Title: Forget Less, Count Better: A Domain-Incremental Self-Distillation
Learning Benchmark for Lifelong Crowd Counting
- Authors: Jiaqi Gao, Jingqi Li, Hongming Shan, Yanyun Qu, James Z. Wang, Junping
Zhang
- Abstract summary: Off-the-shelf methods have several drawbacks when handling multiple domains.
Lifelong Crowd Counting aims to alleviate catastrophic forgetting and improve generalization ability.
- Score: 51.44987756859706
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Crowd counting has important applications in public safety and pandemic
control. A robust and practical crowd counting system has to be capable of
continuously learning from newly arriving domain data in real-world scenarios
instead of fitting a single domain only. Off-the-shelf methods have several
drawbacks when handling multiple domains. 1) The model achieves limited
performance on old domains (it may even drop dramatically) after being trained
on images from new domains, owing to discrepancies among the intrinsic data
distributions of the various domains; this is known as catastrophic forgetting.
2) A model well trained on a specific domain performs imperfectly on other,
unseen domains because of domain shift. 3) Either mixing all the data for
training or simply training dozens of separate models for different domains as
new ones become available leads to linearly increasing storage overhead. To
overcome these issues, we investigate a new crowd counting task under the
incremental-domain training setting, namely Lifelong Crowd Counting. It aims to
alleviate catastrophic forgetting and improve generalization ability using a
single model updated on the incrementally arriving domains. More specifically,
we propose a self-distillation learning framework as a benchmark (Forget Less,
Count Better, FLCB) for lifelong crowd counting, which helps the model
sustainably leverage previously learned meaningful knowledge for better crowd
counting and mitigates forgetting when new data arrive. Meanwhile, a new
quantitative metric, normalized backward transfer (nBwT), is developed to
evaluate the degree of forgetting of the model in the lifelong learning
process. Extensive experimental results demonstrate the superiority of our
proposed benchmark in achieving a low catastrophic forgetting degree and strong
generalization ability.
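To make the two key ingredients of the abstract concrete, below is a minimal, hypothetical sketch of (a) a generic self-distillation update in the spirit of FLCB, where a frozen copy of the previous model supervises the current one on newly arriving domain data, and (b) a backward-transfer-style forgetting measure computed from an MAE matrix. The loss weighting `lambda_kd`, the exact distillation target, and the normalization used in the paper's nBwT are assumptions and may differ from the authors' formulation; all names are illustrative.

```python
# Illustrative sketch only: not the authors' exact FLCB loss or nBwT definition.
import torch
import torch.nn.functional as F

def self_distillation_step(student, teacher, images, gt_density, lambda_kd=1.0):
    """One training step on images from a newly arriving domain.

    student: current crowd-counting model being updated.
    teacher: frozen snapshot of the model taken before this domain arrived.
    """
    pred = student(images)                        # predicted density maps
    count_loss = F.mse_loss(pred, gt_density)     # standard density-map regression loss
    with torch.no_grad():
        old_pred = teacher(images)                # "old knowledge" on the new images
    distill_loss = F.mse_loss(pred, old_pred)     # keep predictions close to the old model
    return count_loss + lambda_kd * distill_loss

def backward_transfer_metrics(mae):
    """Forgetting measures from a matrix mae[i][j]: MAE on domain j after
    training on domains 0..i. Requires at least two domains; a larger
    positive value means more forgetting, since MAE is an error.
    """
    T = len(mae)
    bwt = sum(mae[T - 1][j] - mae[j][j] for j in range(T - 1)) / (T - 1)
    # One plausible normalization (assumed, not taken from the paper):
    # divide by the in-domain error so that domains with intrinsically
    # larger MAE do not dominate the average.
    nbwt = sum((mae[T - 1][j] - mae[j][j]) / mae[j][j] for j in range(T - 1)) / (T - 1)
    return bwt, nbwt
```

Because the teacher in this sketch is just a frozen snapshot taken before each new domain arrives, no old images need to be stored, which is consistent with the constant-storage motivation stated in the abstract.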
Related papers
- Multivariate Prototype Representation for Domain-Generalized Incremental
Learning [35.83706574551515]
We design a DGCIL approach that remembers old classes, adapts to new classes, and can reliably classify objects from unseen domains.
Our loss formulation maintains classification boundaries and suppresses the domain-specific information of each class.
arXiv Detail & Related papers (2023-09-24T06:42:04Z)
- Domain-incremental Cardiac Image Segmentation with Style-oriented Replay and Domain-sensitive Feature Whitening [67.6394526631557]
A deployed segmentation model should incrementally learn from each incoming dataset and progressively update itself with improved functionality over time.
In medical scenarios, this is particularly challenging as accessing or storing past data is commonly not allowed due to data privacy.
We propose a novel domain-incremental learning framework to recover past domain inputs first and then regularly replay them during model optimization.
arXiv Detail & Related papers (2022-11-09T13:07:36Z)
- Unsupervised Lifelong Person Re-identification via Contrastive Rehearsal [7.983523975392535]
Unsupervised lifelong person ReID focuses on continuously conducting unsupervised domain adaptation on new domains.
We set an image-to-image similarity constraint between old and new models to regularize the model updates in a way that suits old knowledge.
Our proposed lifelong method achieves strong generalizability and significantly outperforms previous lifelong methods on both seen and unseen domains.
arXiv Detail & Related papers (2022-03-12T15:44:08Z)
- On Generalizing Beyond Domains in Cross-Domain Continual Learning [91.56748415975683]
Deep neural networks often suffer from catastrophic forgetting of previously learned knowledge after learning a new task.
Our proposed approach learns new tasks under domain shift with accuracy boosts up to 10% on challenging datasets such as DomainNet and OfficeHome.
arXiv Detail & Related papers (2022-03-08T09:57:48Z)
- A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)
- $n$-Reference Transfer Learning for Saliency Prediction [73.17061116358036]
We propose a few-shot transfer learning paradigm for saliency prediction.
The proposed framework is gradient-based and model-agnostic.
The results show that the proposed framework achieves a significant performance improvement.
arXiv Detail & Related papers (2020-07-09T23:20:44Z)
- Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much fewer annotated data in the target domain compared to the source domain.
Our semantic parser benefits from a two-stage coarse-to-fine framework, and thus can provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.