Continual Representation Learning for Biometric Identification
- URL: http://arxiv.org/abs/2006.04455v2
- Date: Sun, 28 Jun 2020 19:54:51 GMT
- Title: Continual Representation Learning for Biometric Identification
- Authors: Bo Zhao, Shixiang Tang, Dapeng Chen, Hakan Bilen, Rui Zhao
- Abstract summary: We propose a new continual learning (CL) setting, namely "continual representation learning", which focuses on learning better representations in a continuous way.
We demonstrate that existing CL methods can improve the representation in the new setting, and that our method achieves better results than competing approaches.
- Score: 47.15075374158398
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the explosion of digital data in recent years, continuously learning new
tasks from a stream of data without forgetting previously acquired knowledge
has become increasingly important. In this paper, we propose a new continual
learning (CL) setting, namely "continual representation learning", which
focuses on learning better representations in a continuous way. We also provide
two large-scale multi-step benchmarks for biometric identification, where the
visual appearances of different classes are highly similar. Rather than
requiring the model to recognize an ever-growing set of learned classes, we
aim to learn feature representations that generalize better not only to
previously unseen images but also to unseen classes/identities. For the new
setting, we propose a novel approach that performs knowledge distillation
over a large number of identities, applying neighbourhood selection and
consistency relaxation strategies to improve the scalability and flexibility
of the continual learning model. We demonstrate that existing CL methods can
improve the representation in the new setting, and that our method achieves
better results than competing approaches.
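The abstract describes the approach only at a high level. As a rough illustration, here is a minimal PyTorch sketch of what distillation with neighbourhood selection and consistency relaxation could look like; the function name, the top-k neighbourhood rule, and the hinged-KL relaxation are assumptions made for illustration, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def relaxed_neighbourhood_distillation(
    feats_new, feats_old, prototypes_old, k=10, eps=0.05, temperature=0.1
):
    """Hypothetical distillation loss over identity prototypes.

    feats_new:      (B, D) embeddings from the model being trained
    feats_old:      (B, D) embeddings from the frozen previous-step model
    prototypes_old: (N, D) identity prototypes from the previous step
    """
    # Cosine similarity of each sample to every old identity prototype.
    sim_new = F.normalize(feats_new, dim=1) @ F.normalize(prototypes_old, dim=1).t()
    sim_old = F.normalize(feats_old, dim=1) @ F.normalize(prototypes_old, dim=1).t()

    # Neighbourhood selection: distil only over each sample's k nearest
    # identities under the old model, not over all N identities.
    _, idx = sim_old.topk(k, dim=1)
    p_old = F.softmax(sim_old.gather(1, idx) / temperature, dim=1)
    log_p_new = F.log_softmax(sim_new.gather(1, idx) / temperature, dim=1)

    # Consistency relaxation: divergences below eps are not penalized, so
    # the new model keeps some freedom to reshape the embedding space.
    kl = F.kl_div(log_p_new, p_old, reduction="none").sum(dim=1)
    return F.relu(kl - eps).mean()
```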
Related papers
- Incremental Object Detection with CLIP [36.478530086163744]
We propose to use a vision-language model, such as CLIP, to generate text feature embeddings for different class sets.
We then employ super-classes to replace the unavailable novel classes in the early learning stage to simulate the incremental scenario.
We incorporate the finely recognized detection boxes as pseudo-annotations into the training process, thereby further improving the detection performance.
arXiv Detail & Related papers (2023-10-13T01:59:39Z)
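To make the text-embedding step above concrete, here is a small sketch using OpenAI's open-source clip package; the class sets, prompt template, and stage split are hypothetical placeholders rather than the paper's actual configuration.

```python
import clip
import torch

# Load a public pretrained CLIP checkpoint.
device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

# Hypothetical class sets for two incremental stages.
class_sets = {
    "stage_1": ["person", "car", "bicycle"],
    "stage_2": ["dog", "cat", "bird"],
}

text_embeddings = {}
with torch.no_grad():
    for stage, names in class_sets.items():
        tokens = clip.tokenize([f"a photo of a {name}" for name in names]).to(device)
        emb = model.encode_text(tokens)
        # Normalized embeddings can serve as fixed classifier weights.
        text_embeddings[stage] = emb / emb.norm(dim=-1, keepdim=True)
```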
- Multi-View Class Incremental Learning [57.14644913531313]
Multi-view learning (MVL) has gained great success in integrating information from multiple perspectives of a dataset to improve downstream task performance.
This paper investigates a novel paradigm called multi-view class incremental learning (MVCIL), where a single model incrementally classifies new classes from a continual stream of views.
arXiv Detail & Related papers (2023-06-16T08:13:41Z)
- Mitigating Forgetting in Online Continual Learning via Contrasting Semantically Distinct Augmentations [22.289830907729705]
Online continual learning (OCL) aims to enable model learning from a non-stationary data stream, continuously acquiring new knowledge while retaining what has already been learnt.
The main challenge is the "catastrophic forgetting" issue -- the inability to retain previously learnt knowledge while learning new knowledge.
arXiv Detail & Related papers (2022-11-10T05:29:43Z)
- Effects of Auxiliary Knowledge on Continual Learning [16.84113206569365]
In Continual Learning (CL), a neural network is trained on a stream of data whose distribution changes over time.
Most existing CL approaches focus on preserving acquired knowledge, i.e., on the model's past.
We argue that, since the model has to continually learn new tasks, it is equally important to focus on present knowledge that could improve the learning of subsequent tasks.
arXiv Detail & Related papers (2022-06-03T14:31:59Z)
- LifeLonger: A Benchmark for Continual Disease Classification [59.13735398630546]
We introduce LifeLonger, a benchmark for continual disease classification on the MedMNIST collection.
Task- and class-incremental learning of diseases address the issue of classifying new samples without re-training the models from scratch.
Cross-domain incremental learning addresses the issue of dealing with datasets originating from different institutions while retaining the previously obtained knowledge.
arXiv Detail & Related papers (2022-04-12T12:25:05Z)
- Long-tail Recognition via Compositional Knowledge Transfer [60.03764547406601]
We introduce a novel strategy for long-tail recognition that addresses the few-shot problem of the tail classes.
Our objective is to transfer knowledge acquired from information-rich common classes to semantically similar yet data-scarce rare classes.
Experiments show that our approach achieves significant performance boosts on rare classes while maintaining robust common-class performance.
arXiv Detail & Related papers (2021-12-13T15:48:59Z)
- Semi-Supervising Learning, Transfer Learning, and Knowledge Distillation with SimCLR [2.578242050187029]
Recent breakthroughs in the field of semi-supervised learning have achieved results that match state-of-the-art traditional supervised learning methods.
SimCLR is the current state-of-the-art semi-supervised learning framework for computer vision.
arXiv Detail & Related papers (2021-08-02T01:37:39Z)
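For reference, SimCLR's core objective is the NT-Xent contrastive loss; a minimal PyTorch sketch is below (the projection head and augmentation pipeline are omitted, and the temperature value is illustrative).

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss over two augmented views of the same batch.

    z1, z2: (B, D) projections of the two views; row i of z1 and row i
    of z2 come from the same image (a positive pair).
    """
    B = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, D)
    sim = z @ z.t() / temperature                       # (2B, 2B) logits
    sim.fill_diagonal_(float("-inf"))                   # a view is never its own negative
    # For row i in [0, B) the positive is row i + B, and vice versa.
    targets = torch.cat([torch.arange(B) + B, torch.arange(B)]).to(z1.device)
    return F.cross_entropy(sim, targets)
```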
- Class-Balanced Distillation for Long-Tailed Visual Recognition [100.10293372607222]
Real-world imagery is often characterized by a significant imbalance in the number of images per class, leading to long-tailed distributions.
In this work, we introduce a new framework based on the key observation that a feature representation learned with instance sampling is far from optimal in a long-tailed setting.
Our main contribution is a new training method that leverages knowledge distillation to enhance feature representations.
arXiv Detail & Related papers (2021-04-12T08:21:03Z)
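The summary above states only the key observation. As a hypothetical sketch of the general recipe (not the paper's exact training method), one could combine class-balanced sampling with feature-level distillation from an instance-sampled teacher:

```python
import torch
import torch.nn.functional as F
from torch.utils.data import WeightedRandomSampler

def class_balanced_sampler(labels):
    """Sampler that draws every class with equal probability, so rare
    classes appear as often as common ones."""
    labels = torch.as_tensor(labels)
    class_counts = torch.bincount(labels).float()
    weights = 1.0 / class_counts[labels]
    return WeightedRandomSampler(weights, num_samples=len(labels))

def distillation_step(student, teacher, x, y, alpha=0.5):
    """One training step: cross-entropy on class-balanced batches plus a
    feature-distillation term. The (features, logits) interface of the
    student is a hypothetical convention for this sketch."""
    with torch.no_grad():
        t_feat = teacher(x)          # frozen teacher trained with instance sampling
    s_feat, logits = student(x)
    ce = F.cross_entropy(logits, y)
    kd = 1.0 - F.cosine_similarity(s_feat, t_feat, dim=1).mean()
    return ce + alpha * kd
```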
- DER: Dynamically Expandable Representation for Class Incremental Learning [30.573653645134524]
We address the problem of class incremental learning, which is a core step towards achieving adaptive vision intelligence.
We propose a novel two-stage learning approach that utilizes a dynamically expandable representation for more effective incremental concept modeling.
We conduct extensive experiments on three class incremental learning benchmarks, and our method consistently outperforms other methods by a large margin.
arXiv Detail & Related papers (2021-03-31T03:16:44Z)
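A toy sketch of the core idea behind a dynamically expandable representation: at each incremental step a new feature extractor is added, earlier extractors are frozen, and the classifier operates on the concatenated features. Class and method names are illustrative, and DER's auxiliary loss and channel-level pruning are omitted.

```python
import torch
import torch.nn as nn

class ExpandableNet(nn.Module):
    """Toy dynamically expandable representation."""

    def __init__(self, make_backbone, feat_dim, num_classes):
        super().__init__()
        self.make_backbone = make_backbone  # callable returning a (B, feat_dim) extractor
        self.feat_dim = feat_dim
        self.backbones = nn.ModuleList([make_backbone()])
        self.classifier = nn.Linear(feat_dim, num_classes)

    def expand(self, num_classes):
        # Freeze previously learned extractors and grow the representation.
        for p in self.backbones.parameters():
            p.requires_grad_(False)
        self.backbones.append(self.make_backbone())
        # New classifier over the enlarged, concatenated feature.
        self.classifier = nn.Linear(self.feat_dim * len(self.backbones), num_classes)

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.backbones], dim=1)
        return self.classifier(feats)
```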
- Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification [106.08067870620218]
We propose a self-paced knowledge distillation framework, termed Learning From Multiple Experts (LFME).
We refer to the models trained on different subsets of the data as 'Experts', and the proposed LFME framework aggregates the knowledge from the multiple 'Experts' to learn a unified student model.
We conduct extensive experiments and demonstrate that our method achieves superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-01-06T12:57:36Z)
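As a rough illustration of distilling a unified student from several experts (the self-paced scheduling that gives LFME its name is not modelled here, and the weighting scheme is an assumption):

```python
import torch
import torch.nn.functional as F

def multi_expert_distillation(student_logits, expert_logits, weights, T=2.0):
    """Distil a student from a weighted ensemble of expert models.

    student_logits: (B, C)
    expert_logits:  list of (B, C) tensors, one per expert
    weights:        per-expert importance weights summing to 1
    """
    with torch.no_grad():
        target = sum(w * F.softmax(l / T, dim=1)
                     for w, l in zip(weights, expert_logits))
    log_p = F.log_softmax(student_logits / T, dim=1)
    # KL between the expert ensemble and the student; T^2 is the usual
    # scaling used in knowledge distillation.
    return F.kl_div(log_p, target, reduction="batchmean") * T * T
```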
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.