Exemplar-free Class Incremental Learning via Discriminative and
Comparable One-class Classifiers
- URL: http://arxiv.org/abs/2201.01488v1
- Date: Wed, 5 Jan 2022 07:16:34 GMT
- Title: Exemplar-free Class Incremental Learning via Discriminative and
Comparable One-class Classifiers
- Authors: Wenju Sun, Qingyong Li, Jing Zhang, Danyu Wang, Wen Wang, Yangli-ao
Geng
- Abstract summary: We propose a new framework, named Discriminative and Comparable One-class classifiers for Incremental Learning (DisCOIL).
DisCOIL follows the basic principle of POC, but it adopts variational auto-encoders (VAE) instead of other well-established one-class classifiers (e.g. deep SVDD).
With this advantage, DisCOIL trains a new-class VAE in contrast with the old-class VAEs, which forces the new-class VAE to reconstruct better for new-class samples but worse for the old-class pseudo samples, thus enhancing the comparability.
- Score: 12.121885324463388
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Exemplar-free class incremental learning requires classification models
to learn new class knowledge incrementally without retaining any old samples.
Recently, the framework based on parallel one-class classifiers (POC), which
trains a one-class classifier (OCC) independently for each category, has
attracted extensive attention, since it can naturally avoid catastrophic
forgetting. POC, however, suffers from weak discriminability and comparability
due to its independent training strategy for different OCCs. To meet this
challenge, we propose a new framework, named Discriminative and Comparable
One-class classifiers for Incremental Learning (DisCOIL). DisCOIL follows the
basic principle of POC, but it adopts variational auto-encoders (VAE) instead
of other well-established one-class classifiers (e.g. deep SVDD), because a
trained VAE can not only identify the probability of an input sample belonging
to a class but also generate pseudo samples of the class to assist in learning
new tasks. With this advantage, DisCOIL trains a new-class VAE in contrast with
the old-class VAEs, which forces the new-class VAE to reconstruct better for
new-class samples but worse for the old-class pseudo samples, thus enhancing
the comparability. Furthermore, DisCOIL introduces a hinge reconstruction loss
to ensure the discriminability. We evaluate our method extensively on MNIST,
CIFAR10, and Tiny-ImageNet. The experimental results show that DisCOIL achieves
state-of-the-art performance.
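The abstract states the objective only in words; below is a minimal PyTorch sketch of one reading of it: a per-class VAE scored by reconstruction error, trained so that old-class pseudo samples incur at least a margin of error. `TinyVAE`, `discoil_style_loss`, and the `margin` hyper-parameter are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Minimal VAE; stands in for the per-class one-class classifier."""
    def __init__(self, dim=784, latent=32):
        super().__init__()
        self.enc = nn.Linear(dim, 2 * latent)  # outputs mean and log-variance
        self.dec = nn.Linear(latent, dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

    def recon_error(self, x):
        x_hat, _, _ = self.forward(x)
        return F.mse_loss(x_hat, x, reduction="none").mean(dim=-1)  # per-sample error

def discoil_style_loss(new_vae, x_new, x_pseudo_old, margin=1.0):
    """Hinge-style objective (our reading of the abstract, not the paper's
    exact loss): reconstruct new-class samples well, but keep the
    reconstruction error on old-class pseudo samples above a margin, so
    per-class scores stay comparable."""
    err_new = new_vae.recon_error(x_new).mean()
    err_old = new_vae.recon_error(x_pseudo_old)
    hinge = F.relu(margin - err_old).mean()  # penalize reconstructing old data too well
    return err_new + hinge

# Toy usage; in the framework, pseudo samples would be decoded from old-class VAEs.
vae = TinyVAE()
x_new = torch.rand(8, 784)
x_pseudo_old = torch.rand(8, 784)
loss = discoil_style_loss(vae, x_new, x_pseudo_old)
loss.backward()
```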
Related papers
- Adaptive Margin Global Classifier for Exemplar-Free Class-Incremental Learning [3.4069627091757178]
Existing methods mainly focus on handling biased learning.
We introduce a Distribution-Based Global Classifier (DBGC) to avoid bias factors in existing methods, such as data imbalance and sampling.
More importantly, the compromised distributions of old classes are simulated via a simple operation, Variance Enlarging (VE).
The resulting loss is proven equivalent to an Adaptive Margin Softmax Cross Entropy (AMarX) loss.
arXiv Detail & Related papers (2024-09-20T07:07:23Z)
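The AMarX loss in the entry above is only named, not defined, in the summary. The following sketch shows a generic adaptive-margin softmax cross entropy; the per-class `margins` are a placeholder for whatever margins the paper derives from its simulated distributions.

```python
import torch
import torch.nn.functional as F

def margin_softmax_ce(logits, targets, margins):
    """Generic margin softmax cross entropy: subtract a per-class margin
    from the true-class logit before the softmax, tightening the decision
    boundary for that class. The margins here are illustrative, not the
    paper's AMarX derivation."""
    adjusted = logits.clone()
    rows = torch.arange(logits.size(0))
    adjusted[rows, targets] -= margins[targets]
    return F.cross_entropy(adjusted, targets)

logits = torch.randn(4, 10)       # batch of 4, 10 classes
targets = torch.tensor([0, 3, 3, 9])
margins = torch.full((10,), 0.5)  # uniform toy margins
print(margin_softmax_ce(logits, targets, margins))
```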
- CBR - Boosting Adaptive Classification By Retrieval of Encrypted Network Traffic with Out-of-distribution [9.693391036125908]
A common approach is to apply machine learning or deep learning based solutions to a fixed number of classes.
One solution for handling unknown classes is to retrain the model; however, retraining a model every time it becomes obsolete is both resource- and time-consuming.
In this paper, we introduce Adaptive Classification By Retrieval (CBR), a novel approach for encrypted network traffic classification.
arXiv Detail & Related papers (2024-03-17T13:14:09Z)
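Classification by retrieval, as in the CBR entry above, can be illustrated with a tiny nearest-neighbor index: new classes are added by inserting embeddings, with no retraining. Everything here (class names, embedding size, plain L2 retrieval) is an assumption for illustration, not the paper's pipeline.

```python
import numpy as np

class RetrievalClassifier:
    """Sketch of classification by retrieval: label a query embedding by
    its nearest stored exemplar. New classes are supported by inserting
    vectors rather than retraining a model."""
    def __init__(self):
        self.vectors, self.labels = [], []

    def add_class(self, label, embeddings):
        for e in embeddings:
            self.vectors.append(np.asarray(e, dtype=float))
            self.labels.append(label)

    def predict(self, query):
        index = np.stack(self.vectors)
        dists = np.linalg.norm(index - np.asarray(query, dtype=float), axis=1)
        return self.labels[int(dists.argmin())]

clf = RetrievalClassifier()
clf.add_class("https", np.random.rand(5, 16))  # hypothetical traffic classes
clf.add_class("dns", np.random.rand(5, 16))
print(clf.predict(np.random.rand(16)))         # no retraining needed for new classes
```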
- Few-Shot Class-Incremental Learning via Training-Free Prototype Calibration [67.69532794049445]
We find that existing methods tend to misclassify samples of new classes into base classes, which leads to poor performance on the new classes.
We propose a simple yet effective Training-frEE calibratioN (TEEN) strategy to enhance the discriminability of new classes.
arXiv Detail & Related papers (2023-12-08T18:24:08Z)
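The TEEN entry above describes a training-free calibration of new-class prototypes. Below is a hedged sketch of one such calibration, fusing each new prototype with a similarity-weighted mixture of base-class prototypes; `alpha` and the temperature `tau` are invented hyper-parameters, not the paper's values.

```python
import torch
import torch.nn.functional as F

def calibrate_prototypes(new_protos, base_protos, alpha=0.5, tau=16.0):
    """Training-free calibration in the spirit of TEEN: shift each
    new-class prototype toward base-class prototypes, weighted by cosine
    similarity, to offset the bias toward base classes."""
    new_n = F.normalize(new_protos, dim=-1)
    base_n = F.normalize(base_protos, dim=-1)
    weights = F.softmax(tau * new_n @ base_n.T, dim=-1)  # similarity weights
    drift = weights @ base_protos                        # weighted base mixture
    return alpha * new_protos + (1 - alpha) * drift

base = torch.randn(10, 64)  # prototypes of 10 base classes
new = torch.randn(3, 64)    # prototypes of 3 few-shot new classes
print(calibrate_prototypes(new, base).shape)  # torch.Size([3, 64])
```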
- Generalization Bounds for Few-Shot Transfer Learning with Pretrained Classifiers [26.844410679685424]
We study the ability of foundation models to learn representations for classification that are transferable to new, unseen classes.
We show that the few-shot error of the learned feature map on new classes is small in case of class-feature-variability collapse.
arXiv Detail & Related papers (2022-12-23T18:46:05Z)
- Class-incremental Novel Class Discovery [76.35226130521758]
We study the new task of class-incremental Novel Class Discovery (class-iNCD).
We propose a novel approach for class-iNCD which prevents forgetting of past information about the base classes.
Our experiments, conducted on three common benchmarks, demonstrate that our method significantly outperforms state-of-the-art approaches.
arXiv Detail & Related papers (2022-07-18T13:49:27Z)
- Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on CIFAR-10LT, CIFAR-100LT and Webvision datasets, observing that Prototypical obtains substantial improvements compared with the state of the art.
arXiv Detail & Related papers (2021-10-22T01:55:01Z)
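A prototype classifier of the kind the entry above describes can be written in a few lines: represent each class by its mean feature and predict by nearest prototype. This shows the generic technique, not the paper's full training recipe.

```python
import torch

def prototype_predict(embeddings, protos):
    """Parameter-free prototypical prediction: assign each embedding to
    the nearest class prototype (here, a class-mean feature). Because each
    class is represented by one prototype regardless of its sample count,
    scores stay comparable under class imbalance."""
    d = torch.cdist(embeddings, protos)  # (batch, num_classes) distances
    return d.argmin(dim=1)

# Toy setup: prototypes are per-class means of (imbalanced) training features.
feats_by_class = [torch.randn(100, 32) + 2, torch.randn(5, 32) - 2]  # 100 vs 5 samples
protos = torch.stack([f.mean(dim=0) for f in feats_by_class])
queries = torch.randn(4, 32)
print(prototype_predict(queries, protos))
```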
- Neighborhood Contrastive Learning for Novel Class Discovery [79.14767688903028]
We build a new framework, named Neighborhood Contrastive Learning, to learn discriminative representations that are important to clustering performance.
We experimentally demonstrate that these two ingredients significantly contribute to clustering performance and lead our model to outperform state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2021-06-20T17:34:55Z)
- Counterfactual Zero-Shot and Open-Set Visual Recognition [95.43275761833804]
We present a novel counterfactual framework for both Zero-Shot Learning (ZSL) and Open-Set Recognition (OSR).
Our idea stems from the observation that the generated samples for unseen classes are often out of the true distribution.
We demonstrate that our framework effectively mitigates the seen/unseen imbalance and hence significantly improves the overall performance.
arXiv Detail & Related papers (2021-03-01T10:20:04Z)
- Minimum Variance Embedded Auto-associative Kernel Extreme Learning Machine for One-class Classification [1.4146420810689422]
VAAKELM is a novel extension of an auto-associative kernel extreme learning machine.
It embeds minimum variance information within its architecture and reduces the intra-class variance.
It follows a reconstruction-based approach to one-class classification and minimizes the reconstruction error.
arXiv Detail & Related papers (2020-11-24T17:00:30Z)
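The VAAKELM entry above follows a reconstruction-based approach to one-class classification. The sketch below illustrates that general recipe with a PCA auto-associator standing in for the kernel extreme learning machine; the quantile threshold rule is an assumption for illustration.

```python
import numpy as np

class ReconstructionOCC:
    """Reconstruction-based one-class classifier: fit a linear
    auto-associative map (PCA here) on target-class data, then flag
    samples whose reconstruction error exceeds a threshold estimated
    from the training errors."""
    def __init__(self, n_components=5, quantile=0.95):
        self.k, self.q = n_components, quantile

    def fit(self, X):
        self.mean = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.basis = vt[: self.k]                        # principal directions
        self.thresh = np.quantile(self._err(X), self.q)  # error threshold
        return self

    def _err(self, X):
        Z = (X - self.mean) @ self.basis.T
        return np.linalg.norm((X - self.mean) - Z @ self.basis, axis=1)

    def predict(self, X):
        return self._err(X) <= self.thresh  # True = accepted as the target class

X_train = np.random.randn(200, 10)
occ = ReconstructionOCC().fit(X_train)
print(occ.predict(np.random.randn(3, 10) * 5))  # far-away samples -> mostly False
```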
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
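The two-stage recipe above (learn representations first, then build a one-class classifier on them) can be illustrated with a kNN-distance detector over stand-in features; the self-supervised encoder itself is assumed to be given, and kNN is just one of several shallow detectors that could fill the second stage.

```python
import numpy as np

def knn_occ_score(train_feats, test_feats, k=5):
    """Stage two of the two-stage recipe: given features from a
    (self-supervised) encoder, score each test sample by the mean distance
    to its k nearest one-class training features; higher means more
    anomalous."""
    scores = []
    for q in test_feats:
        d = np.sort(np.linalg.norm(train_feats - q, axis=1))
        scores.append(d[:k].mean())
    return np.array(scores)

# Stand-in features; in practice these come from the learned encoder.
train = np.random.randn(500, 128)
inliers, outliers = np.random.randn(5, 128), np.random.randn(5, 128) + 4
# Expect True: shifted outliers sit farther from the one-class training set.
print(knn_occ_score(train, inliers).mean() < knn_occ_score(train, outliers).mean())
```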