Prediction Error-based Classification for Class-Incremental Learning
- URL: http://arxiv.org/abs/2305.18806v2
- Date: Sat, 9 Mar 2024 09:28:20 GMT
- Title: Prediction Error-based Classification for Class-Incremental Learning
- Authors: Michał Zając, Tinne Tuytelaars, Gido M. van de Ven
- Abstract summary: We introduce Prediction Error-based Classification (PEC)
PEC computes a class score by measuring the prediction error of a model trained to replicate the outputs of a frozen random neural network on data from that class.
PEC offers several practical advantages, including sample efficiency, ease of tuning, and effectiveness even when data are presented one class at a time.
- Score: 39.91805363069707
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Class-incremental learning (CIL) is a particularly challenging variant of
continual learning, where the goal is to learn to discriminate between all
classes presented in an incremental fashion. Existing approaches often suffer
from excessive forgetting and imbalance of the scores assigned to classes that
have not been seen together during training. In this study, we introduce a
novel approach, Prediction Error-based Classification (PEC), which differs from
traditional discriminative and generative classification paradigms. PEC
computes a class score by measuring the prediction error of a model trained to
replicate the outputs of a frozen random neural network on data from that
class. The method can be interpreted as approximating a classification rule
based on Gaussian Process posterior variance. PEC offers several practical
advantages, including sample efficiency, ease of tuning, and effectiveness even
when data are presented one class at a time. Our empirical results show that
PEC performs strongly in single-pass-through-data CIL, outperforming other
rehearsal-free baselines in all cases and rehearsal-based methods with moderate
replay buffer size in most cases across multiple benchmarks.
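The core PEC idea described above can be illustrated with a minimal toy sketch. This is a hypothetical simplification, not the paper's implementation: here the frozen random "teacher" is a single tanh random projection and each per-class student is a linear map fit by least squares, whereas the paper uses neural networks trained by gradient descent. The class score is the negative prediction error of that class's student against the frozen teacher, and classification picks the class with the smallest error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen random "teacher": a fixed random projection with tanh.
# (Simplified stand-in; the paper uses deeper random networks.)
D_IN, D_HID = 2, 16
W = rng.normal(size=(D_IN, D_HID))

def teacher(X):
    """Frozen random network g(x); never trained."""
    return np.tanh(X @ W)

def fit_student(Xc):
    """Train one per-class student (here: linear least squares with a
    bias column) to replicate the teacher's outputs on that class's
    data only -- no other class is ever seen during this fit."""
    Xb = np.hstack([Xc, np.ones((len(Xc), 1))])
    A, *_ = np.linalg.lstsq(Xb, teacher(Xc), rcond=None)
    return A

def pec_scores(x, students):
    """Class score = negative squared prediction error of each student
    relative to the frozen teacher; higher (less negative) is better."""
    xb = np.append(x, 1.0)
    t = np.tanh(x @ W)
    return [-np.sum((xb @ A - t) ** 2) for A in students]

# Two toy classes, presented one at a time (no joint training needed).
X0 = rng.normal(loc=(+2.0, 0.0), scale=0.5, size=(200, 2))
X1 = rng.normal(loc=(-2.0, 0.0), scale=0.5, size=(200, 2))
students = [fit_student(X0), fit_student(X1)]

# Classify held-out points by the student with the smallest error.
test0 = rng.normal(loc=(+2.0, 0.0), scale=0.5, size=(50, 2))
test1 = rng.normal(loc=(-2.0, 0.0), scale=0.5, size=(50, 2))
pred0 = [int(np.argmax(pec_scores(x, students))) for x in test0]
pred1 = [int(np.argmax(pec_scores(x, students))) for x in test1]
acc = (pred0.count(0) + pred1.count(1)) / 100
print(f"toy PEC accuracy: {acc:.2f}")
```

Each student fits the teacher well only on its own class's region of input space and extrapolates poorly elsewhere, so the prediction error acts as a class score. Note that each student is trained on one class in isolation, which is why this scheme sidesteps the cross-class score imbalance mentioned in the abstract.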
Related papers
- An Efficient Replay for Class-Incremental Learning with Pre-trained Models [0.0]
In class-incremental learning, the steady state among the weights guided by each class center is disrupted, which correlates significantly with forgetting.
We propose a new method to overcome forgetting.
arXiv Detail & Related papers (2024-08-15T11:26:28Z) - Pairwise Difference Learning for Classification [19.221081896134567]
Pairwise difference learning (PDL) has recently been introduced as a new meta-learning technique for regression.
We extend PDL toward the task of classification by solving a suitably defined (binary) classification problem on a paired version of the original training data.
We provide an easy-to-use and publicly available implementation of PDL in a Python package.
arXiv Detail & Related papers (2024-06-28T16:20:22Z) - Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple Logits Retargeting approach (LORT) that does not require prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z) - Bias Mitigating Few-Shot Class-Incremental Learning [17.185744533050116]
Few-shot class-incremental learning aims at recognizing novel classes continually with limited novel class samples.
Recent methods somewhat alleviate the accuracy imbalance between base and incremental classes by fine-tuning the feature extractor in the incremental sessions.
We propose a novel method to mitigate model bias of the FSCIL problem during training and inference processes.
arXiv Detail & Related papers (2024-02-01T10:37:41Z) - A Unified Generalization Analysis of Re-Weighting and Logit-Adjustment
for Imbalanced Learning [129.63326990812234]
We propose a technique named data-dependent contraction to capture how modified losses handle different classes.
On top of this technique, a fine-grained generalization bound is established for imbalanced learning, which helps reveal the mystery of re-weighting and logit-adjustment.
arXiv Detail & Related papers (2023-10-07T09:15:08Z) - Complementary Labels Learning with Augmented Classes [22.460256396941528]
Complementary Labels Learning (CLL) arises in many real-world tasks such as private questions classification and online learning.
We propose a novel problem setting called Complementary Labels Learning with Augmented Classes (CLLAC).
By using unlabeled data, we propose an unbiased estimator of classification risk for CLLAC, which is guaranteed to be provably consistent.
arXiv Detail & Related papers (2022-11-19T13:55:27Z) - Relieving Long-tailed Instance Segmentation via Pairwise Class Balance [85.53585498649252]
Long-tailed instance segmentation is a challenging task due to the extreme imbalance of training samples among classes.
This causes severe bias of the head classes (with majority samples) against the tail classes.
We propose a novel Pairwise Class Balance (PCB) method, built upon a confusion matrix which is updated during training to accumulate the ongoing prediction preferences.
arXiv Detail & Related papers (2022-01-08T07:48:36Z) - When in Doubt: Improving Classification Performance with Alternating Normalization [57.39356691967766]
We introduce Classification with Alternating Normalization (CAN), a non-parametric post-processing step for classification.
CAN improves classification accuracy for challenging examples by re-adjusting their predicted class probability distribution.
We empirically demonstrate its effectiveness across a diverse set of classification tasks.
arXiv Detail & Related papers (2021-09-28T02:55:42Z) - Solving Long-tailed Recognition with Deep Realistic Taxonomic Classifier [68.38233199030908]
Long-tail recognition tackles naturally non-uniformly distributed data in real-world scenarios.
While modern classifiers perform well on populated classes, their performance degrades significantly on tail classes.
Deep-RTC is proposed as a new solution to the long-tail problem, combining realism with hierarchical predictions.
arXiv Detail & Related papers (2020-07-20T05:57:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.