Learnable Distribution Calibration for Few-Shot Class-Incremental
Learning
- URL: http://arxiv.org/abs/2210.00232v1
- Date: Sat, 1 Oct 2022 09:40:26 GMT
- Title: Learnable Distribution Calibration for Few-Shot Class-Incremental
Learning
- Authors: Binghao Liu, Boyu Yang, Lingxi Xie, Ren Wang, Qi Tian, Qixiang Ye
- Abstract summary: Few-shot class-incremental learning (FSCIL) faces challenges of memorizing old class distributions and estimating new class distributions given few training samples.
We propose a learnable distribution calibration (LDC) approach that aims to solve these two challenges systematically within a unified framework.
- Score: 122.2241120474278
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot class-incremental learning (FSCIL) faces challenges of memorizing
old class distributions and estimating new class distributions given few
training samples. In this study, we propose a learnable distribution
calibration (LDC) approach that aims to solve these two challenges
systematically within a unified framework. LDC is built upon a parameterized
calibration unit (PCU), which initializes biased distributions for all classes
based on classifier vectors (memory-free) and a single covariance matrix. The
covariance matrix is shared by all classes, so the memory cost is fixed.
During base training, PCU is endowed with the ability to calibrate biased
distributions by recurrently updating sampled features under the supervision
of real distributions. During incremental learning, PCU recovers distributions
for old classes to avoid `forgetting', and estimates distributions and
augments samples for new classes to alleviate `over-fitting' caused by the
biased distributions of few-shot samples. LDC is theoretically plausible, as
it can be formulated as a variational inference procedure. It also improves
FSCIL's flexibility, as the training procedure requires no class similarity
prior. Experiments on CUB200, CIFAR100, and mini-ImageNet datasets show that
LDC outperforms the state of the art by 4.64%, 1.98%, and 3.97%, respectively.
LDC's effectiveness is also validated in few-shot learning scenarios.
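The abstract gives enough structure to sketch the core mechanism: each class c
is modeled as a Gaussian N(mu_c, Sigma), where mu_c is initialized from that
class's classifier weight vector (so no exemplars need to be stored) and Sigma
is a single covariance matrix shared by all classes. Below is a minimal,
hypothetical Python sketch of that calibrate-and-sample loop; the names
(sample_features, calibrate) and the shrinkage-style update rule are
illustrative assumptions standing in for the learned PCU, not the authors'
code.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 64                  # feature dimension
NUM_CLASSES = 10        # base + incremental classes seen so far
SAMPLES_PER_CLASS = 50  # features to synthesize per class

# Memory-free initialization: per-class means come from the classifier's
# weight vectors, and ONE covariance matrix is shared by every class, so
# the memory footprint stays fixed as classes accumulate.
classifier_weights = rng.normal(size=(NUM_CLASSES, D))  # stand-in for trained W
shared_cov = np.eye(D)                                  # the single shared Sigma

def sample_features(mean, cov, n):
    """Draw n feature vectors from the class Gaussian N(mean, cov)."""
    return rng.multivariate_normal(mean, cov, size=n)

def calibrate(samples, mean, steps=3, lr=0.5):
    """Recurrently update sampled features toward the class statistics.

    The paper learns this update under the supervision of real distributions
    during base training; the shrinkage step below is only an illustrative
    stand-in for the learned PCU update.
    """
    for _ in range(steps):
        samples = samples + lr * (mean - samples.mean(axis=0))
    return samples

# Incremental session: recover old-class distributions (avoiding forgetting
# without stored exemplars) and augment few-shot new classes (alleviating
# over-fitting to their biased distributions).
features, labels = [], []
for c in range(NUM_CLASSES):
    f = sample_features(classifier_weights[c], shared_cov, SAMPLES_PER_CLASS)
    features.append(calibrate(f, classifier_weights[c]))
    labels.append(np.full(SAMPLES_PER_CLASS, c))

x, y = np.concatenate(features), np.concatenate(labels)
print(x.shape, y.shape)  # (500, 64) (500,)
```

The augmented features would then train the session's classifier; keeping one
shared Sigma is what makes the memory cost independent of how many classes
accumulate.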
Related papers
- Probabilistic Contrastive Learning for Long-Tailed Visual Recognition [78.70453964041718]
Long-tailed distributions frequently emerge in real-world data, where a large number of minority categories contain only a limited number of samples.
Recent investigations have revealed that supervised contrastive learning exhibits promising potential in alleviating the data imbalance.
We propose a novel probabilistic contrastive (ProCo) learning algorithm that estimates the data distribution of the samples from each class in the feature space.
arXiv Detail & Related papers (2024-03-11T13:44:49Z)
- Transfer Knowledge from Head to Tail: Uncertainty Calibration under Long-tailed Distribution [24.734851889816206]
Current calibration techniques treat different classes equally and implicitly assume that the distribution of training data is balanced.
We propose a novel knowledge-transferring-based calibration method by estimating the importance weights for samples of tail classes.
arXiv Detail & Related papers (2023-04-13T13:48:18Z)
- Proposal Distribution Calibration for Few-Shot Object Detection [65.19808035019031]
In few-shot object detection (FSOD), the two-step training paradigm is widely adopted to mitigate the severe sample imbalance.
Unfortunately, the extreme data scarcity aggravates the proposal distribution bias, hindering the RoI head from evolving toward novel classes.
We introduce a simple yet effective proposal distribution calibration (PDC) approach to neatly enhance the localization and classification abilities of the RoI head.
arXiv Detail & Related papers (2022-12-15T05:09:11Z)
- Compound Batch Normalization for Long-tailed Image Classification [77.42829178064807]
We propose a compound batch normalization method based on a Gaussian mixture.
It can model the feature space more comprehensively and reduce the dominance of head classes.
The proposed method outperforms existing methods on long-tailed image classification.
arXiv Detail & Related papers (2022-12-02T07:31:39Z)
- Adaptive Distribution Calibration for Few-Shot Learning with Hierarchical Optimal Transport [78.9167477093745]
We propose a novel distribution calibration method by learning the adaptive weight matrix between novel samples and base classes.
Experimental results on standard benchmarks demonstrate that our proposed plug-and-play model outperforms competing approaches.
arXiv Detail & Related papers (2022-10-09T02:32:57Z)
- Free Lunch for Few-shot Learning: Distribution Calibration [10.474018806591397]
We show that a simple logistic regression classifier trained on features sampled from our calibrated distribution can surpass state-of-the-art accuracy on two datasets (a minimal sketch of this recipe follows the list below).
arXiv Detail & Related papers (2021-01-16T07:58:40Z)
- Distributional Reinforcement Learning via Moment Matching [54.16108052278444]
We formulate a method that learns a finite set of statistics from each return distribution via neural networks.
Our method can be interpreted as implicitly matching all orders of moments between a return distribution and its Bellman target.
Experiments on the suite of Atari games show that our method outperforms the standard distributional RL baselines.
arXiv Detail & Related papers (2020-07-24T05:18:17Z)
- Imbalanced Data Learning by Minority Class Augmentation using Capsule Adversarial Networks [31.073558420480964]
We propose a method to restore the balance in imbalanced images by coalescing two concurrent methods.
In our model, generative and discriminative networks play a novel competitive game.
The coalescing of capsule-GAN is effective at recognizing highly overlapping classes with far fewer parameters than the convolutional GAN.
arXiv Detail & Related papers (2020-04-05T12:36:06Z)
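As referenced in the `Free Lunch' entry above, the recipe described there is
concrete: estimate a calibrated Gaussian for each few-shot class, draw extra
features from it, and fit a plain logistic regression on the enlarged set.
Below is a minimal sketch under stated assumptions; transferring a pooled
base-class covariance is an illustrative stand-in for the paper's exact
calibration statistics.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
D = 32  # feature dimension

# Stand-ins: a pooled covariance from data-rich base classes, and a
# few-shot support set of 2 novel classes with 5 shots each.
base_cov = np.eye(D)
support_x = rng.normal(size=(2, 5, D))

aug_x, aug_y = [], []
for c in range(support_x.shape[0]):
    # Calibrated distribution: the few-shot mean plus a transferred
    # covariance (an assumed simplification of the calibration step).
    mu = support_x[c].mean(axis=0)
    sampled = rng.multivariate_normal(mu, base_cov, size=100)
    aug_x.append(np.concatenate([support_x[c], sampled]))
    aug_y.append(np.full(len(aug_x[-1]), c))

# A simple logistic regression on real + sampled features, as the
# entry describes.
clf = LogisticRegression(max_iter=1000)
clf.fit(np.concatenate(aug_x), np.concatenate(aug_y))
print(clf.score(np.concatenate(aug_x), np.concatenate(aug_y)))
```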