Energy Aligning for Biased Models
- URL: http://arxiv.org/abs/2106.03343v1
- Date: Mon, 7 Jun 2021 05:12:26 GMT
- Title: Energy Aligning for Biased Models
- Authors: Bowen Zhao and Chen Chen and Qi Ju and Shu-Tao Xia
- Abstract summary: Training on class-imbalanced data usually results in biased models that tend to predict samples into the majority classes.
We propose a simple and effective method named Energy Aligning to eliminate the bias.
Experimental results show that energy aligning can effectively alleviate the class imbalance issue and outperforms state-of-the-art methods on several benchmarks.
- Score: 39.00256193731365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training on class-imbalanced data usually results in biased models that tend
to predict samples into the majority classes, which is a common and notorious
problem. From the perspective of energy-based models, we demonstrate
theoretically that the free energies of categories align with the label
distribution; thus the energies of different classes are expected to be close
to each other when aiming for ``balanced'' performance. However, we discover a
severe energy-bias phenomenon in models trained on class-imbalanced
datasets. To eliminate the bias, we propose a simple and effective method named
Energy Aligning, which merely adds calculated shift scalars onto the output
logits during inference and does not require (i) modifying the network
architecture, (ii) intervening in the standard learning paradigm, or (iii) performing
two-stage training. The proposed algorithm is evaluated on two class-imbalance-related
tasks under various settings: class-incremental learning and
long-tailed recognition. Experimental results show that energy aligning can
effectively alleviate the class imbalance issue and outperforms state-of-the-art
methods on several benchmarks.
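The abstract's key mechanism is a post-hoc correction: per-class shift scalars are added to the output logits at inference, with no retraining. The exact shift computation is not given in this summary; the sketch below is an assumption that uses the negative log of the class priors as the shift (a common post-hoc logit-adjustment choice for long-tailed recognition), purely to illustrate the inference-time mechanics. The function name `energy_align` and the toy numbers are hypothetical.

```python
import numpy as np

def energy_align(logits, class_priors):
    """Inference-time bias correction in the spirit of Energy Aligning.

    Hypothetical sketch: the shift scalar for each class is taken to be
    the negative log of its prior, so rare classes receive a larger
    boost. The paper's actual shift computation may differ.
    """
    shift = -np.log(np.asarray(class_priors))  # per-class shift scalars
    return np.asarray(logits) + shift          # added to logits, no retraining

# Toy example: a model biased toward the majority class (index 0).
logits = np.array([2.0, 1.9, 0.5])   # raw outputs of the trained model
priors = np.array([0.7, 0.2, 0.1])   # long-tailed label distribution
aligned = energy_align(logits, priors)
# Shifting the logits can flip the prediction from the majority class
# to an under-represented class without touching the network weights.
print(logits.argmax(), aligned.argmax())
```

Because the shift depends only on the label distribution, it matches the abstract's claims: no architecture change, no modified training, no second stage.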
Related papers
- Balanced Data, Imbalanced Spectra: Unveiling Class Disparities with Spectral Imbalance [11.924440950433658]
We introduce the concept of spectral imbalance in features as a potential source for class disparities.
We derive exact expressions for the per-class error in a high-dimensional mixture model setting.
We study this phenomenon in 11 different state-of-the-art pretrained encoders.
arXiv Detail & Related papers (2024-02-18T23:59:54Z) - Bias Mitigating Few-Shot Class-Incremental Learning [17.185744533050116]
Few-shot class-incremental learning aims at recognizing novel classes continually with limited novel class samples.
Recent methods somewhat alleviate the accuracy imbalance between base and incremental classes by fine-tuning the feature extractor in the incremental sessions.
We propose a novel method to mitigate model bias of the FSCIL problem during training and inference processes.
arXiv Detail & Related papers (2024-02-01T10:37:41Z) - Twice Class Bias Correction for Imbalanced Semi-Supervised Learning [59.90429949214134]
We introduce a novel approach called Twice Class Bias Correction (TCBC).
We estimate the class bias of the model parameters during the training process.
We apply a secondary correction to the model's pseudo-labels for unlabeled samples.
arXiv Detail & Related papers (2023-12-27T15:06:36Z) - Simplifying Neural Network Training Under Class Imbalance [77.39968702907817]
Real-world datasets are often highly class-imbalanced, which can adversely impact the performance of deep learning models.
The majority of research on training neural networks under class imbalance has focused on specialized loss functions, sampling techniques, or two-stage training procedures.
We demonstrate that simply tuning existing components of standard deep learning pipelines, such as the batch size, data augmentation, and label smoothing, can achieve state-of-the-art performance without any such specialized class imbalance methods.
arXiv Detail & Related papers (2023-12-05T05:52:44Z) - Class-Imbalanced Graph Learning without Class Rebalancing [62.1368829847041]
Class imbalance is prevalent in real-world node classification tasks and poses great challenges for graph learning models.
In this work, we approach the root cause of class-imbalance bias from a topological paradigm.
We devise a lightweight topological augmentation framework BAT to mitigate the class-imbalance bias without class rebalancing.
arXiv Detail & Related papers (2023-08-27T19:01:29Z) - Pre-training Language Model as a Multi-perspective Course Learner [103.17674402415582]
This study proposes a multi-perspective course learning (MCL) method for sample-efficient pre-training.
In this study, three self-supervision courses are designed to alleviate inherent flaws of "tug-of-war" dynamics.
Our method significantly improves ELECTRA's average performance by 2.8% and 3.2% absolute points respectively on GLUE and SQuAD 2.0 benchmarks.
arXiv Detail & Related papers (2023-05-06T09:02:10Z) - Rethinking Class Imbalance in Machine Learning [1.4467794332678536]
Imbalance learning is a subfield of machine learning that focuses on learning tasks in the presence of class imbalance.
This study presents a new taxonomy of class imbalance in machine learning with a broader scope.
We propose a new logit perturbation-based imbalance learning loss when proportion, variance, and distance imbalances exist simultaneously.
arXiv Detail & Related papers (2023-05-06T02:36:39Z) - Learning to Adapt Classifier for Imbalanced Semi-supervised Learning [38.434729550279116]
Pseudo-labeling has proven to be a promising semi-supervised learning (SSL) paradigm.
Existing pseudo-labeling methods commonly assume that the class distributions of training data are balanced.
In this work, we investigate pseudo-labeling under imbalanced semi-supervised setups.
arXiv Detail & Related papers (2022-07-28T02:15:47Z) - FairIF: Boosting Fairness in Deep Learning via Influence Functions with Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over the reweighted data set where the sample weights are computed.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
arXiv Detail & Related papers (2022-01-15T05:14:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.