Class-attribute Priors: Adapting Optimization to Heterogeneity and
Fairness Objective
- URL: http://arxiv.org/abs/2401.14343v1
- Date: Thu, 25 Jan 2024 17:43:39 GMT
- Title: Class-attribute Priors: Adapting Optimization to Heterogeneity and
Fairness Objective
- Authors: Xuechen Zhang, Mingchen Li, Jiasi Chen, Christos Thrampoulidis, Samet
Oymak
- Abstract summary: Modern classification problems exhibit heterogeneities across individual classes.
We propose CAP: An effective and general method that generates a class-specific learning strategy.
We show that CAP is competitive with prior art and its flexibility unlocks clear benefits for fairness objectives beyond balanced accuracy.
- Score: 54.33066660817495
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern classification problems exhibit heterogeneities across individual
classes: Each class may have unique attributes, such as sample size, label
quality, or predictability (easy vs difficult), and variable importance at
test-time. Without care, these heterogeneities impede the learning process,
most notably, when optimizing fairness objectives. Confirming this, under a
Gaussian mixture setting, we show that the optimal SVM classifier for balanced
accuracy needs to be adaptive to the class attributes. This motivates us to
propose CAP: An effective and general method that generates a class-specific
learning strategy (e.g., hyperparameters) based on the attributes of that class.
This way, the optimization process better adapts to heterogeneities. CAP leads to
substantial improvements over the naive approach of assigning separate
hyperparameters to each class. We instantiate CAP for loss function design and
post-hoc logit adjustment, with emphasis on label-imbalanced problems. We show
that CAP is competitive with prior art and its flexibility unlocks clear
benefits for fairness objectives beyond balanced accuracy. Finally, we evaluate
CAP on problems with label noise as well as weighted test objectives to
showcase how CAP can jointly adapt to different heterogeneities.
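As a rough illustration of the two instantiations mentioned in the abstract (class-specific loss design and post-hoc logit adjustment), the minimal sketch below maps a single class attribute, the training frequency of each class, to per-class additive logit offsets. The frequency-based mapping, the tau scale, and all function names are illustrative assumptions, not the paper's actual CAP parameterization, which generates such class-specific quantities from richer attribute combinations.

```python
# Illustrative sketch only: derive per-class additive logit offsets from a
# simple class attribute (training frequency) and use them (a) inside the
# training loss and (b) as a post-hoc correction at test time.
import torch
import torch.nn.functional as F


def attribute_to_adjustment(class_counts: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Map the class attribute (frequency) to a per-class logit offset.

    Here the mapping is the classic tau * log(prior); CAP would instead
    generate/tune such class-specific quantities from richer attributes
    (label quality, difficulty, test-time importance, ...).
    """
    priors = class_counts.float() / class_counts.sum()
    return tau * torch.log(priors)


def adjusted_loss(logits: torch.Tensor, targets: torch.Tensor,
                  adjustment: torch.Tensor) -> torch.Tensor:
    """Loss-design instantiation: shift the logits by the per-class offsets
    before the cross entropy (additive logit adjustment)."""
    return F.cross_entropy(logits + adjustment, targets)


def posthoc_predict(logits: torch.Tensor, adjustment: torch.Tensor) -> torch.Tensor:
    """Post-hoc instantiation: correct the logits of an already-trained
    model at test time, then take the arg max."""
    return (logits - adjustment).argmax(dim=1)


# Toy usage: 3 imbalanced classes, batch of 4 examples.
counts = torch.tensor([1000, 100, 10])
adj = attribute_to_adjustment(counts, tau=1.0)
logits = torch.randn(4, 3)
targets = torch.tensor([0, 1, 2, 1])
loss = adjusted_loss(logits, targets, adj)
preds = posthoc_predict(logits, adj)
```

In CAP, the mapping from attributes to class-specific hyperparameters is itself generated and tuned rather than fixed to a log-prior rule, which the abstract contrasts with naively assigning a separate hyperparameter to each class.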
Related papers
- Adaptive Margin Global Classifier for Exemplar-Free Class-Incremental Learning [3.4069627091757178]
Existing methods mainly focus on handling biased learning.
We introduce a Distribution-Based Global Classifier (DBGC) to avoid bias factors present in existing methods, such as data imbalance and sampling.
More importantly, the compromised distributions of old classes are simulated via a simple operation, variance enlarging (VE).
The resulting loss is proven equivalent to an Adaptive Margin Softmax Cross Entropy (AMarX) loss.
arXiv Detail & Related papers (2024-09-20T07:07:23Z)
- Covariance-corrected Whitening Alleviates Network Degeneration on Imbalanced Classification [6.197116272789107]
Class imbalance is a critical issue in image classification that significantly affects the performance of deep recognition models.
We propose a novel framework called Whitening-Net to mitigate the degenerate solutions.
In scenarios with extreme class imbalance, the batch covariance statistic exhibits significant fluctuations, impeding the convergence of the whitening operation.
arXiv Detail & Related papers (2024-08-30T10:49:33Z)
- Understanding the Detrimental Class-level Effects of Data Augmentation [63.1733767714073]
We show that achieving optimal average accuracy with data augmentation (DA) can come at the cost of significantly hurting individual class accuracy, by as much as 20% on ImageNet.
We present a framework for understanding how DA interacts with class-level learning dynamics.
We show that simple class-conditional augmentation strategies improve performance on the negatively affected classes.
arXiv Detail & Related papers (2023-12-07T18:37:43Z)
- Uncertainty-guided Boundary Learning for Imbalanced Social Event Detection [64.4350027428928]
We propose a novel uncertainty-guided class imbalance learning framework for imbalanced social event detection tasks.
Our model significantly improves social event representation and classification in almost all classes, especially the uncertain ones.
arXiv Detail & Related papers (2023-10-30T03:32:04Z)
- Complementary Labels Learning with Augmented Classes [22.460256396941528]
Complementary Labels Learning (CLL) arises in many real-world tasks such as private questions classification and online learning.
We propose a novel problem setting called Complementary Labels Learning with Augmented Classes (CLLAC).
By using unlabeled data, we propose an unbiased estimator of the classification risk for CLLAC, which is provably consistent.
arXiv Detail & Related papers (2022-11-19T13:55:27Z)
- Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, a classifier that does not require fitting additional parameters given the embedding network (a generic sketch of the nearest-prototype idea appears after this list).
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on CIFAR-10LT, CIFAR-100LT and WebVision datasets, observing that Prototypical obtains substantial improvements over the state of the art.
arXiv Detail & Related papers (2021-10-22T01:55:01Z)
- Fair Tree Learning [0.15229257192293202]
Various optimisation criteria combine classification performance with a fairness metric.
Current fair decision tree methods only optimise for a fixed threshold on both the classification task and the fairness metric.
We propose a threshold-independent fairness metric termed uniform demographic parity, and a derived splitting criterion entitled SCAFF -- Splitting Criterion AUC for Fairness.
arXiv Detail & Related papers (2021-10-18T13:40:25Z)
- Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z)
- Boosting Few-Shot Learning With Adaptive Margin Loss [109.03665126222619]
This paper proposes an adaptive margin principle to improve the generalization ability of metric-based meta-learning approaches for few-shot learning problems.
Extensive experiments demonstrate that the proposed method can boost the performance of current metric-based meta-learning approaches.
arXiv Detail & Related papers (2020-05-28T07:58:41Z)
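For the Prototypical Classifier entry above, a generic sketch of the nearest-prototype (class-mean) idea over fixed embeddings is given below; it is illustrative only, not that paper's exact method, and the embedding dimension and helper names are assumptions.

```python
# Generic nearest-prototype (class-mean) classifier over fixed embeddings.
# Illustrative only; not the exact method of the Prototypical paper above.
import torch


def class_prototypes(embeddings: torch.Tensor, labels: torch.Tensor,
                     num_classes: int) -> torch.Tensor:
    """Prototype of each class = mean embedding of its training samples."""
    protos = torch.zeros(num_classes, embeddings.size(1))
    for c in range(num_classes):
        protos[c] = embeddings[labels == c].mean(dim=0)
    return protos


def predict(queries: torch.Tensor, protos: torch.Tensor) -> torch.Tensor:
    """Assign each query to its nearest prototype (Euclidean distance)."""
    dists = torch.cdist(queries, protos)  # (n_queries, num_classes)
    return dists.argmin(dim=1)


# Toy usage with random 16-d embeddings for 3 classes.
emb = torch.randn(30, 16)
lab = torch.arange(30) % 3
protos = class_prototypes(emb, lab, num_classes=3)
preds = predict(torch.randn(5, 16), protos)
```

Because every class is reduced to a single mean embedding, the resulting scores are comparable across classes regardless of how many training samples each class has, which is the property that entry highlights for class-imbalanced training sets.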
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.