Understanding the Detrimental Class-level Effects of Data Augmentation
- URL: http://arxiv.org/abs/2401.01764v1
- Date: Thu, 7 Dec 2023 18:37:43 GMT
- Title: Understanding the Detrimental Class-level Effects of Data Augmentation
- Authors: Polina Kirichenko, Mark Ibrahim, Randall Balestriero, Diane
Bouchacourt, Ramakrishna Vedantam, Hamed Firooz, Andrew Gordon Wilson
- Abstract summary: Achieving optimal average accuracy can come at the cost of significantly hurting individual class accuracy, by as much as 20% on ImageNet.
We present a framework for understanding how DA interacts with class-level learning dynamics.
We show that simple class-conditional augmentation strategies improve performance on the negatively affected classes.
- Score: 63.1733767714073
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data augmentation (DA) encodes invariance and provides implicit
regularization critical to a model's performance in image classification tasks.
However, while DA improves average accuracy, recent studies have shown that its
impact can be highly class dependent: achieving optimal average accuracy comes
at the cost of significantly hurting individual class accuracy by as much as
20% on ImageNet. There has been little progress in resolving class-level
accuracy drops due to a limited understanding of these effects. In this work,
we present a framework for understanding how DA interacts with class-level
learning dynamics. Using higher-quality multi-label annotations on ImageNet, we
systematically categorize the affected classes and find that the majority are
inherently ambiguous, co-occur, or involve fine-grained distinctions, while DA
controls the model's bias towards one of the closely related classes. While
many of the previously reported performance drops are explained by multi-label
annotations, our analysis of class confusions reveals other sources of accuracy
degradation. We show that simple class-conditional augmentation strategies
informed by our framework improve performance on the negatively affected
classes.
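The class-conditional idea can be sketched in a few lines: once a set of negatively affected classes has been diagnosed, training examples from those classes receive a milder augmentation policy than everything else. This is a minimal illustration, not the paper's exact strategy; the class IDs and the toy "transforms" below are placeholders.

```python
# Minimal sketch of class-conditional data augmentation (assumed, illustrative):
# negatively affected classes get a weak policy, all others the standard strong one.

AFFECTED_CLASSES = {283, 644, 901}  # placeholder IDs of hurt classes

def augment(image, label, strong_aug, weak_aug):
    """Apply class-conditional augmentation to one training example."""
    if label in AFFECTED_CLASSES:
        return weak_aug(image)   # mild policy for diagnosed classes
    return strong_aug(image)     # standard policy for the rest

# Toy stand-ins for real image transforms, operating on a list of pixel values.
strong = lambda img: [p ^ 0xFF for p in img]  # aggressive: invert pixels
weak = lambda img: list(img)                  # mild: identity
```

In practice `strong_aug` and `weak_aug` would be full transform pipelines (e.g. crops and color jitter), and the affected-class set would come from the paper's multi-label confusion analysis.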
Related papers
- Extract More from Less: Efficient Fine-Grained Visual Recognition in Low-Data Regimes [0.22499166814992438]
We present a novel framework, called AD-Net, aiming to enhance deep neural network performance on this challenge.
Specifically, our approach is designed to refine learned features through self-distillation on augmented samples, mitigating harmful overfitting.
With the smallest amount of data available, our framework shows an outstanding relative accuracy increase of up to 45%.
arXiv Detail & Related papers (2024-06-28T10:45:25Z) - CAT: Exploiting Inter-Class Dynamics for Domain Adaptive Object Detection [22.11525246060963]
We propose Class-Aware Teacher (CAT) to address the class bias issue in the domain adaptation setting.
In our work, we approximate the class relationships with our Inter-Class Relation module (ICRm) and exploit it to reduce the bias within the model.
Experiments conducted on various datasets and ablation studies show that our method is able to address the class bias in the domain adaptation setting.
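One simple way to "approximate class relationships" in the spirit of the blurb above is the cosine similarity between classifier weight vectors: closely related classes end up with similar weights and hence high similarity. This is a hedged sketch of the general idea only, not the paper's ICRm.

```python
import numpy as np

def class_relation_matrix(W):
    """Cosine similarity between rows of a (num_classes, dim) classifier weight matrix."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    Wn = W / np.clip(norms, 1e-12, None)  # guard against zero rows
    return Wn @ Wn.T

# Three toy classes in a 2-D feature space: classes 0 and 1 are near-duplicates.
W = np.array([[1.0, 0.0],
              [1.0, 0.1],
              [0.0, 1.0]])
R = class_relation_matrix(W)
# R[0, 1] is close to 1 (related classes); R[0, 2] is 0 (unrelated)
```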
arXiv Detail & Related papers (2024-03-28T10:02:08Z) - Better (pseudo-)labels for semi-supervised instance segmentation [21.703173564795353]
We introduce a dual-strategy to enhance the teacher model's training process, substantially improving the performance on few-shot learning.
We observe marked improvements over a state-of-the-art supervised baseline on the LVIS dataset, with a 2.8% increase in average precision (AP) and a 10.3% AP gain on rare classes.
arXiv Detail & Related papers (2024-03-18T11:23:02Z) - Classes Are Not Equal: An Empirical Study on Image Recognition Fairness [100.36114135663836]
We experimentally demonstrate that classes are not equal and the fairness issue is prevalent for image classification models across various datasets.
Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize.
Data augmentation and representation learning algorithms improve overall performance by promoting fairness to some degree in image classification.
arXiv Detail & Related papers (2024-02-28T07:54:50Z) - Class-attribute Priors: Adapting Optimization to Heterogeneity and
Fairness Objective [54.33066660817495]
Modern classification problems exhibit heterogeneities across individual classes.
We propose CAP: An effective and general method that generates a class-specific learning strategy.
We show that CAP is competitive with prior art and its flexibility unlocks clear benefits for fairness objectives beyond balanced accuracy.
arXiv Detail & Related papers (2024-01-25T17:43:39Z) - Class-Incremental Learning: A Survey [84.30083092434938]
Class-Incremental Learning (CIL) enables the learner to incorporate the knowledge of new classes incrementally.
However, CIL models tend to catastrophically forget the characteristics of formerly learned classes, and their performance drastically degrades.
We provide a rigorous and unified evaluation of 17 methods in benchmark image classification tasks to find out the characteristics of different algorithms.
arXiv Detail & Related papers (2023-02-07T17:59:05Z) - Relieving Long-tailed Instance Segmentation via Pairwise Class Balance [85.53585498649252]
Long-tailed instance segmentation is a challenging task due to the extreme imbalance of training samples among classes.
This imbalance severely biases models toward the head classes (those with the majority of samples) at the expense of the tail classes.
We propose a novel Pairwise Class Balance (PCB) method, built upon a confusion matrix which is updated during training to accumulate the ongoing prediction preferences.
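The confusion-matrix bookkeeping behind this kind of method can be sketched as follows: accumulate the model's ongoing prediction preferences during training with an exponential moving average, so each row records how often a true class is predicted as each other class. The momentum value and row normalization are illustrative assumptions, not the paper's exact update rule.

```python
import numpy as np

def update_confusion(conf, true_labels, pred_labels, momentum=0.9):
    """EMA-update a running (num_classes, num_classes) confusion matrix
    with one batch of (true, predicted) label pairs."""
    batch = np.zeros_like(conf)
    for t, p in zip(true_labels, pred_labels):
        batch[t, p] += 1.0
    # Normalize rows so each true class contributes a preference distribution.
    rows = batch.sum(axis=1, keepdims=True)
    batch = np.divide(batch, rows, out=np.zeros_like(batch), where=rows > 0)
    return momentum * conf + (1.0 - momentum) * batch

conf = np.zeros((3, 3))
conf = update_confusion(conf, true_labels=[0, 0, 1, 2], pred_labels=[0, 1, 1, 2])
# Row 0 now records that class 0 was predicted as 0 and as 1 equally often.
```

A pairwise-balance loss would then read off `conf[i, j]` to up-weight pairs where class `i` is systematically mistaken for class `j`.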
arXiv Detail & Related papers (2022-01-08T07:48:36Z) - Not All Negatives are Equal: Label-Aware Contrastive Loss for
Fine-grained Text Classification [0.0]
We analyse the contrastive fine-tuning of pre-trained language models on two fine-grained text classification tasks.
We adaptively embed class relationships into a contrastive objective function to help differently weigh the positives and negatives.
We find that Label-aware Contrastive Loss outperforms previous contrastive methods.
arXiv Detail & Related papers (2021-09-12T04:19:17Z) - Fair Comparison: Quantifying Variance in Results for Fine-grained Visual
Categorization [0.5735035463793008]
Average categorization accuracy is often used in isolation.
As the number of classes increases, the amount of information conveyed by average accuracy alone dwindles.
While its most glaring weakness is its failure to describe the model's performance on a class-by-class basis, average accuracy also fails to describe how performance may vary from one trained model of the same architecture to another.
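The gap between average and class-by-class accuracy is easy to demonstrate: a classifier can post a respectable average while one class is never predicted correctly. A minimal, self-contained illustration:

```python
from collections import defaultdict

def per_class_accuracy(y_true, y_pred):
    """Per-class accuracies, revealing the gaps that average accuracy hides."""
    correct, total = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += int(t == p)
    return {c: correct[c] / total[c] for c in total}

y_true = [0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0]  # the model predicts class 0 for everything
avg = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
# avg is about 0.67, yet per_class_accuracy shows class 1 is never right:
# {0: 1.0, 1: 0.0}
```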
arXiv Detail & Related papers (2021-09-07T15:47:27Z) - Calibrating Class Activation Maps for Long-Tailed Visual Recognition [60.77124328049557]
We present two effective modifications of CNNs to improve network learning from long-tailed distribution.
First, we present a Class Activation Map Calibration (CAMC) module to improve the learning and prediction of network classifiers.
Second, we investigate the use of normalized classifiers for representation learning in long-tailed problems.
arXiv Detail & Related papers (2021-08-29T05:45:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.