ImbSAM: A Closer Look at Sharpness-Aware Minimization in
Class-Imbalanced Recognition
- URL: http://arxiv.org/abs/2308.07815v1
- Date: Tue, 15 Aug 2023 14:46:32 GMT
- Title: ImbSAM: A Closer Look at Sharpness-Aware Minimization in
Class-Imbalanced Recognition
- Authors: Yixuan Zhou, Yi Qu, Xing Xu, Hengtao Shen
- Abstract summary: We show that the Sharpness-Aware Minimization (SAM) fails to address generalization issues under the class-imbalanced setting.
We propose a class-aware smoothness optimization algorithm named Imbalanced-SAM (ImbSAM) to overcome this bottleneck.
Our ImbSAM demonstrates remarkable performance improvements for tail classes and anomaly classes.
- Score: 62.20538402226608
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Class imbalance is a common challenge in real-world recognition tasks, where
the majority of classes have few samples, also known as tail classes. We
address this challenge with the perspective of generalization and empirically
find that the promising Sharpness-Aware Minimization (SAM) fails to address
generalization issues under the class-imbalanced setting. Through investigating
this specific type of task, we identify that its generalization bottleneck
primarily lies in the severe overfitting for tail classes with limited training
data. To overcome this bottleneck, we leverage class priors to restrict the
generalization scope of the class-agnostic SAM and propose a class-aware
smoothness optimization algorithm named Imbalanced-SAM (ImbSAM). With the
guidance of class priors, our ImbSAM specifically improves generalization
targeting tail classes. We also verify the efficacy of ImbSAM on two
prototypical applications of class-imbalanced recognition: long-tailed
classification and semi-supervised anomaly detection, where our ImbSAM
demonstrates remarkable performance improvements for tail classes and anomaly classes.
Our code implementation is available at
https://github.com/cool-xuan/Imbalanced_SAM.
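Based on the abstract, the core idea is to apply SAM's sharpness-aware perturbation only to the loss of tail classes, while head classes are optimized with a plain gradient step. The following is a minimal sketch of that idea on a toy imbalanced logistic-regression problem; the loss split, hyperparameters, and function names are illustrative assumptions, not the paper's actual implementation (see the linked repository for that).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy imbalanced binary problem: class +1 is a "head" class (100 samples),
# class -1 a "tail" class (5 samples).
X = np.vstack([rng.normal(+1.0, size=(100, 2)), rng.normal(-1.0, size=(5, 2))])
y = np.array([+1.0] * 100 + [-1.0] * 5)
is_tail = y == -1.0

def loss(w, X, y):
    # Mean logistic loss.
    return np.mean(np.log1p(np.exp(-y * (X @ w))))

def grad(w, X, y):
    # Gradient of the mean logistic loss.
    s = 1.0 / (1.0 + np.exp(y * (X @ w)))
    return -(X * (y * s)[:, None]).mean(axis=0)

def imbsam_step(w, lr=0.1, rho=0.05):
    # Head classes: ordinary gradient (no sharpness term).
    g_head = grad(w, X[~is_tail], y[~is_tail])
    # Tail classes: SAM-style ascent step computed from the tail loss only.
    g_tail = grad(w, X[is_tail], y[is_tail])
    eps = rho * g_tail / (np.linalg.norm(g_tail) + 1e-12)
    g_tail_sam = grad(w + eps, X[is_tail], y[is_tail])
    return w - lr * (g_head + g_tail_sam)

w = np.zeros(2)
for _ in range(200):
    w = imbsam_step(w)
```

The sketch restricts the expensive sharpness-aware term to the tail classes, which is where the paper locates the overfitting bottleneck.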
Related papers
- Friendly Sharpness-Aware Minimization [62.57515991835801]
Sharpness-Aware Minimization (SAM) has been instrumental in improving deep neural network training by minimizing both training loss and loss sharpness.
We investigate the key role of batch-specific gradient noise within the adversarial perturbation, i.e., the current minibatch gradient.
By decomposing the adversarial perturbation into full-gradient and batch-specific noise components, we find that relying solely on the full gradient degrades generalization, while excluding it improves performance.
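The decomposition described above can be sketched as follows: subtract an estimate of the full gradient from the minibatch gradient and build SAM's perturbation from the remaining batch-specific noise. The function name, the scaling factor, and the use of a running estimate are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def noise_only_perturbation(g_batch, g_full_est, rho=0.05, lam=1.0):
    """SAM-style perturbation built from batch-specific gradient noise.

    g_batch: current minibatch gradient.
    g_full_est: estimate of the full (expected) gradient, e.g. a running average.
    """
    noise = g_batch - lam * g_full_est          # remove the full-gradient component
    return rho * noise / (np.linalg.norm(noise) + 1e-12)

# Example: the perturbation keeps only the direction orthogonal to the
# shared full-gradient component present in every batch.
p = noise_only_perturbation(np.array([1.0, 1.0]), np.array([1.0, 0.0]))
```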
arXiv Detail & Related papers (2024-03-19T01:39:33Z) - WeakSAM: Segment Anything Meets Weakly-supervised Instance-level
Recognition [40.711009448103354]
Weakly supervised visual recognition using inexact supervision is a critical yet challenging learning problem.
This paper introduces WeakSAM and solves weakly-supervised object detection (WSOD) and segmentation by utilizing the pre-learned world knowledge contained in a vision foundation model, i.e., the Segment Anything Model (SAM).
Our results indicate that WeakSAM significantly surpasses previous state-of-the-art methods in WSOD and WSIS benchmarks with large margins, i.e. average improvements of 7.4% and 8.5%, respectively.
arXiv Detail & Related papers (2024-02-22T18:59:24Z) - Uncertainty-guided Boundary Learning for Imbalanced Social Event
Detection [64.4350027428928]
We propose a novel uncertainty-guided class imbalance learning framework for imbalanced social event detection tasks.
Our model significantly improves social event representation and classification tasks in almost all classes, especially those uncertain ones.
arXiv Detail & Related papers (2023-10-30T03:32:04Z) - Normalization Layers Are All That Sharpness-Aware Minimization Needs [53.799769473526275]
Sharpness-aware minimization (SAM) was proposed to reduce sharpness of minima.
We show that perturbing only the affine normalization parameters (typically comprising 0.1% of the total parameters) in the adversarial step of SAM can outperform perturbing all of the parameters.
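A minimal sketch of the finding above: in SAM's ascent step, zero out the gradient for all parameters except the normalization layers' affine parameters before normalizing and scaling. The dictionary-of-arrays representation and the `is_norm_param` predicate are illustrative assumptions, not the paper's code.

```python
import numpy as np

def norm_only_perturbation(grads, is_norm_param, rho=0.05):
    """SAM ascent step restricted to affine normalization parameters.

    grads: dict mapping parameter names to gradient arrays.
    is_norm_param: predicate selecting normalization-layer parameters.
    """
    # Mask out everything except normalization parameters.
    masked = {k: (g if is_norm_param(k) else np.zeros_like(g))
              for k, g in grads.items()}
    # Normalize over the masked gradients only, then scale by rho.
    norm = np.sqrt(sum(np.sum(g * g) for g in masked.values())) + 1e-12
    return {k: rho * g / norm for k, g in masked.items()}

# Example: only the batch-norm scale parameter receives a perturbation.
grads = {"conv.weight": np.ones(4), "bn.gamma": np.array([3.0, 4.0])}
p = norm_only_perturbation(grads, lambda k: k.startswith("bn"))
```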
arXiv Detail & Related papers (2023-06-07T08:05:46Z) - Sharpness-Aware Minimization Revisited: Weighted Sharpness as a
Regularization Term [4.719514928428503]
We propose a more general method, called WSAM, by incorporating sharpness as a regularization term.
We prove its generalization bound through the combination of PAC and Bayes-PAC techniques.
The results demonstrate that WSAM achieves improved generalization, or is at least highly competitive, compared to the vanilla, SAM and its variants.
arXiv Detail & Related papers (2023-05-25T08:00:34Z) - Invariant Feature Learning for Generalized Long-Tailed Classification [63.0533733524078]
We introduce Generalized Long-Tailed classification (GLT) to jointly consider both kinds of imbalances.
We argue that most class-wise LT methods degenerate in our proposed two benchmarks: ImageNet-GLT and MSCOCO-GLT.
We propose an Invariant Feature Learning (IFL) method as the first strong baseline for GLT.
arXiv Detail & Related papers (2022-07-19T18:27:42Z) - Towards Understanding Sharpness-Aware Minimization [27.666483899332643]
We argue that the existing justifications for the success of Sharpness-Aware Minimization (SAM) are based on a PAC-Bayes generalization bound.
We theoretically analyze its implicit bias for diagonal linear networks.
We show that fine-tuning a standard model with SAM can yield significant improvements even for non-sharp networks.
arXiv Detail & Related papers (2022-06-13T15:07:32Z) - Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on CIFAR-10LT, CIFAR-100LT and Webvision datasets, observing that Prototypical obtains substantial improvements compared with state-of-the-art methods.
arXiv Detail & Related papers (2021-10-22T01:55:01Z) - Exploring Classification Equilibrium in Long-Tailed Object Detection [29.069986049436157]
We propose to use the mean classification score to indicate the classification accuracy for each category during training.
We balance the classification via an Equilibrium Loss (EBL) and a Memory-augmented Feature Sampling (MFS) method.
It improves the detection performance of tail classes by 15.6 AP, and outperforms the most recent long-tailed object detectors by more than 1 AP.
arXiv Detail & Related papers (2021-08-17T08:39:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.