Optimizing for ROC Curves on Class-Imbalanced Data by Training over a Family of Loss Functions
- URL: http://arxiv.org/abs/2402.05400v2
- Date: Tue, 4 Jun 2024 20:03:25 GMT
- Title: Optimizing for ROC Curves on Class-Imbalanced Data by Training over a Family of Loss Functions
- Authors: Kelsey Lieberman, Shuai Yuan, Swarna Kamlam Ravindran, Carlo Tomasi
- Abstract summary: Training reliable classifiers under severe class imbalance is a challenging problem in computer vision.
Recent work has proposed techniques that mitigate the effects of training under imbalance by modifying the loss functions or optimization methods.
We propose training over a family of loss functions, instead of a single loss function.
- Score: 3.06506506650274
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although binary classification is a well-studied problem in computer vision, training reliable classifiers under severe class imbalance remains a challenging problem. Recent work has proposed techniques that mitigate the effects of training under imbalance by modifying the loss functions or optimization methods. While this work has led to significant improvements in the overall accuracy in the multi-class case, we observe that slight changes in hyperparameter values of these methods can result in highly variable performance in terms of Receiver Operating Characteristic (ROC) curves on binary problems with severe imbalance. To reduce the sensitivity to hyperparameter choices and train more general models, we propose training over a family of loss functions, instead of a single loss function. We develop a method for applying Loss Conditional Training (LCT) to an imbalanced classification problem. Extensive experiment results, on both CIFAR and Kaggle competition datasets, show that our method improves model performance and is more robust to hyperparameter choices. Code is available at https://github.com/klieberman/roc_lct.
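Loss Conditional Training, introduced by Dosovitskiy and Djolonga and adapted here to imbalanced classification, trains a single model over a family of losses by sampling the family's parameter each batch and conditioning the network on it. Below is a minimal PyTorch sketch of that idea, assuming a focal-loss family indexed by gamma and a model conditioned through FiLM layers; the paper's exact parameterization may differ, and the official implementation lives in the repository above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FiLM(nn.Module):
    """Feature-wise affine conditioning on the sampled loss parameter."""
    def __init__(self, num_features):
        super().__init__()
        self.scale = nn.Linear(1, num_features)
        self.shift = nn.Linear(1, num_features)

    def forward(self, h, lam):
        # lam: (batch, 1) value of the loss-family parameter
        return h * self.scale(lam) + self.shift(lam)

def focal_loss(logits, targets, gamma):
    """Binary focal loss; gamma indexes the loss family."""
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-ce)  # probability assigned to the true class
    return ((1.0 - p_t) ** gamma * ce).mean()

def lct_step(model, optimizer, x, y, gamma_range=(0.0, 5.0)):
    """One LCT training step: sample a loss from the family, condition
    the model on it, and optimize that loss (model signature assumed)."""
    gamma = torch.empty(1).uniform_(*gamma_range)
    lam = gamma.expand(x.size(0), 1)          # conditioning input
    logits = model(x, lam).squeeze(1)
    loss = focal_loss(logits, y.float(), gamma.item())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At evaluation time the conditioning value can be swept, so a single trained model covers a range of loss hyperparameters rather than committing to one.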
Related papers
- Training Over a Distribution of Hyperparameters for Enhanced Performance and Adaptability on Imbalanced Classification [3.06506506650274]
Loss Conditional Training (LCT) can be used to train reliable classifiers under severe class imbalance.
We show that LCT approximates the performance of several models and improves the overall performance of models on both CIFAR and real medical imaging applications.
arXiv Detail & Related papers (2024-10-04T16:47:11Z)
- Simplifying Neural Network Training Under Class Imbalance [77.39968702907817]
Real-world datasets are often highly class-imbalanced, which can adversely impact the performance of deep learning models.
The majority of research on training neural networks under class imbalance has focused on specialized loss functions, sampling techniques, or two-stage training procedures.
We demonstrate that simply tuning existing components of standard deep learning pipelines, such as the batch size, data augmentation, and label smoothing, can achieve state-of-the-art performance without any such specialized class imbalance methods.
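As an illustration of the kind of standard-pipeline tuning described above (the values below are placeholders, not the paper's tuned settings):

```python
import torch.nn as nn
from torchvision import transforms

# Candidate settings for ordinary pipeline components; the paper's claim
# is that tuning knobs like these, selected on a balanced validation
# metric, can match specialized class-imbalance methods.
batch_size = 128
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)  # label smoothing as a knob

train_transform = transforms.Compose([                # augmentation strength as a knob
    transforms.RandomResizedCrop(32, scale=(0.6, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
```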
arXiv Detail & Related papers (2023-12-05T05:52:44Z)
- Robust Learning with Progressive Data Expansion Against Spurious Correlation [65.83104529677234]
We study the learning process of a two-layer nonlinear convolutional neural network in the presence of spurious features.
Our analysis suggests that imbalanced data groups and easily learnable spurious features can lead to the dominance of spurious features during the learning process.
We propose a new training algorithm called PDE that efficiently enhances the model's robustness for better worst-group performance.
arXiv Detail & Related papers (2023-06-08T05:44:06Z)
- Online Hyperparameter Optimization for Class-Incremental Learning [99.70569355681174]
Class-incremental learning (CIL) aims to train a classification model while the number of classes increases phase-by-phase.
An inherent challenge of CIL is the stability-plasticity tradeoff: models should remain stable enough to retain old knowledge yet plastic enough to absorb new knowledge.
We propose an online learning method that can adaptively optimize this tradeoff without knowing the setting a priori.
arXiv Detail & Related papers (2023-01-11T17:58:51Z)
- Xtreme Margin: A Tunable Loss Function for Binary Classification Problems [0.0]
We provide an overview of a novel loss function, the Xtreme Margin loss.
Unlike the binary cross-entropy and hinge losses, this loss function gives researchers and practitioners flexibility in their training process.
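The paper defines the exact Xtreme Margin formulation; as a generic illustration of what a tunable binary loss looks like, here is a hinge loss with per-class margin hyperparameters (an assumption for illustration, not the paper's loss):

```python
import torch

def tunable_margin_loss(scores, y, m_pos=2.0, m_neg=1.0):
    """Hinge loss with tunable per-class margins; y takes values in {-1, +1}.
    A larger margin on the minority class pushes its boundary outward."""
    margins = torch.full_like(scores, m_neg)
    margins[y > 0] = m_pos
    return torch.clamp(margins - y * scores, min=0.0).mean()
```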
arXiv Detail & Related papers (2022-10-31T22:39:32Z)
- Phased Progressive Learning with Coupling-Regulation-Imbalance Loss for Imbalanced Classification [11.673344551762822]
Deep neural networks generally perform poorly with datasets that suffer from quantity imbalance and classification difficulty imbalance between different classes.
We propose a phased progressive learning schedule for smoothly transferring the training emphasis from representation learning to upper classifier training.
Our code will be open source soon.
arXiv Detail & Related papers (2022-05-24T14:46:39Z)
- AutoBalance: Optimized Loss Functions for Imbalanced Data [38.64606886588534]
We propose AutoBalance, a bi-level optimization framework that automatically designs a training loss function to optimize a blend of accuracy and fairness-seeking objectives.
Specifically, a lower-level problem trains the model weights, and an upper-level problem tunes the loss function by monitoring and optimizing the desired objective over the validation data.
Our loss design enables personalized treatment for classes/groups by employing a parametric cross-entropy loss and individualized data augmentation schemes.
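A simplified sketch of that bi-level structure using a one-step unrolled hypergradient (PyTorch 2.x `torch.func`; `delta` stands in for the per-class loss parameters, and the model is assumed buffer-free, e.g. a small MLP; the paper's actual optimization is richer):

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

def bilevel_step(model, delta, x_tr, y_tr, x_val, y_val,
                 inner_lr=0.1, outer_lr=0.01):
    # Lower level: a gradient step on the parametric training loss
    # (cross-entropy with per-class logit adjustments delta).
    params = dict(model.named_parameters())
    train_loss = F.cross_entropy(
        functional_call(model, params, (x_tr,)) + delta, y_tr)
    grads = torch.autograd.grad(train_loss, list(params.values()),
                                create_graph=True)
    updated = {k: p - inner_lr * g
               for (k, p), g in zip(params.items(), grads)}

    # Upper level: differentiate the validation objective through the
    # unrolled step to update the loss parameters.
    val_loss = F.cross_entropy(
        functional_call(model, updated, (x_val,)), y_val)
    delta_grad, = torch.autograd.grad(val_loss, delta)
    with torch.no_grad():
        delta -= outer_lr * delta_grad
    return val_loss.item()

# Usage sketch: delta = torch.zeros(num_classes, requires_grad=True)
```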
arXiv Detail & Related papers (2022-01-04T15:53:23Z)
- Mitigating Dataset Imbalance via Joint Generation and Classification [17.57577266707809]
Supervised deep learning methods are enjoying enormous success in many practical applications of computer vision.
However, marked performance degradation in the face of biases and imbalanced data calls the reliability of these methods into question.
We introduce a joint dataset repairment strategy that combines a neural network classifier with a Generative Adversarial Network (GAN).
We show that the combined training helps to improve the robustness of both the classifier and the GAN against severe class imbalance.
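A generic alternating-update sketch of such joint training (the architecture, loss weighting, and single minority-class index are assumptions, not the paper's exact scheme):

```python
import torch
import torch.nn.functional as F

def joint_step(G, D, C, opt_g, opt_d, opt_c, x, y, z_dim=64, minority=1):
    """One alternating update: the GAN synthesizes minority-class samples
    and the classifier trains on real plus synthetic data."""
    n = x.size(0)
    x_fake = G(torch.randn(n, z_dim))
    real, fake = torch.ones(n, 1), torch.zeros(n, 1)
    minority_y = torch.full((n,), minority)

    # Discriminator: separate real from generated samples.
    d_loss = (F.binary_cross_entropy_with_logits(D(x), real)
              + F.binary_cross_entropy_with_logits(D(x_fake.detach()), fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool D while producing samples C labels as minority.
    g_loss = (F.binary_cross_entropy_with_logits(D(x_fake), real)
              + F.cross_entropy(C(x_fake), minority_y))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # Classifier: real batch plus synthetic minority samples.
    c_loss = (F.cross_entropy(C(x), y)
              + F.cross_entropy(C(x_fake.detach()), minority_y))
    opt_c.zero_grad(); c_loss.backward(); opt_c.step()
```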
arXiv Detail & Related papers (2020-08-12T18:40:38Z)
- Dynamic R-CNN: Towards High Quality Object Detection via Dynamic Training [70.2914594796002]
We propose Dynamic R-CNN to adjust the label assignment criteria and the shape of the regression loss function during training.
Our method improves upon the ResNet-50-FPN baseline by 1.9% AP and 5.5% AP$_{90}$ on the MS COCO dataset with no extra overhead.
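Its Dynamic SmoothL1 component, for instance, shrinks the loss's beta as regression errors shrink; a simplified sketch (the paper sets beta from statistics of the smallest regression errors tracked over recent iterations):

```python
import torch

def smooth_l1(pred, target, beta):
    """SmoothL1 whose beta is adjusted during training rather than fixed."""
    diff = torch.abs(pred - target)
    return torch.where(diff < beta,
                       0.5 * diff ** 2 / beta,
                       diff - 0.5 * beta).mean()

def updated_beta(recent_errors):
    """Simplified stand-in for the paper's update rule: set beta from the
    median of regression errors collected over recent iterations."""
    return torch.cat(recent_errors).median().item()
```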
arXiv Detail & Related papers (2020-04-13T15:20:25Z)
- Learning Adaptive Loss for Robust Learning with Noisy Labels [59.06189240645958]
Robust loss minimization is an important strategy for handling the problem of learning with noisy labels.
We propose a meta-learning method capable of adaptively tuning the hyperparameters of robust loss functions.
Four kinds of SOTA robust loss functions are integrated into our algorithm, and experiments substantiate its general availability and effectiveness.
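For concreteness, one robust loss with a tunable hyperparameter of the kind such a meta-learner adapts is the generalized cross entropy of Zhang and Sabuncu (2018); whether it is among the four losses used here is not stated in this summary:

```python
import torch

def generalized_ce(logits, y, q=0.7):
    """Generalized cross entropy: q -> 0 recovers standard CE and q = 1
    gives MAE; q is the kind of hyperparameter a meta-learner would tune."""
    p = torch.softmax(logits, dim=1).gather(1, y.unsqueeze(1)).squeeze(1)
    return ((1.0 - p ** q) / q).mean()
```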
arXiv Detail & Related papers (2020-02-16T00:53:37Z)
- Identifying and Compensating for Feature Deviation in Imbalanced Deep Learning [59.65752299209042]
We investigate learning a ConvNet classifier under class-imbalanced data.
We find that a ConvNet significantly overfits the minority classes.
We propose to incorporate class-dependent temperatures (CDT) when training the ConvNet.
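A minimal sketch of CDT during training, assuming temperatures of the form a_j = (n_max / n_j)^gamma computed from per-class sample counts (gamma is a tuning parameter):

```python
import torch
import torch.nn.functional as F

def cdt_loss(logits, y, class_counts, gamma=0.3):
    """Cross entropy with class-dependent temperatures: logits of rarer
    classes are divided by larger temperatures during training to
    compensate for their feature deviation."""
    temps = (class_counts.max() / class_counts.float()) ** gamma
    return F.cross_entropy(logits / temps, y)
```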
arXiv Detail & Related papers (2020-01-06T03:52:11Z)