Learning Imbalanced Datasets with Maximum Margin Loss
- URL: http://arxiv.org/abs/2206.05380v1
- Date: Sat, 11 Jun 2022 00:21:41 GMT
- Title: Learning Imbalanced Datasets with Maximum Margin Loss
- Authors: Haeyong Kang, Thang Vu, and Chang D. Yoo
- Abstract summary: A learning algorithm referred to as Maximum Margin (MM) is proposed to address the class-imbalanced data learning issue.
We design a new Maximum Margin (MM) loss function, motivated by minimizing a margin-based generalization bound by shifting the decision boundary.
- Score: 21.305656747991026
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A learning algorithm referred to as Maximum Margin (MM) is proposed to
address the class-imbalanced data learning issue: a trained model tends to
predict majority classes rather than minority ones, so underfitting on the
minority classes appears to be one of the main obstacles to generalization.
To generalize well on the minority classes, we design a new Maximum Margin (MM)
loss function, motivated by minimizing a margin-based generalization bound
through a shifted decision boundary. The theoretically principled
label-distribution-aware margin (LDAM) loss has been applied successfully with
prior strategies such as re-weighting or re-sampling, together with an
effective training schedule; however, a maximum margin loss function has not
yet been investigated in that setting. In this study, we evaluate two types of
hard maximum-margin-based decision boundary shift under LDAM's training
schedule on artificially imbalanced CIFAR-10/100 for a fair comparison of
their effectiveness.
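For concreteness, here is a minimal PyTorch sketch of an LDAM-style class-dependent margin loss, where each class margin scales as n_j^(-1/4) and the true-class logit is shifted down before the softmax. It illustrates the boundary-shifting family the abstract refers to; the paper's exact MM loss and hyperparameters may differ.

```python
import torch
import torch.nn.functional as F

class ClassMarginLoss(torch.nn.Module):
    """LDAM-style margin loss: subtract a per-class margin from the
    true-class logit before the softmax, so minority classes are pushed
    to learn larger decision margins."""

    def __init__(self, class_counts, max_margin=0.5, scale=30.0):
        super().__init__()
        counts = torch.tensor(class_counts, dtype=torch.float32)
        margins = 1.0 / counts.pow(0.25)          # margin_j ~ n_j^(-1/4)
        self.margins = margins * (max_margin / margins.max())
        self.scale = scale

    def forward(self, logits, target):
        # Shift only the true-class logit down by its class margin.
        shift = self.margins.to(logits.device)[target].unsqueeze(1)
        adjusted = logits.scatter_add(1, target.unsqueeze(1), -shift)
        return F.cross_entropy(self.scale * adjusted, target)
```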
Related papers
- A Unified Generalization Analysis of Re-Weighting and Logit-Adjustment for Imbalanced Learning [129.63326990812234]
We propose a technique named data-dependent contraction to capture how modified losses handle different classes.
On top of this technique, a fine-grained generalization bound is established for imbalanced learning, which helps demystify re-weighting and logit adjustment (both sketched after this entry).
arXiv Detail & Related papers (2023-10-07T09:15:08Z)
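As a concrete reference for the two strategies this analysis covers, the following sketch shows standard inverse-frequency re-weighting and additive logit adjustment (adding tau * log prior to the logits); both are textbook formulations, not the paper's bound itself.

```python
import torch
import torch.nn.functional as F

def reweighted_ce(logits, target, class_counts):
    """Re-weighting: scale each sample's loss by the inverse frequency
    of its class, so minority-class errors count more."""
    counts = torch.tensor(class_counts, dtype=torch.float32,
                          device=logits.device)
    weights = counts.sum() / (len(counts) * counts)   # inverse frequency
    return F.cross_entropy(logits, target, weight=weights)

def logit_adjusted_ce(logits, target, class_counts, tau=1.0):
    """Logit adjustment: add tau * log(prior_j) to logit j, shifting
    decision boundaries in favor of minority classes at test time."""
    counts = torch.tensor(class_counts, dtype=torch.float32,
                          device=logits.device)
    log_prior = torch.log(counts / counts.sum())
    return F.cross_entropy(logits + tau * log_prior, target)
```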
- Easy Learning from Label Proportions [17.71834385754893]
EasyLLP is a flexible and simple-to-implement debiasing approach based on aggregate labels.
Our technique allows us to accurately estimate the expected loss of an arbitrary model at an individual level (a generic label-proportion baseline is sketched after this entry).
arXiv Detail & Related papers (2023-02-06T20:41:38Z)
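To make the learning-from-label-proportions setting concrete, here is a sketch of the simplest surrogate, which matches each bag's mean prediction to its known label proportion. This is a generic LLP baseline for orientation only, not EasyLLP's debiased estimator.

```python
import torch
import torch.nn.functional as F

def bag_proportion_loss(logits, bag_proportion):
    """Generic LLP baseline: penalize the gap between the mean predicted
    positive probability over a bag and the bag's known label proportion.

    logits:         (bag_size,) raw scores for the positive class
    bag_proportion: scalar in [0, 1], fraction of positives in the bag
    """
    mean_pred = torch.sigmoid(logits).mean().clamp(1e-6, 1 - 1e-6)
    target = torch.as_tensor(bag_proportion, dtype=mean_pred.dtype,
                             device=logits.device)
    # Binary cross-entropy between aggregate prediction and proportion.
    return F.binary_cross_entropy(mean_pred, target)
```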
- Margin-Based Few-Shot Class-Incremental Learning with Class-Level Overfitting Mitigation [19.975435754433754]
Few-shot class-incremental learning (FSCIL) is designed to incrementally recognize novel classes with only a few training samples.
A well-known modification to base-class training is to apply a margin to the base-class classification.
We propose a novel margin-based FSCIL method that mitigates the class-level overfitting (CO) problem by constraining the pattern learning process with the margin-based patterns themselves.
arXiv Detail & Related papers (2022-10-10T09:45:53Z)
- Learning Towards the Largest Margins [83.7763875464011]
A loss function should promote the largest possible margins for both classes and samples.
Not only does this principled framework offer new perspectives to understand and interpret existing margin-based losses, but it can guide the design of new tools.
arXiv Detail & Related papers (2022-06-23T10:03:03Z)
- ELM: Embedding and Logit Margins for Long-Tail Learning [70.19006872113862]
Long-tail learning is the problem of learning under skewed label distributions.
We present Embedding and Logit Margins (ELM), a unified approach to enforce margins in logit space.
The ELM method is shown to perform well empirically and to produce tighter tail-class embeddings.
arXiv Detail & Related papers (2022-04-27T21:53:50Z)
- Learning with Multiclass AUC: Theory and Algorithms [141.63211412386283]
Area under the ROC curve (AUC) is a well-known ranking metric for problems such as imbalanced learning and recommender systems.
In this paper, we make an early attempt at learning multiclass scoring functions by optimizing multiclass AUC metrics (a pairwise surrogate is sketched after this entry).
arXiv Detail & Related papers (2021-07-28T05:18:10Z)
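The following sketch shows one common one-vs-rest surrogate for multiclass AUC, a squared-hinge pairwise ranking loss; the exact surrogates analyzed in the paper may differ, so treat this as an illustrative assumption.

```python
import torch

def ovr_auc_surrogate(scores, target, margin=1.0):
    """One-vs-rest AUC surrogate: for each class c, every positive score
    should rank above every negative score by at least `margin`;
    violations are penalized with a squared hinge.

    scores: (batch, num_classes), target: (batch,) integer labels
    """
    total, pairs = scores.new_zeros(()), 0
    for c in range(scores.size(1)):
        pos = scores[target == c, c]       # class-c scores of positives
        neg = scores[target != c, c]       # class-c scores of the rest
        if len(pos) == 0 or len(neg) == 0:
            continue
        diff = pos.unsqueeze(1) - neg.unsqueeze(0)   # all pos/neg pairs
        total = total + torch.clamp(margin - diff, min=0).pow(2).sum()
        pairs += diff.numel()
    return total / max(pairs, 1)
```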
- Distribution of Classification Margins: Are All Data Equal? [61.16681488656473]
We motivate theoretically and show empirically that the area under the curve of the margin distribution on the training set is in fact a good measure of generalization (see the sketch after this entry).
The resulting subset of "high capacity" features is not consistent across different training runs.
arXiv Detail & Related papers (2021-07-21T16:41:57Z)
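To make the quantity concrete, here is a small sketch that computes per-sample multiclass margins f_y(x) - max_{j!=y} f_j(x) and integrates the sorted-margin curve as a simple proxy for an area-under-the-margin-distribution measure; the paper's margin normalization is omitted here.

```python
import torch

def margin_distribution_area(scores, target):
    """Compute per-sample margins (true-class score minus best other
    score) and a trapezoidal area under the sorted-margin curve,
    a simple proxy for the margin-distribution measure."""
    true = scores.gather(1, target.unsqueeze(1)).squeeze(1)
    other = scores.clone()
    other.scatter_(1, target.unsqueeze(1), float("-inf"))
    margins = true - other.max(dim=1).values          # (batch,)
    sorted_m, _ = torch.sort(margins)
    quantiles = torch.linspace(0, 1, len(sorted_m), device=scores.device)
    return torch.trapz(sorted_m, quantiles), margins
```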
- Minimax Classification with 0-1 Loss and Performance Guarantees [4.812718493682455]
Supervised classification techniques use training samples to find classification rules with small expected 0-1 loss.
Conventional methods achieve efficient learning and out-of-sample generalization by minimizing surrogate losses over specific families of rules.
This paper presents minimax risk classifiers (MRCs) that do not rely on a choice of surrogate loss and family of rules.
arXiv Detail & Related papers (2020-10-15T18:11:28Z)
- Negative Margin Matters: Understanding Margin in Few-shot Classification [72.85978953262004]
This paper introduces a negative margin loss into metric-learning-based few-shot learning methods.
The negative margin loss significantly outperforms the regular softmax loss and achieves state-of-the-art accuracy on three standard few-shot classification benchmarks (see the sketch after this entry).
arXiv Detail & Related papers (2020-03-26T17:59:05Z)
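For reference, a minimal sketch of the additive cosine-margin softmax this line of work builds on, where a margin m is subtracted from the true-class similarity; the paper's point is that choosing m < 0 can transfer better to novel few-shot classes. The scale and margin values below are illustrative.

```python
import torch
import torch.nn.functional as F

def margin_softmax_loss(features, weights, target, margin=-0.1, scale=10.0):
    """Cosine softmax with an additive margin on the true class.
    A positive margin tightens base classes; a negative margin (as the
    paper advocates for few-shot transfer) relaxes them.

    features: (batch, dim), weights: (num_classes, dim)
    """
    cos = F.normalize(features, dim=1) @ F.normalize(weights, dim=1).t()
    one_hot = F.one_hot(target, num_classes=cos.size(1)).to(cos.dtype)
    logits = scale * (cos - margin * one_hot)   # shift true-class logit
    return F.cross_entropy(logits, target)
```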