Negative Margin Matters: Understanding Margin in Few-shot Classification
- URL: http://arxiv.org/abs/2003.12060v1
- Date: Thu, 26 Mar 2020 17:59:05 GMT
- Title: Negative Margin Matters: Understanding Margin in Few-shot Classification
- Authors: Bin Liu, Yue Cao, Yutong Lin, Qi Li, Zheng Zhang, Mingsheng Long, Han Hu
- Abstract summary: This paper introduces a negative margin loss to metric learning based few-shot learning methods.
The negative margin loss significantly outperforms the regular softmax loss and achieves state-of-the-art accuracy on three standard few-shot classification benchmarks.
- Score: 72.85978953262004
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces a negative margin loss to metric learning based
few-shot learning methods. The negative margin loss significantly outperforms
regular softmax loss, and achieves state-of-the-art accuracy on three standard
few-shot classification benchmarks with few bells and whistles. These results
are contrary to the common practice in the metric learning field that the
margin should be zero or positive. To understand why the negative margin loss performs
well for few-shot classification, we analyze the discriminability of
learned features w.r.t. different margins for training and novel classes, both
empirically and theoretically. We find that although negative margin reduces
the feature discriminability for training classes, it may also avoid falsely
mapping samples of the same novel class to multiple peaks or clusters, and thus
benefit the discrimination of novel classes. Code is available at
https://github.com/bl0/negative-margin.few-shot.
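A minimal sketch of the core idea, assuming the standard metric-learning form of a cosine-softmax classifier with an additive margin on the true-class similarity; the scale and margin values are illustrative, and the authors' exact implementation is the one in the linked repository.

```python
import torch
import torch.nn.functional as F

def margin_softmax_loss(features, weights, labels, margin=-0.3, scale=10.0):
    """Cosine-softmax loss with an additive margin on the true-class logit.

    features: (N, D) embeddings; weights: (C, D) class weight vectors.
    A negative margin *raises* the true-class logit instead of lowering it,
    which is the regime studied in the paper.
    """
    # Cosine similarities between normalized features and class weights.
    logits = F.normalize(features, dim=1) @ F.normalize(weights, dim=1).t()
    # Subtract the margin only from the ground-truth-class logit.
    one_hot = F.one_hot(labels, num_classes=weights.size(0)).float()
    adjusted = logits - margin * one_hot
    return F.cross_entropy(scale * adjusted, labels)

# Toy usage with random tensors.
feats, protos = torch.randn(8, 64), torch.randn(10, 64)
y = torch.randint(0, 10, (8,))
loss = margin_softmax_loss(feats, protos, y)
```

With margin < 0 the discriminability on the training classes drops, but, per the paper's analysis, samples of a novel class are less likely to be split across multiple peaks or clusters.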
Related papers
- Large Margin Discriminative Loss for Classification [3.3975558777609915]
We introduce a novel discriminative loss function with large margin in the context of Deep Learning.
This loss boosts the discriminative power of neural nets, represented by intra-class compactness and inter-class separability.
arXiv Detail & Related papers (2024-05-28T18:10:45Z)
- Unified Binary and Multiclass Margin-Based Classification [27.28814893730265]
We show that a broad range of multiclass loss functions, including many popular ones, can be expressed in the relative margin form.
We then analyze the class of Fenchel-Young losses, and expand the set of these losses that are known to be classification-calibrated.
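As a concrete instance of the relative-margin form, plain softmax cross-entropy can be written solely in terms of the differences z_y - z_j; the snippet below checks this identity numerically (the helper name is illustrative, not from the paper).

```python
import torch
import torch.nn.functional as F

def cross_entropy_via_relative_margins(logits, labels):
    """Softmax cross-entropy expressed through relative margins z_y - z_j."""
    true = logits.gather(1, labels.unsqueeze(1))   # z_y, shape (N, 1)
    rel_margins = true - logits                    # z_y - z_j, shape (N, C)
    # Per-sample loss: log(1 + sum_{j != y} exp(-(z_y - z_j))).
    return torch.logsumexp(-rel_margins, dim=1).mean()

logits, labels = torch.randn(4, 5), torch.randint(0, 5, (4,))
assert torch.allclose(cross_entropy_via_relative_margins(logits, labels),
                      F.cross_entropy(logits, labels))
```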
arXiv Detail & Related papers (2023-11-29T16:24:32Z)
- Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning [59.44422468242455]
We propose a novel method dubbed ShrinkMatch to learn from uncertain samples.
For each uncertain sample, it adaptively seeks a shrunk class space, which merely contains the original top-1 class.
We then impose a consistency regularization between a pair of strongly and weakly augmented samples in the shrunk space to strive for discriminative representations.
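The summary does not spell out the shrinking rule, so the sketch below is only one plausible reading: the strongest competing classes are dropped until the top-1 class dominates the renormalized weak-augmentation prediction, and the strongly augmented view is then supervised within that shrunk space. The function name and threshold are assumptions, not ShrinkMatch's exact procedure.

```python
import torch
import torch.nn.functional as F

def shrunk_space_consistency(weak_logits, strong_logits, tau=0.95):
    """Hypothetical shrunk-class-space consistency for one uncertain sample."""
    probs = weak_logits.softmax(dim=0)
    order = probs.argsort(descending=True)   # classes, most to least confident
    top1 = order[0]
    # Drop the strongest competitors until the top-1 class dominates
    # the probability mass of the remaining (shrunk) class space.
    for k in range(1, len(order) + 1):
        keep = torch.cat([top1.unsqueeze(0), order[k:]])
        if probs[top1] / probs[keep].sum() >= tau:
            break
    # Cross-entropy toward the top-1 pseudo-label, computed in the shrunk
    # space; the top-1 class sits at index 0 of `keep`.
    target = torch.zeros(1, dtype=torch.long)
    return F.cross_entropy(strong_logits[keep].unsqueeze(0), target)
```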
arXiv Detail & Related papers (2023-08-13T14:05:24Z)
- Learning Towards the Largest Margins [83.7763875464011]
The loss function should promote the largest possible margins for both classes and samples.
Not only does this principled framework offer new perspectives to understand and interpret existing margin-based losses, but it can guide the design of new tools.
arXiv Detail & Related papers (2022-06-23T10:03:03Z)
- Learning Imbalanced Datasets with Maximum Margin Loss [21.305656747991026]
A learning algorithm referred to as Maximum Margin (MM) is proposed to address the problem of learning from class-imbalanced data.
The new Maximum Margin (MM) loss function is motivated by minimizing a margin-based generalization bound by shifting the decision boundary.
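The exact MM loss is not given in this summary; one common way to shift the decision boundary toward minority classes is a per-class margin that grows as the class count shrinks (e.g. proportional to n_j^{-1/4}, as in LDAM-style losses). The sketch below illustrates that pattern under this assumption rather than reproducing the authors' loss.

```python
import torch
import torch.nn.functional as F

def class_margin_loss(logits, labels, class_counts, coef=1.0, scale=10.0):
    """Softmax loss whose per-class margin is larger for rarer classes.

    margins[j] = coef / class_counts[j] ** 0.25 is one common schedule;
    the MM paper's exact margin may differ.
    """
    margins = coef / class_counts.float().pow(0.25)        # (C,)
    one_hot = F.one_hot(labels, num_classes=logits.size(1)).float()
    # Subtract each sample's class margin from its ground-truth logit only.
    adjusted = logits - one_hot * margins[labels].unsqueeze(1)
    return F.cross_entropy(scale * adjusted, labels)
```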
arXiv Detail & Related papers (2022-06-11T00:21:41Z)
- ELM: Embedding and Logit Margins for Long-Tail Learning [70.19006872113862]
Long-tail learning is the problem of learning under skewed label distributions.
We present Embedding and Logit Margins (ELM), a unified approach to enforce margins in logit space.
The ELM method is shown to perform well empirically, and results in tighter tail class embeddings.
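A standard way to enforce label-frequency-dependent margins in logit space is a logit-adjustment-style offset; the sketch below shows only that generic pattern, since ELM's full formulation (which also constrains the embeddings) is not detailed in this summary.

```python
import torch
import torch.nn.functional as F

def logit_margin_loss(logits, labels, class_priors, tau=1.0):
    """Cross-entropy with frequency-dependent offsets added to the logits.

    Adding tau * log(prior_j) to every logit induces larger effective
    margins for tail classes against head classes; the embedding-margin
    part of ELM is omitted here.
    """
    adjusted = logits + tau * class_priors.log()   # broadcast over the batch
    return F.cross_entropy(adjusted, labels)
```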
arXiv Detail & Related papers (2022-04-27T21:53:50Z)
- MSE Loss with Outlying Label for Imbalanced Classification [10.305130700118399]
We propose a mean squared error (MSE) loss with outlying labels for class-imbalanced classification.
The MSE loss makes it possible to equalize the amount of back-propagation across classes and to learn a feature space that, as in metric learning, reflects the relationships between classes.
It can also produce a feature space that separates high-difficulty classes from low-difficulty classes.
arXiv Detail & Related papers (2021-07-06T05:17:00Z)
- Margin-Based Transfer Bounds for Meta Learning with Deep Feature Embedding [67.09827634481712]
We leverage margin theory and statistical learning theory to establish three margin-based transfer bounds for meta-learning-based multiclass classification (MLMC).
These bounds reveal that the expected error of a given classification algorithm for a future task can be estimated with the average empirical error on a finite number of previous tasks.
Experiments on three benchmarks show that these margin-based models still achieve competitive performance.
arXiv Detail & Related papers (2020-12-02T23:50:51Z)
- Rethinking preventing class-collapsing in metric learning with margin-based losses [81.22825616879936]
Metric learning seeks embeddings where visually similar instances are close and dissimilar instances are apart.
Margin-based losses, however, tend to project all samples of a class onto a single point in the embedding space.
We propose a simple modification to the embedding losses such that each sample selects its nearest same-class counterpart in a batch.
arXiv Detail & Related papers (2020-06-09T09:59:25Z)
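A minimal sketch of the nearest-counterpart idea from the last entry, assuming a generic contrastive-style embedding loss over a batch; the function and variable names are illustrative, not the paper's.

```python
import torch
import torch.nn.functional as F

def nearest_positive_loss(embeddings, labels, temperature=0.1):
    """Each anchor's positive is its *nearest* same-class sample in the batch,
    rather than all same-class samples, to discourage class collapse."""
    emb = F.normalize(embeddings, dim=1)
    sim = emb @ emb.t()                                    # (N, N) cosine similarities
    eye = torch.eye(sim.size(0), dtype=torch.bool)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    # Most similar same-class counterpart, excluding the anchor itself.
    pos_sim = sim.masked_fill(~same | eye, float("-inf")).max(dim=1).values
    # Softmax denominator over all non-anchor similarities.
    denom = torch.logsumexp(sim.masked_fill(eye, float("-inf")) / temperature, dim=1)
    loss = -(pos_sim / temperature - denom)
    # Ignore anchors with no same-class counterpart in the batch.
    valid = torch.isfinite(pos_sim)
    return loss[valid].mean()
```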