Asymmetric Loss Functions for Learning with Noisy Labels
- URL: http://arxiv.org/abs/2106.03110v1
- Date: Sun, 6 Jun 2021 12:52:48 GMT
- Title: Asymmetric Loss Functions for Learning with Noisy Labels
- Authors: Xiong Zhou, Xianming Liu, Junjun Jiang, Xin Gao, Xiangyang Ji
- Abstract summary: We propose a new class of loss functions, namely asymmetric loss functions, which are robust to learning with noisy labels for various types of noise.
Experimental results on benchmark datasets demonstrate that asymmetric loss functions can outperform state-of-the-art methods.
- Score: 82.50250230688388
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robust loss functions are essential for training deep neural networks with
better generalization power in the presence of noisy labels. Symmetric loss
functions are confirmed to be robust to label noise. However, the symmetric
condition is overly restrictive. In this work, we propose a new class of loss
functions, namely asymmetric loss functions, which are robust to
learning with noisy labels for various types of noise. We investigate general
theoretical properties of asymmetric loss functions, including classification
calibration, excess risk bound, and noise tolerance. Meanwhile, we introduce
the asymmetry ratio to measure the asymmetry of a loss function. The empirical
results show that a higher asymmetry ratio provides better noise tolerance.
Moreover, we modify several commonly-used loss functions and establish the
necessary and sufficient conditions for them to be asymmetric. Experimental
results on benchmark datasets demonstrate that asymmetric loss functions can
outperform state-of-the-art methods. The code is available at
https://github.com/hitcszx/ALFs.
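As a concrete illustration, below is a minimal sketch of one asymmetric loss in the spirit of the paper, the asymmetric generalized cross entropy (AGCE). The formula and the default values of `a` and `q` reflect our reading of the paper and its repository, so treat them as assumptions and verify against the linked code.

```python
import torch
import torch.nn.functional as F

def agce_loss(logits, targets, a=1.0, q=2.0):
    """Sketch of Asymmetric Generalized Cross Entropy (AGCE):
    l(p_y) = ((a + 1)**q - (a + p_y)**q) / q,
    where p_y is the softmax probability assigned to the labeled
    class. Larger p_y gives smaller loss; a > 0 and q > 0 shape the
    asymmetry. Illustrative, not the authors' reference code."""
    probs = F.softmax(logits, dim=1)
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    return (((a + 1) ** q - (a + p_y) ** q) / q).mean()

# Example usage on a random batch of 4 samples over 10 classes
loss = agce_loss(torch.randn(4, 10), torch.randint(0, 10, (4,)))
```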
Related papers
- EnsLoss: Stochastic Calibrated Loss Ensembles for Preventing Overfitting in Classification [1.3778851745408134]
We propose a novel ensemble method, namely EnsLoss, to combine loss functions within the empirical risk minimization framework.
We first transform the classification-calibration (CC) conditions of losses into conditions on loss derivatives, thereby bypassing the need for explicit loss functions.
We theoretically establish the statistical consistency of our approach and provide insights into its benefits.
arXiv Detail & Related papers (2024-09-02T02:40:42Z)
- Learning Layer-wise Equivariances Automatically using Gradients [66.81218780702125]
Convolutions encode equivariance symmetries into neural networks leading to better generalisation performance.
However, symmetries provide fixed hard constraints on the functions a network can represent; they need to be specified in advance and cannot be adapted.
Our goal is to allow flexible symmetry constraints that can automatically be learned from data using gradients.
arXiv Detail & Related papers (2023-10-09T20:22:43Z)
- Noise-Robust Loss Functions: Enhancing Bounded Losses for Large-Scale Noisy Data Learning [0.0]
Large annotated datasets inevitably contain noisy labels, which poses a major challenge for training deep neural networks as they easily memorize the labels.
Noise-robust loss functions have emerged as a notable strategy to counteract this issue, but it remains challenging to create a robust loss function which is not susceptible to underfitting.
We propose a novel method denoted as logit bias, which adds a real number $\epsilon$ to the logit at the position of the correct class.
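A minimal sketch of this idea, assuming a standard cross-entropy setup; the function name and the default value of epsilon are ours, not the paper's:

```python
import torch
import torch.nn.functional as F

def logit_bias_loss(logits, targets, epsilon=1.0):
    """Cross entropy computed after adding a real number `epsilon`
    to the logit at the position of the correct class, as the summary
    above describes. Hypothetical re-implementation."""
    biased = logits.clone()
    biased[torch.arange(logits.size(0)), targets] += epsilon
    return F.cross_entropy(biased, targets)

loss = logit_bias_loss(torch.randn(4, 10), torch.randint(0, 10, (4,)))
```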
arXiv Detail & Related papers (2023-06-08T18:38:55Z)
- Robust T-Loss for Medical Image Segmentation [56.524774292536264]
This paper presents a new robust loss function, the T-Loss, for medical image segmentation.
The proposed loss is based on the negative log-likelihood of the Student-t distribution and can effectively handle outliers in the data.
Our experiments show that the T-Loss outperforms traditional loss functions in terms of Dice scores on two public medical datasets.
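For intuition, here is a generic Student-t negative log-likelihood on per-pixel residuals; it shows why heavy tails damp the influence of outliers, but it is only a sketch and may not match the paper's exact parameterization:

```python
import torch

def student_t_nll(residual, nu=1.0, sigma=1.0):
    """Negative log-likelihood of a Student-t distribution with `nu`
    degrees of freedom and scale `sigma`, evaluated on residuals
    (prediction minus target). The log1p term grows logarithmically,
    so outlier pixels are penalized far less than under a Gaussian."""
    nu = torch.as_tensor(nu)
    const = (torch.lgamma((nu + 1) / 2) - torch.lgamma(nu / 2)
             - 0.5 * torch.log(nu * torch.pi * sigma ** 2))
    return -(const - (nu + 1) / 2
             * torch.log1p(residual ** 2 / (nu * sigma ** 2)))

# Averaging over all pixels would give a segmentation-style loss.
print(student_t_nll(torch.tensor([0.1, 5.0])))
```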
arXiv Detail & Related papers (2023-06-01T14:49:40Z)
- Do We Need to Penalize Variance of Losses for Learning with Label Noise? [91.38888889609002]
We find that the variance should be increased for the problem of learning with noisy labels.
By exploiting the label noise transition matrix, regularizers can be easily designed to increase the variance of losses.
Empirically, the proposed method, which increases the variance of losses, significantly improves the generalization ability of baselines on both synthetic and real-world datasets.
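One illustrative reading, as a sketch only: add a term that rewards spread among per-sample losses in a batch. The actual regularizers in the paper are derived from the label noise transition matrix, so the form and sign below are assumptions.

```python
import torch
import torch.nn.functional as F

def variance_encouraging_loss(logits, targets, lam=0.1):
    """Mean cross entropy minus `lam` times the variance of the
    per-sample losses, i.e. a regularizer that *increases* loss
    variance as the summary suggests. Hypothetical sketch."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    return per_sample.mean() - lam * per_sample.var()
```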
arXiv Detail & Related papers (2022-01-30T06:19:08Z)
- Learning with Noisy Labels via Sparse Regularization [76.31104997491695]
Learning with noisy labels is an important task for training accurate deep neural networks.
Some commonly-used loss functions, such as Cross Entropy (CE), suffer from severe overfitting to noisy labels.
We introduce the sparse regularization strategy to approximate the one-hot constraint.
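A hedged sketch of how a sparsity term can approximate the one-hot constraint: an l_p penalty with p <= 1 on the softmax output is minimized exactly at one-hot vectors. The specific combination below is ours; the paper's loss may differ.

```python
import torch
import torch.nn.functional as F

def ce_with_sparse_regularization(logits, targets, lam=0.1, p=0.5):
    """Cross entropy plus an l_p (p <= 1) penalty on predictions.
    On the probability simplex, sum(q_i ** p) >= 1 with equality only
    at one-hot vectors, so the penalty pushes outputs toward one-hot,
    approximating the constraint mentioned above. Illustrative only."""
    probs = F.softmax(logits, dim=1)
    lp_term = probs.pow(p).sum(dim=1).mean()
    return F.cross_entropy(logits, targets) + lam * lp_term
```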
arXiv Detail & Related papers (2021-07-31T09:40:23Z)
- An Exploration into why Output Regularization Mitigates Label Noise [0.0]
Noise-robust losses are one of the more promising approaches for dealing with label noise.
We show that losses that incorporate an output regularization term, such as label smoothing and entropy, become symmetric as the regularization coefficient goes to infinity.
arXiv Detail & Related papers (2021-04-26T11:16:30Z)
- A Symmetric Loss Perspective of Reliable Machine Learning [87.68601212686086]
We review how a symmetric loss can yield robust classification from corrupted labels in balanced error rate (BER) minimization.
We demonstrate how the robust AUC method can benefit natural language processing in problems where we want to learn only from relevant keywords.
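For reference, the symmetric condition here means l(m) + l(-m) is a constant for every margin m; the sigmoid loss is a standard example, which the short check below verifies numerically:

```python
import torch

def sigmoid_loss(margin):
    """Sigmoid loss l(m) = sigmoid(-m), which satisfies the symmetric
    condition l(m) + l(-m) = 1 for all margins m."""
    return torch.sigmoid(-margin)

m = torch.randn(8)
print(sigmoid_loss(m) + sigmoid_loss(-m))  # prints all ones
```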
arXiv Detail & Related papers (2021-01-05T06:25:47Z)
- Normalized Loss Functions for Deep Learning with Noisy Labels [39.32101898670049]
We show that the commonly used Cross Entropy (CE) loss is not robust to noisy labels.
We propose a framework to build robust loss functions, called Active Passive Loss (APL).
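As a sketch of loss normalization in the spirit of APL's "active" component, here is normalized cross entropy, following the common definition of dividing the loss on the labeled class by the sum of losses over all classes; treat the exact form as an assumption:

```python
import torch
import torch.nn.functional as F

def normalized_cross_entropy(logits, targets):
    """Normalized CE: CE on the labeled class divided by the sum of
    CE values over all classes. The ratio is bounded in [0, 1], which
    is what makes normalized losses robust to noisy labels."""
    logp = F.log_softmax(logits, dim=1)
    ce_true = -logp.gather(1, targets.unsqueeze(1)).squeeze(1)
    ce_all = -logp.sum(dim=1)
    return (ce_true / ce_all).mean()
```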
arXiv Detail & Related papers (2020-06-24T08:25:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.