Uses and Abuses of the Cross-Entropy Loss: Case Studies in Modern Deep
Learning
- URL: http://arxiv.org/abs/2011.05231v1
- Date: Tue, 10 Nov 2020 16:44:35 GMT
- Title: Uses and Abuses of the Cross-Entropy Loss: Case Studies in Modern Deep
Learning
- Authors: Elliott Gordon-Rodriguez, Gabriel Loaiza-Ganem, Geoff Pleiss, John P.
Cunningham
- Abstract summary: We focus on the use of the categorical cross-entropy loss to model data that is not strictly categorical, but rather takes values on the simplex.
This practice is standard in neural network architectures with label smoothing and actor-mimic reinforcement learning, amongst others.
We propose probabilistically-inspired alternatives to these models, providing an approach that is more principled and theoretically appealing.
- Score: 29.473503894240096
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern deep learning is primarily an experimental science, in which empirical
advances occasionally come at the expense of probabilistic rigor. Here we focus
on one such example, namely the use of the categorical cross-entropy loss to
model data that is not strictly categorical, but rather takes values on the
simplex. This practice is standard in neural network architectures with label
smoothing and actor-mimic reinforcement learning, amongst others. Drawing on
the recently discovered continuous-categorical distribution, we propose
probabilistically-inspired alternatives to these models, providing an approach
that is more principled and theoretically appealing. Through careful
experimentation, including an ablation study, we identify the potential for
outperformance in these models, thereby highlighting the importance of a proper
probabilistic treatment, as well as illustrating some of the failure modes
thereof.
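For concreteness, here is a minimal sketch (not taken from the paper; the function names and the smoothing level `eps` are illustrative) of the practice the abstract refers to: evaluating the categorical cross-entropy against label-smoothed targets, which lie in the interior of the probability simplex rather than at its vertices.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def smooth_labels(y, num_classes, eps=0.1):
    # Label smoothing: replace a one-hot target with a point in the
    # interior of the simplex (a "soft", simplex-valued target).
    onehot = np.eye(num_classes)[y]
    return (1.0 - eps) * onehot + eps / num_classes

def soft_cross_entropy(logits, soft_targets):
    # Categorical cross-entropy evaluated on simplex-valued targets:
    # -sum_k t_k * log p_k, averaged over the batch.
    log_p = np.log(softmax(logits) + 1e-12)
    return -(soft_targets * log_p).sum(axis=-1).mean()

# Toy usage: 4 examples, 3 classes, smoothed targets.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))
targets = smooth_labels(np.array([0, 2, 1, 2]), num_classes=3, eps=0.1)
print(soft_cross_entropy(logits, targets))
```

Because the targets are no longer one-hot, this objective is no longer the log-likelihood of a categorical distribution; it amounts to an unnormalized likelihood over the simplex, which is the gap the continuous-categorical distribution (see the related paper listed below) is intended to close.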
Related papers
- On the Dynamics Under the Unhinged Loss and Beyond [104.49565602940699]
We introduce the unhinged loss, a concise loss function that offers more mathematical opportunities to analyze closed-form dynamics.
The unhinged loss allows for considering more practical techniques, such as time-varying learning rates and feature normalization.
arXiv Detail & Related papers (2023-12-13T02:11:07Z)
- A Mathematical Framework for Learning Probability Distributions [0.0]
Generative modeling and density estimation have become immensely popular subjects in recent years.
This paper provides a mathematical framework such that all the well-known models can be derived based on simple principles.
In particular, we prove that these models enjoy implicit regularization during training, so that the generalization error at early-stopping avoids the curse of dimensionality.
arXiv Detail & Related papers (2022-12-22T04:41:45Z)
- Bias-inducing geometries: an exactly solvable data model with fairness implications [13.690313475721094]
We introduce an exactly solvable high-dimensional model of data imbalance.
We analytically unpack the typical properties of learning models trained in this synthetic framework.
We obtain exact predictions for the observables that are commonly employed for fairness assessment.
arXiv Detail & Related papers (2022-05-31T16:27:57Z)
- Last Layer Marginal Likelihood for Invariance Learning [12.00078928875924]
We introduce a new lower bound to the marginal likelihood, which allows us to perform inference for a larger class of likelihood functions.
We work towards bringing this approach to neural networks by using an architecture with a Gaussian process in the last layer.
arXiv Detail & Related papers (2021-06-14T15:40:51Z)
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
- On the Transferability of Adversarial Attacks against Neural Text Classifier [121.6758865857686]
We investigate the transferability of adversarial examples for text classification models.
We propose a genetic algorithm to find an ensemble of models that can induce adversarial examples to fool almost all existing models.
We derive word replacement rules that can be used for model diagnostics from these adversarial examples.
arXiv Detail & Related papers (2020-11-17T10:45:05Z)
- Understanding the Failure Modes of Out-of-Distribution Generalization [35.00563456450452]
Empirical studies suggest that machine learning models often rely on features, such as the background, that may be spuriously correlated with the label only during training time.
In this work, we identify the fundamental factors that give rise to this behavior by explaining why models fail this way even in easy-to-learn tasks.
arXiv Detail & Related papers (2020-10-29T17:19:03Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
- Learning Diverse Representations for Fast Adaptation to Distribution Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
arXiv Detail & Related papers (2020-06-12T12:23:50Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of Deep Learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
- The continuous categorical: a novel simplex-valued exponential family [23.983555024375306]
We show that standard choices for simplex-valued data suffer from a number of limitations, including bias and numerical issues.
We resolve these limitations by introducing a novel exponential family of distributions for modeling simplex-valued data.
Unlike the Dirichlet and other typical choices, the continuous categorical results in a well-behaved probabilistic loss function.
arXiv Detail & Related papers (2020-02-20T04:28:02Z)
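By way of contrast with the soft-target cross-entropy sketched above, here is a rough illustration of the alternative loss (a Monte Carlo sketch under stated assumptions, not the authors' implementation, which derives the normalizing constant in closed form): the continuous categorical density is proportional to prod_k lambda_k^{x_k} on the simplex, so its negative log-likelihood equals the soft-target cross-entropy plus a normalizer that depends on the predicted probabilities lambda and is simply dropped when the plain cross-entropy is used.

```python
import math
import numpy as np

def soft_cross_entropy_term(probs, target):
    # -sum_k x_k * log(lambda_k): cross-entropy between a simplex-valued
    # (e.g. label-smoothed) target x and predicted probabilities lambda.
    return -float(np.sum(target * np.log(probs)))

def cc_log_normalizer_mc(probs, num_samples=200_000, seed=0):
    # Monte Carlo estimate of log ∫_simplex prod_k lambda_k^{t_k} dt,
    # i.e. minus the log normalizing constant of the continuous categorical.
    # Dirichlet(1,...,1) samples are uniform on the simplex with density (K-1)!,
    # hence the -lgamma(K) correction. Illustration only; the referenced paper
    # derives this constant in closed form.
    rng = np.random.default_rng(seed)
    K = probs.shape[0]
    t = rng.dirichlet(np.ones(K), size=num_samples)   # (N, K) points on the simplex
    vals = np.exp(t @ np.log(probs))                   # prod_k lambda_k^{t_k}
    return float(np.log(vals.mean()) - math.lgamma(K))

def continuous_categorical_nll(probs, target):
    # CC negative log-likelihood = soft-target cross-entropy plus a
    # lambda-dependent log-normalizer that the cross-entropy alone ignores.
    return soft_cross_entropy_term(probs, target) + cc_log_normalizer_mc(probs)

# Toy comparison: the two losses differ by a term that depends on the prediction.
probs = np.array([0.7, 0.2, 0.1])
target = np.array([0.9, 0.05, 0.05])   # a label-smoothed target on the simplex
print(soft_cross_entropy_term(probs, target), continuous_categorical_nll(probs, target))
```

The lambda-dependent correction term is what distinguishes a proper probabilistic loss from the cross-entropy heuristic; in practice one would compute the normalizer with the closed form given in the paper rather than by sampling.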