Cross Entropy in Deep Learning of Classifiers Is Unnecessary -- ISBE Error is All You Need
- URL: http://arxiv.org/abs/2311.16357v1
- Date: Mon, 27 Nov 2023 22:40:02 GMT
- Title: Cross Entropy in Deep Learning of Classifiers Is Unnecessary -- ISBE Error is All You Need
- Authors: Wladyslaw Skarbek
- Abstract summary: In deep learning classifiers, the cost function usually takes the form of a combination of SoftMax and CrossEntropy functions.
This work introduces the ISBE functionality, justifying the thesis that cross-entropy computation is redundant.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In deep learning classifiers, the cost function usually takes the form of a
combination of SoftMax and CrossEntropy functions. The SoftMax unit transforms
the scores predicted by the model network into assessments of the degree
(probabilities) of an object's membership in a given class. On the other hand,
CrossEntropy measures the divergence of this prediction from the distribution
of target scores. This work introduces the ISBE functionality, justifying the
thesis that cross-entropy computation is redundant in deep learning of
classifiers. Not only can we omit the calculation of entropy, but also, during
back-propagation, there is no need to direct the error to the normalization
unit for its backward transformation. Instead, the error is sent directly to
the model's network. Using examples of perceptron and convolutional networks as
classifiers of images from the MNIST collection, it is observed that ISBE does
not degrade results, not only with SoftMax but also with other activation
functions such as Sigmoid, Tanh, or their hard variants HardSigmoid and
HardTanh. Moreover, up to three percent of the total time of the forward and
backward stages is saved. The article is addressed mainly to programmers and
students interested in deep model learning. It not only illustrates, in code
snippets, possible ways to implement ISBE units, but also formally proves that
the softmax trick applies only to the class of softmax functions with
relocations.
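The mechanism behind ISBE can be made concrete with a short sketch. It rests on the well-known softmax trick: for CrossEntropy composed with SoftMax, the gradient of the loss with respect to the raw scores is simply softmax(scores) minus the target distribution, so an ISBE unit can skip the entropy computation and the activation's backward pass and send that error straight to the model's network. The following is a minimal, hypothetical PyTorch sketch, not the author's code; the class name ISBE, the use of one-hot targets, and the choice of SoftMax as the activation are illustrative assumptions.

```python
import torch

class ISBE(torch.autograd.Function):
    """Hypothetical sketch of an ISBE unit (illustrative, not the paper's code).

    Forward: apply the chosen activation (SoftMax here; Sigmoid, Tanh,
    HardSigmoid, or HardTanh could be substituted) and return a squared-error
    value for monitoring only.
    Backward: send the error (activation output minus target) directly to the
    model's network, bypassing cross entropy and the activation's backward pass.
    """

    @staticmethod
    def forward(ctx, scores, target):
        soft = torch.softmax(scores, dim=1)      # soft class scores
        ctx.save_for_backward(soft, target)
        # Returned value is for logging only; it does not drive learning.
        return 0.5 * (soft - target).pow(2).sum() / scores.shape[0]

    @staticmethod
    def backward(ctx, grad_output):
        soft, target = ctx.saved_tensors
        # ISBE error fed straight back to the network's score layer.
        grad_scores = grad_output * (soft - target) / soft.shape[0]
        return grad_scores, None                 # no gradient for the target

# Illustrative usage with one-hot targets (model, images, one_hot are assumed):
#   logits = model(images)              # raw scores from the classifier
#   loss = ISBE.apply(logits, one_hot)  # forward value for monitoring only
#   loss.backward()                     # network receives softmax(logits) - one_hot
```

With SoftMax as the activation this reproduces exactly the gradient that the SoftMax plus CrossEntropy pair would deliver; with Sigmoid, Tanh, or their hard variants the same (activation minus target) error is passed back, which is the setting the abstract reports as non-degrading on MNIST.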
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z) - Neural Collapse Inspired Feature-Classifier Alignment for Few-Shot Class
Incremental Learning [120.53458753007851]
Few-shot class-incremental learning (FSCIL) has been a challenging problem as only a few training samples are accessible for each novel class in the new sessions.
We deal with this misalignment dilemma in FSCIL inspired by the recently discovered phenomenon named neural collapse.
We propose a neural collapse inspired framework for FSCIL. Experiments on the miniImageNet, CUB-200, and CIFAR-100 datasets demonstrate that our proposed framework outperforms the state-of-the-art performances.
arXiv Detail & Related papers (2023-02-06T18:39:40Z) - Maximally Compact and Separated Features with Regular Polytope Networks [22.376196701232388]
We show how to extract from CNN features the properties of maximum inter-class separability and maximum intra-class compactness.
We obtain features similar to what can be obtained with the well-known approach of Wen et al. (2016) and other similar approaches.
arXiv Detail & Related papers (2023-01-15T15:20:57Z) - Distinction Maximization Loss: Efficiently Improving Classification
Accuracy, Uncertainty Estimation, and Out-of-Distribution Detection Simply
Replacing the Loss and Calibrating [2.262407399039118]
We propose training deterministic deep neural networks using our DisMax loss.
DisMax usually outperforms all current approaches simultaneously in classification accuracy, uncertainty estimation, inference efficiency, and out-of-distribution detection.
arXiv Detail & Related papers (2022-05-12T04:37:35Z) - Do We Really Need a Learnable Classifier at the End of Deep Neural
Network? [118.18554882199676]
We study the potential of learning a neural network for classification with the classifier randomly initialized as an ETF and fixed during training.
Our experimental results show that our method is able to achieve similar performances on image classification for balanced datasets.
arXiv Detail & Related papers (2022-03-17T04:34:28Z) - X-model: Improving Data Efficiency in Deep Learning with A Minimax Model [78.55482897452417]
We aim at improving data efficiency for both classification and regression setups in deep learning.
To take the power of both worlds, we propose a novel X-model.
X-model plays a minimax game between the feature extractor and task-specific heads.
arXiv Detail & Related papers (2021-10-09T13:56:48Z) - Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks show improved accuracy and significant reduction in memory consumption.
They can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
arXiv Detail & Related papers (2021-06-06T18:05:02Z) - Query Training: Learning a Worse Model to Infer Better Marginals in
Undirected Graphical Models with Hidden Variables [11.985433487639403]
Probabilistic graphical models (PGMs) provide a compact representation of knowledge that can be queried in a flexible way.
We introduce query training (QT), a mechanism to learn a PGM that is optimized for the approximate inference algorithm that will be paired with it.
We demonstrate experimentally that QT can be used to learn a challenging 8-connected grid Markov random field with hidden variables.
arXiv Detail & Related papers (2020-06-11T20:34:32Z) - Aligned Cross Entropy for Non-Autoregressive Machine Translation [120.15069387374717]
We propose aligned cross entropy (AXE) as an alternative loss function for training of non-autoregressive models.
AXE-based training of conditional masked language models (CMLMs) substantially improves performance on major WMT benchmarks.
arXiv Detail & Related papers (2020-04-03T16:24:47Z)