Margin-Based Regularization and Selective Sampling in Deep Neural
Networks
- URL: http://arxiv.org/abs/2009.06011v1
- Date: Sun, 13 Sep 2020 15:06:42 GMT
- Title: Margin-Based Regularization and Selective Sampling in Deep Neural
Networks
- Authors: Berry Weinstein, Shai Fine, Yacov Hel-Or
- Abstract summary: We derive a new margin-based regularization formulation, termed multi-margin regularization (MMR), for deep neural networks (DNNs).
We show improved empirical results on CIFAR10, CIFAR100 and ImageNet using state-of-the-art convolutional neural networks (CNNs) and BERT-BASE architecture for the MNLI, QQP, QNLI, MRPC, SST-2 and RTE benchmarks.
- Score: 7.219077740523683
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We derive a new margin-based regularization formulation, termed multi-margin
regularization (MMR), for deep neural networks (DNNs). The MMR is inspired by
principles that were applied in margin analysis of shallow linear classifiers,
e.g., support vector machine (SVM). Unlike SVM, MMR is continuously scaled by
the radius of the bounding sphere (i.e., the maximal norm of the feature vector
in the data), which is constantly changing during training. We empirically
demonstrate that by a simple supplement to the loss function, our method
achieves better results on various classification tasks across domains. Using
the same concept, we also derive a selective sampling scheme and demonstrate
accelerated training of DNNs by selecting samples according to a minimal margin
score (MMS). This score measures the minimal displacement an input must
undergo before its predicted classification switches. We evaluate our
proposed methods on three image classification tasks and six language text
classification tasks. Specifically, we show improved empirical results on
CIFAR10, CIFAR100 and ImageNet using state-of-the-art convolutional neural
networks (CNNs) and BERT-BASE architecture for the MNLI, QQP, QNLI, MRPC, SST-2
and RTE benchmarks.
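The two quantities in the abstract can be sketched in code. Below is a minimal NumPy sketch — my own illustration, not the paper's implementation — of (a) an MMR-style penalty on the last linear layer, hinged on class-score margins and rescaled by the bounding-sphere radius R (the maximal feature norm in the batch), and (b) a last-layer approximation of the minimal margin score as the top-two logit gap divided by the norm of the corresponding weight-row difference. The function names, the hinge form, and the margin parameter `gamma` are assumptions.

```python
import numpy as np

def multi_margin_regularizer(features, weights, labels, gamma=1.0):
    """Hypothetical MMR-style penalty: hinge on the margin between the
    true-class score and every rival class score, normalized by the
    bounding-sphere radius R (which keeps changing as training reshapes
    the feature space)."""
    R = np.linalg.norm(features, axis=1).max()      # radius of bounding sphere
    logits = features @ weights.T                   # (batch, classes)
    idx = np.arange(len(labels))
    true_scores = logits[idx, labels]
    margins = true_scores[:, None] - logits         # margin to every class
    margins[idx, labels] = np.inf                   # exclude the true class
    penalty = np.maximum(0.0, gamma - margins / R)  # hinge on normalized margin
    return penalty.sum() / len(labels)

def minimal_margin_score(features, weights):
    """MMS proxy: how far an input must move before its top prediction flips,
    approximated per sample by (top logit - runner-up logit) divided by the
    norm of the difference of the two classes' weight rows."""
    logits = features @ weights.T
    order = np.argsort(logits, axis=1)
    c1, c2 = order[:, -1], order[:, -2]             # predicted and runner-up
    idx = np.arange(len(logits))
    gap = logits[idx, c1] - logits[idx, c2]
    w_diff = np.linalg.norm(weights[c1] - weights[c2], axis=1)
    return gap / np.maximum(w_diff, 1e-12)
```

Under a selective-sampling scheme, the samples with the smallest scores lie closest to the decision boundary and would be prioritized for training.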
Related papers
- Informed deep hierarchical classification: a non-standard analysis inspired approach [0.0]
It consists of a multi-output deep neural network equipped with specific projection operators placed before each output layer.
The design of such an architecture, called lexicographic hybrid deep neural network (LH-DNN), has been possible by combining tools from different and quite distant research fields.
To assess the efficacy of the approach, the resulting network is compared against the B-CNN, a convolutional neural network tailored for hierarchical classification tasks.
arXiv Detail & Related papers (2024-09-25T14:12:50Z)
- Time Elastic Neural Networks [2.1756081703276]
We introduce and detail an atypical neural network architecture, called the time elastic neural network (teNN).
The novelty compared to classical neural network architecture is that it explicitly incorporates time warping ability.
We demonstrate that, during the training process, the teNN succeeds in reducing the number of neurons required within each cell.
arXiv Detail & Related papers (2024-05-27T09:01:30Z) - Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth
Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss with the increase in the number of learning epochs.
We show that the threshold on the number of training samples increases with the increase in the network width.
arXiv Detail & Related papers (2023-09-12T13:03:47Z) - Bayesian Neural Network Language Modeling for Speech Recognition [59.681758762712754]
State-of-the-art neural network language models (NNLMs), represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers, are becoming highly complex.
In this paper, an overarching full Bayesian learning framework is proposed to account for the underlying uncertainty in LSTM-RNN and Transformer LMs.
arXiv Detail & Related papers (2022-08-28T17:50:19Z) - A new perspective on probabilistic image modeling [92.89846887298852]
We present a new probabilistic approach for image modeling capable of density estimation, sampling and tractable inference.
DCGMMs can be trained end-to-end by SGD from random initial conditions, much like CNNs.
We show that DCGMMs compare favorably to several recent PC and SPN models in terms of inference, classification and sampling.
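As a generic illustration of the idea summarized above — mixture-model parameters fitted end-to-end by SGD rather than by EM — here is a self-contained toy in NumPy. The two-component 1-D setup, unit model variances, equal mixture weights, learning rate, and small symmetric initialization are my simplifications, not the DCGMM architecture itself.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two well-separated 1-D clusters standing in for image data.
data = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(2, 0.5, 200)])

mu = np.array([-0.1, 0.1])   # small symmetric init (stand-in for random init)
for _ in range(500):
    x = rng.choice(data, 32)                  # SGD minibatch
    d = x[:, None] - mu[None, :]              # (batch, components)
    resp = np.exp(-0.5 * d**2)                # unnormalized responsibilities
    resp /= resp.sum(axis=1, keepdims=True)
    grad = (resp * d).mean(axis=0)            # d log-likelihood / d mu
    mu += 0.5 * grad                          # gradient-ascent step
# mu drifts toward the two cluster centers near -2 and +2
```

The point of the sketch is only that plain stochastic gradient steps on the mixture log-likelihood recover the cluster means, with no EM-style closed-form updates.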
arXiv Detail & Related papers (2022-03-21T14:53:57Z) - Supervised Training of Siamese Spiking Neural Networks with Earth's
Mover Distance [4.047840018793636]
This study adapts the highly-versatile siamese neural network model to the event data domain.
We introduce a supervised training framework for optimizing Earth's Mover Distance between spike trains with spiking neural networks (SNNs).
arXiv Detail & Related papers (2022-02-20T00:27:57Z) - Sequence Transduction with Graph-based Supervision [96.04967815520193]
We present a new transducer objective function that generalizes the RNN-T loss to accept a graph representation of the labels.
We demonstrate that transducer-based ASR with CTC-like lattice achieves better results compared to standard RNN-T.
arXiv Detail & Related papers (2021-11-01T21:51:42Z) - Multi-Sample Online Learning for Spiking Neural Networks based on
Generalized Expectation Maximization [42.125394498649015]
Spiking Neural Networks (SNNs) capture some of the efficiency of biological brains by processing through binary neural dynamic activations.
This paper proposes to leverage multiple compartments that sample independent spiking signals while sharing synaptic weights.
The key idea is to use these signals to obtain more accurate statistical estimates of the log-likelihood training criterion, as well as of its gradient.
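The variance-reduction intuition behind the multi-sample scheme can be illustrated with a generic toy (mine, not the paper's SNN model): averaging K independent stochastic samples of a training criterion shrinks the estimator's variance by roughly 1/K, giving more accurate estimates of the criterion and, in the full method, of its gradient.

```python
import numpy as np

def criterion_estimate(theta, k, rng):
    # Each "compartment" draws an independent binary spike with rate theta;
    # averaging the k samples gives a Monte Carlo estimate of theta itself.
    spikes = rng.random(k) < theta
    return spikes.mean()

rng = np.random.default_rng(0)
single = [criterion_estimate(0.3, 1, rng) for _ in range(2000)]
multi = [criterion_estimate(0.3, 16, rng) for _ in range(2000)]
# Variance of the 16-sample estimator is ~16x smaller than the 1-sample one.
```

Both estimators are unbiased; only the spread around the true value changes, which is what makes the averaged gradient estimates more reliable per update.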
arXiv Detail & Related papers (2021-02-05T16:39:42Z)
- A Transductive Multi-Head Model for Cross-Domain Few-Shot Learning [72.30054522048553]
We present a new method, Transductive Multi-Head Few-Shot learning (TMHFS), to address the Cross-Domain Few-Shot Learning challenge.
The proposed methods greatly outperform the strong baseline, fine-tuning, on four different target domains.
arXiv Detail & Related papers (2020-06-08T02:39:59Z)
- Classification of Hand Gestures from Wearable IMUs using Deep Neural Network [0.0]
An Inertial Measurement Unit (IMU) consists of tri-axial accelerometers and gyroscopes which can together be used for formation analysis.
The paper presents a novel classification approach using a Deep Neural Network (DNN) for classifying hand gestures obtained from wearable IMU sensors.
arXiv Detail & Related papers (2020-04-27T01:08:33Z)
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient-based training combined with nonconvexity renders learning susceptible to novel problems.
We propose fusing neighboring layers of deeper networks that are trained with random variables.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.