Rolling the dice for better deep learning performance: A study of randomness techniques in deep neural networks
- URL: http://arxiv.org/abs/2404.03992v1
- Date: Fri, 5 Apr 2024 10:02:32 GMT
- Title: Rolling the dice for better deep learning performance: A study of randomness techniques in deep neural networks
- Authors: Mohammed Ghaith Altarabichi, Sławomir Nowaczyk, Sepideh Pashami, Peyman Sheikholharam Mashhadi, Julia Handl
- Abstract summary: This paper investigates how various randomization techniques are used in Deep Neural Networks (DNNs)
It categorizes randomness techniques into four types and proposes two new ones: adding noise to the loss function and random masking of gradient updates; data augmentation and weight initialization emerge as the main contributors to performance.
The complete implementation and dataset are available on GitHub.
- Score: 4.643954670642798
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper investigates how various randomization techniques impact Deep Neural Networks (DNNs). Randomization, like weight noise and dropout, aids in reducing overfitting and enhancing generalization, but their interactions are poorly understood. The study categorizes randomness techniques into four types and proposes new methods: adding noise to the loss function and random masking of gradient updates. Using Particle Swarm Optimizer (PSO) for hyperparameter optimization, it explores optimal configurations across MNIST, FASHION-MNIST, CIFAR10, and CIFAR100 datasets. Over 30,000 configurations are evaluated, revealing data augmentation and weight initialization randomness as main performance contributors. Correlation analysis shows different optimizers prefer distinct randomization types. The complete implementation and dataset are available on GitHub.
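For concreteness, the two proposed techniques can be sketched in a few lines of PyTorch-style Python. This is a minimal illustration under assumed hyperparameters (noise scale, keep probability) and helper names of my own; it is not the authors' implementation.

```python
import torch

def noisy_loss(loss, sigma=0.01):
    # Add zero-mean Gaussian noise to the scalar loss before backpropagation.
    # sigma is an assumed hyperparameter, not a value taken from the paper.
    return loss + sigma * torch.randn_like(loss)

def mask_gradients(model, keep_prob=0.9):
    # Randomly zero a fraction of gradient entries after loss.backward(),
    # so each step updates only a random subset of weights.
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.grad *= torch.bernoulli(torch.full_like(p.grad, keep_prob))

# Sketch of a training step using both techniques:
#   loss = noisy_loss(criterion(model(x), y))
#   loss.backward()
#   mask_gradients(model)
#   optimizer.step()
```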
Related papers
- Hyperparameter Optimization for Randomized Algorithms: A Case Study for Random Features [0.0]
This paper introduces a random objective function tailored to hyperparameter optimization of randomized algorithms, with random features as a case study.
The optimizer is Ensemble Kalman Inversion (EKI), a gradient-free, particle-based method that is scalable to high dimensions and robust to randomness in objective functions.
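To illustrate what such a gradient-free, particle-based update looks like, here is one Ensemble Kalman Inversion iteration in NumPy; the interface, regularization constant, and perturbed-observation noise are my assumptions rather than the paper's code.

```python
import numpy as np

def eki_step(thetas, forward, y, gamma=1e-2, rng=None):
    """One Ensemble Kalman Inversion update (sketch).

    thetas:  (J, d) ensemble of parameter particles
    forward: maps a (d,) parameter vector to (k,) predictions
    y:       (k,) observed targets
    """
    rng = np.random.default_rng() if rng is None else rng
    G = np.stack([forward(t) for t in thetas])        # (J, k) forward evaluations
    d_theta = thetas - thetas.mean(axis=0)            # centered particles
    d_G = G - G.mean(axis=0)                          # centered predictions
    J = thetas.shape[0]
    C_tg = d_theta.T @ d_G / J                        # cross-covariance, (d, k)
    C_gg = d_G.T @ d_G / J                            # prediction covariance, (k, k)
    gain = C_tg @ np.linalg.inv(C_gg + gamma * np.eye(len(y)))
    noise = rng.normal(scale=np.sqrt(gamma), size=G.shape)  # perturbed observations
    return thetas + (y + noise - G) @ gain.T          # (J, d) updated particles
```

Each iteration needs only forward evaluations, which is what makes this style of optimizer tolerant of noisy, non-differentiable objectives.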
arXiv Detail & Related papers (2024-06-30T04:15:03Z) - Adaptive Preference Scaling for Reinforcement Learning with Human Feedback [103.36048042664768]
Reinforcement learning from human feedback (RLHF) is a prevalent approach to align AI systems with human values.
We propose a novel adaptive preference loss, underpinned by distributionally robust optimization (DRO)
Our method is versatile and can be readily adapted to various preference optimization frameworks.
arXiv Detail & Related papers (2024-06-04T20:33:22Z) - Quantifying Inherent Randomness in Machine Learning Algorithms [7.591218883378448]
This paper uses an empirical study to examine the effects of randomness in model training and randomness in the partitioning of a dataset into training and test subsets.
We quantify and compare the magnitude of the variation in predictive performance for the following ML algorithms: Random Forests (RFs), Gradient Boosting Machines (GBMs), and Feedforward Neural Networks (FFNNs)
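A minimal sketch of that kind of measurement, using scikit-learn and synthetic data rather than the paper's experimental setup: vary the seed controlling both the train/test split and the model's internal randomness, then report the spread of test accuracy.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

scores = []
for seed in range(10):
    # The seed drives both sources of variation studied in the paper:
    # the data partitioning and the algorithm's internal randomness.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=seed)
    model = RandomForestClassifier(n_estimators=100, random_state=seed).fit(X_tr, y_tr)
    scores.append(model.score(X_te, y_te))

print(f"test accuracy: mean={np.mean(scores):.3f}, std={np.std(scores):.3f}")
```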
arXiv Detail & Related papers (2022-06-24T15:49:52Z) - Invariance Learning in Deep Neural Networks with Differentiable Laplace
Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
arXiv Detail & Related papers (2022-02-22T02:51:11Z) - Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated with neural networks seamlessly.
arXiv Detail & Related papers (2021-12-07T11:26:41Z) - Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
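For context, the random-perturbation gradient mentioned above can be sketched as a zeroth-order, evolution-strategies-style estimator; this illustrates the noisy baseline rather than the paper's probabilistic-model-based method, and the function names are mine.

```python
import numpy as np

def perturbation_gradient(objective, theta, sigma=0.1, n_samples=32, rng=None):
    # Zeroth-order gradient estimate from random perturbations of the policy
    # parameters, using antithetic pairs theta +/- sigma * eps.
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.normal(size=(n_samples, theta.size))
    diffs = np.array([objective(theta + sigma * e) - objective(theta - sigma * e)
                      for e in eps])
    return (diffs[:, None] * eps).mean(axis=0) / (2.0 * sigma)
```

The variance of this kind of estimate is one motivation for instead modeling the objective and its gradient probabilistically, as the paper does.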
arXiv Detail & Related papers (2021-06-22T16:07:02Z) - Improving Neural Network Training in Low Dimensional Random Bases [5.156484100374058]
We show that keeping the random projection fixed throughout training is detrimental to optimization.
We propose re-drawing the random subspace at each step, which yields significantly better performance.
We realize further improvements by applying independent projections to different parts of the network, making the approximation more efficient as network dimensionality grows.
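A rough sketch of one such training step, re-drawing the random basis every iteration; the subspace dimension, scaling, and interface are assumptions, and the sketch projects the full gradient rather than reproducing the paper's exact procedure.

```python
import numpy as np

def random_subspace_step(params, grad_fn, subspace_dim=16, lr=0.1, rng=None):
    # params:  flat (D,) parameter vector; grad_fn returns the full (D,) gradient.
    # A fresh random basis is drawn at every step, and the update is
    # restricted to that low-dimensional subspace.
    rng = np.random.default_rng() if rng is None else rng
    D = params.size
    basis = rng.normal(size=(D, subspace_dim)) / np.sqrt(D)  # random projection
    coeffs = basis.T @ grad_fn(params)        # gradient expressed in the subspace
    return params - lr * (basis @ coeffs)     # step taken only within the subspace
```

The further improvement described above would correspond to drawing an independent basis for each part of the network rather than one global basis.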
arXiv Detail & Related papers (2020-11-09T19:50:19Z) - Bandit Samplers for Training Graph Neural Networks [63.17765191700203]
Several sampling algorithms with variance reduction have been proposed for accelerating the training of Graph Convolutional Networks (GCNs).
These sampling algorithms are not applicable to more general graph neural networks (GNNs) where the message aggregator contains learned weights rather than fixed weights, such as Graph Attention Networks (GAT).
arXiv Detail & Related papers (2020-06-10T12:48:37Z) - MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient descent combined with nonconvexity renders learning susceptible to the choice of initialization.
We propose fusing neighboring layers of deeper networks that are trained with random variables.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)