Unlocking Tuning-free Generalization: Minimizing the PAC-Bayes Bound
with Trainable Priors
- URL: http://arxiv.org/abs/2305.19243v2
- Date: Sun, 1 Oct 2023 22:36:36 GMT
- Title: Unlocking Tuning-free Generalization: Minimizing the PAC-Bayes Bound
with Trainable Priors
- Authors: Xitong Zhang, Avrajit Ghosh, Guangliang Liu and Rongrong Wang
- Abstract summary: The PAC-Bayes training framework is nearly tuning-free and requires no additional regularization.
Our proposed algorithm demonstrates the remarkable potential of PAC training to achieve state-of-the-art performance on deep neural networks.
- Score: 11.952542165016222
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is widely recognized that the generalization ability of neural networks
can be greatly enhanced through carefully designing the training procedure. The
current state-of-the-art training approach involves utilizing stochastic
gradient descent (SGD) or Adam optimization algorithms along with a combination
of additional regularization techniques such as weight decay, dropout, or noise
injection. Optimal generalization can only be achieved by tuning a multitude of
hyperparameters through grid search, which can be time-consuming and
necessitates additional validation datasets. To address this issue, we
introduce a practical PAC-Bayes training framework that is nearly tuning-free
and requires no additional regularization while achieving comparable testing
performance to that of SGD/Adam after a complete grid search and with extra
regularizations. Our proposed algorithm demonstrates the remarkable potential
of PAC training to achieve state-of-the-art performance on deep neural networks
with enhanced robustness and interpretability.
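The abstract does not spell out the training objective, but a McAllester-style PAC-Bayes bound with a trainable prior can be sketched as follows. Everything here, the diagonal Gaussians, the one-sample loss estimate, and all names, is an illustrative assumption rather than the authors' exact formulation.

```python
import math
import torch

def pac_bayes_bound(loss_fn, w, log_sigma, w0, log_sigma0, n, delta=0.05):
    """Hypothetical McAllester-style bound: posterior q = N(w, sigma^2),
    prior p = N(w0, sigma0^2) with a *trainable* scale log_sigma0."""
    sigma2 = torch.exp(2 * log_sigma)      # posterior variances
    sigma0_2 = torch.exp(2 * log_sigma0)   # prior variance (trainable)
    # KL divergence between diagonal Gaussians
    kl = 0.5 * torch.sum(
        sigma2 / sigma0_2
        + (w - w0) ** 2 / sigma0_2
        - 1.0
        + torch.log(sigma0_2 / sigma2)
    )
    # One-sample Monte Carlo estimate of the expected loss under q
    noisy_w = w + torch.exp(log_sigma) * torch.randn_like(w)
    empirical = loss_fn(noisy_w)
    # Complexity term of the bound (n = number of training samples)
    complexity = torch.sqrt((kl + math.log(2 * math.sqrt(n) / delta)) / (2 * n))
    return empirical + complexity
```

Backpropagating through both terms updates the weights, the posterior noise, and the prior scale jointly; trading the empirical loss off against the complexity term is what removes the usual regularization hyperparameters.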
Related papers
- Function-Space Regularization in Neural Networks: A Probabilistic Perspective [51.133793272222874]
We show that we can derive a well-motivated regularization technique that allows explicitly encoding information about desired predictive functions into neural network training.
We evaluate the utility of this regularization technique empirically and demonstrate that the proposed method leads to near-perfect semantic shift detection and highly-calibrated predictive uncertainty estimates.
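One hedged reading of this technique: add a function-space penalty that keeps the network's predictions on a set of context inputs close to a reference predictive function, here a frozen prior network. The names, the squared-error penalty, and the cross-entropy task loss are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def fs_regularized_loss(model, prior_model, x, y, x_ctx, lam=1.0):
    """Task loss plus a function-space penalty over context points x_ctx."""
    task_loss = F.cross_entropy(model(x), y)
    with torch.no_grad():
        target = prior_model(x_ctx)      # the desired predictive function
    penalty = F.mse_loss(model(x_ctx), target)
    return task_loss + lam * penalty
```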
arXiv Detail & Related papers (2023-12-28T17:50:56Z)
- PAC-tuning: Fine-tuning Pretrained Language Models with PAC-driven Perturbed Gradient Descent [11.866227238721939]
We propose PAC-tuning, a two-stage fine-tuning method for pretrained language models, to address this optimization challenge.
First, PAC-tuning directly minimizes the PAC-Bayes bound to learn a proper parameter distribution.
Second, PAC-tuning modifies the gradient by injecting noise, with the variance learned in the first stage, into the model parameters during training, yielding a variant of perturbed gradient descent.
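A minimal sketch of the second stage, assuming stage one produced a per-tensor noise scale for every parameter (this interface is hypothetical): perturb the weights, take the gradient at the perturbed point, then undo the perturbation and apply a plain SGD step.

```python
import torch

def perturbed_step(params, sigmas, loss_fn, lr=1e-3):
    """One step of perturbed gradient descent with learned noise scales."""
    noises = [s * torch.randn_like(p) for p, s in zip(params, sigmas)]
    for p, eps in zip(params, noises):
        p.data.add_(eps)                  # inject the learned noise
    loss = loss_fn()
    loss.backward()                       # gradient at the perturbed point
    for p, eps in zip(params, noises):
        p.data.sub_(eps)                  # remove the perturbation
        p.data.add_(p.grad, alpha=-lr)    # plain SGD update
        p.grad = None
    return loss.item()
```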
arXiv Detail & Related papers (2023-10-26T17:09:13Z)
- Achieving Constraints in Neural Networks: A Stochastic Augmented Lagrangian Approach [49.1574468325115]
Regularizing Deep Neural Networks (DNNs) is essential for improving generalizability and preventing overfitting.
We propose a novel approach to DNN regularization by framing the training process as a constrained optimization problem.
We employ the Stochastic Augmented Lagrangian (SAL) method to achieve a more flexible and efficient regularization mechanism.
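As a hedged illustration of the SAL idea for a single equality constraint g(theta) = 0 (say, a norm budget on the weights), one primal descent step on the augmented Lagrangian followed by dual ascent on the multiplier might look like this; the step sizes and single-constraint setup are assumptions.

```python
import torch

def sal_step(theta, lam, rho, loss_fn, g, lr=1e-3, lr_dual=1e-2):
    """One stochastic augmented-Lagrangian step: primal descent, dual ascent."""
    c = g(theta)
    lagrangian = loss_fn(theta) + lam * c + 0.5 * rho * c ** 2
    grad, = torch.autograd.grad(lagrangian, theta)
    with torch.no_grad():
        theta -= lr * grad                    # primal step on the weights
    lam = lam + lr_dual * g(theta).item()     # dual ascent on the multiplier
    return theta, lam
```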
arXiv Detail & Related papers (2023-10-25T13:55:35Z)
- Split-Boost Neural Networks [1.1549572298362787]
We propose an innovative training strategy for feed-forward architectures, called split-boost.
Such a novel approach ultimately allows us to avoid explicitly modeling the regularization term.
The proposed strategy is tested on a real-world (anonymized) dataset within a benchmark medical insurance design problem.
arXiv Detail & Related papers (2023-09-06T17:08:57Z)
- SING: A Plug-and-Play DNN Learning Technique [25.563053353709627]
We propose SING (StabIlized and Normalized Gradient), a plug-and-play technique that improves the stability and robustness of Adam(W).
SING is straightforward to implement and has minimal computational overhead, requiring only a layer-wise standardization of the gradients fed to Adam(W).
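Since the technique reduces to standardizing each layer's gradient before the optimizer sees it, a plausible sketch is the following; the eps guard and per-tensor granularity are assumptions.

```python
import torch

def standardize_gradients(model, eps=1e-8):
    """Centre each parameter tensor's gradient and scale it to unit variance."""
    for p in model.parameters():
        if p.grad is None or p.grad.numel() < 2:
            continue
        g = p.grad
        p.grad = (g - g.mean()) / (g.std() + eps)
```

In use, this would be called between the backward pass and the update: `loss.backward()`, then `standardize_gradients(model)`, then `optimizer.step()` with `torch.optim.AdamW`.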
arXiv Detail & Related papers (2023-05-25T12:39:45Z)
- AskewSGD: An Annealed Interval-Constrained Optimisation Method to Train Quantized Neural Networks [12.229154524476405]
We develop a new algorithm, Annealed Skewed SGD - AskewSGD - for training deep neural networks (DNNs) with quantized weights.
Unlike algorithms with active sets and feasible directions, AskewSGD avoids projections or optimization under the entire feasible set.
Experimental results show that the AskewSGD algorithm performs better than or on par with state-of-the-art methods on classical benchmarks.
arXiv Detail & Related papers (2022-11-07T18:13:44Z)
- Stochastic Training is Not Necessary for Generalization [57.04880404584737]
It is widely believed that the implicit regularization of stochastic gradient descent (SGD) is fundamental to the impressive generalization behavior we observe in neural networks.
In this work, we demonstrate that non-stochastic full-batch training can achieve strong performance on CIFAR-10 that is on-par with SGD.
arXiv Detail & Related papers (2021-09-29T00:50:00Z)
- Better Training using Weight-Constrained Stochastic Dynamics [0.0]
We employ constraints to control the parameter space of deep neural networks throughout training.
The use of customized, appropriately designed constraints can reduce the vanishing/exploding gradient problem.
We provide a general approach to efficiently incorporate constraints into a gradient Langevin framework.
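One simple way to keep the parameter space controlled inside a gradient Langevin framework is an SGLD step followed by projection onto a per-tensor norm ball; the norm budget r below is an illustrative assumption, not the paper's specific constraint.

```python
import torch

def constrained_sgld_step(params, lr, temperature, r):
    """SGLD update followed by projection onto the constraint ||p|| <= r."""
    with torch.no_grad():
        for p in params:
            noise = torch.randn_like(p) * (2 * lr * temperature) ** 0.5
            p.add_(-lr * p.grad + noise)      # Langevin step
            norm = p.norm()
            if norm > r:
                p.mul_(r / norm)              # project back onto the ball
            p.grad = None
```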
arXiv Detail & Related papers (2021-06-20T14:41:06Z)
- AdaS: Adaptive Scheduling of Stochastic Gradients [50.80697760166045]
We introduce the notions of "knowledge gain" and "mapping condition" and propose a new algorithm called Adaptive Scheduling (AdaS).
Experimentation reveals that, using the derived metrics, AdaS exhibits: (a) faster convergence and superior generalization over existing adaptive learning methods; and (b) lack of dependence on a validation set to determine when to stop training.
arXiv Detail & Related papers (2020-06-11T16:36:31Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
- Large Batch Training Does Not Need Warmup [111.07680619360528]
Training deep neural networks using a large batch size has shown promising results and benefits many real-world applications.
In this paper, we propose a novel Complete Layer-wise Adaptive Rate Scaling (CLARS) algorithm for large-batch training.
Based on our analysis, we bridge the gap and illustrate the theoretical insights for three popular large-batch training techniques.
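The CLARS update itself is not given in the abstract; for context, the layer-wise adaptive rate scaling that this line of work builds on (LARS) scales each layer's step by the trust ratio ||w|| / ||grad||, which can be sketched as follows. This is the standard LARS form, not necessarily CLARS.

```python
import torch

def layerwise_scaled_sgd_step(params, base_lr, eps=1e-8):
    """LARS-style update: per-tensor learning rate scaled by ||w|| / ||g||."""
    with torch.no_grad():
        for p in params:
            if p.grad is None:
                continue
            trust = p.norm() / (p.grad.norm() + eps)
            p.add_(p.grad, alpha=-(base_lr * float(trust)))
            p.grad = None
```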
arXiv Detail & Related papers (2020-02-04T23:03:12Z)