Constrained Parameter Regularization
- URL: http://arxiv.org/abs/2311.09058v2
- Date: Wed, 6 Dec 2023 14:20:53 GMT
- Title: Constrained Parameter Regularization
- Authors: Jörg K.H. Franke, Michael Hefenbrock, Gregor Koehler, Frank Hutter
- Abstract summary: Regularization is a critical component in deep learning training.
We present constrained parameter regularization (CPR) as an alternative to traditional weight decay.
CPR counteracts the effects of grokking and consistently matches or outperforms traditional weight decay.
- Score: 41.055148686036176
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Regularization is a critical component in deep learning training, with weight
decay being a commonly used approach. It applies a constant penalty coefficient
uniformly across all parameters. This may be unnecessarily restrictive for some
parameters, while insufficiently restricting others. To dynamically adjust
penalty coefficients for different parameter groups, we present constrained
parameter regularization (CPR) as an alternative to traditional weight decay.
Instead of applying a single constant penalty to all parameters, we enforce an
upper bound on a statistical measure (e.g., the L$_2$-norm) of parameter
groups. Consequently, learning becomes a constrained optimization problem, which
we address with an adaptation of the augmented Lagrangian method. CPR only
requires two hyperparameters and incurs no measurable runtime overhead.
Additionally, we propose a simple but efficient mechanism to adapt the upper
bounds during the optimization. We provide empirical evidence of CPR's efficacy
in experiments on the "grokking" phenomenon, computer vision, and language
modeling tasks. Our results demonstrate that CPR counteracts the effects of
grokking and consistently matches or outperforms traditional weight decay.
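To make the mechanism described above concrete, the following is a minimal sketch of how an upper bound on the squared L2-norm of a parameter group could be enforced with an augmented-Lagrangian-style multiplier update. The function name, the multiplier update rule, and the step sizes are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of the CPR idea: keep the squared L2-norm of each parameter
# group below a bound kappa, with one Lagrange multiplier per group replacing
# the constant weight-decay coefficient. Not the authors' code.
import torch


def cpr_step(param_groups, lambdas, kappas, mu=1.0, step_size=1e-3):
    """Apply one regularization step after the usual optimizer update.

    param_groups : list of lists of torch.nn.Parameter (one inner list per group)
    lambdas      : list of floats, one Lagrange multiplier per group
    kappas       : list of floats, one upper bound per group
    mu           : multiplier update rate of the augmented Lagrangian (assumed)
    step_size    : scale of the pull toward the feasible region (assumed)
    """
    for g, (params, kappa) in enumerate(zip(param_groups, kappas)):
        # Statistical measure of the group; the abstract names the L2-norm as one choice.
        sq_norm = sum(p.detach().pow(2).sum().item() for p in params)
        violation = sq_norm - kappa
        # The multiplier grows while the bound is exceeded and is clipped at zero,
        # so the penalty strength adapts per group instead of being a global constant.
        lambdas[g] = max(0.0, lambdas[g] + mu * violation)
        # Gradient of lambda * (||theta||^2 - kappa) w.r.t. theta is 2 * lambda * theta.
        with torch.no_grad():
            for p in params:
                p.add_(p, alpha=-2.0 * step_size * lambdas[g])
    return lambdas
```

One plausible mapping of the "two hyperparameters" mentioned in the abstract is the multiplier update rate and the choice of the initial bounds; the paper's exact update and its bound-adaptation mechanism may differ from this sketch.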
Related papers
- Scaling Exponents Across Parameterizations and Optimizers [94.54718325264218]
We propose a new perspective on parameterization by investigating a key assumption in prior work.
Our empirical investigation includes tens of thousands of models trained with all combinations of three optimizers and four parameterizations.
We find that the best learning rate scaling prescription would often have been excluded by the assumptions in prior work.
arXiv Detail & Related papers (2024-07-08T12:32:51Z) - ETHER: Efficient Finetuning of Large-Scale Models with Hyperplane Reflections [59.839926875976225]
We propose the ETHER transformation family, which performs Efficient fineTuning via HypErplane Reflections.
In particular, we introduce ETHER and its relaxation ETHER+, which match or outperform existing PEFT methods with significantly fewer parameters.
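As a rough illustration of finetuning via hyperplane reflections, the sketch below applies a trainable Householder reflection (I - 2uu^T) to a frozen pretrained weight matrix, so only a single d-dimensional vector per layer is learned. The module name and initialization are assumptions; ETHER and ETHER+ may differ in the details.

```python
# Illustrative hyperplane-reflection finetuning of a frozen linear layer.
import torch
import torch.nn as nn


class ReflectionFinetunedLinear(nn.Module):
    def __init__(self, pretrained_linear: nn.Linear):
        super().__init__()
        self.weight = pretrained_linear.weight          # frozen pretrained weight
        self.weight.requires_grad_(False)
        self.bias = pretrained_linear.bias
        d = self.weight.shape[0]
        # One trainable d-dimensional vector defines the reflection hyperplane.
        self.u = nn.Parameter(torch.randn(d) / d ** 0.5)

    def forward(self, x):
        u = self.u / (self.u.norm() + 1e-8)             # unit normal of the hyperplane
        w = self.weight - 2.0 * torch.outer(u, u) @ self.weight   # (I - 2uu^T) W
        return nn.functional.linear(x, w, self.bias)
```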
arXiv Detail & Related papers (2024-05-30T17:26:02Z) - C-Learner: Constrained Learning for Causal Inference and Semiparametric Statistics [5.395560682099634]
We present a novel correction method that solves for the best plug-in estimator under the constraint that the first-order error of the estimator with respect to the nuisance parameter estimate is zero.
Our semiparametric inference approach, which we call the "C-Learner", can be implemented with modern machine learning methods such as neural networks and tree ensembles.
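Schematically, and with generic symbols that are not taken from the paper, a constrained plug-in estimator of this kind can be written as an empirical-risk problem with a first-order-correction constraint:

```latex
% Generic schematic (not the paper's notation): minimize the usual empirical
% loss subject to the first-order correction term, evaluated at the estimated
% nuisance parameter \hat{\eta}, being zero.
\hat{\theta} \in \arg\min_{\theta} \; \mathcal{L}_n(\theta)
\quad \text{subject to} \quad
\frac{1}{n}\sum_{i=1}^{n} \phi\bigl(Z_i;\, \theta,\, \hat{\eta}\bigr) = 0
```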
arXiv Detail & Related papers (2024-05-15T16:38:28Z) - Towards Accurate Post-training Quantization for Reparameterized Models [6.158896686945439]
Current Post-training Quantization (PTQ) methods often lead to significant accuracy degradation.
This is primarily caused by channel-specific and sample-specific outliers.
We propose RepAPQ, a novel framework that preserves the accuracy of quantized reparameterized models.
arXiv Detail & Related papers (2024-02-25T15:42:12Z) - Sparse is Enough in Fine-tuning Pre-trained Large Language Models [98.46493578509039]
We propose a gradient-based sparse fine-tuning algorithm, named Sparse Increment Fine-Tuning (SIFT).
We validate its effectiveness on a range of tasks including the GLUE Benchmark and Instruction-tuning.
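A minimal reading of gradient-based sparse fine-tuning is sketched below: only the entries with the largest gradient magnitude receive an update, while the remaining pretrained weights stay untouched. The top-k selection rule and the hyperparameters are assumptions for illustration, not SIFT's exact algorithm.

```python
# Illustrative sparse-increment update: touch only the largest-gradient entries.
import torch


def sparse_increment_update(param: torch.Tensor, grad: torch.Tensor,
                            lr: float = 1e-4, density: float = 0.01):
    """Apply an SGD-style update to only the top `density` fraction of entries."""
    k = max(1, int(density * grad.numel()))
    _, idx = torch.topk(grad.abs().flatten(), k)   # indices of largest |grad| entries
    update = torch.zeros_like(grad).flatten()
    update[idx] = -lr * grad.flatten()[idx]
    param.data.add_(update.view_as(param))
```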
arXiv Detail & Related papers (2023-12-19T06:06:30Z) - IncreLoRA: Incremental Parameter Allocation Method for Parameter-Efficient Fine-tuning [15.964205804768163]
IncreLoRA is an incremental parameter allocation method that adaptively adds trainable parameters during training.
We conduct extensive experiments on GLUE to demonstrate the effectiveness of IncreLoRA.
arXiv Detail & Related papers (2023-08-23T10:08:10Z) - AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning [143.23123791557245]
Fine-tuning large pre-trained language models on downstream tasks has become an important paradigm in NLP.
We propose AdaLoRA, which adaptively allocates the parameter budget among weight matrices according to their importance score.
We conduct extensive experiments with several pre-trained models on natural language processing, question answering, and natural language generation to validate the effectiveness of AdaLoRA.
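The core allocation idea, distributing a fixed total rank budget across weight matrices according to a per-matrix importance score, can be sketched as follows; the proportional rule and the example scores are illustrative stand-ins for AdaLoRA's actual sensitivity-based criterion.

```python
# Illustrative budget allocation: more important matrices get larger LoRA ranks.
def allocate_ranks(importance: dict, total_budget: int, min_rank: int = 1) -> dict:
    """importance maps module name -> nonnegative score; returns module name -> rank."""
    total = sum(importance.values()) or 1.0
    return {name: max(min_rank, round(total_budget * score / total))
            for name, score in importance.items()}


# Hypothetical example: attention projections judged more important than the FFN.
print(allocate_ranks({"attn.q": 3.0, "attn.v": 1.0, "ffn.w1": 0.5}, total_budget=16))
```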
arXiv Detail & Related papers (2023-03-18T22:36:25Z) - META-STORM: Generalized Fully-Adaptive Variance Reduced SGD for Unbounded Functions [23.746620619512573]
Recent work overcomes the need to compute gradients over "megabatches".
The proposed fully-adaptive variance-reduced methods are validated on deep learning tasks, where they perform competitively with widely used optimizers.
arXiv Detail & Related papers (2022-09-29T15:12:54Z) - Adaptively Calibrated Critic Estimates for Deep Reinforcement Learning [36.643572071860554]
We propose a general method called Adaptively Calibrated Critics (ACC).
ACC uses the most recent high-variance but unbiased on-policy rollouts to alleviate the bias of the low-variance temporal difference targets.
We show that ACC is quite general by further applying it to TD3 and showing an improved performance also in this setting.
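Read naively, the calibration described above could look like the sketch below: a bias-correction coefficient is nudged up or down depending on whether the critic over- or under-estimates the returns observed in the most recent on-policy rollouts. The coefficient, its bounds, and the step size are all assumptions, not the paper's exact rule.

```python
# Illustrative critic calibration from recent on-policy rollout returns.
def update_calibration(beta, rollout_returns, critic_estimates,
                       step=0.05, beta_min=0.0, beta_max=1.0):
    """Increase correction strength when the critic overestimates, decrease otherwise."""
    gap = sum(q - g for q, g in zip(critic_estimates, rollout_returns)) / len(rollout_returns)
    beta += step if gap > 0 else -step
    return min(beta_max, max(beta_min, beta))
```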
arXiv Detail & Related papers (2021-11-24T18:07:33Z) - Rethinking the Hyperparameters for Fine-tuning [78.15505286781293]
Fine-tuning from pre-trained ImageNet models has become the de-facto standard for various computer vision tasks.
Current practices for fine-tuning typically involve selecting an ad-hoc choice of hyperparameters.
This paper re-examines several common practices of setting hyperparameters for fine-tuning.
arXiv Detail & Related papers (2020-02-19T18:59:52Z)