PAC-tuning: Fine-tuning Pretrained Language Models with PAC-driven
Perturbed Gradient Descent
- URL: http://arxiv.org/abs/2310.17588v1
- Date: Thu, 26 Oct 2023 17:09:13 GMT
- Title: PAC-tuning: Fine-tuning Pretrained Language Models with PAC-driven
Perturbed Gradient Descent
- Authors: Guangliang Liu, Zhiyu Xue, Xitong Zhang, Kristen Marie Johnson and
Rongrong Wang
- Abstract summary: We propose a two-stage fine-tuning method, PAC-tuning, to address this optimization challenge.
First, based on PAC-Bayes training, PAC-tuning directly minimizes the PAC-Bayes bound to learn a proper parameter distribution.
Second, PAC-tuning injects noise with the variance learned in the first stage into the model parameters during training, resulting in a variant of perturbed gradient descent (PGD).
- Score: 11.866227238721939
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fine-tuning pretrained language models (PLMs) for downstream tasks is a
large-scale optimization problem, in which the choice of the training algorithm
critically determines how well the trained model can generalize to unseen test
data, especially in the context of few-shot learning. To achieve good
generalization performance and avoid overfitting, techniques such as data
augmentation and pruning are often applied. However, adding these
regularizations necessitates heavy tuning of the hyperparameters of
optimization algorithms, such as the popular Adam optimizer. In this paper, we
propose a two-stage fine-tuning method, PAC-tuning, to address this
optimization challenge. First, based on PAC-Bayes training, PAC-tuning directly
minimizes the PAC-Bayes generalization bound to learn a proper parameter
distribution. Second, PAC-tuning modifies the gradient by injecting noise with
the variance learned in the first stage into the model parameters during
training, resulting in a variant of perturbed gradient descent (PGD). In the
past, the few-shot scenario posed difficulties for PAC-Bayes training because
the PAC-Bayes bound, when applied to large models with limited training data,
might not be tight. Our experimental results across 5 GLUE benchmark tasks
demonstrate that PAC-tuning successfully handles the challenges of fine-tuning
tasks and outperforms strong baseline methods by a visible margin, further
confirming the potential to apply PAC training for any other settings where the
Adam optimizer is currently used for training.
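The two-stage procedure described in the abstract can be sketched on a toy problem as follows. Everything here is an illustrative assumption, not the authors' implementation: the quadratic regression objective, the learning rates, and the simplified PAC-Bayes-style objective (expected loss under a Gaussian posterior plus a KL-like complexity penalty toward a unit-variance prior) stand in for the actual bound minimized in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w, X, y):
    # mean squared error on the toy regression task
    return 0.5 * np.mean((X @ w - y) ** 2)

def grad(w, X, y):
    return X.T @ (X @ w - y) / len(y)

# Toy regression data (stand-in for a fine-tuning task)
X = rng.normal(size=(64, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=64)

w = np.zeros(5)
log_sigma = np.full(5, -2.0)  # per-parameter log std of the Gaussian posterior

# Stage 1: jointly descend on the weights and noise scales using a
# simplified PAC-Bayes-style objective: the sampled training loss plus a
# KL-like penalty pulling sigma toward a unit-variance prior.
lam = 1e-3
for _ in range(200):
    sigma = np.exp(log_sigma)
    eps = rng.normal(size=5)
    w_sample = w + sigma * eps          # reparameterized posterior sample
    g = grad(w_sample, X, y)
    w -= 0.1 * g
    # d(sampled loss)/d(log_sigma) via the reparameterization trick,
    # plus the derivative of the complexity penalty
    g_sigma = g * eps * sigma + lam * (sigma ** 2 - 1.0)
    log_sigma -= 0.05 * g_sigma

# Stage 2: perturbed gradient descent, injecting noise with the learned
# per-parameter variance into the weights before each gradient step.
sigma = np.exp(log_sigma)
for _ in range(200):
    w_noisy = w + sigma * rng.normal(size=5)
    w -= 0.1 * grad(w_noisy, X, y)

final_loss = loss(w, X, y)
```

On this convex toy objective the noise mainly adds jitter around the minimizer; the paper's point is that on non-convex fine-tuning landscapes, noise with a variance learned from the PAC-Bayes bound acts as a principled regularizer.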
Related papers
- PACE: marrying generalization in PArameter-efficient fine-tuning with Consistency rEgularization [35.922096876707975]
PACE marries generalization in PArameter-efficient fine-tuning with Consistency rEgularization.
We show that PACE not only implicitly regularizes gradients for enhanced generalization, but also implicitly aligns the fine-tuned and pre-trained models to retain knowledge.
PACE outperforms existing PEFT methods in four visual adaptation tasks: VTAB-1k, FGVC, few-shot learning and domain adaptation.
arXiv Detail & Related papers (2024-09-25T17:56:00Z) - Denoising Pre-Training and Customized Prompt Learning for Efficient Multi-Behavior Sequential Recommendation [69.60321475454843]
We propose DPCPL, the first pre-training and prompt-tuning paradigm tailored for Multi-Behavior Sequential Recommendation.
In the pre-training stage, we propose a novel Efficient Behavior Miner (EBM) to filter out the noise at multiple time scales.
Subsequently, we propose to tune the pre-trained model in a highly efficient manner with the proposed Customized Prompt Learning (CPL) module.
arXiv Detail & Related papers (2024-08-21T06:48:38Z) - Sparse is Enough in Fine-tuning Pre-trained Large Language Models [98.46493578509039]
We propose a gradient-based sparse fine-tuning algorithm, named Sparse Increment Fine-Tuning (SIFT).
We validate its effectiveness on a range of tasks including the GLUE Benchmark and Instruction-tuning.
arXiv Detail & Related papers (2023-12-19T06:06:30Z) - Improving Generalization of Complex Models under Unbounded Loss Using PAC-Bayes Bounds [10.94126149188336]
PAC-Bayes learning theory has focused extensively on establishing tight upper bounds for test errors.
A recently proposed training procedure called PAC-Bayes training, updates the model toward minimizing these bounds.
While this approach is theoretically sound, in practice it has not achieved a test error as low as that obtained by empirical risk minimization (ERM).
We introduce a new PAC-Bayes training algorithm with improved performance and reduced reliance on prior tuning.
arXiv Detail & Related papers (2023-05-30T17:31:25Z) - Scalable PAC-Bayesian Meta-Learning via the PAC-Optimal Hyper-Posterior:
From Theory to Practice [54.03076395748459]
A central question in the meta-learning literature is how to regularize to ensure generalization to unseen tasks.
We present a generalization bound for meta-learning, which was first derived by Rothfuss et al.
We provide a theoretical analysis and empirical case study under which conditions and to what extent these guarantees for meta-learning improve upon PAC-Bayesian per-task learning bounds.
arXiv Detail & Related papers (2022-11-14T08:51:04Z) - Large Language Models Can Be Strong Differentially Private Learners [70.0317718115406]
Differentially Private (DP) learning has seen limited success for building large deep learning models of text.
We show that this performance drop can be mitigated with the use of large pretrained models.
We propose a memory saving technique that allows clipping in DP-SGD to run without instantiating per-example gradients.
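The per-example clipping that this paper makes memory-efficient can be illustrated with a naive implementation on a toy linear model. The toy task, variable names, and hyperparameters below are assumptions for illustration; the point is that naive DP-SGD materializes one gradient row per example before clipping, which is exactly the memory cost the paper's technique avoids.

```python
import numpy as np

rng = np.random.default_rng(1)
C, noise_mult, lr = 1.0, 1.1, 0.1   # clip norm, noise multiplier, step size

# Toy linear regression data
X = rng.normal(size=(32, 4))
y = X @ rng.normal(size=4)
w = np.zeros(4)

for _ in range(100):
    residual = X @ w - y
    # Naive DP-SGD: one gradient row per example is held in memory here.
    per_example_grads = residual[:, None] * X
    norms = np.linalg.norm(per_example_grads, axis=1)
    clip = np.minimum(1.0, C / np.maximum(norms, 1e-12))
    clipped = per_example_grads * clip[:, None]   # clip each row to norm <= C
    g = clipped.mean(axis=0)
    # Gaussian noise calibrated to the clip norm and batch size
    g += (noise_mult * C / len(X)) * rng.normal(size=4)
    w -= lr * g

final_loss = 0.5 * np.mean((X @ w - y) ** 2)
```

The `per_example_grads` array is what grows prohibitively large for big models; ghost-clipping-style techniques compute the per-example norms without ever forming it.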
arXiv Detail & Related papers (2021-10-12T01:45:27Z) - Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z) - PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees [77.67258935234403]
We provide a theoretical analysis using the PAC-Bayesian framework and derive novel generalization bounds for meta-learning.
We develop a class of PAC-optimal meta-learning algorithms with performance guarantees and a principled meta-level regularization.
arXiv Detail & Related papers (2020-02-13T15:01:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.