Effectiveness of Data Augmentation for Parameter Efficient Tuning with Limited Data
- URL: http://arxiv.org/abs/2303.02577v2
- Date: Thu, 29 Jun 2023 06:13:01 GMT
- Title: Effectiveness of Data Augmentation for Parameter Efficient Tuning with Limited Data
- Authors: Stephen Obadinma, Hongyu Guo, Xiaodan Zhu
- Abstract summary: We show that data augmentation can be used to boost the performance of P-tuning and LoRA models.
We also show that P-tuning has a more limited ability to separate the sentence embeddings of different classes of augmented data.
- Score: 30.869230680173825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work has demonstrated that using parameter efficient tuning techniques
such as prefix tuning (or P-tuning) on pretrained language models can yield
performance that is comparable or superior to fine-tuning while dramatically
reducing trainable parameters. Nevertheless, the effectiveness of such methods
under the context of data augmentation, a common strategy to improve learning
under low data regimes, has not been fully explored. In this paper, we examine
the effectiveness of several popular task-agnostic data augmentation
techniques, i.e., EDA, Back Translation, and Mixup, when using two general
parameter efficient tuning methods, P-tuning v2 and LoRA, under data scarcity.
We show that data augmentation can be used to boost the performance of P-tuning
and LoRA models, but the effectiveness of each technique varies and certain
methods can lead to a notable degradation in performance, particularly when
using larger models and on harder tasks. We further analyze the sentence
representations of P-tuning compared to fine-tuning to help understand the
above behaviour, and reveal how P-tuning generally presents a more limited
ability to separate the sentence embeddings from different classes of augmented
data. In addition, it displays poorer performance on heavily altered data.
However, we demonstrate that adding a simple contrastive loss function can help mitigate such issues for prefix tuning, resulting in sizable improvements to performance on augmented data.
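To make the parameter-efficient tuning setup concrete, below is a minimal PyTorch sketch of a LoRA-style low-rank adapter on a single linear layer: the pretrained weight is frozen and only the two small factor matrices are trained. The rank, scaling, and dimensions here are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal LoRA-style adapter: the pretrained weight stays frozen and only the
# low-rank factors A and B are trained (illustrative sketch, not the paper's setup).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, rank=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # freeze pretrained weight
        self.base.bias.requires_grad_(False)     # freeze pretrained bias
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # y = W x + scaling * (B A) x; only lora_A and lora_B receive gradients
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(768, 768)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")  # ~12k vs ~590k for the full layer
```

P-tuning v2 is analogous in spirit: trainable prefix vectors are prepended to the keys and values at every layer while the backbone parameters stay frozen.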
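The contrastive mitigation mentioned at the end of the abstract can be illustrated with a supervised-contrastive-style term that pulls together pooled sentence embeddings of same-class (e.g., original and augmented) examples. This is a hedged sketch of the general idea; the temperature, pooling, and loss weighting are assumptions rather than the paper's exact formulation.

```python
# Illustrative supervised contrastive term over pooled sentence embeddings of
# original and augmented examples (a sketch, not the paper's exact loss).
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """embeddings: (N, d) pooled sentence embeddings; labels: (N,) class ids."""
    z = F.normalize(embeddings, dim=-1)
    sim = z @ z.T / temperature                        # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))    # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)    # avoid -inf * 0 below
    positives = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = positives.sum(dim=1).clamp(min=1)
    # mean log-probability of same-class pairs, averaged over the batch
    return -(log_prob * positives).sum(dim=1).div(pos_counts).mean()

# Hypothetical usage while training only the prefix/LoRA parameters; lambda_c is
# an assumed weighting, not a value from the paper:
# loss = task_loss + lambda_c * supervised_contrastive_loss(pooled, labels)
```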
Related papers
- DELIFT: Data Efficient Language model Instruction Fine Tuning [13.538140114667772]
We introduce DELIFT, a novel algorithm that systematically optimizes data selection across the three key stages of fine-tuning.
Experiments across various tasks and model scales demonstrate that DELIFT can reduce the fine-tuning data size by up to 70% without compromising performance.
arXiv Detail & Related papers (2024-11-07T04:38:29Z)
- Visual Fourier Prompt Tuning [63.66866445034855]
We propose the Visual Fourier Prompt Tuning (VFPT) method as a general and effective solution for adapting large-scale transformer-based models.
Our approach incorporates the Fast Fourier Transform into prompt embeddings and harmoniously considers both spatial and frequency domain information.
Our results demonstrate that our approach outperforms current state-of-the-art baselines on two benchmarks.
arXiv Detail & Related papers (2024-11-02T18:18:35Z)
- Data Augmentation for Traffic Classification [54.92823760790628]
Data Augmentation (DA) is a technique widely adopted in Computer Vision (CV) and Natural Language Processing (NLP) tasks.
DA has struggled to gain traction in networking contexts, particularly in Traffic Classification (TC) tasks.
arXiv Detail & Related papers (2024-01-19T15:25:09Z)
- Efficient Grammatical Error Correction Via Multi-Task Training and Optimized Training Schedule [55.08778142798106]
We propose auxiliary tasks that exploit the alignment between the original and corrected sentences.
We formulate each task as a sequence-to-sequence problem and perform multi-task training.
We find that the order of the datasets used for training, and even of individual instances within a dataset, may have important effects on the final performance.
arXiv Detail & Related papers (2023-11-20T14:50:12Z)
- SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models [28.764782216513037]
Federated Learning (FL) enables fine-tuning to benefit from the distributed and private data held by FL edge clients.
We propose a method called SLoRA, which overcomes the key limitations of LoRA in highly heterogeneous data scenarios.
Our experimental results demonstrate that SLoRA achieves performance comparable to full fine-tuning.
arXiv Detail & Related papers (2023-08-12T10:33:57Z)
- E^2VPT: An Effective and Efficient Approach for Visual Prompt Tuning [55.50908600818483]
Fine-tuning large-scale pretrained vision models for new tasks has become increasingly parameter-intensive.
We propose an Effective and Efficient Visual Prompt Tuning (E2VPT) approach for large-scale transformer-based model adaptation.
Our approach outperforms several state-of-the-art baselines on two benchmarks.
arXiv Detail & Related papers (2023-07-25T19:03:21Z)
- An Empirical Analysis of Parameter-Efficient Methods for Debiasing Pre-Trained Language Models [55.14405248920852]
We conduct experiments with prefix tuning, prompt tuning, and adapter tuning on different language models and bias types to evaluate their debiasing performance.
We find that the parameter-efficient methods are effective in mitigating gender bias, where adapter tuning is consistently the most effective.
We also find that prompt tuning is more suitable for GPT-2 than for BERT, and that the methods are less effective at mitigating racial and religious bias.
arXiv Detail & Related papers (2023-06-06T23:56:18Z)
- Data Augmentation Strategies for Improving Sequential Recommender Systems [7.986899327513767]
Sequential recommender systems have recently achieved significant performance improvements with the exploitation of deep learning (DL) based methods.
We propose a set of data augmentation strategies, all of which transform the original item sequences by directly corrupting them.
Experiments on the latest DL-based model show that applying data augmentation can help the model generalize better.
arXiv Detail & Related papers (2022-03-26T09:58:14Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.