Privately Fine-Tuning Large Language Models with Differential Privacy
- URL: http://arxiv.org/abs/2210.15042v3
- Date: Mon, 20 Mar 2023 01:33:23 GMT
- Title: Privately Fine-Tuning Large Language Models with Differential Privacy
- Authors: Rouzbeh Behnia, Mohammadreza Ebrahimi, Jason Pacheco, Balaji Padmanabhan
- Abstract summary: Pre-trained Large Language Models (LLMs) are an integral part of modern AI that have led to breakthrough performances in complex AI tasks.
Differential privacy (DP) provides a rigorous framework that allows adding noise in the process of training or fine-tuning LLMs.
We present EW-Tune, a DP framework for fine-tuning LLMs based on the Edgeworth accountant, with finite-sample privacy guarantees.
- Score: 10.485556506301549
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pre-trained Large Language Models (LLMs) are an integral part of modern AI
that have led to breakthrough performances in complex AI tasks. Major AI
companies with expensive infrastructure are able to develop and train these
large models, with millions to billions of parameters, from scratch. Third
parties, researchers, and practitioners are increasingly adopting these
pre-trained models and fine-tuning them on their private data to accomplish
their downstream AI tasks. However, it has been shown that an adversary can
extract/reconstruct the exact training samples from these LLMs, which can lead
to revealing personally identifiable information. The issue has raised deep
concerns about the privacy of LLMs. Differential privacy (DP) provides a
rigorous framework that allows adding noise in the process of training or
fine-tuning LLMs such that extracting the training data becomes infeasible
(i.e., with a cryptographically small success probability). While the
theoretical privacy guarantees offered in most extant studies assume learning
models from scratch through many training iterations in an asymptotic setting,
this assumption does not hold in fine-tuning scenarios in which the number of
training iterations is significantly smaller. To address the gap, we present
EW-Tune, a DP framework for fine-tuning LLMs based on the Edgeworth accountant with
finite-sample privacy guarantees. Our results across four well-established
natural language understanding (NLU) tasks show that while EW-Tune adds privacy
guarantees to the LLM fine-tuning process, it directly contributes to decreasing
the induced noise by up to 5.6% and improves state-of-the-art LLM
performance by up to 1.1% across all NLU tasks. We have open-sourced our
implementations for wide adoption and public testing purposes.
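For context, frameworks in this space typically build on DP-SGD-style noisy training: each example's gradient is clipped to a norm bound and Gaussian noise calibrated to that bound is added before the update. The sketch below illustrates that generic step in NumPy; the hyperparameters (clip_norm, noise_multiplier, lr) are illustrative assumptions, and the Edgeworth accountant itself, which is EW-Tune's actual contribution, is not shown.

```python
import numpy as np

def dp_sgd_step(params, per_sample_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.01, rng=None):
    """One DP-SGD-style update: clip each example's gradient to clip_norm,
    average, then add Gaussian noise scaled to the clipping bound so that
    no single example can dominate the update."""
    rng = rng or np.random.default_rng(0)
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_sample_grads]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_sample_grads),
                       size=avg.shape)
    return params - lr * (avg + noise)

# Toy usage: three per-example gradients for a 4-parameter model.
grads = list(np.random.default_rng(1).normal(size=(3, 4)))
params = dp_sgd_step(np.zeros(4), grads)
```

A privacy accountant then converts the number of such noisy steps into an (ε, δ) guarantee; an Edgeworth-expansion-based accountant stays accurate when that number is small, which is what permits the noise reduction reported above for fine-tuning.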
Related papers
- Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration [90.41908331897639]
Large language models (LLMs) have significantly benefited from training on diverse, high-quality task-specific data.
We present a novel approach, ReverseGen, designed to automatically generate effective training samples.
arXiv Detail & Related papers (2024-10-22T06:43:28Z) - MoExtend: Tuning New Experts for Modality and Task Extension [61.29100693866109]
MoExtend is an effective framework designed to streamline the modality adaptation and extension of Mixture-of-Experts (MoE) models.
MoExtend seamlessly integrates new experts into pre-trained MoE models, endowing them with novel knowledge without the need to tune pretrained models.
arXiv Detail & Related papers (2024-08-07T02:28:37Z) - SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning [70.21358720599821]
Large language models (LLMs) hold the promise of solving diverse tasks when provided with appropriate natural language prompts.
We propose SELF-GUIDE, a multi-stage mechanism in which we synthesize task-specific input-output pairs from the student LLM.
We report an absolute improvement of approximately 15% for classification tasks and 18% for generation tasks in the benchmark's metrics.
arXiv Detail & Related papers (2024-07-16T04:41:58Z) - Locally Differentially Private In-Context Learning [8.659575019965152]
Large pretrained language models (LLMs) have shown surprising In-Context Learning (ICL) ability.
This paper proposes a locally differentially private framework of in-context learning (LDP-ICL).
Considering the mechanisms of in-context learning in Transformers by gradient descent, we provide an analysis of the trade-off between privacy and utility in such LDP-ICL.
arXiv Detail & Related papers (2024-05-07T06:05:43Z) - Advancing the Robustness of Large Language Models through Self-Denoised Smoothing [50.54276872204319]
- Advancing the Robustness of Large Language Models through Self-Denoised Smoothing [50.54276872204319]
Large language models (LLMs) have achieved significant success, but their vulnerability to adversarial perturbations has raised considerable concerns.
We propose to leverage the multitasking nature of LLMs to first denoise the noisy inputs and then to make predictions based on these denoised versions.
Unlike previous denoised smoothing techniques in computer vision, which require training a separate denoising model, our method offers significantly better efficiency and flexibility.
arXiv Detail & Related papers (2024-04-18T15:47:00Z) - Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models [52.98743860365194]
- Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models [52.98743860365194]
We propose a new fine-tuning method called Self-Play fIne-tuNing (SPIN).
At the heart of SPIN lies a self-play mechanism, where the LLM refines its capability by playing against instances of itself.
This sheds light on the promise of self-play, enabling the achievement of human-level performance in LLMs without the need for expert opponents.
arXiv Detail & Related papers (2024-01-02T18:53:13Z) - TRACE: A Comprehensive Benchmark for Continual Learning in Large
Language Models [52.734140807634624]
Aligned large language models (LLMs) demonstrate exceptional capabilities in task-solving, following instructions, and ensuring safety.
Existing continual learning benchmarks lack sufficient challenge for leading aligned LLMs.
We introduce TRACE, a novel benchmark designed to evaluate continual learning in LLMs.
arXiv Detail & Related papers (2023-10-10T16:38:49Z) - Large Language Models as Data Preprocessors [9.99065004972981]
Large Language Models (LLMs) have marked a significant advancement in artificial intelligence.
This study explores their potential in data preprocessing, a critical stage in data mining and analytics applications.
We propose an LLM-based framework for data preprocessing, which integrates cutting-edge prompt engineering techniques.
arXiv Detail & Related papers (2023-08-30T23:28:43Z) - Small Language Models Improve Giants by Rewriting Their Outputs [18.025736098795296]
We tackle the problem of leveraging training data to improve the performance of large language models (LLMs) without fine-tuning.
We create a pool of candidates from the LLM through few-shot prompting and we employ a compact model, the LM-corrector (LMCor), specifically trained to merge these candidates to produce an enhanced output.
Experiments on four natural language generation tasks demonstrate that even a small LMCor model (250M) substantially improves the few-shot performance of LLMs (62B), matching and even outperforming standard fine-tuning.
arXiv Detail & Related papers (2023-05-22T22:07:50Z) - Differentially Private Decoding in Large Language Models [14.221692239892207]
- Differentially Private Decoding in Large Language Models [14.221692239892207]
We propose a simple, easy to interpret, and computationally lightweight perturbation mechanism to be applied to an already trained model at the decoding stage.
Our perturbation mechanism is model-agnostic and can be used in conjunction with any Large Language Model.
arXiv Detail & Related papers (2022-05-26T20:50:58Z)