Large Scale Private Learning via Low-rank Reparametrization
- URL: http://arxiv.org/abs/2106.09352v1
- Date: Thu, 17 Jun 2021 10:14:43 GMT
- Title: Large Scale Private Learning via Low-rank Reparametrization
- Authors: Da Yu, Huishuai Zhang, Wei Chen, Jian Yin, Tie-Yan Liu
- Abstract summary: We propose a reparametrization scheme to address the challenges of applying differentially private SGD on large neural networks.
We are the first to apply differential privacy to the BERT model, achieving an average accuracy of $83.9\%$ on four downstream tasks.
- Score: 77.38947817228656
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a reparametrization scheme to address the challenges of applying
differentially private SGD on large neural networks, which are 1) the huge
memory cost of storing individual gradients and 2) the notorious dimensional
dependence of the added noise. Specifically, we reparametrize each weight
matrix with two \emph{gradient-carrier} matrices of small dimension and a
\emph{residual weight} matrix. We argue that such reparametrization keeps the
forward/backward process unchanged while enabling us to compute the projected
gradient without computing the gradient itself. To learn with differential
privacy, we design \emph{reparametrized gradient perturbation (RGP)} that
perturbs the gradients on gradient-carrier matrices and reconstructs an update
for the original weight from the noisy gradients. Importantly, we use
historical updates to find the gradient-carrier matrices, whose optimality is
rigorously justified under linear regression and empirically verified with deep
learning tasks. RGP significantly reduces the memory cost and improves the
utility. For example, we are the first to apply differential privacy to the
BERT model and achieve an average accuracy of $83.9\%$ on four downstream
tasks with $\epsilon=8$, which is within a $5\%$ loss relative to the
non-private baseline but enjoys a much lower privacy leakage risk.
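To make the mechanism concrete, here is a minimal NumPy sketch of the reparametrized gradient perturbation idea as described above: per-example gradients are projected onto low-rank gradient-carrier matrices, clipped, perturbed with Gaussian noise, and a weight update is reconstructed from the noisy carrier gradients. The shapes, rank, clipping norm, noise multiplier, and the random choice of carriers are illustrative assumptions; the paper derives the carriers from historical updates and uses its own reconstruction rule.

```python
# Minimal sketch of reparametrized gradient perturbation (RGP); all constants
# below are illustrative assumptions, not the paper's settings.
import numpy as np

rng = np.random.default_rng(0)
d, r, batch = 64, 4, 8          # weight is d x d, carriers have rank r
C, sigma = 1.0, 0.5             # per-example clip norm and noise multiplier

W = rng.standard_normal((d, d)) * 0.02
# Gradient-carrier matrices. The paper derives them from historical updates;
# here random orthonormal factors serve as a stand-in.
L, _ = np.linalg.qr(rng.standard_normal((d, r)))   # d x r
R, _ = np.linalg.qr(rng.standard_normal((d, r)))   # d x r

def carrier_grads(grad_W):
    """Projected gradients on the carriers; the full d x d per-example
    gradient never needs to be stored once these projections are taken."""
    gL = grad_W @ R          # d x r
    gR = L.T @ grad_W        # r x d
    return gL, gR

# Simulate per-example gradients of the original weight (in a real model these
# would come from backprop through the reparametrized layer).
grads_W = rng.standard_normal((batch, d, d))

sum_gL = np.zeros((d, r))
sum_gR = np.zeros((r, d))
for g in grads_W:
    gL, gR = carrier_grads(g)
    norm = np.sqrt(np.sum(gL ** 2) + np.sum(gR ** 2))
    scale = min(1.0, C / (norm + 1e-12))           # per-example clipping
    sum_gL += scale * gL
    sum_gR += scale * gR

# Gaussian mechanism: noise calibrated to the clipping norm.
noisy_gL = sum_gL + sigma * C * rng.standard_normal((d, r))
noisy_gR = sum_gR + sigma * C * rng.standard_normal((r, d))

# Reconstruct a noisy update for the original weight from the carrier
# gradients; the paper uses a more careful combination, this is the simplest.
update = (noisy_gL @ R.T + L @ noisy_gR) / batch
W -= 0.1 * update
```

Only the d x r and r x d carrier gradients are stored per example, which is where the memory saving over vanilla differentially private SGD comes from.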
Related papers
- Winner-Take-All Column Row Sampling for Memory Efficient Adaptation of Language Model [89.8764435351222]
We propose a new family of unbiased estimators called WTA-CRS for matrix multiplication with reduced variance.
Our work provides both theoretical and experimental evidence that, in the context of tuning transformers, our proposed estimators exhibit lower variance compared to existing ones.
arXiv Detail & Related papers (2023-05-24T15:52:08Z)
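For intuition on the entry above, below is a minimal NumPy sketch of plain column-row sampling (CRS), the classical unbiased estimator of a matrix product that WTA-CRS builds on. It is not the winner-take-all estimator from the paper, and the sampling distribution and sizes are illustrative assumptions.

```python
# Plain column-row sampling: an unbiased, lower-memory estimate of A @ B.
import numpy as np

def crs_estimate(A, B, k, rng):
    """Unbiased estimate of A @ B from k sampled column-row pairs."""
    n = A.shape[1]
    # Sampling proportionally to |A[:, i]| * |B[i, :]| is the standard
    # variance-reducing choice for this estimator.
    p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = p / p.sum()
    idx = rng.choice(n, size=k, p=p)
    est = np.zeros((A.shape[0], B.shape[1]))
    for i in idx:
        est += np.outer(A[:, i], B[i, :]) / (k * p[i])
    return est

rng = np.random.default_rng(0)
A, B = rng.standard_normal((32, 256)), rng.standard_normal((256, 16))
approx = crs_estimate(A, B, k=64, rng=rng)
print(np.linalg.norm(approx - A @ B) / np.linalg.norm(A @ B))  # relative error
```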
- Sketchy: Memory-efficient Adaptive Regularization with Frequent Directions [22.09320263962004]
We find the spectra of the Kronecker-factored gradient covariance matrix in deep learning (DL) training tasks are concentrated on a small leading eigenspace.
We describe a generic method for reducing memory and compute requirements of maintaining a matrix preconditioner.
We show extensions of our work to Shampoo, resulting in a method competitive in quality with Shampoo and Adam, yet requiring only sub-linear memory for tracking second moments.
arXiv Detail & Related papers (2023-02-07T21:50:06Z)
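As background for the entry above, the sketch below shows the generic Frequent Directions algorithm (Liberty, 2013) for maintaining a low-rank summary of a stream of gradients, the kind of sub-linear-memory second-moment tracking referred to there. It is a simplified illustration with assumed dimensions, not the paper's Shampoo-style preconditioner.

```python
# Generic Frequent Directions sketch of a gradient stream (simplified).
import numpy as np

class FrequentDirections:
    def __init__(self, dim, sketch_size):
        self.ell = sketch_size
        self.B = np.zeros((2 * sketch_size, dim))   # working buffer
        self.next_row = 0

    def update(self, g):
        """Append one gradient row; when the buffer fills, compress it with a
        shrunken SVD so only about sketch_size directions are retained."""
        if self.next_row == self.B.shape[0]:
            _, s, Vt = np.linalg.svd(self.B, full_matrices=False)
            shrink = np.sqrt(np.maximum(s ** 2 - s[self.ell - 1] ** 2, 0.0))
            self.B = shrink[:, None] * Vt
            self.next_row = self.ell
        self.B[self.next_row] = g
        self.next_row += 1

    def covariance(self):
        """Low-rank approximation of the second moment sum_t g_t g_t^T."""
        return self.B.T @ self.B

rng = np.random.default_rng(0)
fd = FrequentDirections(dim=128, sketch_size=8)
for _ in range(1000):
    fd.update(rng.standard_normal(128))
cov_approx = fd.covariance()   # 128 x 128 matrix of rank at most 16
```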
- M22: A Communication-Efficient Algorithm for Federated Learning Inspired by Rate-Distortion [19.862336286338564]
In federated learning, model updates must be compressed so as to minimize the loss in accuracy resulting from a communication constraint.
This paper proposes the \emph{``$\bf M$-magnitude weighted $L_{\bf 2}$ distortion + $\bf 2$ degrees of freedom''} (M22) algorithm, a rate-distortion inspired approach to gradient compression.
arXiv Detail & Related papers (2023-01-23T04:40:01Z)
- Implicit Bias in Leaky ReLU Networks Trained on High-Dimensional Data [63.34506218832164]
In this work, we investigate the implicit bias of gradient flow and gradient descent in two-layer fully-connected neural networks with leaky ReLU activations.
For gradient flow, we leverage recent work on the implicit bias of homogeneous neural networks to show that, asymptotically, gradient flow produces a neural network with rank at most two.
For gradient descent, provided the random initialization variance is small enough, we show that a single step of gradient descent suffices to drastically reduce the rank of the network, and that the rank remains small throughout training.
arXiv Detail & Related papers (2022-10-13T15:09:54Z)
- Scaling Forward Gradient With Local Losses [117.22685584919756]
Forward learning is a biologically plausible alternative to backprop for learning deep neural networks.
We show that it is possible to substantially reduce the variance of the forward gradient by applying perturbations to activations rather than weights.
Our approach matches backprop on MNIST and CIFAR-10 and significantly outperforms previously proposed backprop-free algorithms on ImageNet.
arXiv Detail & Related papers (2022-10-07T03:52:27Z)
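The sketch below illustrates, for a single linear layer and a quadratic loss, the idea of perturbing activations rather than weights: a forward (directional-derivative) gradient estimate is formed in activation space and then mapped back to the weights through the known relation z = x @ W, so the random perturbation lives in the smaller activation space. The model, loss, and sample counts are illustrative assumptions, and the directional derivative is computed from the closed-form gradient here rather than by forward-mode autodiff.

```python
# Forward-gradient estimate with activation-space perturbations (toy example).
import numpy as np

rng = np.random.default_rng(0)
batch, d_in, d_out = 32, 512, 16
x = rng.standard_normal((batch, d_in))
W = rng.standard_normal((d_in, d_out)) * 0.05
target = rng.standard_normal((batch, d_out))

z = x @ W                                   # activations
dL_dz = (z - target) / (batch * d_out)      # gradient of 0.5 * mean((z - t)^2)
exact_dW = x.T @ dL_dz                      # reference weight gradient

def forward_grad_dz(n_samples):
    """Unbiased estimate of dL/dz from directional derivatives along random
    activation-space directions v, using E[(dL/dz . v) v] = dL/dz."""
    est = np.zeros_like(z)
    for _ in range(n_samples):
        v = rng.standard_normal(z.shape)
        jvp = np.sum(dL_dz * v)   # directional derivative; in practice this
        est += jvp * v            # comes from one forward-mode pass, no backprop
    return est / n_samples

# Map the activation-space estimate back to the weights through z = x @ W.
# The estimator's variance scales with the perturbed dimension, which is why
# perturbing activations (batch * d_out) beats perturbing weights (d_in * d_out).
est_dW = x.T @ forward_grad_dz(n_samples=256)
print(np.linalg.norm(est_dW - exact_dW) / np.linalg.norm(exact_dW))
```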
- Do Not Let Privacy Overbill Utility: Gradient Embedding Perturbation for Private Learning [74.73901662374921]
A differentially private model degrades the utility drastically when the model comprises a large number of trainable parameters.
We propose an algorithm, \emph{Gradient Embedding Perturbation (GEP)}, towards training differentially private deep models with decent accuracy.
arXiv Detail & Related papers (2021-02-25T04:29:58Z)
- Understanding Gradient Clipping in Private SGD: A Geometric Perspective [68.61254575987013]
Deep learning models are increasingly popular in many machine learning applications where the training data may contain sensitive information.
Many learning systems now incorporate differential privacy by training their models with (differentially) private SGD.
A key step in each private SGD update is gradient clipping that shrinks the gradient of an individual example whenever its L2 norm exceeds some threshold.
arXiv Detail & Related papers (2020-06-27T19:08:12Z)
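For reference, here is a minimal NumPy sketch of the per-example clipping step described in the entry above; the clipping threshold and gradient shapes are illustrative assumptions.

```python
# Per-example gradient clipping as used in private SGD.
import numpy as np

def clip_per_example(grads, clip_norm):
    """Shrink each example's gradient so its L2 norm is at most clip_norm."""
    clipped = []
    for g in grads:                          # grads: one flat array per example
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    return clipped

rng = np.random.default_rng(0)
per_example_grads = [rng.standard_normal(1000) for _ in range(16)]
clipped = clip_per_example(per_example_grads, clip_norm=1.0)
```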
- The Impact of the Mini-batch Size on the Variance of Gradients in Stochastic Gradient Descent [28.148743710421932]
The mini-batch stochastic gradient descent (SGD) algorithm is widely used in training machine learning models.
We study SGD dynamics under linear regression and two-layer linear networks, with an easy extension to deeper linear networks.
arXiv Detail & Related papers (2020-04-27T20:06:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.