Continual Learning with Differential Privacy
- URL: http://arxiv.org/abs/2110.05223v1
- Date: Mon, 11 Oct 2021 12:39:55 GMT
- Title: Continual Learning with Differential Privacy
- Authors: Pradnya Desai, Phung Lai, NhatHai Phan, and My T. Thai
- Abstract summary: We introduce a notion of continual adjacent databases to bound the sensitivity of any data record participating in the training process of continual learning.
We develop a new DP-preserving algorithm for CL with a data sampling strategy to quantify the privacy risk of training data.
Our algorithm provides formal guarantees of privacy for data records across tasks in CL.
- Score: 19.186539487598385
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we focus on preserving differential privacy (DP) in continual learning (CL), in which we train ML models to learn a sequence of new tasks while memorizing previous tasks. We first introduce a notion of continual adjacent databases to bound the sensitivity of any data record participating in the training process of CL. Based upon that, we develop a new DP-preserving algorithm for CL with a data sampling strategy to quantify the privacy risk of training data in the well-known Averaged Gradient Episodic Memory (A-GEM) approach by applying a moments accountant. Our algorithm provides formal guarantees of privacy for data records across tasks in CL. Preliminary theoretical analysis and evaluations show that our mechanism tightens the privacy loss while maintaining a promising model utility.
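The combination described in the abstract can be pictured as a DP-SGD-style noisy update followed by A-GEM's gradient projection. Below is a minimal sketch under that reading; it is not the authors' implementation, all names and hyperparameters are illustrative, gradients are flattened for simplicity, and the moments accountant that tracks cumulative privacy loss is omitted.

```python
# Illustrative sketch: one DP-SGD-style step combined with A-GEM's projection.
import numpy as np

def dp_agem_step(per_example_grads, memory_grad, clip_norm=1.0, noise_mult=1.0, lr=0.1):
    """Clip per-example grads, add Gaussian noise (DP-SGD), then apply
    A-GEM's projection if the noisy gradient conflicts with the
    episodic-memory gradient."""
    # 1) Clip each example's gradient to bound per-record sensitivity.
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    g_sum = np.sum(clipped, axis=0)

    # 2) Add calibrated Gaussian noise; privacy loss over many steps would
    #    be tracked with a moments accountant (not shown here).
    noise = np.random.normal(0.0, noise_mult * clip_norm, size=g_sum.shape)
    g = (g_sum + noise) / len(per_example_grads)

    # 3) A-GEM projection: if g conflicts with the memory gradient
    #    (negative dot product), remove the conflicting component.
    g_ref = memory_grad
    dot = g @ g_ref
    if dot < 0:
        g = g - (dot / (g_ref @ g_ref + 1e-12)) * g_ref
    return -lr * g  # parameter update
```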
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z)
- Towards Split Learning-based Privacy-Preserving Record Linkage [49.1574468325115]
Split Learning has been introduced to facilitate applications where user data privacy is a requirement.
In this paper, we investigate the potential of Split Learning for privacy-preserving record matching.
arXiv Detail & Related papers (2024-09-02T09:17:05Z)
- ReCaLL: Membership Inference via Relative Conditional Log-Likelihoods [56.073335779595475]
We propose ReCaLL (Relative Conditional Log-Likelihood), a novel membership inference attack (MIA) that examines the relative change in conditional log-likelihoods when prefixing target data points with non-member context.
We conduct comprehensive experiments and show that ReCaLL achieves state-of-the-art performance on the WikiMIA dataset.
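Read literally, the score compares the target's conditional log-likelihood under a non-member prefix to its unconditional one. A minimal sketch of that computation follows; `log_likelihood` stands in for any autoregressive LM scorer and is an assumption, not an API from the paper.

```python
# Illustrative sketch of a ReCaLL-style membership score.
def recall_score(log_likelihood, target: str, nonmember_prefix: str) -> float:
    """Ratio of the target's conditional log-likelihood (given known
    non-member context) to its unconditional log-likelihood."""
    ll_uncond = log_likelihood(target)  # log p(target)
    # For an autoregressive LM:
    # log p(target | prefix) = log p(prefix + target) - log p(prefix)
    ll_cond = log_likelihood(nonmember_prefix + target) - log_likelihood(nonmember_prefix)
    # The relative change under non-member conditioning is the membership signal.
    return ll_cond / ll_uncond
```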
arXiv Detail & Related papers (2024-06-23T00:23:13Z)
- Shuffled Differentially Private Federated Learning for Time Series Data Analytics [10.198481976376717]
We develop a privacy-preserving federated learning algorithm for time series data.
Specifically, we employ local differential privacy to extend the privacy protection trust boundary to the clients.
We also incorporate shuffle techniques to achieve privacy amplification, mitigating the accuracy decline caused by local differential privacy.
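As a rough sketch of the shuffle model described here (not the paper's algorithm): each client perturbs its update on-device, and an intermediary shuffler permutes the anonymized reports before the server aggregates them, which yields the amplification. Noise type and parameters below are illustrative.

```python
# Illustrative sketch of local DP plus a shuffler in federated learning.
import random
import numpy as np

def client_report(update, clip=1.0, sigma=2.0):
    # Local DP: clip then add noise on-device before anything leaves the client.
    update = update * min(1.0, clip / (np.linalg.norm(update) + 1e-12))
    return update + np.random.normal(0.0, sigma * clip, size=update.shape)

def shuffle_and_aggregate(reports):
    # Shuffler: randomly permute reports to break the client-message link,
    # which amplifies the local DP guarantee seen by the server.
    random.shuffle(reports)
    return np.mean(reports, axis=0)  # server only sees anonymized reports
```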
arXiv Detail & Related papers (2023-07-30T10:30:38Z)
- Detecting Morphing Attacks via Continual Incremental Training [10.796380524798744]
The recent Continual Learning (CL) paradigm may represent an effective solution for enabling incremental training, even across multiple sites.
We investigate the performance of different Continual Learning methods in this scenario, simulating a learning model that is updated every time a new chunk of data, even of variable size, is available.
Experimental results reveal that a particular CL method, namely Learning without Forgetting (LwF), is one of the best-performing algorithms.
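For context, LwF pairs the new-task loss with a distillation term that keeps the updated model's predictions close to those of the frozen previous model on the new data. A minimal PyTorch-style sketch of that objective, with illustrative hyperparameters, follows.

```python
# Illustrative sketch of the Learning-without-Forgetting (LwF) objective.
import torch.nn.functional as F

def lwf_loss(new_logits, old_model_logits, labels, T=2.0, alpha=0.5):
    ce = F.cross_entropy(new_logits, labels)  # learn the new data chunk
    # Distillation: keep softened outputs close to the frozen old model's,
    # preserving behavior learned from earlier chunks.
    kd = F.kl_div(
        F.log_softmax(new_logits / T, dim=1),
        F.softmax(old_model_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return ce + alpha * kd
```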
arXiv Detail & Related papers (2023-07-27T17:48:29Z)
- Safeguarding Data in Multimodal AI: A Differentially Private Approach to CLIP Training [15.928338716118697]
We introduce a differentially private adaptation of the Contrastive Language-Image Pretraining (CLIP) model.
Our proposed method, Dp-CLIP, is rigorously evaluated on benchmark datasets.
arXiv Detail & Related papers (2023-06-13T23:32:09Z)
- Considerations on the Theory of Training Models with Differential Privacy [13.782477759025344]
In federated learning, collaborative training takes place among a set of clients, each of whom wants to remain in control of how their local training data is used.
Differential privacy is one method to limit privacy leakage.
arXiv Detail & Related papers (2023-03-08T15:56:27Z)
- A Memory Transformer Network for Incremental Learning [64.0410375349852]
We study class-incremental learning, a training setup in which new classes of data are observed over time for the model to learn from.
Despite the straightforward problem formulation, the naive application of classification models to class-incremental learning results in the "catastrophic forgetting" of previously seen classes.
One of the most successful existing approaches uses a memory of exemplars: a subset of past data is saved into a memory bank and replayed to prevent catastrophic forgetting when training on future tasks.
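The exemplar-memory idea can be made concrete with a small replay buffer. The reservoir-sampling policy below is a common generic choice and an assumption here, not the paper's Memory Transformer itself.

```python
# Illustrative sketch of an exemplar memory bank with reservoir sampling.
import random

class ExemplarMemory:
    """Keeps a bounded subset of past examples to replay alongside new-task batches."""
    def __init__(self, capacity=200):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Replace a random slot so every past example is retained
            # with equal probability (reservoir sampling).
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        return random.sample(self.buffer, min(k, len(self.buffer)))
```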
arXiv Detail & Related papers (2022-10-10T08:27:28Z)
- Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing machine learning algorithms that ensure good performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z)
- ORDisCo: Effective and Efficient Usage of Incremental Unlabeled Data for Semi-supervised Continual Learning [52.831894583501395]
Continual learning assumes the incoming data are fully labeled, which might not hold in real applications.
We propose deep Online Replay with Discriminator Consistency (ORDisCo) to interdependently learn a classifier with a conditional generative adversarial network (GAN).
We show ORDisCo achieves significant performance improvement on various semi-supervised learning benchmark datasets for SSCL.
arXiv Detail & Related papers (2021-01-02T09:04:14Z)
- Stratified cross-validation for unbiased and privacy-preserving federated learning [0.0]
We focus on the recurrent problem of duplicated records that, if not handled properly, may cause over-optimistic estimates of a model's performance.
We introduce and discuss stratified cross-validation, a validation methodology that leverages stratification techniques to prevent data leakage in federated learning settings.
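One way to realize the idea, assuming the stratification is keyed on record identity (an interpretation, not the paper's exact procedure): assign all copies of a duplicated record to the same fold so none can appear in both training and validation.

```python
# Illustrative sketch: build folds over unique records so duplicates never
# straddle a train/test boundary (akin to scikit-learn's GroupKFold with
# record identity as the group).
from collections import defaultdict

def folds_without_duplicate_leakage(records, n_folds=5, key=lambda r: r):
    groups = defaultdict(list)
    for i, r in enumerate(records):
        groups[key(r)].append(i)  # all copies of a record share a group
    folds = [[] for _ in range(n_folds)]
    # Greedily assign whole groups to the currently smallest fold so a
    # duplicated record never appears in two folds at once.
    for idxs in sorted(groups.values(), key=len, reverse=True):
        min(folds, key=len).extend(idxs)
    return folds
```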
arXiv Detail & Related papers (2020-01-22T15:49:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.