RecUP-FL: Reconciling Utility and Privacy in Federated Learning via
User-configurable Privacy Defense
- URL: http://arxiv.org/abs/2304.05135v1
- Date: Tue, 11 Apr 2023 10:59:45 GMT
- Title: RecUP-FL: Reconciling Utility and Privacy in Federated Learning via
User-configurable Privacy Defense
- Authors: Yue Cui, Syed Irfan Ali Meerza, Zhuohang Li, Luyang Liu, Jiaxin Zhang,
Jian Liu
- Abstract summary: Federated learning (FL) allows clients to collaboratively train a model without sharing their private data.
Recent studies have shown that private information can still be leaked through shared gradients.
We propose a user-configurable privacy defense, RecUP-FL, that can better focus on the user-specified sensitive attributes.
- Score: 9.806681555309519
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) provides a variety of privacy advantages by allowing
clients to collaboratively train a model without sharing their private data.
However, recent studies have shown that private information can still be leaked
through shared gradients. To further minimize the risk of privacy leakage,
existing defenses usually require clients to locally modify their gradients
(e.g., differential privacy) prior to sharing with the server. While these
approaches are effective in certain cases, they regard the entire data as a
single entity to protect, which usually comes at a large cost in model utility.
In this paper, we seek to reconcile utility and privacy in FL by proposing a
user-configurable privacy defense, RecUP-FL, that can better focus on the
user-specified sensitive attributes while obtaining significant improvements in
utility over traditional defenses. Moreover, we observe that existing inference
attacks often rely on a machine learning model to extract the private
information (e.g., attributes). We thus formulate such a privacy defense as an
adversarial learning problem, where RecUP-FL generates slight perturbations
that can be added to the gradients before sharing to fool adversary models. To
improve the transferability to un-queryable black-box adversary models,
inspired by the idea of meta-learning, RecUP-FL forms a model zoo containing a
set of substitute models and iteratively alternates between simulations of the
white-box and the black-box adversarial attack scenarios to generate
perturbations. Extensive experiments on four datasets under various adversarial
settings (both attribute inference attack and data reconstruction attack) show
that RecUP-FL can meet user-specified privacy constraints over the sensitive
attributes while significantly improving the model utility compared with
state-of-the-art privacy defenses.
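Below is a minimal, illustrative sketch (not the authors' released code) of the core idea the abstract describes: the client optimizes a small, norm-bounded perturbation for its gradient so that substitute attribute-inference models from a "model zoo" misclassify the user-specified sensitive attribute, cycling over zoo members to loosely mimic the alternating white-box/black-box simulation. All names (e.g., AttributeInferenceNet, craft_perturbation), dimensions, and loss weights are assumptions made for illustration only.

```python
# Hedged sketch of gradient perturbation against a zoo of substitute
# attribute-inference adversaries, following the abstract's description.
import torch
import torch.nn as nn


class AttributeInferenceNet(nn.Module):
    """Substitute adversary: predicts a sensitive attribute from a flattened gradient."""
    def __init__(self, grad_dim: int, num_attr_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(grad_dim, 256), nn.ReLU(),
            nn.Linear(256, num_attr_classes),
        )

    def forward(self, g):
        return self.net(g)


def craft_perturbation(grad, true_attr, zoo, steps=20, lr=0.05, eps=0.1):
    """Optimize a bounded perturbation delta so that each substitute adversary
    in the zoo mispredicts the sensitive attribute, while keeping delta small
    to preserve model utility. Hyperparameters here are placeholders."""
    delta = torch.zeros_like(grad, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    ce = nn.CrossEntropyLoss()
    for step in range(steps):
        adversary = zoo[step % len(zoo)]          # cycle through substitute models
        logits = adversary((grad + delta).unsqueeze(0))
        # Maximize the adversary's loss on the true attribute (untargeted attack),
        # with a norm penalty so the shared gradient stays useful; the 10.0 weight
        # is an assumed trade-off knob, not a value from the paper.
        loss = -ce(logits, true_attr.unsqueeze(0)) + 10.0 * delta.norm()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)               # user-configurable perturbation budget
    return delta.detach()


if __name__ == "__main__":
    grad_dim, num_classes = 1024, 2
    grad = torch.randn(grad_dim)                  # stand-in for a client's flattened gradient
    true_attr = torch.tensor(1)                   # user-specified sensitive attribute label
    zoo = [AttributeInferenceNet(grad_dim, num_classes) for _ in range(3)]
    delta = craft_perturbation(grad, true_attr, zoo)
    shared_grad = grad + delta                    # the perturbed gradient sent to the server
    print("perturbation norm:", delta.norm().item())
```

In this reading, the eps bound plays the role of the user-configurable privacy constraint, and swapping adversaries each step stands in for the paper's alternation between white-box and black-box attack simulations.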
Related papers
- No Vandalism: Privacy-Preserving and Byzantine-Robust Federated Learning [18.1129191782913]
Federated learning allows several clients to train one machine learning model jointly without sharing private data, providing privacy protection.
Traditional federated learning is vulnerable to poisoning attacks, which can not only degrade model performance but also implant malicious backdoors.
In this paper, we aim to build a privacy-preserving and Byzantine-robust federated learning scheme to provide an environment with no vandalism (NoV) against attacks from malicious participants.
arXiv Detail & Related papers (2024-06-03T07:59:10Z)
- Ungeneralizable Examples [70.76487163068109]
Current approaches to creating unlearnable data involve incorporating small, specially designed noise.
We extend the concept of unlearnable data to conditional data learnability and introduce UnGeneralizable Examples (UGEs).
UGEs exhibit learnability for authorized users while maintaining unlearnability for potential hackers.
arXiv Detail & Related papers (2024-04-22T09:29:14Z)
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind).
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
arXiv Detail & Related papers (2023-10-03T22:37:01Z)
- Can Language Models be Instructed to Protect Personal Information? [30.187731765653428]
We introduce PrivQA -- a benchmark to assess the privacy/utility trade-off when a model is instructed to protect specific categories of personal information in a simulated scenario.
We find that adversaries can easily circumvent these protections with simple jailbreaking methods through textual and/or image inputs.
We believe PrivQA has the potential to support the development of new models with improved privacy protections, as well as the adversarial robustness of these protections.
arXiv Detail & Related papers (2023-10-03T17:30:33Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information such as membership or properties, or even to reconstruct participant data outright.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- Group privacy for personalized federated learning [4.30484058393522]
Federated learning is a type of collaborative machine learning, where participating clients process their data locally, sharing only updates to the collaborative model.
We propose a method to provide group privacy guarantees exploiting some key properties of $d$-privacy.
arXiv Detail & Related papers (2022-06-07T15:43:45Z)
- Just Fine-tune Twice: Selective Differential Privacy for Large Language Models [69.66654761324702]
We propose a simple yet effective just-fine-tune-twice privacy mechanism to achieve SDP for large Transformer-based language models.
Experiments show that our models achieve strong performance while staying robust to the canary insertion attack.
arXiv Detail & Related papers (2022-04-15T22:36:55Z)
- Defending against Reconstruction Attacks with Rényi Differential Privacy [72.1188520352079]
Reconstruction attacks allow an adversary to regenerate data samples of the training set using access to only a trained model.
Differential privacy is a known solution to such attacks, but is often used with a relatively large privacy budget.
We show that, for the same mechanism, we can derive privacy guarantees against reconstruction attacks that are better than the traditional ones from the literature.
arXiv Detail & Related papers (2022-02-15T18:09:30Z)
- Compression Boosts Differentially Private Federated Learning [0.7742297876120562]
Federated learning allows distributed entities to train a common model collaboratively without sharing their own data.
It remains vulnerable to various inference and reconstruction attacks where a malicious entity can learn private information about the participants' training data from the captured gradients.
We show experimentally, using 2 datasets, that our privacy-preserving proposal can reduce the communication costs by up to 95% with only a negligible performance penalty compared to traditional non-private federated learning schemes.
arXiv Detail & Related papers (2020-11-10T13:11:03Z)
- Federated Learning in Adversarial Settings [0.8701566919381224]
Federated learning schemes provide different trade-offs between robustness, privacy, bandwidth efficiency, and model accuracy.
We show that this extension performs as efficiently as the non-private but robust scheme, even with stringent privacy requirements.
This suggests a possible fundamental trade-off between Differential Privacy and robustness.
arXiv Detail & Related papers (2020-10-15T14:57:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.