Inf2Guard: An Information-Theoretic Framework for Learning
Privacy-Preserving Representations against Inference Attacks
- URL: http://arxiv.org/abs/2403.02116v1
- Date: Mon, 4 Mar 2024 15:20:19 GMT
- Title: Inf2Guard: An Information-Theoretic Framework for Learning
Privacy-Preserving Representations against Inference Attacks
- Authors: Sayedeh Leila Noorbakhsh, Binghui Zhang, Yuan Hong, Binghui Wang
- Abstract summary: We propose an information-theoretic defense framework, called Inf2Guard, against three major types of inference attacks.
Inf2Guard involves two mutual information objectives, for privacy protection and utility preservation.
- Score: 24.971332760137635
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Machine learning (ML) is vulnerable to inference attacks (e.g., membership
inference, property inference, and data reconstruction) that aim to infer private
information about the training data or dataset. Existing defenses are only
designed for one specific type of attack and sacrifice significant utility or
are soon broken by adaptive attacks. We address these limitations by proposing
an information-theoretic defense framework, called Inf2Guard, against the three
major types of inference attacks. Our framework, inspired by the success of
representation learning, posits that learning shared representations not only
saves time/costs but also benefits numerous downstream tasks. Generally,
Inf2Guard involves two mutual information objectives, for privacy protection
and utility preservation, respectively. Inf2Guard exhibits many merits: it
facilitates the design of customized objectives against the specific inference
attack; it provides a general defense framework which can treat certain
existing defenses as special cases; and importantly, it aids in deriving
theoretical results, e.g., inherent utility-privacy tradeoff and guaranteed
privacy leakage. Extensive evaluations validate the effectiveness of Inf2Guard
for learning privacy-preserving representations against inference attacks and
demonstrate its superiority over the baselines.
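To make the two mutual information objectives concrete, below is a minimal sketch (not the authors' implementation) of training a representation z = Enc(x) against an attribute/property-inference adversary: utility is preserved by maximizing a classifier-based proxy for I(z; y), and privacy is enforced by minimizing an adversarial estimate of I(z; s), where s is the private attribute. All module names, dimensions, and the trade-off weight LAMBDA are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): a representation z = Enc(x) trained with
# two MI-style objectives, each approximated by a neural proxy:
#   utility:  maximize I(z; y)  ~ minimize cross-entropy of a task head on z
#   privacy:  minimize I(z; s)  ~ maximize the loss of an adversary predicting the
#             private attribute s from z (min-max game)
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM_X, DIM_Z, N_CLASSES, N_PRIVATE = 32, 16, 4, 2
LAMBDA = 1.0  # assumed utility-privacy trade-off weight

encoder = nn.Sequential(nn.Linear(DIM_X, 64), nn.ReLU(), nn.Linear(64, DIM_Z))
task_head = nn.Linear(DIM_Z, N_CLASSES)   # proxy for I(z; y)
adversary = nn.Linear(DIM_Z, N_PRIVATE)   # proxy for I(z; s)

opt_main = torch.optim.Adam(list(encoder.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)

# Dummy batch: inputs x, task labels y, private attributes s.
x = torch.randn(128, DIM_X)
y = torch.randint(0, N_CLASSES, (128,))
s = torch.randint(0, N_PRIVATE, (128,))

for step in range(200):
    # 1) Inner step: train the adversary to predict s from the (detached) representation.
    z = encoder(x).detach()
    adv_loss = F.cross_entropy(adversary(z), s)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Outer step: train encoder + task head to keep utility while fooling the adversary.
    z = encoder(x)
    utility_loss = F.cross_entropy(task_head(z), y)    # keep I(z; y) high
    privacy_loss = -F.cross_entropy(adversary(z), s)   # push I(z; s) down
    loss = utility_loss + LAMBDA * privacy_loss
    opt_main.zero_grad()
    loss.backward()
    opt_main.step()
```

The alternating inner/outer updates mirror the standard min-max estimation of mutual information with a parametric adversary; a stronger critic (e.g., a deeper network or a MINE-style estimator) could replace the linear adversary in the same loop.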
Related papers
- Learning Robust and Privacy-Preserving Representations via Information Theory [21.83308540799076]
We take the first step to mitigate both the security and privacy attacks, and maintain task utility as well.
We propose an information-theoretic framework to achieve the goals through the lens of representation learning.
arXiv Detail & Related papers (2024-12-15T05:51:48Z)
- Effectiveness of L2 Regularization in Privacy-Preserving Machine Learning [1.4638393290666896]
The well-performing models the industry seeks usually rely on a large volume of training data.
The use of such data raises serious privacy concerns due to the potential risk of leaking highly sensitive information.
In this work, we compare the effectiveness of L2 regularization and differential privacy in mitigating Membership Inference Attack risks.
arXiv Detail & Related papers (2024-12-02T14:31:11Z)
- Improving Adversarial Robustness via Decoupled Visual Representation Masking [65.73203518658224]
In this paper, we highlight two novel properties of robust features from the feature distribution perspective.
We find that state-of-the-art defense methods aim to address both of these issues well.
Specifically, we propose a simple but effective defense based on decoupled visual representation masking.
arXiv Detail & Related papers (2024-06-16T13:29:41Z)
- On the Difficulty of Defending Contrastive Learning against Backdoor Attacks [58.824074124014224]
We show how contrastive backdoor attacks operate through distinctive mechanisms.
Our findings highlight the need for defenses tailored to the specificities of contrastive backdoor attacks.
arXiv Detail & Related papers (2023-12-14T15:54:52Z)
- Students Parrot Their Teachers: Membership Inference on Model Distillation [54.392069096234074]
We study the privacy provided by knowledge distillation to both the teacher and student training sets.
Our attacks are strongest when student and teacher sets are similar, or when the attacker can poison the teacher set.
arXiv Detail & Related papers (2023-03-06T19:16:23Z)
- Defending against Reconstruction Attacks with Rényi Differential Privacy [72.1188520352079]
Reconstruction attacks allow an adversary to regenerate data samples of the training set using access to only a trained model.
Differential privacy is a known solution to such attacks, but is often used with a relatively large privacy budget.
We show that, for the same mechanism, we can derive privacy guarantees against reconstruction attacks that are better than the traditional ones from the literature.
arXiv Detail & Related papers (2022-02-15T18:09:30Z)
- LTU Attacker for Membership Inference [23.266710407178078]
We address the problem of defending predictive models against membership inference attacks.
Both utility and privacy are evaluated with an external apparatus including an Attacker and an Evaluator.
We prove that, under certain conditions, even a "naïve" LTU Attacker can achieve lower bounds on privacy loss with simple attack strategies.
arXiv Detail & Related papers (2022-02-04T18:06:21Z)
- Bounding Training Data Reconstruction in Private (Deep) Learning [40.86813581191581]
Differential privacy is widely accepted as the de facto method for preventing data leakage in ML.
Existing semantic guarantees for DP focus on membership inference.
We show that two distinct privacy accounting methods -- Rényi differential privacy and Fisher information leakage -- both offer strong semantic protection against data reconstruction attacks.
arXiv Detail & Related papers (2022-01-28T19:24:30Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
- Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike other standard membership adversaries, works under the severe restriction of having no access to the victim model's scores (see the sketch after this list).
We show that a victim model that only publishes the labels is still susceptible to sampling attacks and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
arXiv Detail & Related papers (2020-09-01T12:54:54Z)
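As a rough illustration of the label-only setting described in the sampling-attack entry above, the sketch below scores a candidate record by how stably a victim model that publishes only hard labels classifies randomly perturbed copies of it, and thresholds that stability as a membership signal. The victim interface `victim_predict`, the noise scale, and the threshold are illustrative assumptions, not the paper's procedure.

```python
# Rough illustration (not the paper's implementation) of a label-only membership
# signal: query the victim model on noisy copies of a candidate input and use the
# fraction of queries that keep the clean prediction as a pseudo-confidence score.
import numpy as np

def label_stability_score(victim_predict, x, n_queries=50, noise_scale=0.1, seed=0):
    """Fraction of perturbed queries whose predicted label matches the clean prediction."""
    rng = np.random.default_rng(seed)
    base_label = victim_predict(x)
    matches = 0
    for _ in range(n_queries):
        x_noisy = x + rng.normal(scale=noise_scale, size=x.shape)
        if victim_predict(x_noisy) == base_label:
            matches += 1
    return matches / n_queries

def infer_membership(victim_predict, x, threshold=0.9):
    """Guess 'member' when the prediction is unusually stable under perturbation."""
    return label_stability_score(victim_predict, x) >= threshold

# Usage with a toy stand-in for a label-only victim model (assumed interface):
if __name__ == "__main__":
    w = np.ones(8)
    victim_predict = lambda x: int(x @ w > 0)   # publishes only a hard label
    x_candidate = np.full(8, 0.5)
    print(infer_membership(victim_predict, x_candidate, threshold=0.9))
```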