Defending Model Inversion and Membership Inference Attacks via
Prediction Purification
- URL: http://arxiv.org/abs/2005.03915v2
- Date: Thu, 20 Aug 2020 16:27:41 GMT
- Title: Defending Model Inversion and Membership Inference Attacks via
Prediction Purification
- Authors: Ziqi Yang, Bin Shao, Bohan Xuan, Ee-Chien Chang, Fan Zhang
- Abstract summary: We propose a unified framework, namely the purification framework, to defend against data inference attacks.
It purifies the confidence score vectors predicted by the target classifier by reducing their dispersion.
It can reduce the membership inference accuracy by up to 15% and increase the model inversion error by a factor of up to 4.
- Score: 19.281532404775756
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks are susceptible to data inference attacks such as the model
inversion attack and the membership inference attack, where the attacker can
reconstruct a data sample or infer its membership in the training set from the
confidence scores predicted by the target classifier. In this paper, we propose
a unified approach, namely the purification framework, to defend against data
inference attacks. It purifies the confidence score vectors predicted by the
target classifier by reducing their dispersion. The purifier can be further
specialized to defend against a particular attack via adversarial learning. We
evaluate our approach on benchmark datasets and classifiers. We show that when
the purifier is dedicated to one attack, it naturally defends against the other,
which empirically demonstrates the connection between the two attacks. The
purifier can effectively defend against both attacks. For example, it can reduce
the membership inference accuracy by up to 15% and increase the model inversion
error by a factor of up to 4. In addition, it incurs less than 0.4% classification
accuracy drop and less than 5.5% distortion to the confidence scores.
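To make the idea concrete, below is a minimal sketch of a confidence-score purifier, assuming it is implemented as a small autoencoder trained on a reference set to reproduce the target classifier's confidence vectors. The class and function names, layer sizes, and training loss are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of a confidence-score purifier (assumption: an autoencoder
# over softmax vectors, trained on the defender's own reference data).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Purifier(nn.Module):
    def __init__(self, num_classes: int, hidden_dim: int = 32, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(num_classes, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, latent_dim), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, conf: torch.Tensor) -> torch.Tensor:
        # Re-normalize so the purified output is still a valid confidence vector.
        return F.softmax(self.decoder(self.encoder(conf)), dim=-1)

def train_purifier(purifier, target_model, reference_loader, epochs=10, lr=1e-3):
    """Fit the purifier to reproduce the target model's confidences on
    reference (non-member) data; this tends to reduce their dispersion."""
    opt = torch.optim.Adam(purifier.parameters(), lr=lr)
    target_model.eval()
    for _ in range(epochs):
        for x, _ in reference_loader:
            with torch.no_grad():
                conf = F.softmax(target_model(x), dim=-1)
            purified = purifier(conf)
            # Cross-entropy between the purified and original confidences keeps
            # the top-1 prediction intact while smoothing fine-grained detail.
            loss = -(conf * torch.log(purified + 1e-12)).sum(dim=-1).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return purifier

@torch.no_grad()
def purified_predict(target_model, purifier, x):
    conf = F.softmax(target_model(x), dim=-1)
    return purifier(conf)  # the purified vector is what gets released to the user
```

The adversarial-learning step that specializes the purifier against a particular attack is omitted from this sketch.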
Related papers
- Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
The results of our experiments indicate that our defense mechanism is highly effective in protecting against client-side training data distribution inference attacks.
arXiv Detail & Related papers (2024-03-05T17:41:35Z)
- Adversarial Attacks Neutralization via Data Set Randomization [3.655021726150369]
Adversarial attacks on deep learning models pose a serious threat to their reliability and security.
We propose a new defense mechanism that is rooted in hyperspace projection.
We show that our solution increases the robustness of deep learning models against adversarial attacks.
arXiv Detail & Related papers (2023-06-21T10:17:55Z)
- Purifier: Defending Data Inference Attacks via Transforming Confidence Scores [27.330482508047428]
We propose a method, namely PURIFIER, to defend against membership inference attacks.
Experiments show that PURIFIER helps defend against membership inference attacks with high effectiveness and efficiency.
PURIFIER is also effective in defending against adversarial model inversion attacks and attribute inference attacks.
arXiv Detail & Related papers (2022-12-01T16:09:50Z)
- Defending against the Label-flipping Attack in Federated Learning [5.769445676575767]
Federated learning (FL) provides autonomy and privacy by design to participating peers.
The label-flipping (LF) attack is a targeted poisoning attack where the attackers poison their training data by flipping the labels of some examples.
We propose a novel defense that dynamically extracts the relevant gradients from the peers' local updates.
arXiv Detail & Related papers (2022-07-05T12:02:54Z)
- One Parameter Defense -- Defending against Data Inference Attacks via Differential Privacy [26.000487178636927]
Machine learning models are vulnerable to data inference attacks, such as membership inference and model inversion attacks.
Most existing defense methods only protect against membership inference attacks.
We propose a differentially private defense method that handles both types of attacks in a time-efficient manner.
arXiv Detail & Related papers (2022-03-13T06:06:24Z)
- Membership Inference Attacks From First Principles [24.10746844866869]
A membership inference attack allows an adversary to query a trained machine learning model to predict whether or not a particular example was contained in the model's training dataset.
These attacks are currently evaluated using average-case "accuracy" metrics that fail to characterize whether the attack can confidently identify any members of the training set.
We argue that attacks should instead be evaluated by computing their true-positive rate at low false-positive rates, and find most prior attacks perform poorly when evaluated in this way.
Our attack is 10x more powerful at low false-positive rates, and also strictly dominates prior attacks on existing metrics.
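As a rough illustration of the evaluation that entry argues for (not code from the paper), the metric can be computed from attack scores with a standard ROC curve; the function name and inputs below are hypothetical.

```python
# Illustrative only: true-positive rate of a membership inference attack at a
# fixed low false-positive rate (e.g. 0.1%), computed from per-example scores.
import numpy as np
from sklearn.metrics import roc_curve

def tpr_at_fpr(scores: np.ndarray, is_member: np.ndarray, target_fpr: float = 1e-3) -> float:
    fpr, tpr, _ = roc_curve(is_member, scores)
    # Largest achievable TPR while keeping FPR at or below the target.
    feasible = tpr[fpr <= target_fpr]
    return float(feasible.max()) if feasible.size else 0.0

# Average-case accuracy can look strong while TPR at 0.1% FPR stays near zero.
```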
arXiv Detail & Related papers (2021-12-07T08:47:00Z)
- Towards A Conceptually Simple Defensive Approach for Few-shot classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
arXiv Detail & Related papers (2021-10-24T05:46:03Z)
- Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning [51.15273664903583]
Data heterogeneity has been identified as one of the key features of federated learning, but it is often overlooked through the lens of robustness to adversarial attacks.
This paper focuses on characterizing and understanding its impact on backdoor attacks in federated learning through comprehensive experiments using synthetic datasets and the LEAF benchmarks.
arXiv Detail & Related papers (2021-02-01T06:06:21Z)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
- Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike other standard membership adversaries, works under the severe restriction of having no access to the victim model's confidence scores.
We show that a victim model that publishes only the labels is still susceptible to sampling attacks, and the adversary can recover up to 100% of the attack's original performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
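As a rough sketch of the output-perturbation defense mentioned in that entry (not the paper's code), noise can be added to the confidence vector before release; the Laplace mechanism and parameters below are assumptions for illustration, and a real differential-privacy guarantee would require a proper sensitivity analysis.

```python
# Illustrative only: output perturbation at prediction time, i.e. adding noise
# to the confidence vector before it is released to the querying user.
import numpy as np

def perturb_confidences(conf: np.ndarray, epsilon: float = 1.0,
                        sensitivity: float = 2.0, rng=None) -> np.ndarray:
    rng = np.random.default_rng() if rng is None else rng
    noisy = conf + rng.laplace(scale=sensitivity / epsilon, size=conf.shape)
    # Clip and renormalize so the released vector is still a probability distribution.
    noisy = np.clip(noisy, 1e-12, None)
    return noisy / noisy.sum(axis=-1, keepdims=True)
```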
arXiv Detail & Related papers (2020-09-01T12:54:54Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)