One Parameter Defense -- Defending against Data Inference Attacks via
Differential Privacy
- URL: http://arxiv.org/abs/2203.06580v1
- Date: Sun, 13 Mar 2022 06:06:24 GMT
- Title: One Parameter Defense -- Defending against Data Inference Attacks via
Differential Privacy
- Authors: Dayong Ye and Sheng Shen and Tianqing Zhu and Bo Liu and Wanlei Zhou
- Abstract summary: Machine learning models are vulnerable to data inference attacks, such as membership inference and model inversion attacks.
Most existing defense methods only protect against membership inference attacks.
We propose a differentially private defense method that handles both types of attacks in a time-efficient manner.
- Score: 26.000487178636927
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning models are vulnerable to data inference attacks, such as
membership inference and model inversion attacks. In these types of breaches,
an adversary attempts to infer a data record's membership in a dataset or even
reconstruct this data record using a confidence score vector predicted by the
target model. However, most existing defense methods only protect against
membership inference attacks. Methods that can combat both types of attacks
require a new model to be trained, which may not be time-efficient. In this
paper, we propose a differentially private defense method that handles both
types of attacks in a time-efficient manner by tuning only one parameter, the
privacy budget. The central idea is to modify and normalize the confidence
score vectors with a differential privacy mechanism which preserves privacy and
obscures both membership and the reconstructed data. Moreover, the method
guarantees that the order of scores in the vector is preserved, avoiding any loss in classification accuracy.
The experimental results show the method to be an effective and timely defense
against both membership inference and model inversion attacks with no reduction
in accuracy.
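The mechanism is only described at a high level above; the following is a minimal, hedged sketch of the general idea, assuming a hypothetical Dirichlet-based resampling of the confidence vector controlled by a single privacy budget epsilon, with the original rank order restored afterwards. It is not the paper's exact mechanism.

```python
import numpy as np

def defend_confidence_vector(scores, epsilon=1.0, rng=None):
    """Sketch: perturb a confidence vector under one parameter (epsilon),
    re-normalize, and restore the original rank order so the predicted
    label (argmax) and classification accuracy are unaffected."""
    rng = np.random.default_rng() if rng is None else rng
    scores = np.asarray(scores, dtype=float)

    # Assumption: resample a probability vector biased toward the original
    # scores; a smaller epsilon yields a noisier, more private output.
    noisy = rng.dirichlet(np.exp(epsilon * scores))

    # Restore the original ordering: the k-th largest perturbed value goes
    # to the position of the k-th largest original score.
    order = np.argsort(-scores)
    defended = np.empty_like(scores)
    defended[order] = -np.sort(-noisy)
    return defended  # sums to 1 and has the same argmax as `scores`

# Usage: the predicted class (index 1) is preserved; the exact scores are not.
print(defend_confidence_vector([0.05, 0.90, 0.03, 0.02], epsilon=0.5))
```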
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z) - Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
The results of our experiments indicate that our defense mechanism is highly effective in protecting against client-side training data distribution inference attacks.
arXiv Detail & Related papers (2024-03-05T17:41:35Z) - Confidence Is All You Need for MI Attacks [7.743155804758186]
We propose a new method to gauge a data point's membership in a model's training set.
During training, the model is essentially being 'fit' to the training data and may face particular difficulties in generalizing to unseen data.
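As a rough, hedged illustration of such a confidence-based membership test (not the authors' exact procedure), a record can be flagged as a likely training member when the model's maximum confidence on it exceeds a threshold; the threshold value here is a placeholder:

```python
import numpy as np

def confidence_membership_test(confidence_vectors, threshold=0.9):
    """Flag records as likely training members when the model's top
    confidence exceeds a (placeholder) threshold."""
    confidence_vectors = np.asarray(confidence_vectors)
    return confidence_vectors.max(axis=1) >= threshold

# Usage: only the first record, predicted with near-certainty, is flagged.
scores = np.array([[0.01, 0.98, 0.01],
                   [0.40, 0.35, 0.25]])
print(confidence_membership_test(scores))  # [ True False]
```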
arXiv Detail & Related papers (2023-11-26T18:09:24Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric
Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
However, FL is vulnerable to poisoning attacks, which undermine model integrity through both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - Purifier: Defending Data Inference Attacks via Transforming Confidence
Scores [27.330482508047428]
We propose a method, namely PURIFIER, to defend against membership inference attacks.
Experiments show that PURIFIER helps defend against membership inference attacks with high effectiveness and efficiency.
PURIFIER is also effective in defending adversarial model inversion attacks and attribute inference attacks.
arXiv Detail & Related papers (2022-12-01T16:09:50Z) - Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets [53.866927712193416]
We show that an adversary who can poison a training dataset can cause models trained on this dataset to leak private details belonging to other parties.
Our attacks are effective across membership inference, attribute inference, and data extraction.
Our results cast doubts on the relevance of cryptographic privacy guarantees in multiparty protocols for machine learning.
arXiv Detail & Related papers (2022-03-31T18:06:28Z) - LTU Attacker for Membership Inference [23.266710407178078]
We address the problem of defending predictive models against membership inference attacks.
Both utility and privacy are evaluated with an external apparatus including an Attacker and an Evaluator.
We prove that, under certain conditions, even a "naive" LTU Attacker can achieve lower bounds on privacy loss with simple attack strategies.
arXiv Detail & Related papers (2022-02-04T18:06:21Z) - Sampling Attacks: Amplification of Membership Inference Attacks by
Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike other standard membership adversaries, works under the severe restriction of having no access to the victim model's confidence scores.
We show that a victim model that only publishes the labels is still susceptible to sampling attacks and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
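A minimal sketch of the sampling idea under label-only access, assuming a hypothetical `predict_label` API for the victim model and Gaussian query perturbations (the paper's exact perturbation scheme may differ):

```python
import numpy as np

def sampling_attack_score(predict_label, record, n_queries=50,
                          noise_scale=0.05, rng=None):
    """Query the victim with perturbed copies of a record and measure how
    often the predicted label stays the same; a high agreement rate is
    taken as evidence that the record was a training member."""
    rng = np.random.default_rng() if rng is None else rng
    record = np.asarray(record, dtype=float)
    base_label = predict_label(record)
    agreements = sum(
        predict_label(record + rng.normal(0.0, noise_scale, size=record.shape)) == base_label
        for _ in range(n_queries)
    )
    return agreements / n_queries  # closer to 1.0 -> more likely a member

# Usage with a toy label-only victim that classifies by the sign of feature 0.
victim = lambda x: int(x[0] > 0)
print(sampling_attack_score(victim, [2.0, -0.5]))
```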
arXiv Detail & Related papers (2020-09-01T12:54:54Z) - Label-Only Membership Inference Attacks [67.46072950620247]
We introduce label-only membership inference attacks.
Our attacks evaluate the robustness of a model's predicted labels under perturbations.
We find that training with differential privacy and (strong) L2 regularization are the only known defense strategies.
arXiv Detail & Related papers (2020-07-28T15:44:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.