Mitigating Query-Flooding Parameter Duplication Attack on Regression
Models with High-Dimensional Gaussian Mechanism
- URL: http://arxiv.org/abs/2002.02061v3
- Date: Sun, 7 Jun 2020 01:40:09 GMT
- Title: Mitigating Query-Flooding Parameter Duplication Attack on Regression
Models with High-Dimensional Gaussian Mechanism
- Authors: Xiaoguang Li, Hui Li, Haonan Yan, Zelei Cheng, Wenhai Sun, Hui Zhu
- Abstract summary: Differential privacy (DP) has been considered a promising technique to mitigate model extraction attacks.
We show that the adversary can launch a query-flooding parameter duplication (QPD) attack to infer the model information.
We propose a novel High-Dimensional Gaussian (HDG) mechanism to prevent unauthorized information disclosure.
- Score: 12.017509695576377
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Public intelligent services enabled by machine learning algorithms are
vulnerable to model extraction attacks that can steal confidential information
of the learning models through public queries. Differential privacy (DP) has
been considered a promising technique to mitigate this attack. However, we find
that the vulnerability persists when regression models are being protected by
current DP solutions. We show that the adversary can launch a query-flooding
parameter duplication (QPD) attack to infer the model information by repeated
queries.
To defend against the QPD attack on logistic and linear regression models, we
propose a novel High-Dimensional Gaussian (HDG) mechanism to prevent
unauthorized information disclosure without interrupting the intended services.
In contrast to prior work, the proposed HDG mechanism will dynamically generate
the privacy budget and random noise for different queries and their results to
enhance the obfuscation. Besides, for the first time, HDG enables an optimal
privacy budget allocation that automatically determines the minimum amount of
noise to be added per user-desired privacy level on each dimension. We
comprehensively evaluate the performance of HDG using real-world datasets and
show that HDG effectively mitigates the QPD attack while satisfying the
privacy requirements. We also plan to open-source the relevant code to the
community for further research.
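
To make the abstract's threat model concrete, below is a minimal, hypothetical sketch of a QPD-style attack on a linear regression service: with only black-box access, the adversary floods a fixed set of probe inputs with repeated queries, averages the noisy responses to wash out independent per-query noise, and then solves a small linear system for the model's weights and bias. The service stub `query_api`, the noise scale, and the probe construction are illustrative assumptions, not the paper's exact attack procedure.

```python
import numpy as np

# Hypothetical victim: a linear regression model behind a prediction API
# that adds fresh Gaussian noise to every response (a simple DP-style guard).
rng = np.random.default_rng(0)
d = 5
true_w, true_b = rng.normal(size=d), 0.7

def query_api(x, sigma=0.5):
    """Victim service: returns a noised prediction for a single input x."""
    return float(x @ true_w + true_b + rng.normal(scale=sigma))

# Probe points: d + 1 linearly independent inputs suffice to identify (w, b).
probes = np.vstack([np.eye(d), np.zeros(d)])

# Query flooding: averaging repeated responses at the same probe shrinks the
# effective noise roughly like 1/sqrt(repeats).
repeats = 2000
avg = np.array([np.mean([query_api(x) for _ in range(repeats)]) for x in probes])

# Solve [X, 1] @ [w; b] = avg in the least-squares sense.
A = np.hstack([probes, np.ones((d + 1, 1))])
est, *_ = np.linalg.lstsq(A, avg, rcond=None)
print("recovered w:", est[:d], "recovered b:", est[-1])
```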
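
On the defense side, the sketch below illustrates the generic building block that HDG builds on: adding independent Gaussian noise to each dimension of a query result, with the noise scale calibrated from a per-dimension privacy budget via the classic Gaussian-mechanism formula sigma = sqrt(2 ln(1.25/delta)) * sensitivity / epsilon. The function names and fixed sensitivities are assumptions for illustration; HDG's dynamic per-query budget generation and optimal budget allocation are not reproduced here.

```python
import numpy as np

def gaussian_noise_scale(epsilon, delta, sensitivity):
    """Classic Gaussian-mechanism calibration (valid for epsilon < 1):
    sigma >= sqrt(2 ln(1.25/delta)) * sensitivity / epsilon."""
    return np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon

def perturb_result(result, epsilons, delta, sensitivities, rng=None):
    """Add independent Gaussian noise to each output dimension, allowing a
    different budget epsilon_i and sensitivity per dimension."""
    rng = rng or np.random.default_rng()
    result = np.asarray(result, dtype=float)
    sigmas = np.array([gaussian_noise_scale(e, delta, s)
                       for e, s in zip(epsilons, sensitivities)])
    return result + rng.normal(scale=sigmas, size=result.shape)

# Example: a 3-dimensional query result with a tighter budget on dimension 0.
noisy = perturb_result([2.0, -1.3, 0.4],
                       epsilons=[0.2, 0.5, 0.5],
                       delta=1e-5,
                       sensitivities=[1.0, 1.0, 1.0])
print(noisy)
```

Because fresh noise is drawn for every response, simple averaging of flooded queries becomes less effective when the mechanism also varies the budget and noise per query, which is the intuition behind HDG's dynamic noise generation described in the abstract.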
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z)
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control of the local training process leaves the global model susceptible to malicious manipulations on model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Reconciling AI Performance and Data Reconstruction Resilience for Medical Imaging [52.578054703818125]
Artificial Intelligence (AI) models are vulnerable to information leakage of their training data, which can be highly sensitive.
Differential Privacy (DP) aims to circumvent these susceptibilities by setting a quantifiable privacy budget.
We show that using very large privacy budgets can render reconstruction attacks impossible, while drops in performance are negligible.
arXiv Detail & Related papers (2023-12-05T12:21:30Z)
- TeD-SPAD: Temporal Distinctiveness for Self-supervised Privacy-preservation for video Anomaly Detection [59.04634695294402]
Video anomaly detection (VAD) without human monitoring is a complex computer vision task.
Privacy leakage in VAD allows models to pick up and amplify unnecessary biases related to people's personal information.
We propose TeD-SPAD, a privacy-aware video anomaly detection framework that destroys visual private information in a self-supervised manner.
arXiv Detail & Related papers (2023-08-21T22:42:55Z)
- A Linear Reconstruction Approach for Attribute Inference Attacks against Synthetic Data [1.5293427903448022]
We introduce a new attribute inference attack against synthetic data.
We show that our attack can be highly accurate even on arbitrary records.
We then evaluate the tradeoff between protecting privacy and preserving statistical utility.
arXiv Detail & Related papers (2023-01-24T14:56:36Z)
- Additive Logistic Mechanism for Privacy-Preserving Self-Supervised Learning [26.783944764936994]
We study the privacy risks that are associated with training a neural network's weights with self-supervised learning algorithms.
We design a post-training privacy-protection algorithm that adds noise to the fine-tuned weights.
We show that the proposed protection algorithm can effectively reduce the attack accuracy to roughly 50%, equivalent to random guessing.
arXiv Detail & Related papers (2022-05-25T01:33:52Z)
- Just Fine-tune Twice: Selective Differential Privacy for Large Language Models [69.66654761324702]
We propose a simple yet effective just-fine-tune-twice privacy mechanism to achieve SDP for large Transformer-based language models.
Experiments show that our models achieve strong performance while staying robust to the canary insertion attack.
arXiv Detail & Related papers (2022-04-15T22:36:55Z)
- Monitoring-based Differential Privacy Mechanism Against Query-Flooding Parameter Duplication Attack [15.977216274894912]
We propose an adaptive query-flooding parameter duplication (QPD) attack.
The adversary can infer the model information with black-box access.
We develop a defense strategy using monitoring-based DP (MDP) against this new attack.
arXiv Detail & Related papers (2020-11-01T04:21:48Z)
- Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks [31.34410250008759]
This paper measures the trade-off between model accuracy and privacy losses incurred by reconstruction, tracing and membership attacks.
Experiments show that model accuracies are improved on average by 5-20% compared with baseline mechanisms.
arXiv Detail & Related papers (2020-06-20T15:48:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.