Monitoring-based Differential Privacy Mechanism Against Query-Flooding
Parameter Duplication Attack
- URL: http://arxiv.org/abs/2011.00418v1
- Date: Sun, 1 Nov 2020 04:21:48 GMT
- Title: Monitoring-based Differential Privacy Mechanism Against Query-Flooding
Parameter Duplication Attack
- Authors: Haonan Yan, Xiaoguang Li, Hui Li, Jiamin Li, Wenhai Sun and Fenghua Li
- Abstract summary: We propose an adaptive query-flooding parameter duplication (QPD) attack.
The adversary can infer the model information with black-box access.
We develop a defense strategy using monitoring-based DP (MDP) against this new attack.
- Score: 15.977216274894912
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Public intelligent services enabled by machine learning algorithms are
vulnerable to model extraction attacks that can steal confidential information
of the learning models through public queries. Although protection options such
as differential privacy (DP) and monitoring are considered promising techniques
to mitigate this attack, we find that the
vulnerability persists. In this paper, we propose an adaptive query-flooding
parameter duplication (QPD) attack. The adversary can infer the model
information with black-box access and no prior knowledge of any model
parameters or training data via QPD. We also develop a defense strategy using
DP called monitoring-based DP (MDP) against this new attack. In MDP, we first
propose a novel real-time model extraction status assessment scheme called
Monitor to assess the current extraction status of the model. Then, we design a
method called APBA to adaptively guide the differential privacy budget allocation.
Finally, all DP-based defenses with MDP could dynamically adjust the amount of
noise added in the model response according to the result from Monitor and
effectively defends the QPD attack. Furthermore, we thoroughly evaluate and
compare the QPD attack and MDP defense performance on real-world models with DP
and monitoring protection.
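As a rough illustration of the defense idea only (not the authors' implementation), the sketch below assumes a hypothetical ExtractionMonitor that scores how suspicious the recent query stream looks (e.g., bursts of near-duplicate queries, as in a QPD attack) and an adaptive_budget stand-in for APBA that shrinks the per-query privacy budget as that score rises, so the Laplace noise on each response grows under suspected extraction.

```python
import numpy as np

class ExtractionMonitor:
    """Toy stand-in for the paper's Monitor: flags bursts of near-duplicate queries."""

    def __init__(self, similarity_threshold: float = 0.05):
        self.history = []
        self.similarity_threshold = similarity_threshold

    def update(self, query: np.ndarray) -> float:
        # Risk = fraction of previous queries that are almost identical to this one.
        duplicates = sum(
            np.linalg.norm(query - past) < self.similarity_threshold
            for past in self.history
        )
        risk = duplicates / max(len(self.history), 1)
        self.history.append(query)
        return risk  # in [0, 1]

def adaptive_budget(risk: float, eps_max: float = 1.0, eps_min: float = 0.05) -> float:
    # Stand-in for APBA: higher suspected extraction risk -> smaller epsilon.
    return eps_max - risk * (eps_max - eps_min)

def answer_query(model_fn, query, monitor, sensitivity: float = 1.0):
    # Laplace mechanism whose noise scale grows as the Monitor's risk estimate rises.
    response = np.asarray(model_fn(query), dtype=float)
    eps = adaptive_budget(monitor.update(query))
    return response + np.random.laplace(scale=sensitivity / eps, size=response.shape)
```

In this toy setup, a flood of duplicated queries drives the risk estimate toward 1, the budget toward eps_min, and the responses toward pure noise, which is the qualitative behavior the paper's MDP defense aims for.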
Related papers
- IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency [20.61046457594186]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
This paper proposes a simple yet effective input-level backdoor detection method (dubbed IBD-PSC) to filter out malicious testing images.
arXiv Detail & Related papers (2024-05-16T03:19:52Z)
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- MisGUIDE: Defense Against Data-Free Deep Learning Model Extraction [0.8437187555622164]
"MisGUIDE" is a two-step defense framework for Deep Learning models that disrupts the adversarial sample generation process.
The aim of the proposed defense method is to reduce the accuracy of the cloned model while maintaining accuracy on authentic queries.
arXiv Detail & Related papers (2024-03-27T13:59:21Z)
- Does Differential Privacy Prevent Backdoor Attacks in Practice? [8.951356689083166]
We investigate the effectiveness of Differential Privacy techniques in preventing backdoor attacks in machine learning models.
We propose Label-DP as a faster and more accurate alternative to DP-SGD and PATE.
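For context on the baselines named above, here is a minimal sketch of the standard DP-SGD update (per-example gradient clipping plus Gaussian noise). It is a generic illustration, not this paper's Label-DP method, and the function names are placeholders.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=None):
    """One DP-SGD step: clip each per-example gradient, average, add Gaussian noise."""
    rng = rng or np.random.default_rng()
    clipped = [
        g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        for g in per_example_grads
    ]
    noisy_mean = np.mean(clipped, axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm / len(per_example_grads),
        size=np.shape(params),
    )
    return params - lr * noisy_mean
```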
arXiv Detail & Related papers (2023-11-10T18:32:08Z)
- Setting the Trap: Capturing and Defeating Backdoors in Pretrained Language Models through Honeypots [68.84056762301329]
Recent research has exposed the susceptibility of pretrained language models (PLMs) to backdoor attacks.
We propose and integrate a honeypot module into the original PLM to absorb backdoor information exclusively.
Our design is motivated by the observation that lower-layer representations in PLMs carry sufficient backdoor features.
arXiv Detail & Related papers (2023-10-28T08:21:16Z)
- Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks [72.03945355787776]
We advocate MDP, a lightweight, pluggable, and effective defense for PLMs as few-shot learners.
We show analytically that MDP creates an interesting dilemma for the attacker to choose between attack effectiveness and detection evasiveness.
arXiv Detail & Related papers (2023-09-23T04:41:55Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Predictable MDP Abstraction for Unsupervised Model-Based RL [93.91375268580806]
We propose predictable MDP abstraction (PMA).
Instead of training a predictive model on the original MDP, we train a model on a transformed MDP with a learned action space.
We theoretically analyze PMA and empirically demonstrate that PMA leads to significant improvements over prior unsupervised model-based RL approaches.
arXiv Detail & Related papers (2023-02-08T07:37:51Z)
- Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation [100.69458267888962]
Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline.
Due to legal and privacy issues, training data (real face images and spoof images) are not allowed to be directly shared between different data sources.
We propose a Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation framework.
arXiv Detail & Related papers (2021-10-25T02:51:05Z)
- Improving Robustness to Model Inversion Attacks via Mutual Information Regularization [12.079281416410227]
This paper studies defense mechanisms against model inversion (MI) attacks.
MI is a type of privacy attack that aims to infer information about the training data distribution given access to a target machine learning model.
We propose the Mutual Information Regularization based Defense (MID) against MI attacks.
arXiv Detail & Related papers (2020-09-11T06:02:44Z)
- Mitigating Query-Flooding Parameter Duplication Attack on Regression Models with High-Dimensional Gaussian Mechanism [12.017509695576377]
Differential privacy (DP) has been considered a promising technique to mitigate this attack.
We show that the adversary can launch a query-flooding parameter duplication (QPD) attack to infer the model information.
We propose a novel High-Dimensional Gaussian (HDG) mechanism to prevent unauthorized information disclosure.
arXiv Detail & Related papers (2020-02-06T01:47:08Z)
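The HDG mechanism of the entry above is specific to high-dimensional regression outputs and is not reproduced here. As a generic illustration of the underlying idea only, the sketch below perturbs a linear regression's answer with the classic (eps, delta) Gaussian mechanism; the L2 sensitivity, eps, and delta values are arbitrary placeholders.

```python
import numpy as np

def gaussian_mechanism(values, l2_sensitivity, eps, delta, rng=None):
    """Classic (eps, delta)-DP Gaussian mechanism (analysis valid for eps <= 1)."""
    rng = rng or np.random.default_rng()
    sigma = l2_sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return np.asarray(values, dtype=float) + rng.normal(scale=sigma, size=np.shape(values))

# Hypothetical usage: answer a prediction query from a regression model under protection.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)
w_hat = np.linalg.lstsq(X, y, rcond=None)[0]   # fitted model whose answers are protected

query = rng.normal(size=5)
noisy_answer = gaussian_mechanism(query @ w_hat, l2_sensitivity=1.0, eps=0.5, delta=1e-5)
```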
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.