Beyond Gradients: Exploiting Adversarial Priors in Model Inversion Attacks
- URL: http://arxiv.org/abs/2203.00481v1
- Date: Tue, 1 Mar 2022 14:22:29 GMT
- Title: Beyond Gradients: Exploiting Adversarial Priors in Model Inversion Attacks
- Authors: Dmitrii Usynin, Daniel Rueckert, Georgios Kaissis
- Abstract summary: Collaborative machine learning settings can be susceptible to adversarial interference and attacks.
One class of such attacks is termed model inversion attacks, characterised by the adversary reverse-engineering the model to extract representations.
We propose a novel model inversion framework that builds on the foundations of gradient-based model inversion attacks.
- Score: 7.49320945341034
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Collaborative machine learning settings like federated learning can be
susceptible to adversarial interference and attacks. One class of such attacks
is termed model inversion attacks, characterised by the adversary
reverse-engineering the model to extract representations and thus disclose the
training data. Prior implementations of this attack typically only rely on the
captured data (i.e. the shared gradients) and do not exploit the data the
adversary themselves control as part of the training consortium. In this work,
we propose a novel model inversion framework that builds on the foundations of
gradient-based model inversion attacks, but additionally relies on matching the
features and the style of the reconstructed image to data that is controlled by
an adversary. Our technique outperforms existing gradient-based approaches both
qualitatively and quantitatively, while still maintaining the same
honest-but-curious threat model, allowing the adversary to obtain enhanced
reconstructions while remaining concealed.
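As a rough illustration of the objective described in the abstract (not the authors' released code), the sketch below augments a standard gradient-matching loss with feature- and style-matching terms computed against adversary-controlled data; the `feat_extractor`, the Gram-matrix notion of "style", and the loss weights are assumptions.

```python
# Hedged sketch of a gradient-based inversion objective extended with
# feature- and style-matching against adversary-controlled data.
# `feat_extractor` is assumed to return convolutional feature maps and
# `x_adv` to have the same shape as the reconstruction; both are illustrative.
import torch
import torch.nn.functional as F

def gram_matrix(features):
    # Channel-wise Gram matrix, a common "style" statistic.
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def inversion_loss(model, feat_extractor, x_rec, y, observed_grads, x_adv,
                   lam_feat=1.0, lam_style=1.0):
    # 1) Classic gradient-matching term.
    task_loss = F.cross_entropy(model(x_rec), y)
    dummy_grads = torch.autograd.grad(task_loss, model.parameters(),
                                      create_graph=True)
    grad_term = sum(((dg - og) ** 2).sum()
                    for dg, og in zip(dummy_grads, observed_grads))

    # 2) Feature-matching term against the adversary's own data.
    f_rec, f_adv = feat_extractor(x_rec), feat_extractor(x_adv)
    feat_term = F.mse_loss(f_rec, f_adv)

    # 3) Style-matching term on the same features.
    style_term = F.mse_loss(gram_matrix(f_rec), gram_matrix(f_adv))

    return grad_term + lam_feat * feat_term + lam_style * style_term
```

As in standard gradient-inversion attacks, `x_rec` would then be optimised (e.g. with Adam or L-BFGS) to minimise this loss.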
Related papers
- Transferable Adversarial Attacks on SAM and Its Downstream Models [87.23908485521439]
This paper explores the feasibility of adversarially attacking various downstream models fine-tuned from the Segment Anything Model (SAM).
To enhance the effectiveness of the adversarial attack towards models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
arXiv Detail & Related papers (2024-10-26T15:04:04Z)
- Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning [49.242828934501986]
Multimodal contrastive learning has emerged as a powerful paradigm for building high-quality features.
However, backdoor attacks can subtly embed malicious behaviors within the model during training.
We introduce an innovative token-based localized forgetting training regime.
arXiv Detail & Related papers (2024-03-24T18:33:15Z)
- Defense Against Model Extraction Attacks on Recommender Systems [53.127820987326295]
We introduce Gradient-based Ranking Optimization (GRO) to defend against model extraction attacks on recommender systems.
GRO aims to minimize the loss of the protected target model while maximizing the loss of the attacker's surrogate model.
Results show GRO's superior effectiveness in defending against model extraction attacks.
arXiv Detail & Related papers (2023-10-25T03:30:42Z)
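The GRO entry above states a min-max objective: keep the protected model accurate while degrading a surrogate trained to imitate it. A minimal sketch of that idea follows, cast in a generic classification setting for brevity (GRO itself targets recommender ranking lists); the losses, names, and single-step alternation are illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch of the min-max idea: a surrogate imitates the target's
# exposed outputs, while the target minimises its task loss and maximises
# the surrogate's imitation loss. Not the paper's code.
import torch
import torch.nn.functional as F

def min_max_defense_step(target, surrogate, opt_t, opt_s, x, y, alpha=1.0):
    # Attacker simulation: surrogate distils the target's exposed predictions.
    with torch.no_grad():
        exposed = F.softmax(target(x), dim=-1)
    opt_s.zero_grad()
    s_loss = F.kl_div(F.log_softmax(surrogate(x), dim=-1), exposed,
                      reduction="batchmean")
    s_loss.backward()
    opt_s.step()

    # Defence: stay accurate on the task while pushing the target's outputs
    # away from what the surrogate has learned.
    with torch.no_grad():
        s_pred = F.softmax(surrogate(x), dim=-1)
    opt_t.zero_grad()
    task_loss = F.cross_entropy(target(x), y)
    imitation = F.kl_div(F.log_softmax(target(x), dim=-1), s_pred,
                         reduction="batchmean")
    (task_loss - alpha * imitation).backward()
    opt_t.step()
```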
- OMG-ATTACK: Self-Supervised On-Manifold Generation of Transferable Evasion Attacks [17.584752814352502]
Evasion Attacks (EA) are used to test the robustness of trained neural networks by distorting input data.
We introduce a self-supervised, computationally economical method for generating adversarial examples.
Our experiments consistently demonstrate the method is effective across various models, unseen data categories, and even defended models.
arXiv Detail & Related papers (2023-10-05T17:34:47Z)
- Boosting Model Inversion Attacks with Adversarial Examples [26.904051413441316]
We propose a new training paradigm for a learning-based model inversion attack that can achieve higher attack accuracy in a black-box setting.
First, we regularize the training process of the attack model with an added semantic loss function.
Second, we inject adversarial examples into the training data to increase the diversity of the class-related parts.
arXiv Detail & Related papers (2023-06-24T13:40:58Z)
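Two ingredients are named above: a semantic regulariser on the attack (inversion) model and adversarial examples injected into its training data. The sketch below is one plausible reading, assuming a local substitute model stands in for the black-box target so gradients are available; the FGSM perturbation, MSE losses, and weighting are assumptions rather than the paper's setup.

```python
# Hedged sketch: train an inversion ("attack") model on clean plus
# adversarially perturbed inputs, with a semantic-consistency regulariser.
# `substitute` stands in for the black-box target; its parameters are frozen.
import torch
import torch.nn.functional as F

def train_attack_step(substitute, inverter, opt, x, lam_sem=0.1, eps=0.03):
    # Inject FGSM-perturbed copies to diversify the training data.
    x_var = x.clone().requires_grad_(True)
    logits = substitute(x_var)
    fgsm_loss = F.cross_entropy(logits, logits.argmax(dim=-1))
    grad, = torch.autograd.grad(fgsm_loss, x_var)
    x_all = torch.cat([x, (x + eps * grad.sign()).clamp(0, 1)]).detach()

    # The attack model maps prediction vectors back to inputs; the semantic
    # term asks the reconstruction to elicit the same predictions.
    with torch.no_grad():
        conf = F.softmax(substitute(x_all), dim=-1)
    x_rec = inverter(conf)
    rec_loss = F.mse_loss(x_rec, x_all)
    sem_loss = F.mse_loss(F.softmax(substitute(x_rec), dim=-1), conf)

    opt.zero_grad()
    (rec_loss + lam_sem * sem_loss).backward()
    opt.step()
```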
- Dropout is NOT All You Need to Prevent Gradient Leakage [0.6021787236982659]
We analyze the effect of dropout on iterative gradient inversion attacks.
We propose a novel Dropout Inversion Attack (DIA) that jointly optimizes for client data and dropout masks.
We find that our proposed attack bypasses the protection seemingly induced by dropout and reconstructs client data with high fidelity.
arXiv Detail & Related papers (2022-08-12T08:29:44Z)
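The joint optimisation described above can be sketched roughly as follows, assuming a hypothetical two-layer MLP (attributes `fc1`/`fc2`) and a sigmoid-relaxed, learnable dropout mask; this illustrates the idea of optimising data and mask together against observed gradients, not the paper's DIA implementation.

```python
# Hedged sketch: jointly optimise a dummy input and a relaxed dropout mask
# so that the resulting gradients match the observed client gradients.
# The two-layer MLP and the sigmoid relaxation are illustrative assumptions.
import torch
import torch.nn.functional as F

def dia_style_attack(model, observed_grads, y, in_dim, hid_dim, steps=2000):
    x_dummy = torch.randn(1, in_dim, requires_grad=True)
    mask_logits = torch.zeros(1, hid_dim, requires_grad=True)
    opt = torch.optim.Adam([x_dummy, mask_logits], lr=0.1)

    for _ in range(steps):
        h = F.relu(model.fc1(x_dummy)) * torch.sigmoid(mask_logits)  # masked layer
        loss = F.cross_entropy(model.fc2(h), y)
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        match = sum(((g - og) ** 2).sum()
                    for g, og in zip(grads, observed_grads))
        opt.zero_grad()
        match.backward()
        opt.step()

    return x_dummy.detach(), torch.sigmoid(mask_logits).detach()
```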
- Reconstructing Training Data with Informed Adversaries [30.138217209991826]
Given access to a machine learning model, can an adversary reconstruct the model's training data?
This work studies this question from the lens of a powerful informed adversary who knows all the training data points except one.
We show it is feasible to reconstruct the remaining data point in this stringent threat model.
arXiv Detail & Related papers (2022-01-13T09:19:25Z)
- Learning to Learn Transferable Attack [77.67399621530052]
Transfer adversarial attacks are non-trivial black-box attacks that craft adversarial perturbations on a surrogate model and then apply them to the victim model.
We propose a Learning to Learn Transferable Attack (LLTA) method, which makes the adversarial perturbations more generalizable by learning from both data and model augmentation.
Empirical results on a widely used dataset demonstrate the effectiveness of our attack, achieving a 12.85% higher transfer-attack success rate than state-of-the-art methods.
arXiv Detail & Related papers (2021-12-10T07:24:21Z)
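For context on the transfer setting described above (craft a perturbation on a surrogate, then apply it to the victim), a generic sketch follows; the random-noise augmentation only loosely echoes the data-augmentation idea, and LLTA's model-augmentation and meta-learning components are omitted, so this is an illustration of the setting rather than the LLTA method.

```python
# Hedged sketch of a plain transfer attack: PGD-style crafting on a surrogate
# model with simple input augmentation, then evaluation on the victim.
import torch
import torch.nn.functional as F

def craft_on_surrogate(surrogate, x, y, eps=8 / 255, alpha=2 / 255,
                       steps=10, n_aug=4):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = 0.0
        for _ in range(n_aug):
            noise = 0.03 * torch.randn_like(x)  # cheap data augmentation
            loss = loss + F.cross_entropy(
                surrogate((x + delta + noise).clamp(0, 1)), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return delta.detach()

# Example use: x_adv = (x + craft_on_surrogate(surrogate, x, y)).clamp(0, 1)
# Transfer success: (victim(x_adv).argmax(-1) != y).float().mean()
```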
- Delving into Data: Effectively Substitute Training for Black-box Attack [84.85798059317963]
We propose a novel perspective on substitute training that focuses on designing the distribution of data used in the knowledge-stealing process.
The combination of these two modules further boosts the consistency between the substitute model and the target model, which greatly improves the effectiveness of the adversarial attack.
arXiv Detail & Related papers (2021-04-26T07:26:29Z)
- Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution [83.02632136860976]
We study black-box adversarial attacks against deep neural networks (DNNs).
We develop a novel mechanism of adversarial transferability, which is robust to the surrogate biases.
Experiments on benchmark datasets and attacks against a real-world API demonstrate the superior attack performance of the proposed method.
arXiv Detail & Related papers (2020-06-15T16:45:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.