OASIS: Offsetting Active Reconstruction Attacks in Federated Learning
- URL: http://arxiv.org/abs/2311.13739v2
- Date: Mon, 3 Jun 2024 23:02:08 GMT
- Title: OASIS: Offsetting Active Reconstruction Attacks in Federated Learning
- Authors: Tre' R. Jeter, Truc Nguyen, Raed Alharbi, My T. Thai
- Abstract summary: Federated Learning (FL) has garnered significant attention for its potential to protect user privacy.
Recent research has demonstrated that FL protocols can be easily compromised by active reconstruction attacks.
We propose a defense mechanism based on image augmentation that effectively counteracts active reconstruction attacks.
- Score: 14.644814818768172
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) has garnered significant attention for its potential to protect user privacy while enhancing model training efficiency. For that reason, FL has found its use in various domains, from healthcare to industrial engineering, especially where data cannot be easily exchanged due to sensitive information or privacy laws. However, recent research has demonstrated that FL protocols can be easily compromised by active reconstruction attacks executed by dishonest servers. These attacks involve the malicious modification of global model parameters, allowing the server to obtain a verbatim copy of users' private data by inverting their gradient updates. Tackling this class of attack remains a crucial challenge due to the strong threat model. In this paper, we propose a defense mechanism, namely OASIS, based on image augmentation that effectively counteracts active reconstruction attacks while preserving model performance. We first uncover the core principle of gradient inversion that enables these attacks and theoretically identify the main conditions under which the defense can be robust regardless of the attack strategy. We then construct our defense with image augmentation, showing that it can undermine the attack principle. Comprehensive evaluations demonstrate the efficacy of the defense mechanism, highlighting its feasibility as a solution.
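The defense described in the abstract hinges on computing client gradient updates over augmented rather than raw images, so that a dishonest server inverting those gradients recovers distorted views instead of verbatim private samples. Below is a minimal sketch of that idea, assuming a PyTorch client with torchvision-style augmentations; the augmentation policy and the `local_update` helper are illustrative assumptions, not the paper's exact OASIS procedure.

```python
# Hypothetical sketch: augment each local image before computing the
# gradient update shared with the server. A gradient inversion attack on
# this update reconstructs the augmented batch, not the raw private data.
import torch
import torch.nn.functional as F
from torchvision import transforms

# Assumed augmentation policy (illustrative, not the paper's exact choice).
augment = transforms.Compose([
    transforms.RandomResizedCrop(32, scale=(0.6, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.4, contrast=0.4),
])

def local_update(model, images, labels):
    """Compute one client's gradient update on an augmented batch."""
    model.zero_grad()
    # Apply an independent random augmentation to every image in the batch.
    augmented = torch.stack([augment(img) for img in images])
    loss = F.cross_entropy(model(augmented), labels)
    loss.backward()
    # Only gradients of the augmented batch leave the client.
    return [p.grad.detach().clone() for p in model.parameters()]
```

In this sketch the raw images never enter the loss computation, so even a server that maliciously crafts the global parameters to make inversion easy can at best recover cropped, flipped, and color-jittered views of the user's data.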
Related papers
- A Practical Trigger-Free Backdoor Attack on Neural Networks [33.426207982772226]
We propose a trigger-free backdoor attack that does not require access to any training data.
Specifically, we design a novel fine-tuning approach that incorporates the concept of malicious data into the concept of the attacker-specified class.
The effectiveness, practicality, and stealthiness of the proposed attack are evaluated on three real-world datasets.
arXiv Detail & Related papers (2024-08-21T08:53:36Z) - Improving the Robustness of Object Detection and Classification AI models against Adversarial Patch Attacks [2.963101656293054]
We analyze attack techniques and propose a robust defense approach.
We successfully reduce model confidence by over 20% using adversarial patch attacks that exploit object shape, texture and position.
Our inpainting defense approach significantly enhances model resilience, achieving high accuracy and reliable localization despite the adversarial attacks.
arXiv Detail & Related papers (2024-03-04T13:32:48Z) - BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive
Learning [85.2564206440109]
This paper reveals the threats in this practical scenario that backdoor attacks can remain effective even after defenses.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z) - FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
We propose MESAS, the first defense robust against strong adaptive adversaries; it is effective in real-world data scenarios with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - Learning to Backdoor Federated Learning [9.046972927978997]
In a federated learning (FL) system, malicious participants can easily embed backdoors into the aggregated model.
We propose a general reinforcement learning-based backdoor attack framework.
Our framework is both adaptive and flexible and achieves strong attack performance and durability even under state-of-the-art defenses.
arXiv Detail & Related papers (2023-03-06T17:47:04Z) - Learning to Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning [31.374376311614675]
Gradient inversion attacks enable the recovery of training samples from model gradients in federated learning.
We show that existing defenses can be broken by a simple adaptive attack.
arXiv Detail & Related papers (2022-10-19T20:41:30Z) - Evaluating Gradient Inversion Attacks and Defenses in Federated Learning [43.993693910541275]
This paper evaluates existing attacks and defenses against gradient inversion attacks.
We show the trade-offs of privacy leakage and data utility of three proposed defense mechanisms.
Our findings suggest that the state-of-the-art attacks can currently be defended against with minor data utility loss.
arXiv Detail & Related papers (2021-11-30T19:34:16Z) - Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z) - Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses [59.58128343334556]
We introduce a relaxation term to the standard loss that finds more suitable gradient directions, increases attack efficacy, and leads to more efficient adversarial training.
We propose Guided Adversarial Margin Attack (GAMA), which utilizes function mapping of the clean image to guide the generation of adversaries.
We also propose Guided Adversarial Training (GAT), which achieves state-of-the-art performance amongst single-step defenses.
arXiv Detail & Related papers (2020-11-30T16:39:39Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)