SPIN: Simulated Poisoning and Inversion Network for Federated
Learning-Based 6G Vehicular Networks
- URL: http://arxiv.org/abs/2211.11321v1
- Date: Mon, 21 Nov 2022 10:07:13 GMT
- Title: SPIN: Simulated Poisoning and Inversion Network for Federated
Learning-Based 6G Vehicular Networks
- Authors: Sunder Ali Khowaja, Parus Khuwaja, Kapal Dev, Angelos Antonopoulos
- Abstract summary: Vehicular networks have always faced data privacy preservation concerns.
Federated learning addresses these concerns to some extent, but remains vulnerable to model inversion and model poisoning attacks.
We propose the simulated poisoning and inversion network (SPIN), which leverages an optimization approach to reconstruct data.
- Score: 9.494669823390648
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The applications concerning vehicular networks benefit from the vision of
beyond-5G and 6G technologies, such as ultra-dense network topologies, low
latency, and high data rates. Vehicular networks have always faced data privacy
preservation concerns, which led to the advent of distributed learning
techniques such as federated learning. Although federated learning has solved
data privacy preservation issues to some extent, the technique remains quite
vulnerable to model inversion and model poisoning attacks. We assume that the
design of defense mechanisms and of attacks are two sides of the same coin:
designing a method to reduce vulnerability requires the attack to be effective
and challenging, with real-world implications. In this work, we propose the
simulated poisoning and inversion network (SPIN), which leverages an
optimization approach to reconstruct data from a differential model trained
by a vehicular node and intercepted when transmitted to a roadside unit (RSU). We
then train a generative adversarial network (GAN) to improve the generation of
data with each passing round and global update from the RSU. Evaluation
results show the qualitative and quantitative effectiveness of the proposed
approach. The attack initiated by SPIN can reduce accuracy on publicly
available datasets by up to 22% while using just a single attacker. We believe
that simulating such attacks will help in designing effective defense
mechanisms.
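To make the attack surface concrete, here is a minimal, hypothetical sketch (in PyTorch) of the gradient-inversion step that SPIN's optimization approach builds on: an attacker who intercepts a node's update optimizes dummy data until the gradients it produces match the intercepted ones. The function name, hyperparameters, and the use of soft labels are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of gradient inversion (not the authors' code):
# recover training data from an intercepted gradient by optimizing a
# dummy input so that its gradient matches the intercepted one.
import torch
import torch.nn.functional as F

def invert_gradients(model, intercepted_grads, input_shape, num_classes,
                     steps=300, lr=0.1):
    # Dummy data and label logits are the optimization variables.
    x = torch.randn(1, *input_shape, requires_grad=True)
    y = torch.randn(1, num_classes, requires_grad=True)
    opt = torch.optim.Adam([x, y], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), F.softmax(y, dim=-1))
        grads = torch.autograd.grad(loss, model.parameters(),
                                    create_graph=True)
        # Distance between reproduced and intercepted gradients.
        match = sum(((g - t) ** 2).sum()
                    for g, t in zip(grads, intercepted_grads))
        match.backward()
        opt.step()
    return x.detach(), y.detach()
```

SPIN additionally couples such reconstructions with a GAN whose generator is refined after every global update from the RSU; the sketch covers only the inversion core.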
Related papers
- Data and Model Poisoning Backdoor Attacks on Wireless Federated
Learning, and the Defense Mechanisms: A Comprehensive Survey [28.88186038735176]
Federated Learning (FL) has been increasingly considered for applications in wireless communication networks (WCNs).
In general, the non-independent and identically distributed (non-IID) data of WCNs raises concerns about robustness.
This survey provides a comprehensive review of the latest backdoor attacks and defense mechanisms.
arXiv Detail & Related papers (2023-12-14T05:52:29Z)
- Genetic Algorithm-Based Dynamic Backdoor Attack on Federated Learning-Based Network Traffic Classification [1.1887808102491482]
We propose GABAttack, a novel genetic algorithm-based backdoor attack against federated learning for network traffic classification.
This research serves as an alarming call for network security experts and practitioners to develop robust defense measures against such attacks.
arXiv Detail & Related papers (2023-09-27T14:02:02Z)
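As a rough illustration of the genetic-algorithm idea behind GABAttack (a toy sketch; the trigger encoding, fitness function, and all names below are assumptions, not the paper's design), a population of candidate triggers can be evolved against a surrogate classifier:

```python
# Toy sketch of a genetic-algorithm trigger search (not GABAttack itself):
# evolve a small feature patch so that patched samples are classified as
# the attacker's target class by a surrogate model.
import torch

def ga_trigger_search(surrogate, samples, target_class, patch_len=8,
                      pop_size=20, generations=50, mut_std=0.1):
    pop = torch.randn(pop_size, patch_len)  # candidate triggers

    def fitness(trigger):
        with torch.no_grad():
            patched = samples.clone()        # samples: (N, D) feature matrix
            patched[:, :patch_len] = trigger # stamp trigger onto features
            preds = surrogate(patched).argmax(dim=-1)
            return (preds == target_class).float().mean()

    for _ in range(generations):
        scores = torch.stack([fitness(t) for t in pop])
        elite = pop[scores.topk(pop_size // 2).indices]       # selection
        children = elite + mut_std * torch.randn_like(elite)  # mutation
        pop = torch.cat([elite, children])
    return pop[torch.stack([fitness(t) for t in pop]).argmax()]
```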
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed MESAS defense is the first to remain robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
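A hedged sketch of the multi-metric intuition in this line of work (MESAS's actual metrics and statistical tests are not reproduced here): the server scores every client update on several statistics at once, so an adaptive attacker must stay inconspicuous on all of them simultaneously.

```python
# Hedged sketch of multi-metric update filtering (not MESAS itself):
# score each client update on several statistics and drop outliers
# before aggregation.
import torch

def filter_updates(updates, z_thresh=2.0):
    # updates: list of per-client parameter-update tensor lists
    flat = [torch.cat([p.flatten() for p in u]) for u in updates]
    mean_dir = torch.stack(flat).mean(dim=0)
    metrics = torch.stack([
        torch.stack([f.norm() for f in flat]),                 # update norm
        torch.stack([torch.cosine_similarity(f, mean_dir, dim=0)
                     for f in flat]),                          # direction
        torch.stack([f.var() for f in flat]),                  # spread
    ])
    # z-score each metric across clients; keep clients that are
    # inliers on every metric simultaneously.
    z = (metrics - metrics.mean(dim=1, keepdim=True)) / (
        metrics.std(dim=1, keepdim=True) + 1e-8)
    keep = (z.abs() < z_thresh).all(dim=0)
    return [u for u, k in zip(updates, keep) if k]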
- Adversarial training with informed data selection [53.19381941131439]
Adversarial training is the most efficient solution to defend the network against these malicious attacks.
This work proposes a data selection strategy to be applied in the mini-batch training.
The simulation results show that a good compromise can be obtained regarding robustness and standard accuracy.
arXiv Detail & Related papers (2023-01-07T12:09:50Z)
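One plausible reading of such a mini-batch data-selection strategy, sketched under stated assumptions (the paper's actual selection criterion may differ): spend the adversarial-example budget only on the hardest samples of each batch, trading robustness against standard accuracy and compute cost.

```python
# Hedged sketch of informed data selection for adversarial training
# (illustrative criterion, not the paper's): craft adversarial examples
# only for the highest-loss fraction of each mini-batch.
import torch
import torch.nn.functional as F

def selective_adv_batch(model, x, y, frac=0.5, eps=0.03):
    with torch.no_grad():
        losses = F.cross_entropy(model(x), y, reduction="none")
    k = max(1, int(frac * len(x)))
    idx = losses.topk(k).indices              # hardest samples in the batch

    x_adv = x.clone()
    sel = x[idx].clone().requires_grad_(True)
    loss = F.cross_entropy(model(sel), y[idx])
    loss.backward()
    # Single FGSM step on the selected samples only; inputs assumed in [0, 1].
    x_adv[idx] = (sel + eps * sel.grad.sign()).clamp(0, 1).detach()
    return x_adv, y
```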
- RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against membership inference attacks (MIAs).
arXiv Detail & Related papers (2022-07-12T19:34:47Z)
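The loss-relaxation idea can be sketched as follows (a simplification; the paper's full procedure includes additional steps beyond this): rather than driving the training loss toward zero, keep it near a target level alpha, which narrows the train/test loss gap that membership inference exploits.

```python
# Simplified sketch of the loss-relaxation idea (not RelaxLoss's full
# procedure): descend while the loss is above a target alpha, and
# gently ascend once it falls below, so members and non-members end
# up with similar loss values.
import torch
import torch.nn.functional as F

def relaxed_step(model, optimizer, x, y, alpha=0.5):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    if loss.item() > alpha:
        loss.backward()          # normal descent while above the target
    else:
        (-loss).backward()       # gentle ascent once below the target
    optimizer.step()
    return loss.item()
```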
- Get your Foes Fooled: Proximal Gradient Split Learning for Defense against Model Inversion Attacks on IoMT data [5.582293277542012]
In this work, we propose the proximal gradient split learning (PSGL) method as a defense against model inversion attacks.
We propose the use of the proximal gradient method to recover gradient maps and a decision-level fusion strategy to improve recognition performance.
arXiv Detail & Related papers (2022-01-12T17:01:19Z)
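Split learning itself, which PSGL builds on, is easy to sketch; the proximal-style penalty below is a hypothetical stand-in marking where a gradient-map defense could sit, not PSGL's actual formulation:

```python
# Hedged sketch of a split-learning step (the proximal penalty is a
# hypothetical stand-in, not PSGL's method): the client runs the front
# of the network, sends cut-layer activations, and the server finishes
# the forward pass and returns gradients.
import torch
import torch.nn.functional as F

def split_step(client_net, server_net, c_opt, s_opt, x, y, prox=0.01):
    c_opt.zero_grad(); s_opt.zero_grad()
    acts = client_net(x)                           # client-side forward
    acts_srv = acts.detach().requires_grad_(True)  # "transmitted" activations
    loss = F.cross_entropy(server_net(acts_srv), y)
    loss.backward()                                # server-side backward
    s_opt.step()
    # Client backward: task gradient returned by the server, plus a
    # proximal-style penalty on the activations (illustrative defense
    # placement only).
    penalty = prox * acts.pow(2).sum()
    torch.autograd.backward([acts, penalty],
                            [acts_srv.grad, torch.ones_like(penalty)])
    c_opt.step()
    return loss.item()
```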
- Learning to Learn Transferable Attack [77.67399621530052]
Transfer adversarial attack is a non-trivial black-box adversarial attack that aims to craft adversarial perturbations on the surrogate model and then apply such perturbations to the victim model.
We propose a Learning to Learn Transferable Attack (LLTA) method, which makes the adversarial perturbations more generalized via learning from both data and model augmentation.
Empirical results on widely used datasets demonstrate the effectiveness of our attack method, with a 12.85% higher transfer-attack success rate than state-of-the-art methods.
arXiv Detail & Related papers (2021-12-10T07:24:21Z)
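A simplified stand-in for the data-augmentation half of this idea (LLTA's learned augmentation and meta-learning steps are not reproduced): craft the perturbation on a surrogate over randomly augmented copies of the input, so it generalizes to the unseen victim model.

```python
# Hedged sketch of a transfer attack with random data augmentation
# (simplified stand-in, not LLTA itself): accumulate surrogate gradients
# over augmented copies, then apply the perturbation to the victim.
import torch
import torch.nn.functional as F

def transfer_attack(surrogate, x, y, eps=0.03, steps=10, n_aug=4):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = 0.0
        for _ in range(n_aug):
            shift = int(torch.randint(-2, 3, (1,)))   # tiny random translation
            x_aug = torch.roll(x + delta, shifts=shift, dims=-1)
            loss = loss + F.cross_entropy(surrogate(x_aug), y)
        loss.backward()
        with torch.no_grad():
            delta += (eps / steps) * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (x + delta).detach()   # evaluate this on the victim model
```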
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
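A toy construction in the spirit of such system-level alterations (hypothetical, not the paper's construction): a single extra hidden unit, tuned to a secret trigger input, redirects the decision for that input while leaving other inputs essentially untouched. A one-hidden-layer ReLU network is assumed.

```python
# Toy sketch of a stealth-style attack that alters the model itself
# (hypothetical construction, not the paper's): plant one ReLU unit
# that fires only near a secret trigger and pushes a target class.
import torch

def plant_stealth_unit(w_in, b_in, w_out, trigger, target_class, gain=50.0):
    # w_in: (hidden, in), b_in: (hidden,), w_out: (classes, hidden).
    # The new unit's input weights point along the trigger; its bias is
    # set so the unit stays silent unless the input is close to it.
    t = trigger / trigger.norm()
    w_in = torch.cat([w_in, t.unsqueeze(0)], dim=0)
    b_in = torch.cat([b_in, -0.95 * trigger.norm().unsqueeze(0)])
    # Output weights route the unit's activation into the target class.
    col = torch.zeros(w_out.shape[0], 1)
    col[target_class, 0] = gain
    w_out = torch.cat([w_out, col], dim=1)
    return w_in, b_in, w_out
```

For an input near the trigger, the planted unit activates and the gain dominates the target-class logit; inputs with little overlap with the trigger leave the unit at zero, so clean accuracy is roughly preserved.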
- GRNN: Generative Regression Neural Network -- A Data Leakage Attack for Federated Learning [3.050919759387984]
We show that image-based privacy data can be fully recovered from the shared gradient alone via our proposed Generative Regression Neural Network (GRNN).
We evaluate our method on several image classification tasks. The results illustrate that our proposed GRNN outperforms state-of-the-art methods with better stability, stronger robustness, and higher accuracy.
arXiv Detail & Related papers (2021-05-02T18:39:37Z)
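The distinguishing feature relative to direct per-pixel inversion is that the search runs over generator parameters. A hedged sketch under that reading (the GRNN architecture and loss terms below are illustrative assumptions, not the paper's):

```python
# Hedged sketch of generative gradient matching in the spirit of GRNN
# (architecture and losses are illustrative): train a generator so the
# gradient its fake data induces on the global model matches the
# gradient shared by the victim client.
import torch
import torch.nn as nn
import torch.nn.functional as F

def grnn_style_recover(model, shared_grads, latent_dim, out_dim,
                       num_classes, steps=500, lr=1e-3):
    gen = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                        nn.Linear(128, out_dim))
    label = torch.randn(1, num_classes, requires_grad=True)
    z = torch.randn(1, latent_dim)
    opt = torch.optim.Adam(list(gen.parameters()) + [label], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        fake = gen(z)
        loss = F.cross_entropy(model(fake), F.softmax(label, dim=-1))
        grads = torch.autograd.grad(loss, model.parameters(),
                                    create_graph=True)
        match = sum(((g - s) ** 2).sum()
                    for g, s in zip(grads, shared_grads))
        match.backward()
        opt.step()
    return gen(z).detach()
```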
- Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning [51.15273664903583]
Data heterogeneity has been identified as one of the key features of federated learning, but it is often overlooked from the perspective of robustness to adversarial attacks.
This paper focuses on characterizing and understanding its impact on backdooring attacks in federated learning through comprehensive experiments using synthetic and the LEAF benchmarks.
arXiv Detail & Related papers (2021-02-01T06:06:21Z)
- Mitigating the Impact of Adversarial Attacks in Very Deep Networks [10.555822166916705]
Deep Neural Network (DNN) models have security-related vulnerabilities.
Data poisoning-enabled perturbation attacks are complex adversarial attacks that inject false data into models.
We propose an attack-agnostic defense method for mitigating their influence.
arXiv Detail & Related papers (2020-12-08T21:25:44Z)