Jamming Attacks on Decentralized Federated Learning in General Multi-Hop Wireless Networks
- URL: http://arxiv.org/abs/2301.05250v1
- Date: Thu, 12 Jan 2023 19:03:05 GMT
- Title: Jamming Attacks on Decentralized Federated Learning in General Multi-Hop Wireless Networks
- Authors: Yi Shi, Yalin E. Sagduyu, Tugba Erpek
- Abstract summary: We consider an effective attack that uses jammers to prevent the model exchanges between nodes.
We show that the DFL performance can be significantly reduced by jamming attacks launched in a wireless network.
- Score: 3.509171590450989
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decentralized federated learning (DFL) is an effective approach to train a
deep learning model at multiple nodes over a multi-hop network, without the
need for a server having direct connections to all nodes. In general, as long as
nodes are connected, potentially via multiple hops, the DFL process will
eventually allow each node to experience the effects of models from all other
nodes via either direct connections or multi-hop paths, and can thus train a
high-fidelity model at each node. We consider an effective attack that
uses jammers to prevent the model exchanges between nodes. There are two attack
scenarios. First, the adversary can attack any link under a certain budget.
Once attacked, the two end nodes of a link cannot exchange their models. Second,
some jammers with limited jamming ranges are deployed in the network and a
jammer can only jam nodes within its jamming range. Once a directional link is
attacked, the receiver node cannot receive the model from the transmitter node.
We design algorithms to select links to be attacked for both scenarios. For the
second scenario, we also design algorithms to deploy jammers at optimal
locations so that they can attack critical nodes and achieve the highest impact
on the DFL process. We evaluate these algorithms by using wireless signal
classification over a large network area as the use case and identify how these
attack mechanisms exploit various learning, connectivity, and sensing aspects.
We show that the DFL performance can be significantly reduced by jamming
attacks launched in a wireless network and characterize the attack surface as a
vulnerability study before the safe deployment of DFL over wireless networks.
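To make the first attack scenario concrete, below is a minimal sketch (an illustration under assumed details, not the paper's algorithm) of budget-constrained link selection: greedily jam the links that the most multi-hop model exchanges traverse, using edge betweenness as a stand-in for the paper's unpublished link-importance criterion.

```python
# Hypothetical sketch of attack scenario 1: pick up to `budget` links to jam.
# Edge betweenness approximates how much model-exchange traffic a link
# carries over multi-hop paths in the DFL communication graph.
import networkx as nx

def select_links_to_jam(topology: nx.Graph, budget: int) -> list:
    g = topology.copy()
    jammed = []
    for _ in range(budget):
        if g.number_of_edges() == 0:
            break
        centrality = nx.edge_betweenness_centrality(g)
        link = max(centrality, key=centrality.get)
        jammed.append(link)
        g.remove_edge(*link)  # both end nodes lose this model exchange
    return jammed

# Example: a 4x4 grid of DFL nodes with a budget of 3 jammed links.
print(select_links_to_jam(nx.grid_2d_graph(4, 4), budget=3))
```

For the second scenario, placing a limited number of range-constrained jammers can be framed as a coverage problem; a hedged greedy sketch follows (names, candidate sites, and parameters are illustrative). Once a node falls inside a jammer's range, every directional link into that receiver is cut.

```python
from math import dist  # Euclidean distance (Python 3.8+)

def place_jammers(node_pos, candidate_pos, num_jammers, jam_range):
    """Greedy max-coverage placement: each new jammer silences the largest
    number of still-uncovered receiver nodes within its jamming range."""
    uncovered = set(node_pos)
    placed = []
    for _ in range(num_jammers):
        best = max(candidate_pos,
                   key=lambda c: sum(dist(c, p) <= jam_range for p in uncovered))
        placed.append(best)
        uncovered = {p for p in uncovered if dist(best, p) > jam_range}
    return placed

# Example: nodes on a unit grid, candidate jammer sites at cell centers.
nodes = [(x, y) for x in range(4) for y in range(4)]
sites = [(x + 0.5, y + 0.5) for x in range(3) for y in range(3)]
print(place_jammers(nodes, sites, num_jammers=2, jam_range=1.0))
```

Greedy coverage is a natural baseline here because maximum coverage is NP-hard and the greedy rule carries a (1 - 1/e) approximation guarantee; the paper's optimal-placement algorithms may differ.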
Related papers
- Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275]
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
arXiv Detail & Related papers (2024-05-09T14:03:52Z)
- Minimum Topology Attacks for Graph Neural Networks [70.17791814425148]
The robustness of Graph Neural Networks (GNNs) to adversarial topology attacks has received significant attention.
We propose a new type of topology attack, named minimum-budget topology attack, aiming to adaptively find the minimum perturbation sufficient for a successful attack on each node.
arXiv Detail & Related papers (2024-03-05T07:29:12Z)
- Reinforcement Learning for Node Selection in Branch-and-Bound [52.2648997215667]
Current state-of-the-art selectors utilize either hand-crafted ensembles that automatically switch between naive sub-node selectors, or learned node selectors that rely on individual node data.
We propose a novel simulation technique that uses reinforcement learning (RL) while considering the entire tree state, rather than just isolated nodes.
arXiv Detail & Related papers (2023-09-29T19:55:56Z)
- Communication-Efficient Decentralized Federated Learning via One-Bit Compressive Sensing [52.402550431781805]
Decentralized federated learning (DFL) has gained popularity due to its practicality across various applications.
Compared to the centralized version, training a shared model among a large number of nodes in DFL is more challenging.
We develop a novel algorithm based on the framework of the inexact alternating direction method (iADM).
arXiv Detail & Related papers (2023-08-31T12:22:40Z)
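As a rough illustration of the one-bit idea in the entry above (a sketch under assumptions; the paper's iADM-based algorithm is more involved), each node could transmit only the sign of each model-update entry, reducing per-link traffic to one bit per parameter.

```python
import numpy as np

def one_bit_compress(update: np.ndarray) -> np.ndarray:
    """Keep only the sign of each parameter delta: one bit per weight."""
    return np.where(update >= 0, 1, -1).astype(np.int8)

def majority_vote_step(sign_msgs, step_size: float) -> np.ndarray:
    """Combine neighbors' sign vectors into a shared update direction."""
    return step_size * np.sign(np.sum(sign_msgs, axis=0, dtype=np.int32))
```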
- Secure Deep Learning-based Distributed Intelligence on Pocket-sized Drones [75.80952211739185]
Palm-sized nano-drones are an appealing class of edge nodes, but their limited computational resources prevent running large deep-learning models onboard.
Adopting an edge-fog computational paradigm, we can offload part of the computation to the fog; however, this poses security concerns if the fog node, or the communication link, cannot be trusted.
We propose a novel distributed edge-fog execution scheme that validates fog computation by redundantly executing a random subnetwork aboard our nano-drone.
arXiv Detail & Related papers (2023-07-04T08:29:41Z)
- Sparse Vicious Attacks on Graph Neural Networks [3.246307337376473]
This work focuses on a specific white-box attack on GNN-based link prediction models.
We propose SAVAGE, a novel framework and method for mounting this type of link prediction attack.
Experiments conducted on real-world and synthetic datasets demonstrate that adversarial attacks implemented through SAVAGE indeed achieve a high attack success rate.
arXiv Detail & Related papers (2022-09-20T12:51:24Z)
- Cascading Failures in Smart Grids under Random, Targeted and Adaptive Attacks [4.968545158985657]
We study cascading failures in smart grids, where an attacker selectively compromises nodes with probability proportional to their degree, betweenness, or clustering coefficient.
We show that networks disintegrate faster for targeted attacks compared to random attacks.
An adversary has an advantage in this adaptive approach, compared to compromising the same number of nodes all at once.
arXiv Detail & Related papers (2022-06-25T21:38:31Z)
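A hedged sketch of the targeted-attack model in the cascading-failures entry above (degree-proportional sampling; betweenness or clustering coefficient would substitute analogously):

```python
import random
import networkx as nx

def degree_targeted_attack(g: nx.Graph, k: int) -> list:
    """Compromise k distinct nodes, each drawn with probability
    proportional to its degree. An adaptive attacker would also remove
    each victim from g and recompute degrees before the next draw."""
    candidates = list(g.nodes)
    victims = []
    for _ in range(k):
        weights = [g.degree(n) + 1e-9 for n in candidates]  # avoid zero total
        choice = random.choices(candidates, weights=weights, k=1)[0]
        candidates.remove(choice)
        victims.append(choice)
    return victims
```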
- Graph-Fraudster: Adversarial Attacks on Graph Neural Network Based Vertical Federated Learning [2.23816711660697]
Vertical federated learning (VFL) is proposed to implement local data protection through training a global model.
For graph-structured data, it is a natural idea to construct a VFL framework with GNN models.
However, GNN models are proven to be vulnerable to adversarial attacks.
This paper reveals that GNN-based VFL (GVFL) is vulnerable to adversarial attacks similar to those on centralized GNN models.
arXiv Detail & Related papers (2021-10-13T03:06:02Z)
- Single Node Injection Attack against Graph Neural Networks [39.455430635159146]
This paper focuses on an extremely limited scenario of a single-node injection evasion attack on Graph Neural Networks (GNNs).
We propose a Generalizable Node Injection Attack model, namely G-NIA, to improve attack efficiency while ensuring attack performance.
Experimental results show that 100%, 98.60%, and 94.98% of nodes on three public datasets are successfully attacked, even when injecting only one node with one edge.
arXiv Detail & Related papers (2021-08-30T08:12:25Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that with a small perturbation in the input of the neural network (NN), white-box attacks can result in infeasible solutions up to 86% of the time.
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
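A generic white-box perturbation sketch in the spirit of the power-allocation entry above (FGSM-style and assumed, not necessarily the paper's exact construction):

```python
import torch

def fgsm_perturb(model, x, target, loss_fn, eps=0.01):
    """Shift the NN input by eps along the sign of the loss gradient:
    the classic small L-infinity white-box perturbation."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), target).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()
```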
- Single-Node Attack for Fooling Graph Neural Networks [5.7923858184309385]
Graph neural networks (GNNs) have shown broad applicability in a variety of domains.
Some of these domains, such as social networks and product recommendations, are fertile ground for malicious users and behavior.
In this paper, we show that GNNs are vulnerable to the extremely limited scenario of a single-node adversarial example.
arXiv Detail & Related papers (2020-11-06T19:59:39Z)