Cascading Failures in Smart Grids under Random, Targeted and Adaptive
Attacks
- URL: http://arxiv.org/abs/2206.12735v1
- Date: Sat, 25 Jun 2022 21:38:31 GMT
- Title: Cascading Failures in Smart Grids under Random, Targeted and Adaptive
Attacks
- Authors: Sushmita Ruj and Arindam Pal
- Abstract summary: We study cascading failures in smart grids, where an attacker selectively compromises the nodes with probabilities proportional to their degrees, betweenness, or clustering coefficient.
We show that networks disintegrate faster for targeted attacks compared to random attacks.
An adversary has an advantage in this adaptive approach, compared to compromising the same number of nodes all at once.
- Score: 4.968545158985657
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study cascading failures in smart grids, where an attacker selectively
compromises the nodes with probabilities proportional to their degrees,
betweenness, or clustering coefficient. This implies that nodes with high
degrees, betweenness, or clustering coefficients are attacked with higher
probability. We mathematically and experimentally analyze the sizes of the
giant components of the networks under different types of targeted attacks, and
compare the results with the corresponding sizes under random attacks. We show
that networks disintegrate faster for targeted attacks compared to random
attacks. A targeted attack on a small fraction of high degree nodes
disintegrates one or both of the networks, whereas both the networks contain
giant components for random attack on the same fraction of nodes. An important
observation is that an attacker has an advantage if it compromises nodes based
on their betweenness, rather than based on degree or clustering coefficient.
We next study adaptive attacks, where an attacker compromises nodes in
rounds. Here, some nodes are compromised in each round based on their degree,
betweenness or clustering coefficients, instead of compromising all nodes
together. In this case, the degree, betweenness, or clustering coefficient is
calculated before the start of each round, instead of at the beginning. We show
experimentally that an adversary has an advantage in this adaptive approach,
compared to compromising the same number of nodes all at once.
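The attack procedure described in the abstract lends itself to a short simulation sketch. The snippet below is a minimal, assumed reconstruction, not the authors' code: it uses networkx, a single Barabasi-Albert graph, and illustrative parameter values, whereas the paper analyzes interdependent smart-grid networks in which failures cascade between coupled networks. It removes a given fraction of nodes either uniformly at random or with probability proportional to degree, betweenness, or clustering coefficient, optionally in several rounds with centralities recomputed before each round (the adaptive attack), and reports the surviving giant-component fraction.

```python
# Minimal simulation sketch (assumed, not the authors' implementation):
# compare random, targeted, and adaptive centrality-based node removals by
# the fraction of nodes left in the giant component. networkx, the graph
# model, and all parameter values are illustrative choices; the paper itself
# studies interdependent networks, which this single-graph sketch omits.
import random
import networkx as nx

CENTRALITIES = {
    "degree": lambda G: dict(G.degree()),
    "betweenness": nx.betweenness_centrality,
    "clustering": nx.clustering,
}

def weighted_sample(nodes, weights, k, rng):
    """Sample k distinct nodes with probability proportional to their weights."""
    nodes, weights = list(nodes), list(weights)
    chosen = []
    for _ in range(min(k, len(nodes))):
        if sum(weights) <= 0:                      # e.g. all clustering coefficients are 0
            i = rng.randrange(len(nodes))
        else:
            i = rng.choices(range(len(nodes)), weights=weights, k=1)[0]
        chosen.append(nodes.pop(i))
        weights.pop(i)
    return chosen

def attack(G, fraction, mode="degree", rounds=1, seed=0):
    """Remove `fraction` of the nodes and return the surviving giant-component
    fraction. mode is "random" or a key of CENTRALITIES; rounds > 1 gives the
    adaptive attack, with centralities recomputed before every round."""
    rng = random.Random(seed)
    G = G.copy()
    n0 = G.number_of_nodes()
    budget = int(fraction * n0)
    per_round = [budget // rounds + (1 if r < budget % rounds else 0)
                 for r in range(rounds)]
    for k in per_round:
        nodes = list(G.nodes())
        if not nodes or k == 0:
            continue
        if mode == "random":
            victims = rng.sample(nodes, min(k, len(nodes)))
        else:
            cent = CENTRALITIES[mode](G)           # recomputed each round
            victims = weighted_sample(nodes, [cent[v] for v in nodes], k, rng)
        G.remove_nodes_from(victims)
    if G.number_of_nodes() == 0:
        return 0.0
    return len(max(nx.connected_components(G), key=len)) / n0

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(2000, 3, seed=1)  # illustrative topology only
    for mode in ("random", "degree", "betweenness", "clustering"):
        print(mode,
              "one-shot:", round(attack(G, 0.05, mode, rounds=1), 3),
              "adaptive:", round(attack(G, 0.05, mode, rounds=5), 3))
```

On such a sketch one would expect the qualitative trend reported in the abstract: centrality-weighted removals shrink the giant component faster than random removals, and splitting the same budget over rounds with recomputation helps the attacker further; exact numbers depend on the assumed topology and parameters.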
Related papers
- Minimum Topology Attacks for Graph Neural Networks [70.17791814425148]
Graph Neural Networks (GNNs) have received significant attention regarding their robustness to adversarial topology attacks.
We propose a new type of topology attack, named minimum-budget topology attack, aiming to adaptively find the minimum perturbation sufficient for a successful attack on each node.
arXiv Detail & Related papers (2024-03-05T07:29:12Z) - Secure Deep Learning-based Distributed Intelligence on Pocket-sized
Drones [75.80952211739185]
Palm-sized nano-drones are an appealing class of edge nodes, but their limited computational resources prevent running large deep-learning models onboard.
Adopting an edge-fog computational paradigm, we can offload part of the computation to the fog; however, this poses security concerns if the fog node, or the communication link, cannot be trusted.
We propose a novel distributed edge-fog execution scheme that validates fog computation by redundantly executing a random subnetwork aboard our nano-drone.
arXiv Detail & Related papers (2023-07-04T08:29:41Z) - Dink-Net: Neural Clustering on Large Graphs [59.10189693120368]
A deep graph clustering method (Dink-Net) is proposed based on the ideas of dilation and shrink.
Representations are learned in a self-supervised manner by discriminating whether nodes have been corrupted by augmentations.
The clustering distribution is optimized by minimizing the proposed cluster dilation loss and cluster shrink loss.
Compared to the runner-up, Dink-Net achieves a 9.62% NMI improvement on the ogbn-papers100M dataset with 111 million nodes and 1.6 billion edges.
arXiv Detail & Related papers (2023-05-28T15:33:24Z) - Towards Reasonable Budget Allocation in Untargeted Graph Structure
Attacks via Gradient Debias [50.628150015907565]
The cross-entropy loss function is used to evaluate perturbation schemes in classification tasks.
Previous methods use negative cross-entropy loss as the attack objective in attacking node-level classification models.
This paper argues that the previous attack objective is unreasonable from the perspective of budget allocation.
arXiv Detail & Related papers (2023-03-29T13:02:02Z) - Collective Robustness Certificates: Exploiting Interdependence in Graph
Neural Networks [71.78900818931847]
In tasks like node classification, image segmentation, and named-entity recognition, we have a classifier that simultaneously outputs multiple predictions.
Existing adversarial robustness certificates consider each prediction independently and are thus overly pessimistic for such tasks.
We propose the first collective robustness certificate which computes the number of predictions that are simultaneously guaranteed to remain stable under perturbation.
arXiv Detail & Related papers (2023-02-06T14:46:51Z) - Sparse Vicious Attacks on Graph Neural Networks [3.246307337376473]
This work focuses on a specific white-box attack on GNN-based link prediction models.
We propose SAVAGE, a novel framework and method for mounting this type of link prediction attack.
Experiments conducted on real-world and synthetic datasets demonstrate that adversarial attacks implemented through SAVAGE indeed achieve a high attack success rate.
arXiv Detail & Related papers (2022-09-20T12:51:24Z) - SSSNET: Semi-Supervised Signed Network Clustering [4.895808607591299]
We introduce a novel probabilistic balanced normalized cut loss for training nodes in a GNN framework for semi-supervised signed network clustering, called SSSNET.
The main novelty of the approach is a new take on the role of social balance theory for signed network embeddings.
arXiv Detail & Related papers (2021-10-13T10:36:37Z) - Query-based Adversarial Attacks on Graph with Fake Nodes [32.67989796394633]
We propose a novel adversarial attack by introducing a set of fake nodes to the original graph.
Specifically, we query the victim model for each victim node to acquire its most adversarial feature.
Our attack is performed in a practical and unnoticeable manner.
arXiv Detail & Related papers (2021-09-27T14:19:17Z) - Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts combine sparsity with an additional l_infty constraint on perturbation magnitudes.
We propose a homotopy algorithm to jointly tackle the sparsity constraint and the perturbation bound in a unified framework.
arXiv Detail & Related papers (2021-06-10T20:11:36Z) - Towards More Practical Adversarial Attacks on Graph Neural Networks [14.78539966828287]
We study black-box attacks on graph neural networks (GNNs) under a novel and realistic constraint.
We show that the structural inductive biases of GNN models can be an effective source for this type of attack.
arXiv Detail & Related papers (2020-06-09T05:27:39Z) - Indirect Adversarial Attacks via Poisoning Neighbors for Graph
Convolutional Networks [0.76146285961466]
By abusing graph convolutions, an attacker can influence a node's classification result by poisoning its neighbors.
We generate strong adversarial perturbations that are effective not only on one-hop neighbors, but also on nodes farther from the target.
Our proposed method achieves a 99% attack success rate within two hops of the target on two datasets.
arXiv Detail & Related papers (2020-02-19T05:44:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.