A Systematic Evaluation of Node Embedding Robustness
- URL: http://arxiv.org/abs/2209.08064v2
- Date: Mon, 19 Sep 2022 10:05:55 GMT
- Title: A Systematic Evaluation of Node Embedding Robustness
- Authors: Alexandru Mara, Jefrey Lijffijt, Stephan Günnemann, Tijl De Bie
- Abstract summary: We assess the empirical robustness of node embedding models to random and adversarial poisoning attacks.
We compare edge addition, deletion and rewiring strategies computed using network properties as well as node labels.
We found that node classification suffers higher performance degradation than network reconstruction.
- Score: 77.29026280120277
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Node embedding methods map network nodes to low dimensional vectors that can
be subsequently used in a variety of downstream prediction tasks. The
popularity of these methods has significantly increased in recent years, yet
their robustness to perturbations of the input data is still poorly understood.
In this paper, we assess the empirical robustness of node embedding models to
random and adversarial poisoning attacks. Our systematic evaluation covers
representative embedding methods based on Skip-Gram, matrix factorization, and
deep neural networks. We compare edge addition, deletion and rewiring
strategies computed using network properties as well as node labels. We also
investigate the effect of label homophily and heterophily on robustness. We
report qualitative results via embedding visualization and quantitative results
in terms of downstream node classification and network reconstruction
performance. We found that node classification suffers higher performance
degradation than network reconstruction, and that degree-based and
label-based attacks are on average the most damaging.
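As a concrete illustration of the evaluation pipeline, the minimal sketch below poisons a graph with a degree-based edge-deletion attack and measures the drop in downstream node classification. The embedding step is a simple adjacency-eigenvector stand-in rather than any of the paper's Skip-Gram, matrix-factorization, or deep models, and the exact degree-based scoring in the paper may differ from the endpoint-degree heuristic used here.

```python
import numpy as np
import networkx as nx
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def spectral_embed(G, dim=16):
    # Stand-in embedding: leading eigenvectors of the adjacency matrix.
    A = nx.to_numpy_array(G)
    _, vecs = np.linalg.eigh(A)
    return vecs[:, -dim:]                         # nodes x dim

def degree_based_deletion(G, budget):
    # Degree-based poisoning: delete edges with the largest endpoint-degree sum.
    deg = dict(G.degree())
    edges = sorted(G.edges(), key=lambda e: deg[e[0]] + deg[e[1]], reverse=True)
    H = G.copy()
    H.remove_edges_from(edges[:budget])
    return H

def node_clf_score(G, labels, seed=0):
    X = spectral_embed(G)
    Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.5, random_state=seed)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    return f1_score(yte, clf.predict(Xte), average="micro")

G = nx.karate_club_graph()
y = np.array([G.nodes[v]["club"] == "Officer" for v in G])
clean = node_clf_score(G, y)
poisoned = node_clf_score(degree_based_deletion(G, budget=15), y)
print(f"micro-F1 clean: {clean:.2f}  poisoned: {poisoned:.2f}")
```

The same poisoned graph could be scored on network reconstruction instead (e.g., ranking node pairs by embedding similarity against the clean edge set), which is how the abstract's two downstream tasks are compared.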
Related papers
- Robust Subgraph Learning by Monitoring Early Training Representations [5.524804393257921]
Graph neural networks (GNNs) have attracted significant attention for their outstanding performance in graph learning and node classification tasks.
Their vulnerability to adversarial attacks, particularly through susceptible nodes, poses a challenge in decision-making.
We introduce SHERD (Subgraph Learning Hale through Early Training Representation Distances), a novel technique that addresses both performance and adversarial robustness for graph inputs.
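The acronym suggests the mechanism: compare node representations taken at an early training checkpoint and prune the nodes whose representations are most displaced by a perturbation. The sketch below is only a plausible reading of that idea; the function name, pruning fraction, and distance choice are all invented for illustration, and the paper's actual procedure may differ.

```python
import numpy as np

def prune_by_representation_shift(H_clean, H_pert, frac=0.1):
    """Hypothetical SHERD-style step: given node representations from an
    early training checkpoint on clean (H_clean) and perturbed (H_pert)
    inputs, drop the fraction of nodes whose representations moved most."""
    shift = np.linalg.norm(H_clean - H_pert, axis=1)   # per-node distance
    k = int(len(shift) * frac)
    vulnerable = np.argsort(shift)[-k:]                # largest shifts
    keep = np.setdiff1d(np.arange(len(shift)), vulnerable)
    return keep                                        # node ids of the retained subgraph

# Toy usage with mock checkpoint representations.
rng = np.random.default_rng(0)
H0 = rng.normal(size=(100, 32))
H1 = H0 + rng.normal(scale=0.05, size=(100, 32))
print(prune_by_representation_shift(H0, H1, frac=0.05).shape)
```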
arXiv Detail & Related papers (2024-03-14T22:25:37Z) - Deep Graph Neural Networks via Posteriori-Sampling-based Node-Adaptive Residual Module [65.81781176362848]
Graph Neural Networks (GNNs) can learn from graph-structured data through neighborhood information aggregation.
As the number of layers increases, node representations become indistinguishable, which is known as over-smoothing.
We propose a Posteriori-Sampling-based Node-Adaptive Residual module (PSNR).
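The blurb does not give PSNR's posterior-sampling rule, so the sketch below shows only the generic mechanism node-adaptive residual modules rely on against over-smoothing: each node learns its own gate that mixes the freshly aggregated message with its previous representation, so representations need not collapse as depth grows. The class name and gating choice are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NodeAdaptiveResidual(nn.Module):
    """Generic node-adaptive residual GNN layer (illustrative, not PSNR's
    exact posterior-sampling formulation): h' = a*agg + (1-a)*h, with the
    gate a predicted per node from its current representation."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, 1)

    def forward(self, h, adj_norm):
        agg = self.lin(adj_norm @ h)        # neighborhood aggregation
        a = torch.sigmoid(self.gate(h))     # per-node residual gate in (0, 1)
        return a * agg + (1 - a) * h        # node-adaptive residual mix

# Toy usage: 5 nodes, 8-dim features, identity as the normalized adjacency.
h = torch.randn(5, 8)
adj = torch.eye(5)
layer = NodeAdaptiveResidual(8)
print(layer(h, adj).shape)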
arXiv Detail & Related papers (2023-05-09T12:03:42Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous
Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
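The blurb does not spell out CHAGNN's augmentation rule; as one hypothetical illustration of homophilous augmentation, the sketch below adds edges between non-adjacent nodes that the current model labels identically with high confidence, diluting injected heterophilous edges. The function name, confidence threshold, and budget are invented for the example.

```python
import numpy as np

def homophilous_augment(adj, probs, conf=0.6, budget=10):
    """Illustrative homophilous augmentation (not CHAGNN's exact rule):
    add edges between non-adjacent node pairs that the current model
    assigns the same label with high confidence."""
    pred = probs.argmax(1)
    confident = probs.max(1) >= conf
    cand = [(i, j) for i in range(len(pred)) for j in range(i + 1, len(pred))
            if adj[i, j] == 0 and confident[i] and confident[j] and pred[i] == pred[j]]
    aug = adj.copy()
    for i, j in cand[:budget]:
        aug[i, j] = aug[j, i] = 1
    return aug

# Toy usage: random sparse graph and mock class probabilities.
rng = np.random.default_rng(0)
A = (rng.random((20, 20)) < 0.1).astype(int)
A = np.triu(A, 1); A = A + A.T
P = rng.dirichlet(np.ones(3), size=20)
print(int(homophilous_augment(A, P).sum() - A.sum()) // 2, "edges added")
```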
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - Efficient and Robust Classification for Sparse Attacks [34.48667992227529]
We consider perturbations bounded by the $\ell_0$-norm, which have been shown to be effective attacks in the domains of image recognition, natural language processing, and malware detection.
We propose a novel defense method that consists of "truncation" and "adversarial training".
Motivated by the insights we obtain, we extend these components to neural network classifiers.
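The sketch below shows one standard form of truncation against sparse ($\ell_0$-bounded) perturbations: clipping each coordinate's contribution to a linear score so that any $k$ perturbed coordinates can shift it by a bounded amount. Whether this matches the paper's exact construction is an assumption, and the adversarial-training component is omitted.

```python
import numpy as np

def truncated_inner_product(w, x, tau):
    """Truncation defense (illustrative): clip each coordinate's contribution
    w_i * x_i to [-tau, tau] before summing, so any k perturbed coordinates
    can move the score by at most 2*k*tau."""
    return np.clip(w * x, -tau, tau).sum()

rng = np.random.default_rng(0)
w, x = rng.normal(size=50), rng.normal(size=50)
x_adv = x.copy()
x_adv[:3] += 100.0                      # sparse (l0 = 3) perturbation
for score in (lambda a: w @ a, lambda a: truncated_inner_product(w, a, tau=1.0)):
    print(score(x), "->", score(x_adv))  # plain score explodes, truncated barely moves
```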
arXiv Detail & Related papers (2022-01-23T21:18:17Z) - SSSNET: Semi-Supervised Signed Network Clustering [4.895808607591299]
We introduce a novel probabilistic balanced normalized cut loss for training nodes in a GNN framework for semi-supervised signed network clustering, called SSSNET.
The main novelty of our approach is a new take on the role of social balance theory in signed network embeddings.
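For reference, a generic balanced normalized cut for a signed graph with positive part $A^+$ and negative part $A^-$ penalizes positive edges cut between clusters and negative edges kept inside them, normalized by cluster volume. SSSNET's probabilistic loss relaxes the hard memberships to GNN-predicted soft assignments, though its exact formulation may differ from this sketch.

```latex
% Generic signed balanced normalized cut (illustrative; not necessarily
% SSSNET's exact probabilistic loss).
\[
  \mathrm{BNC}(C_1,\dots,C_K)
  \;=\; \sum_{k=1}^{K}
  \frac{\mathrm{cut}^{+}\!\big(C_k,\bar{C}_k\big)
        \;+\; \mathrm{links}^{-}\!\big(C_k, C_k\big)}
       {\mathrm{vol}(C_k)}
\]
```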
arXiv Detail & Related papers (2021-10-13T10:36:37Z) - An Orthogonal Classifier for Improving the Adversarial Robustness of
Neural Networks [21.13588742648554]
Recent efforts have shown that imposing certain modifications on the classification layer can improve the robustness of neural networks.
We explicitly construct a dense orthogonal weight matrix whose entries have the same magnitude, leading to a novel robust classifier.
Our method is efficient and competitive with many state-of-the-art defensive approaches.
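A dense orthogonal matrix whose entries all share the same magnitude is, up to scaling, a Hadamard matrix, so the construction can be sketched as below; whether the paper builds it via Sylvester's recursion specifically is not stated in the blurb.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction: dense +-1 matrix with orthogonal rows.
    n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Scaled so rows are orthonormal: every entry has magnitude 1/sqrt(n),
# matching "dense orthogonal weight matrix whose entries have the same
# magnitude". Its rows could serve as fixed class weight vectors.
n = 8
W = hadamard(n) / np.sqrt(n)
print(np.allclose(W @ W.T, np.eye(n)))   # True: orthonormal rows
```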
arXiv Detail & Related papers (2021-05-19T13:12:14Z) - Unveiling Anomalous Edges and Nominal Connectivity of Attributed
Networks [53.56901624204265]
The present work deals with uncovering anomalous edges in attributed graphs using two distinct formulations with complementary strengths.
The first relies on decomposing the graph data matrix into low-rank plus sparse components to markedly improve performance.
The second broadens the scope of the first by performing robust recovery of the unperturbed graph, which enhances the anomaly identification performance.
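The first formulation is the classic low-rank-plus-sparse (robust-PCA-style) decomposition; below is a minimal alternating-thresholding sketch, with the caveat that the paper's actual solver and penalties may differ. The low-rank part captures nominal community structure, while large entries of the sparse part flag anomalous edges.

```python
import numpy as np

def low_rank_plus_sparse(A, lam=0.15, tau=1.0, iters=100):
    """Minimal alternating-thresholding sketch of A ~ L + S:
    L low rank via singular-value soft-thresholding,
    S sparse via entrywise soft-thresholding."""
    L, S = np.zeros_like(A), np.zeros_like(A)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(A - S, full_matrices=False)
        L = U @ np.diag(np.maximum(s - tau, 0)) @ Vt     # shrink singular values
        R = A - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0)  # shrink entries
    return L, S

# Toy graph: two dense communities plus one anomalous cross edge.
A = np.kron(np.eye(2), np.ones((5, 5)))
A[0, 9] = A[9, 0] = 1
L, S = low_rank_plus_sparse(A)
print(np.unravel_index(np.abs(S).argmax(), S.shape))     # flags the cross edge
```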
arXiv Detail & Related papers (2021-04-17T20:00:40Z) - And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z) - Improve Adversarial Robustness via Weight Penalization on Classification
Layer [20.84248493946059]
Deep neural networks are vulnerable to adversarial attacks.
Recent studies show that well-designed classification parts can lead to better robustness.
We develop a novel, lightweight defensive method based on weight penalization.
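A minimal sketch of the general idea, assuming a standard cross-entropy objective plus a squared-norm penalty on the final (classification) layer's weights; the paper's actual penalty and its placement may differ.

```python
import torch
import torch.nn as nn

# Illustrative weight-penalized training step: add a norm penalty on the
# classification layer's weights to the usual cross-entropy loss.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
cls_layer = model[-1]                      # the classification layer
opt = torch.optim.SGD(model.parameters(), lr=0.1)
lam = 1e-2                                 # penalty strength (assumed value)

x, y = torch.randn(32, 20), torch.randint(0, 3, (32,))
loss = nn.functional.cross_entropy(model(x), y) \
       + lam * cls_layer.weight.norm(p=2) ** 2
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```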
arXiv Detail & Related papers (2020-10-08T08:57:57Z) - Understanding and Diagnosing Vulnerability under Adversarial Attacks [62.661498155101654]
Deep Neural Networks (DNNs) are known to be vulnerable to adversarial attacks.
We propose a novel interpretability method, InterpretGAN, to generate explanations for features used for classification in latent variables.
We also design the first diagnostic method to quantify the vulnerability contributed by each layer.
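The blurb does not define the layer-wise diagnostic; one natural proxy, sketched below purely as an assumption, propagates a clean input and its adversarial counterpart through the network and records the relative representation gap after each layer, so layers that add the most divergence stand out.

```python
import torch
import torch.nn as nn

def layer_divergence(layers, x_clean, x_adv):
    """Illustrative per-layer diagnostic (not necessarily the paper's method):
    propagate clean and adversarial inputs together and record the relative
    representation gap after each layer."""
    h_c, h_a, gaps = x_clean, x_adv, []
    for layer in layers:
        h_c, h_a = layer(h_c), layer(h_a)
        gaps.append((h_c - h_a).norm().item() / h_c.norm().item())
    return gaps

layers = nn.ModuleList([nn.Sequential(nn.Linear(20, 20), nn.ReLU())
                        for _ in range(4)])
x = torch.randn(8, 20)
x_adv = x + 0.1 * torch.randn_like(x)       # stand-in for a real attack
print([round(g, 3) for g in layer_divergence(layers, x, x_adv)])
```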
arXiv Detail & Related papers (2020-07-17T01:56:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.