Robust Node Representation Learning via Graph Variational Diffusion Networks
- URL: http://arxiv.org/abs/2312.10903v1
- Date: Mon, 18 Dec 2023 03:18:53 GMT
- Title: Robust Node Representation Learning via Graph Variational Diffusion Networks
- Authors: Jun Zhuang, Mohammad Al Hasan
- Abstract summary: In recent years, compelling evidence has revealed that GNN-based node representation learning can be substantially deteriorated by perturbations in a graph structure.
To learn robust node representation in the presence of perturbations, various works have been proposed to safeguard GNNs.
We propose the Graph Variational Diffusion Network (GVDN), a new node encoder that effectively manipulates Gaussian noise to safeguard robustness on perturbed graphs.
- Score: 7.335425547621226
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Node representation learning using Graph Neural Networks (GNNs) has been
widely explored. However, in recent years, compelling evidence has revealed
that GNN-based node representation learning can be substantially deteriorated
by delicately-crafted perturbations in a graph structure. To learn robust node
representation in the presence of perturbations, various works have been
proposed to safeguard GNNs. Among these existing works, Bayesian label
transition has proven more effective, but the method relies heavily on a
well-built prior distribution. Variational inference can address this
limitation by sampling the latent node embedding from a Gaussian prior
distribution. Moreover, injecting Gaussian noise into hidden layers is an
appealing strategy for strengthening the robustness of GNNs.
However, our experiments indicate that such a strategy can cause over-smoothing
issues during node aggregation. In this work, we propose the Graph Variational
Diffusion Network (GVDN), a new node encoder that effectively manipulates
Gaussian noise to safeguard robustness on perturbed graphs while alleviating
over-smoothing issues through two mechanisms: Gaussian diffusion and node
embedding propagation. Thanks to these two mechanisms, our model can generate
robust node embeddings for recovery. Specifically, we design a retraining
mechanism that uses the generated node embeddings to recover node
classification performance in the presence of perturbations. Experiments
across six public datasets verify the effectiveness of our proposed model.
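As a rough illustration of the general idea in the abstract, the sketch below samples latent node embeddings from a Gaussian via the reparameterization trick and then smooths them with one step of neighbor aggregation. This is a minimal NumPy sketch of the generic technique, not the paper's actual GVDN architecture; all function names, shapes, and weights are illustrative.

```python
import numpy as np

def encode_variational(x, w_mu, w_logvar, rng):
    """Sample a latent embedding z ~ N(mu, sigma^2) per node via the
    reparameterization trick: z = mu + sigma * eps, eps ~ N(0, 1)."""
    mu = x @ w_mu                       # mean head
    logvar = x @ w_logvar               # log-variance head
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def propagate(z, adj):
    """One step of mean aggregation over the self-looped adjacency,
    a standard way noisy embeddings are smoothed across neighbors."""
    a = adj + np.eye(adj.shape[0])      # add self-loops
    deg = a.sum(axis=1, keepdims=True)
    return (a @ z) / deg                # row-normalized aggregation

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))         # 4 nodes, 3 input features
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
w_mu = rng.standard_normal((3, 2))
w_logvar = rng.standard_normal((3, 2))
z = propagate(encode_variational(x, w_mu, w_logvar, rng), adj)
print(z.shape)                          # one 2-d embedding per node
```

The abstract's point about over-smoothing is visible here: repeating `propagate` many times drives all rows of `z` toward the same value, which is why GVDN pairs the noise with dedicated diffusion and propagation mechanisms.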
Related papers
- Bundle Neural Networks for message diffusion on graphs [10.018379001231356]
We prove that Bundle Neural Networks (BuNNs) can approximate any feature transformation over nodes on any family of graphs given injective positional encodings, resulting in universal node-level expressivity.
arXiv Detail & Related papers (2024-05-24T13:28:48Z) - Graph Elimination Networks [8.806990624643333]
Graph Neural Networks (GNNs) are widely applied across various domains, yet they perform poorly in deep layers.
We show that the root cause of GNNs' performance degradation in deep layers lies in ineffective neighborhood feature propagation.
We introduce Graph Elimination Networks (GENs), which employ a specific algorithm to eliminate redundancies during neighborhood propagation.
arXiv Detail & Related papers (2024-01-02T14:58:59Z) - Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned, separately, for the nodes in each group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
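The per-group weight idea from this entry can be sketched as follows: bucket nodes by degree and apply a separate linear map to each bucket. The group thresholds and shapes here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def stratified_transform(x, adj, weights, bins):
    """Apply a separate weight matrix to each degree-based node group.
    `bins` are degree thresholds splitting nodes into len(weights) groups."""
    deg = adj.sum(axis=1)
    group = np.digitize(deg, bins)      # group index per node
    out = np.zeros((x.shape[0], weights[0].shape[1]))
    for g, w in enumerate(weights):
        mask = group == g
        out[mask] = x[mask] @ w         # per-group linear map
    return out

rng = np.random.default_rng(1)
adj = np.array([[0, 1, 1, 1],
                [1, 0, 0, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 0]], dtype=float)   # node 0 has degree 3
x = rng.standard_normal((4, 3))
weights = [rng.standard_normal((3, 2)) for _ in range(2)]
h = stratified_transform(x, adj, weights, bins=[2.0])  # split at degree 2
print(h.shape)
```

Here node 0 (degree 3) gets the second weight matrix while the degree-1 leaves share the first, matching the entry's claim that weights are learned separately per degree group.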
arXiv Detail & Related papers (2023-12-16T14:09:23Z) - NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
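The Gumbel-Softmax operator mentioned above relaxes discrete sampling into a differentiable form. Below is a minimal sketch of plain Gumbel-Softmax sampling only; NodeFormer's kernelized variant, which makes the all-pair computation scalable, is not reproduced here.

```python
import numpy as np

def gumbel_softmax(logits, tau, rng):
    """Differentiable relaxation of categorical sampling:
    add Gumbel(0, 1) noise to the logits, then apply a
    temperature-scaled softmax."""
    u = rng.uniform(1e-9, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))             # Gumbel(0, 1) noise
    y = (logits + g) / tau
    e = np.exp(y - y.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(2)
logits = np.array([2.0, 0.5, -1.0])     # unnormalized edge scores
soft = gumbel_softmax(logits, tau=0.5, rng=rng)
print(soft)                             # a valid probability vector
```

A low temperature `tau` pushes the output toward a one-hot sample, while a high `tau` approaches a uniform distribution; gradients flow through in either regime.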
arXiv Detail & Related papers (2023-06-14T09:21:15Z) - DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z) - Distributional Signals for Node Classification in Graph Neural Networks [36.30743671968087]
In graph neural networks (GNNs), both node features and labels are examples of graph signals, a key notion in graph signal processing (GSP).
In our framework, we work with the distributions of node labels instead of their values and propose notions of smoothness and non-uniformity of such distributional graph signals.
We then propose a general regularization method for GNNs that allows us to encode distributional smoothness and non-uniformity of the model output in semi-supervised node classification tasks.
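One plausible way to encode smoothness of distributional outputs, in the spirit of this entry, is a Dirichlet-energy style penalty: sum the squared distances between the predicted label distributions of adjacent nodes. This is an assumed illustrative regularizer; the paper's exact formulation may differ.

```python
import numpy as np

def distributional_smoothness(p, adj):
    """Sum over undirected edges of the squared L2 distance between the
    predicted label distributions of the two endpoint nodes."""
    total = 0.0
    rows, cols = np.nonzero(adj)
    for i, j in zip(rows, cols):
        total += np.sum((p[i] - p[j]) ** 2)
    return total / 2.0                  # each undirected edge counted twice

p = np.array([[0.9, 0.1],               # per-node predicted label
              [0.8, 0.2],               # distributions (rows sum to 1)
              [0.1, 0.9]])
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
print(distributional_smoothness(p, adj))
```

Adding such a term to the training loss penalizes neighboring nodes whose output distributions disagree, which is one way to realize the "distributional smoothness" the entry describes.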
arXiv Detail & Related papers (2023-04-07T06:54:42Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely, the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
arXiv Detail & Related papers (2020-10-05T04:57:18Z) - Stochastic Graph Recurrent Neural Network [6.656993023468793]
We propose SGRNN, a novel neural architecture that applies latent variables to simultaneously capture evolution in node attributes and topology.
Specifically, deterministic states are separated from states in the iterative process to suppress mutual interference.
Experiments on real-world datasets demonstrate the effectiveness of the proposed model.
arXiv Detail & Related papers (2020-09-01T16:14:30Z) - Graph Convolutional Neural Networks with Node Transition Probability-based Message Passing and DropNode Regularization [32.260055351563324]
Graph convolutional neural networks (GCNNs) have received much attention recently, owing to their capability in handling graph-structured data.
This work presents a new method to improve the message passing process based on node transition probabilities.
We also propose a novel regularization method termed DropNode to address the over-fitting and over-smoothing issues simultaneously.
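DropNode, as the name suggests, can be sketched as dropout applied at node granularity: entire feature rows are zeroed during training and the survivors rescaled. This is a generic sketch of that idea, not the paper's exact procedure.

```python
import numpy as np

def drop_node(x, p, rng):
    """Zero out whole node feature rows with probability p and rescale
    survivors by 1/(1-p), so the expected output matches the input."""
    keep = rng.uniform(size=(x.shape[0], 1)) >= p
    return x * keep / (1.0 - p)

rng = np.random.default_rng(3)
x = np.ones((6, 4))                     # 6 nodes, 4 features each
out = drop_node(x, p=0.5, rng=rng)
print(out)                              # rows are all-zero or rescaled
```

Because a dropped node contributes nothing to its neighbors' aggregation for that training step, this combats over-fitting and, as the entry notes, over-smoothing at the same time.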
arXiv Detail & Related papers (2020-08-28T10:51:03Z) - Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.