GCN-SE: Attention as Explainability for Node Classification in Dynamic
Graphs
- URL: http://arxiv.org/abs/2110.05598v1
- Date: Mon, 11 Oct 2021 20:30:35 GMT
- Title: GCN-SE: Attention as Explainability for Node Classification in Dynamic
Graphs
- Authors: Yucai Fan, Yuhang Yao, Carlee Joe-Wong
- Abstract summary: Graph Convolutional Networks (GCNs) are a popular method for graph representation learning.
We propose a new method, GCN-SE, that attaches a set of learnable attention weights to graph snapshots at different times.
We show that GCN-SE outperforms previously proposed node classification methods on a variety of graph datasets.
- Score: 20.330666300034338
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Graph Convolutional Networks (GCNs) are a popular method for graph
representation learning that has proved effective for tasks such as node
classification. Although typical GCN models focus on classifying nodes
within a static graph, several recent variants propose node classification in
dynamic graphs whose topologies and node attributes change over time, e.g.,
social networks with dynamic relationships, or literature citation networks
with changing co-authorships. These works, however, do not fully address the
challenge of flexibly assigning different importance to snapshots of the graph
at different times, which depending on the graph dynamics may have more or less
predictive power on the labels. We address this challenge by proposing a new
method, GCN-SE, that attaches a set of learnable attention weights to graph
snapshots at different times, inspired by Squeeze and Excitation Net (SE-Net).
We show that GCN-SE outperforms previously proposed node classification methods
on a variety of graph datasets. To verify the effectiveness of the attention
weight in determining the importance of different graph snapshots, we adapt
perturbation-based methods from the field of explainable machine learning to
graphical settings and evaluate the correlation between the attention weights
learned by GCN-SE and the importance of different snapshots over time. These
experiments demonstrate that GCN-SE can in fact identify different snapshots'
predictive power for dynamic node classification.
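The snapshot-weighting idea described in the abstract can be illustrated with a minimal sketch: following the Squeeze-and-Excitation pattern, each graph snapshot's node embeddings are "squeezed" to a single summary vector, a small network "excites" a score per snapshot, and the normalized scores reweight the snapshots before combining them. This is an illustrative NumPy sketch of the general SE-style mechanism, not the authors' implementation; the function name, shapes, and the softmax normalization are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def se_snapshot_attention(snapshot_embeddings, w1, w2):
    """SE-style attention over graph snapshots (illustrative sketch).

    snapshot_embeddings: array of shape (T, N, d) -- node embeddings,
    one (N, d) slice per snapshot. w1, w2 are the (hypothetical)
    excitation-network weights. Returns a weight per snapshot and the
    attention-weighted combination of snapshots.
    """
    # Squeeze: global average pool each snapshot's node embeddings.
    squeezed = snapshot_embeddings.mean(axis=1)            # (T, d)
    # Excitation: a small two-layer network scores each snapshot.
    hidden = np.maximum(squeezed @ w1, 0.0)                # ReLU, (T, h)
    scores = (hidden @ w2).squeeze(-1)                     # (T,)
    # Normalize scores into attention weights (softmax, assumed here).
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Recalibrate: combine snapshots by their learned importance.
    combined = np.tensordot(weights, snapshot_embeddings, axes=1)  # (N, d)
    return weights, combined

T, N, d, h = 4, 10, 8, 4                    # snapshots, nodes, dims
snaps = rng.normal(size=(T, N, d))
w1 = rng.normal(size=(d, h))
w2 = rng.normal(size=(h, 1))
weights, combined = se_snapshot_attention(snaps, w1, w2)
print(weights.shape, combined.shape, round(float(weights.sum()), 6))
```

In a trained model the weights `w1` and `w2` would be learned end-to-end, so the per-snapshot weights themselves become the explainability signal the paper evaluates against perturbation-based importance.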
Related papers
- Self-Attention Empowered Graph Convolutional Network for Structure
Learning and Node Embedding [5.164875580197953]
In representation learning on graph-structured data, many popular graph neural networks (GNNs) fail to capture long-range dependencies.
This paper proposes a novel graph learning framework called the graph convolutional network with self-attention (GCN-SA)
The proposed scheme exhibits an exceptional generalization capability in node-level representation learning.
arXiv Detail & Related papers (2024-03-06T05:00:31Z) - Supervised Attention Using Homophily in Graph Neural Networks [26.77596449192451]
We propose a new technique to encourage higher attention scores between nodes that share the same class label.
We evaluate the proposed method on several node classification datasets demonstrating increased performance over standard baseline models.
arXiv Detail & Related papers (2023-07-11T12:43:23Z) - Discovering the Representation Bottleneck of Graph Neural Networks from
Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
arXiv Detail & Related papers (2022-05-15T11:38:14Z) - DyGCN: Dynamic Graph Embedding with Graph Convolutional Network [25.02329024926518]
We propose an efficient dynamic graph embedding approach, Dynamic Graph Convolutional Network (DyGCN)
Our model can update the node embeddings in a time-saving and performance-preserving way.
arXiv Detail & Related papers (2021-04-07T07:28:44Z) - Attention-Driven Dynamic Graph Convolutional Network for Multi-Label
Image Recognition [53.17837649440601]
We propose an Attention-Driven Dynamic Graph Convolutional Network (ADD-GCN) to dynamically generate a specific graph for each image.
Experiments on public multi-label benchmarks demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2020-12-05T10:10:12Z) - Sequential Graph Convolutional Network for Active Learning [53.99104862192055]
We propose a novel pool-based Active Learning framework constructed on a sequential Graph Convolution Network (GCN)
With a small number of randomly sampled images as seed labelled examples, we learn the parameters of the graph to distinguish labelled vs unlabelled nodes.
We exploit these characteristics of GCN to select the unlabelled examples which are sufficiently different from labelled ones.
arXiv Detail & Related papers (2020-06-18T00:55:10Z) - GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z) - Structural Temporal Graph Neural Networks for Anomaly Detection in
Dynamic Graphs [54.13919050090926]
We propose an end-to-end structural temporal Graph Neural Network model for detecting anomalous edges in dynamic graphs.
In particular, we first extract the $h$-hop enclosing subgraph centered on the target edge and propose the node labeling function to identify the role of each node in the subgraph.
Based on the extracted features, we utilize gated recurrent units (GRUs) to capture the temporal information for anomaly detection.
arXiv Detail & Related papers (2020-05-15T09:17:08Z) - K-Core based Temporal Graph Convolutional Network for Dynamic Graphs [19.237377882738063]
We propose a novel k-core based temporal graph convolutional network, the CTGCN, to learn node representations for dynamic graphs.
In contrast to previous dynamic graph embedding methods, CTGCN can preserve both local connective proximity and global structural similarity.
Experimental results on 7 real-world graphs demonstrate that the CTGCN outperforms existing state-of-the-art graph embedding methods in several tasks.
arXiv Detail & Related papers (2020-03-22T14:15:27Z) - Tensor Graph Convolutional Networks for Multi-relational and Robust
Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs, that are represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.