Class-Attentive Diffusion Network for Semi-Supervised Classification
- URL: http://arxiv.org/abs/2006.10222v3
- Date: Wed, 30 Dec 2020 04:35:37 GMT
- Title: Class-Attentive Diffusion Network for Semi-Supervised Classification
- Authors: Jongin Lim, Daeho Um, Hyung Jin Chang, Dae Ung Jo, Jin Young Choi
- Abstract summary: Class-Attentive Diffusion Network (CAD-Net) is a graph neural network for semi-supervised classification.
In this paper, we propose a new aggregation scheme that adaptively aggregates nodes that likely belong to the same class among K-hop neighbors.
Our experiments on seven benchmark datasets consistently demonstrate the efficacy of the proposed method.
- Score: 27.433021864424266
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, graph neural networks for semi-supervised classification have been
widely studied. However, existing methods only use the information of limited
neighbors and do not deal with the inter-class connections in graphs. In this
paper, we propose Adaptive aggregation with Class-Attentive Diffusion (AdaCAD),
a new aggregation scheme that adaptively aggregates nodes that likely belong to
the same class among K-hop neighbors. To this end, we first propose a novel stochastic
process, called Class-Attentive Diffusion (CAD), that strengthens attention to
intra-class nodes and attenuates attention to inter-class nodes. In contrast to
the existing diffusion methods with a transition matrix determined solely by
the graph structure, CAD considers both the node features and the graph
structure with the design of our class-attentive transition matrix that
utilizes a classifier. Then, we further propose an adaptive update scheme that
leverages different reflection ratios of the diffusion result for each node
depending on the local class-context. As the main advantage, AdaCAD alleviates
the problem of undesired mixing of inter-class features caused by discrepancies
between node labels and the graph topology. Built on AdaCAD, we construct a
simple model called Class-Attentive Diffusion Network (CAD-Net). Extensive
experiments on seven benchmark datasets consistently demonstrate the efficacy
of the proposed method and our CAD-Net significantly outperforms the
state-of-the-art methods. Code is available at
https://github.com/ljin0429/CAD-Net.
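The diffusion-and-update pipeline described in the abstract can be sketched in a few lines. The snippet below is a simplified illustration, not the authors' implementation: it assumes a dense adjacency matrix, uses the inner product of classifier probabilities as the class-attentive affinity, and replaces the paper's per-node adaptive reflection ratios with a single global blending factor `alpha`.

```python
import numpy as np

def class_attentive_diffusion(X, A, class_probs, K=4, alpha=0.5):
    """Toy sketch of AdaCAD-style aggregation.

    X           : (n, d) node feature matrix
    A           : (n, n) binary adjacency matrix
    class_probs : (n, c) per-node class probabilities from a classifier
    K           : number of diffusion steps (reaches K-hop neighbors)
    alpha       : global blending ratio standing in for the paper's
                  per-node adaptive update
    """
    n = A.shape[0]
    # Class-attentive affinity: neighbor pairs whose predicted class
    # distributions agree receive larger transition weight, so attention
    # to intra-class nodes is strengthened and inter-class attenuated.
    sim = class_probs @ class_probs.T
    W = A * sim                      # combine graph structure and features
    W = W + np.eye(n)                # self-loops avoid zero rows
    T = W / W.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    # K-step diffusion of node features.
    H = X.copy()
    for _ in range(K):
        H = T @ H
    # Update: blend each node's original and diffused features.
    return alpha * X + (1 - alpha) * H
```

With `alpha=1.0` the diffusion result is ignored entirely, which is the degenerate case the paper's adaptive scheme would pick for nodes whose local class-context makes aggregation harmful.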
Related papers
- Graph Transformer GANs with Graph Masked Modeling for Architectural Layout Generation [153.92387500677023]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations.
The proposed graph Transformer encoder combines graph convolutions and self-attentions in a Transformer to model both local and global interactions.
We also propose a novel self-guided pre-training method for graph representation learning.
arXiv Detail & Related papers (2024-01-15T14:36:38Z)
- Supervised Attention Using Homophily in Graph Neural Networks [26.77596449192451]
We propose a new technique to encourage higher attention scores between nodes that share the same class label.
We evaluate the proposed method on several node classification datasets demonstrating increased performance over standard baseline models.
arXiv Detail & Related papers (2023-07-11T12:43:23Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
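The Gumbel-Softmax operator mentioned above has a compact standalone form, sketched below for intuition. This is the generic (non-kernelized) version; NodeFormer's contribution is kernelizing it so that all-pair message passing scales linearly, which this toy snippet does not attempt.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Differentiable relaxation of categorical sampling.

    Perturbs the logits with Gumbel(0, 1) noise, then applies a
    temperature-scaled softmax; as tau -> 0 the output approaches a
    one-hot categorical sample, while staying differentiable for tau > 0.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Inverse-CDF sampling of Gumbel(0, 1) noise.
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + g) / tau
    # Numerically stable softmax over the last axis.
    y = np.exp(y - y.max(axis=-1, keepdims=True))
    return y / y.sum(axis=-1, keepdims=True)
```

Each row of the output is a valid probability distribution over categories (here, candidate edges), which is what lets a structure learner sample discrete connections while still backpropagating through the choice.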
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- EGRC-Net: Embedding-induced Graph Refinement Clustering Network [66.44293190793294]
We propose a novel graph clustering network called Embedding-Induced Graph Refinement Clustering Network (EGRC-Net).
EGRC-Net effectively utilizes the learned embedding to adaptively refine the initial graph and enhance the clustering performance.
Our proposed methods consistently outperform several state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-19T09:08:43Z)
- Graph Neural Network with Curriculum Learning for Imbalanced Node Classification [21.085314408929058]
Graph Neural Network (GNN) is an emerging technique for graph-based learning tasks such as node classification.
In this work, we reveal the vulnerability of GNN to the imbalance of node labels.
We propose a novel graph neural network framework with curriculum learning (GNN-CL) consisting of two modules.
arXiv Detail & Related papers (2022-02-05T10:46:11Z)
- Structure-Aware Label Smoothing for Graph Neural Networks [39.97741949184259]
Representing a label distribution as a one-hot vector is a common practice in training node classification models.
We propose a novel Structure-Aware Label Smoothing (SALS) method as an enhancement component to popular node classification models.
arXiv Detail & Related papers (2021-12-01T13:48:58Z)
- Graph Neural Networks with Feature and Structure Aware Random Walk [7.143879014059894]
We show that in typical heterophilous graphs, the edges may be directed, and whether to treat the edges as-is or simply make them undirected greatly affects the performance of GNN models.
We develop a model that adaptively learns the directionality of the graph, and exploits the underlying long-distance correlations between nodes.
arXiv Detail & Related papers (2021-11-19T08:54:21Z)
- Attention-driven Graph Clustering Network [49.040136530379094]
We propose a novel deep clustering method named Attention-driven Graph Clustering Network (AGCN).
AGCN exploits a heterogeneous-wise fusion module to dynamically fuse the node attribute feature and the topological graph feature.
AGCN can jointly perform feature learning and cluster assignment in an unsupervised fashion.
arXiv Detail & Related papers (2021-08-12T02:30:38Z)
- Learning Hierarchical Graph Neural Networks for Image Clustering [81.5841862489509]
We propose a hierarchical graph neural network (GNN) model that learns how to cluster a set of images into an unknown number of identities.
Our hierarchical GNN uses a novel approach to merge connected components predicted at each level of the hierarchy to form a new graph at the next level.
arXiv Detail & Related papers (2021-07-03T01:28:42Z)
- Edge-Labeling based Directed Gated Graph Network for Few-shot Learning [8.042733995485218]
We propose an edge-labeling-based directed gated graph network (DGGN) for few-shot learning.
DGGN is composed of a gated node aggregation module and an improved gated recurrent unit (GRU) based edge update module.
Experimental results on two benchmark datasets show that our DGGN achieves performance comparable to state-of-the-art methods.
arXiv Detail & Related papers (2021-01-27T10:14:20Z)
- Sequential Graph Convolutional Network for Active Learning [53.99104862192055]
We propose a novel pool-based Active Learning framework constructed on a sequential Graph Convolutional Network (GCN).
With a small number of randomly sampled images as seed labelled examples, we learn the parameters of the graph to distinguish labelled vs unlabelled nodes.
We exploit these characteristics of GCN to select the unlabelled examples which are sufficiently different from labelled ones.
arXiv Detail & Related papers (2020-06-18T00:55:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.