GAIN: Graph Attention & Interaction Network for Inductive
Semi-Supervised Learning over Large-scale Graphs
- URL: http://arxiv.org/abs/2011.01393v1
- Date: Tue, 3 Nov 2020 00:20:24 GMT
- Title: GAIN: Graph Attention & Interaction Network for Inductive
Semi-Supervised Learning over Large-scale Graphs
- Authors: Yunpeng Weng and Xu Chen and Liang Chen and Wei Liu
- Abstract summary: Graph Neural Networks (GNNs) have led to state-of-the-art performance on a variety of machine learning tasks such as recommendation, node classification and link prediction.
Most existing GNN models exploit a single type of aggregator to aggregate information from neighboring nodes.
We propose a novel graph neural network architecture, Graph Attention & Interaction Network (GAIN), for inductive learning on graphs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have led to state-of-the-art performance on a
variety of machine learning tasks such as recommendation, node classification
and link prediction. Graph neural network models generate node embeddings by
merging node features with aggregated information from neighboring nodes. Most
existing GNN models exploit a single type of aggregator (e.g., mean-pooling) to
aggregate neighboring information, and then add or concatenate the aggregator's
output to the current representation vector of the center node. However, a
single type of aggregator struggles to capture the different aspects of
neighboring information, and the simple addition or concatenation update
methods limit the expressive capability of GNNs. Moreover, existing supervised
and semi-supervised GNN models are trained on a node-label loss alone, which
neglects graph structure information. In this paper, we propose a novel graph
neural network architecture, Graph Attention & Interaction Network (GAIN), for
inductive learning on graphs. Unlike previous GNN models that utilize only a
single type of aggregation method, we use multiple types of aggregators to
gather neighboring information from different aspects and integrate their
outputs through an aggregator-level attention mechanism. Furthermore, we design
a graph regularized loss to better capture the topological relationships of
the nodes in the graph. Additionally, we are the first to present the concept
of graph feature interaction and propose a vector-wise explicit feature
interaction mechanism to update the node embeddings. We conduct comprehensive
experiments on two node-classification benchmarks and a real-world financial
news dataset. The experiments demonstrate that our GAIN model outperforms
current state-of-the-art methods on all tasks.
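To make the aggregator-level attention concrete, here is a minimal PyTorch sketch. It is a hedged illustration, not the authors' code: the aggregator set (mean/max/sum), the linear scoring layer, and the GraphSAGE-style unsupervised term standing in for the graph regularized loss are assumptions, and the names (AggregatorAttention, graph_regularized_loss) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AggregatorAttention(nn.Module):
    # Fuses several neighborhood aggregators (mean/max/sum here are
    # illustrative choices) via a learned attention weight per aggregator.
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # scores each aggregator's output

    def forward(self, neigh: torch.Tensor) -> torch.Tensor:
        # neigh: (num_nodes, num_sampled_neighbors, dim) neighbor features
        views = torch.stack(
            [neigh.mean(dim=1), neigh.max(dim=1).values, neigh.sum(dim=1)],
            dim=1,
        )                                            # (num_nodes, 3, dim)
        alpha = F.softmax(self.score(views), dim=1)  # attention over aggregators
        return (alpha * views).sum(dim=1)            # fused neighborhood vector

def graph_regularized_loss(z_u, z_pos, z_neg):
    # A GraphSAGE-style unsupervised term standing in for the paper's graph
    # regularized loss: pull embeddings of connected node pairs together and
    # push random negative samples apart. GAIN's exact form may differ.
    pos = F.logsigmoid((z_u * z_pos).sum(-1))
    neg = F.logsigmoid(-(z_u * z_neg).sum(-1))
    return -(pos + neg).mean()
```

In the full model, the fused neighborhood vector is combined with the center node's representation through the proposed vector-wise explicit feature interaction rather than plain addition or concatenation, and the graph regularized term is added to the node-label loss.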
Related papers
- DGNN: Decoupled Graph Neural Networks with Structural Consistency
between Attribute and Graph Embedding Representations [62.04558318166396] (arXiv 2024-01-28)
Graph neural networks (GNNs) demonstrate a robust capability for representation learning on graphs with complex structures.
A novel GNN framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced to obtain a more comprehensive embedding representation of nodes.
Experimental results on several graph benchmark datasets verify DGNN's superiority on the node classification task.
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104] (arXiv 2023-05-22)
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
- Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339] (arXiv 2023-05-18)
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from the Heterogeneous Graph Benchmark (HGB) and the Open Graph Benchmark (OGB).
- Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074] (arXiv 2022-05-15)
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
- Graph Ordering Attention Networks [22.468776559433614] (arXiv 2022-04-11)
Graph Neural Networks (GNNs) have been successfully used in many problems involving graph-structured data.
We introduce the Graph Ordering Attention (GOAT) layer, a novel GNN component that captures interactions between nodes in a neighborhood.
The GOAT layer demonstrates increased performance in modeling graph metrics that capture complex information.
- Feature Correlation Aggregation: on the Path to Better Graph Neural Networks [37.79964911718766] (arXiv 2021-09-20)
Prior to the introduction of Graph Neural Networks (GNNs), modeling and analyzing irregular data, particularly graphs, was thought to be the Achilles' heel of deep learning.
This paper introduces a central node permutation variant function through a frustratingly simple and innocent-looking modification to the core operation of a GNN.
A tangible boost in performance is observed, with the model surpassing previous state-of-the-art results by a significant margin while employing fewer parameters.
- Explicit Pairwise Factorized Graph Neural Network for Semi-Supervised Node Classification [59.06717774425588] (arXiv 2021-07-27)
We propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
It contains explicit pairwise factors to model output-output relations and uses a GNN backbone to model input-output relations.
Experiments on various datasets show that our model effectively improves performance for semi-supervised node classification on graphs.
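For intuition about the factorization described above, a partially observed MRF over node labels typically takes the generic conditional-random-field form below; the notation is mine and not necessarily EPFGNN's exact parameterization.

```latex
% Y: node labels (some observed), X: node inputs, V/E: graph nodes/edges
p(Y \mid X) \;\propto\; \exp\!\Big( \sum_{i \in V} \phi_\theta(y_i, X)
  \;+\; \sum_{(i,j) \in E} \psi(y_i, y_j) \Big)
% \phi_\theta: unary potentials from the GNN backbone (input-output relations)
% \psi: explicit pairwise factors between neighboring labels (output-output)
```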
- A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555] (arXiv 2020-10-05)
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
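To unpack that claim with the standard derivation (a sketch in my own notation; the paper's unified UGNN objective is more general): one gradient step on the Laplacian-regularized denoising objective, starting from the noisy signal, recovers GCN-style aggregation.

```latex
% X: input node features, L = I - \tilde{A}: normalized graph Laplacian
\min_{F}\ \|F - X\|_F^2 \;+\; c\,\operatorname{tr}\!\left(F^{\top} L F\right)
% Gradient: 2(F - X) + 2cLF. One step from F = X with step size 1/(2c):
F \;\leftarrow\; X - \tfrac{1}{2c}\,(2cLX) \;=\; (I - L)X \;=\; \tilde{A}X
% i.e., one round of GCN-style neighborhood aggregation with \tilde{A}.
```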
- Multi-grained Semantics-aware Graph Neural Networks [13.720544777078642] (arXiv 2020-10-01)
Graph Neural Networks (GNNs) are powerful techniques in representation learning for graphs.
This work proposes a unified model, AdamGNN, to interactively learn node and graph representations.
Experiments on 14 real-world graph datasets show that AdamGNN can significantly outperform 17 competing models on both node- and graph-wise tasks.
- Hierarchical Message-Passing Graph Neural Networks [12.207978823927386] (arXiv 2020-09-08)
We propose a novel Hierarchical Message-passing Graph Neural Networks framework.
The key idea is to generate a hierarchical structure that re-organises all nodes of a flat graph into multi-level super graphs.
We present the first model to implement this framework, termed Hierarchical Community-aware Graph Neural Network (HC-GNN).
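A rough sketch of that re-organisation step is below, assuming a node-to-community assignment is already available (e.g., from a community detection algorithm); the mean pooling and the function name coarsen are illustrative assumptions, not the paper's exact design.

```python
import torch

def coarsen(x: torch.Tensor, edge_index: torch.Tensor, assign: torch.Tensor):
    # One hypothetical coarsening step for hierarchical message passing:
    # x (N, d) node features, edge_index (2, E) edges, assign (N,) maps
    # each node to a super-node id produced by community detection.
    num_super = int(assign.max()) + 1
    # Mean-pool member features into super-node features.
    x_super = torch.zeros(num_super, x.size(1), dtype=x.dtype)
    x_super.index_add_(0, assign, x)
    counts = torch.bincount(assign, minlength=num_super).clamp(min=1)
    x_super = x_super / counts.unsqueeze(1).to(x.dtype)
    # Lift edges to the super graph; drop self-loops and duplicates.
    src, dst = assign[edge_index[0]], assign[edge_index[1]]
    keep = src != dst
    edge_super = torch.unique(torch.stack([src[keep], dst[keep]]), dim=1)
    return x_super, edge_super
```

Messages can then be passed within each super graph and propagated back down, giving distant nodes a short communication path through the hierarchy.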
- CAGNN: Cluster-Aware Graph Neural Networks for Unsupervised Graph Representation Learning [19.432449825536423] (arXiv 2020-09-03)
Unsupervised graph representation learning aims to learn low-dimensional node embeddings without supervision.
We present a novel cluster-aware graph neural network (CAGNN) model for unsupervised graph representation learning using self-supervised techniques.