HopGAT: Hop-aware Supervision Graph Attention Networks for Sparsely
Labeled Graphs
- URL: http://arxiv.org/abs/2004.04333v1
- Date: Thu, 9 Apr 2020 02:27:15 GMT
- Title: HopGAT: Hop-aware Supervision Graph Attention Networks for Sparsely
Labeled Graphs
- Authors: Chaojie Ji, Ruxin Wang, Rongxiang Zhu, Yunpeng Cai, Hongyan Wu
- Abstract summary: This study proposes a hop-aware attention supervision mechanism for the node classification task.
Experiments also demonstrate the effectiveness of the supervised attention coefficients and learning strategies.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Because labeling nodes is costly, classifying nodes in a sparsely
labeled graph while maintaining prediction accuracy deserves attention. The key
question is how an algorithm can learn sufficient information from more
neighbors at different hop distances. This study first proposes a hop-aware
attention supervision mechanism for the node classification task. A simulated
annealing learning strategy is then adopted to balance the two learning tasks,
node classification and hop-aware attention supervision, along the training
timeline. Experimental results demonstrate the superior effectiveness of the
proposed Hop-aware Supervision Graph Attention Networks (HopGAT) model compared
with state-of-the-art models. Notably, on the protein-protein interaction
network with only 40% of nodes labeled, the performance loss is only 3.9% (from
98.5% to 94.6%) relative to the fully labeled graph. Extensive experiments also
demonstrate the effectiveness of the supervised attention coefficients and the
learning strategy.
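The abstract's two ingredients, a classification loss and a hop-aware
attention-supervision loss balanced along the training timeline, can be
illustrated with a short sketch. This is a minimal illustration, not the
paper's implementation: the exponential cooling schedule, the MSE form of the
attention term, and all names here are assumptions.

```python
import math
import torch.nn.functional as F

def annealed_weight(epoch: int, total_epochs: int, t0: float = 1.0) -> float:
    """Simulated-annealing-style schedule (assumed form): the weight on the
    attention-supervision term decays as training cools, shifting the focus
    toward node classification."""
    temperature = t0 * math.exp(-5.0 * epoch / total_epochs)
    return temperature / (1.0 + temperature)

def joint_loss(class_logits, labels, att_coeffs, att_targets,
               epoch, total_epochs):
    """Balance node classification against hop-aware attention supervision.
    att_targets would encode the hop-aware supervision signal, e.g. larger
    values for same-label neighbors at nearer hops (an assumption)."""
    lam = annealed_weight(epoch, total_epochs)
    loss_cls = F.cross_entropy(class_logits, labels)
    loss_att = F.mse_loss(att_coeffs, att_targets)
    return (1.0 - lam) * loss_cls + lam * loss_att
```

At epoch 0 the two terms are weighted equally; by the end of training the
classification loss dominates, which is one plausible reading of the annealing
strategy.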
Related papers
- Supervised Attention Using Homophily in Graph Neural Networks (arXiv 2023-07-11)
We propose a new technique to encourage higher attention scores between nodes that share the same class label.
We evaluate the proposed method on several node classification datasets, demonstrating increased performance over standard baseline models.
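A minimal sketch of this homophily-based supervision signal, assuming per-edge
attention logits and a binary cross-entropy objective (the function and tensor
names are illustrative, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def homophily_attention_loss(att_logits, edge_index, labels, labeled_mask):
    """Push attention higher on edges whose endpoints share a class label.
    att_logits: (E,) unnormalized attention scores, one per edge.
    edge_index: (2, E) long tensor of edge endpoints.
    labeled_mask: (N,) bool tensor marking nodes with known labels."""
    src, dst = edge_index
    both = labeled_mask[src] & labeled_mask[dst]  # usable edges only
    same_class = (labels[src] == labels[dst]).float()
    return F.binary_cross_entropy_with_logits(att_logits[both],
                                              same_class[both])
```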
- State of the Art and Potentialities of Graph-level Learning (arXiv 2023-01-14)
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
- CGMN: A Contrastive Graph Matching Network for Self-Supervised Graph Similarity Learning (arXiv 2022-05-30)
We propose a contrastive graph matching network (CGMN) for self-supervised graph similarity learning.
We employ two strategies, namely cross-view interaction and cross-graph interaction, for effective node representation learning.
We transform node representations into graph-level representations via pooling operations for graph similarity computation.
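The last step, turning node representations into comparable graph-level
vectors, might look like the following sketch; mean pooling and cosine
similarity are assumed stand-ins for CGMN's actual pooling and similarity
functions:

```python
import torch
import torch.nn.functional as F

def graph_similarity(node_reps_a: torch.Tensor, node_reps_b: torch.Tensor):
    """Pool per-node representations (N_i, F) into graph-level vectors,
    then compare the two graphs with cosine similarity."""
    g_a = node_reps_a.mean(dim=0)
    g_b = node_reps_b.mean(dim=0)
    return F.cosine_similarity(g_a, g_b, dim=0)
```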
- How to Find Your Friendly Neighborhood: Graph Attention Design with Self-Supervision (arXiv 2022-04-11)
We propose a self-supervised graph attention network (SuperGAT) for noisy graphs.
We exploit two attention forms compatible with a self-supervised task to predict edges.
By encoding edges, SuperGAT learns more expressive attention that distinguishes mislinked neighbors.
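The two attention forms can be sketched roughly as follows; the "GO" (original
GAT concatenation) and "DP" (dot-product) naming follows the SuperGAT paper,
while the tensor shapes and this exact code are assumptions:

```python
import torch
import torch.nn.functional as F

def attention_logits(h_src, h_dst, a, form="DP"):
    """Per-edge attention logits that double as edge-prediction scores.
    h_src, h_dst: (E, F) transformed features of each edge's endpoints.
    a: (2F,) attention vector for the concatenation form."""
    if form == "GO":  # original GAT form: LeakyReLU(a^T [h_i || h_j])
        return F.leaky_relu(torch.cat([h_src, h_dst], dim=-1) @ a)
    return (h_src * h_dst).sum(dim=-1)  # DP: dot product per edge
```

Self-supervision then treats these logits as predictions of whether an edge
exists, with negative sampling supplying non-edges.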
- Edge but not Least: Cross-View Graph Pooling (arXiv 2021-09-24)
This paper presents a cross-view graph pooling (Co-Pooling) method to better exploit crucial graph structure information.
Through cross-view interaction, edge-view pooling and node-view pooling seamlessly reinforce each other to learn more informative graph-level representations.
- Self-Supervised Graph Learning with Proximity-based Views and Channel Contrast (arXiv 2021-06-07)
Graph neural networks (GNNs) use neighborhood aggregation as a core component that results in feature smoothing among nodes in proximity.
To tackle this problem, we strengthen the graph with two additional graph views, in which nodes are directly linked to those with the most similar features or local structures.
We propose a method that aims to maximize the agreement between representations across generated views and the original graph.
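The feature-proximity view can be built generically as a k-nearest-neighbor
graph over node features, as in this sketch (cosine similarity and k are
illustrative choices; the structural-proximity view would be constructed
analogously):

```python
import torch
import torch.nn.functional as F

def knn_feature_view(x: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Link each node to its k most feature-similar nodes.
    x: (N, F) node feature matrix; returns a (2, N*k) edge_index."""
    x_norm = F.normalize(x, dim=1)
    sim = x_norm @ x_norm.t()
    sim.fill_diagonal_(float("-inf"))   # no self-loops
    nbrs = sim.topk(k, dim=1).indices   # (N, k) neighbor indices
    src = torch.arange(x.size(0)).repeat_interleave(k)
    return torch.stack([src, nbrs.reshape(-1)])
```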
- Graph Decoupling Attention Markov Networks for Semi-supervised Graph Node Classification (arXiv 2021-04-28)
Graph neural networks (GNNs) have become ubiquitous in graph learning tasks such as node classification.
In this paper, we consider the label dependency of graph nodes and propose a decoupling attention mechanism to learn both hard and soft attention.
- Hop-Hop Relation-aware Graph Neural Networks (arXiv 2020-12-21)
We propose a new model, Hop-Hop Relation-aware Graph Neural Network (HHR-GNN), to unify representation learning for homogeneous and heterogeneous graphs.
HHR-GNN learns a personalized receptive field for each node by leveraging knowledge graph embedding to learn relation scores between the central node's representations at different hops.
- Line Graph Neural Networks for Link Prediction (arXiv 2020-10-20)
We consider the graph link prediction task, which is a classic graph analytical problem with many real-world applications.
In the conventional formalism, a link prediction problem is converted into a graph classification task.
We propose to seek a radically different and novel path by making use of the line graphs in graph theory.
In particular, each node in a line graph corresponds to a unique edge in the original graph. Link prediction in the original graph can therefore be solved equivalently as node classification in its corresponding line graph, instead of as a graph classification task.
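The reformulation is easy to see with networkx's line-graph transform: each
edge of G becomes a node of L(G), and two such nodes are adjacent when the
underlying edges share an endpoint (a generic illustration, independent of the
paper's model):

```python
import networkx as nx

# A 4-cycle: predicting a link in G becomes classifying a node in L(G).
G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0)])
LG = nx.line_graph(G)
print(sorted(LG.nodes()))           # [(0, 1), (0, 3), (1, 2), (2, 3)]
print(LG.has_edge((0, 1), (1, 2)))  # True: the two edges share node 1
```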
- Multi-hop Attention Graph Neural Network (arXiv 2020-09-29)
Multi-hop Attention Graph Neural Network (MAGNA) is a principled way to incorporate multi-hop context information into every layer of attention computation.
We show that MAGNA captures large-scale structural information in every layer, and has a low-pass effect that eliminates noisy high-frequency information from graph data.
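The multi-hop computation can be approximated by diffusing a one-hop attention
matrix with geometrically decaying hop weights, in the spirit of personalized
PageRank; the truncation depth and alpha below are illustrative choices, not
MAGNA's exact procedure:

```python
import torch

def attention_diffusion(att: torch.Tensor, hops: int = 3,
                        alpha: float = 0.15) -> torch.Tensor:
    """Truncated geometric series over powers of the (N, N) row-stochastic
    one-hop attention matrix: sum_k alpha * (1 - alpha)**k * att**k."""
    out = torch.zeros_like(att)
    power = torch.eye(att.size(0))
    for k in range(hops + 1):
        out = out + alpha * (1 - alpha) ** k * power
        power = power @ att
    return out
```

Because higher powers of a row-stochastic matrix average over ever wider
neighborhoods, the decaying weights act as the low-pass filter the summary
mentions.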
- Spectral Graph Attention Network with Fast Eigen-approximation (arXiv 2020-03-16)
Spectral Graph Attention Network (SpGAT) learns representations for different frequency components using weighted filters and graph wavelet bases.
A fast approximation variant, SpGAT-Cheby, is proposed to reduce the computational cost of the eigen-decomposition.
We thoroughly evaluate the performance of SpGAT and SpGAT-Cheby in semi-supervised node classification tasks.
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.