GraphHop++: New Insights into GraphHop and Its Enhancement
- URL: http://arxiv.org/abs/2204.08646v1
- Date: Tue, 19 Apr 2022 03:58:47 GMT
- Title: GraphHop++: New Insights into GraphHop and Its Enhancement
- Authors: Tian Xie, Rajgopal Kannan, C.-C. Jay Kuo
- Abstract summary: An enhanced label propagation (LP) method called GraphHop has been proposed recently.
It outperforms graph convolutional networks (GCNs) in the semi-supervised node classification task on various networks.
We show that GraphHop amounts to an alternating optimization of a certain regularization problem defined on graphs.
- Score: 37.61655151222875
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An enhanced label propagation (LP) method called GraphHop has been proposed
recently. It outperforms graph convolutional networks (GCNs) in the
semi-supervised node classification task on various networks. Although the
performance of GraphHop was explained intuitively as joint smoothing of node
attributes and labels, a rigorous mathematical treatment has been lacking. In this
paper, new insights into GraphHop are provided by analyzing it from a
constrained optimization viewpoint. We show that GraphHop amounts to an
alternating optimization of a certain regularization problem defined on graphs. Based on
this interpretation, we propose two ideas to improve GraphHop further,
which leads to GraphHop++. We conduct extensive experiments to demonstrate the
effectiveness and efficiency of GraphHop++. It is observed that GraphHop++
consistently outperforms all benchmark methods, including GraphHop, on
five test datasets as well as an object recognition task at extremely low label
rates (i.e., 1, 2, 4, 8, 16, and 20 labeled samples per class).
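To make the optimization viewpoint concrete, below is a minimal sketch of the classical graph-regularized label propagation objective that this line of work builds on. It is the textbook fixed-point iteration (in the style of Zhou et al.), not the GraphHop++ algorithm itself; the names `A`, `Y`, and `mu` are illustrative assumptions.
```python
import numpy as np

def label_propagation(A, Y, mu=0.1, n_iters=100):
    """Fixed-point iteration for the classical LP objective
        min_F  tr(F^T (I - S) F) + mu * ||F - Y||^2,
    where S = D^{-1/2} A D^{-1/2}. A sketch only, not GraphHop++.

    A : (n, n) symmetric adjacency matrix (assumed: no isolated nodes)
    Y : (n, c) one-hot seed labels, with zero rows for unlabeled nodes
    mu: trade-off between graph smoothness and fitting the seed labels
    """
    d = A.sum(axis=1)
    S = A / np.sqrt(np.outer(d, d))            # symmetric normalization
    alpha = 1.0 / (1.0 + mu)                   # propagation weight
    F = Y.astype(float).copy()
    for _ in range(n_iters):
        F = alpha * (S @ F) + (1 - alpha) * Y  # smooth, then pull toward seeds
    return F.argmax(axis=1)                    # hard label per node
```
This iteration converges to the closed-form minimizer of the objective above; the paper's contribution is to reinterpret GraphHop's two-stage updates as an alternating optimization of a related regularized problem.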
Related papers
- GSINA: Improving Subgraph Extraction for Graph Invariant Learning via Graph Sinkhorn Attention [52.67633391931959]
Graph invariant learning (GIL) has been an effective approach to discovering the invariant relationships between graph data and its labels.
We propose a novel graph attention mechanism called Graph Sinkhorn Attention (GSINA).
GSINA is able to obtain meaningful, differentiable invariant subgraphs with controllable sparsity and softness.
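Loosely, the "Sinkhorn" in the name suggests alternating row/column normalization of an attention score matrix, which yields a differentiable soft assignment whose sparsity and softness can be steered by a temperature. The sketch below illustrates only that generic technique, not the paper's actual GSINA layer; `tau` and `n_iters` are hypothetical parameters.
```python
import numpy as np
from scipy.special import logsumexp

def sinkhorn_attention(scores, tau=0.5, n_iters=20):
    """Push an attention score matrix toward a doubly-stochastic matrix via
    differentiable Sinkhorn (row/column) normalization in log space.
    Generic technique only, not the paper's GSINA mechanism."""
    log_p = scores / tau                                         # smaller tau -> harder selection
    for _ in range(n_iters):
        log_p = log_p - logsumexp(log_p, axis=1, keepdims=True)  # normalize rows
        log_p = log_p - logsumexp(log_p, axis=0, keepdims=True)  # normalize columns
    return np.exp(log_p)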
arXiv Detail & Related papers (2024-02-11T12:57:16Z)
- Graph Summarization with Graph Neural Networks [2.449909275410288]
We use Graph Neural Networks to represent large graphs in a structured and compact way.
We compare different GNNs with a standard multi-layer perceptron (MLP) and a Bloom filter as a non-neural baseline.
Our results show that the performances of the GNNs are close to each other.
arXiv Detail & Related papers (2022-03-11T13:45:34Z)
- Edge but not Least: Cross-View Graph Pooling [76.71497833616024]
This paper presents a cross-view graph pooling (Co-Pooling) method to better exploit crucial graph structure information.
Through cross-view interaction, edge-view pooling and node-view pooling seamlessly reinforce each other to learn more informative graph-level representations.
arXiv Detail & Related papers (2021-09-24T08:01:23Z)
- GLAM: Graph Learning by Modeling Affinity to Labeled Nodes for Graph Neural Networks [0.0]
We propose a semi-supervised graph learning method for cases when there are no graphs available.
This method learns a graph as a convex combination of the unsupervised kNN graph and a supervised label-affinity graph.
Our experiments suggest that this approach performs comparably to or better than state-of-the-art graph learning methods (by up to 1.5%), while being simpler and faster (up to 70x) to train.
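As a rough sketch of the convex-combination idea in this summary: build a kNN graph from the features, a label-affinity graph from the known labels, and mix them. The helper below is illustrative only; GLAM learns the combination end to end, whereas the mixing weight `alpha` is fixed here for clarity.
```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def combined_graph(X, Y, alpha=0.5, k=10):
    """Convex combination of an unsupervised kNN graph and a supervised
    label-affinity graph. Illustrative sketch, not the GLAM implementation.

    X : (n, d) node features;  Y : (n, c) one-hot labels (zero rows if unlabeled)
    """
    W_knn = kneighbors_graph(X, k, mode="connectivity").toarray()
    W_knn = np.maximum(W_knn, W_knn.T)    # symmetrize the kNN graph
    W_lab = Y @ Y.T                       # 1 iff two labeled nodes share a class
    np.fill_diagonal(W_lab, 0.0)
    return alpha * W_knn + (1.0 - alpha) * W_lab
```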
arXiv Detail & Related papers (2021-02-20T17:56:52Z)
- GraphHop: An Enhanced Label Propagation Method for Node Classification [34.073791157290614]
A scalable semi-supervised node classification method, called GraphHop, is proposed in this work.
Experimental results show that GraphHop outperforms state-of-the-art graph learning methods on a wide range of tasks.
arXiv Detail & Related papers (2021-01-07T02:10:20Z)
- Inverse Graph Identification: Can We Identify Node Labels Given Graph Labels? [89.13567439679709]
Graph Identification (GI) has long been researched in graph learning and is essential in certain applications.
This paper defines a novel problem dubbed Inverse Graph Identification (IGI).
We propose a simple yet effective method that performs node-level message passing with a Graph Attention Network (GAT) under the protocol of GI.
arXiv Detail & Related papers (2020-07-12T12:06:17Z)
- Multilevel Graph Matching Networks for Deep Graph Similarity Learning [79.3213351477689]
We propose a multi-level graph matching network (MGMN) framework for computing the graph similarity between any pair of graph-structured objects.
To compensate for the lack of standard benchmark datasets, we have created and collected a set of datasets for both the graph-graph classification and graph-graph regression tasks.
Comprehensive experiments demonstrate that MGMN consistently outperforms state-of-the-art baseline models on both the graph-graph classification and graph-graph regression tasks.
arXiv Detail & Related papers (2020-07-08T19:48:19Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
- HopGAT: Hop-aware Supervision Graph Attention Networks for Sparsely Labeled Graphs [7.1696593196695035]
This study proposes a hop-aware attention supervision mechanism for the node classification task.
Experiments also demonstrate the effectiveness of the supervised attention coefficients and the learning strategies.
arXiv Detail & Related papers (2020-04-09T02:27:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.