GLAM: Graph Learning by Modeling Affinity to Labeled Nodes for Graph
Neural Networks
- URL: http://arxiv.org/abs/2102.10403v1
- Date: Sat, 20 Feb 2021 17:56:52 GMT
- Authors: Vijay Lingam, Arun Iyer, Rahul Ragesh
- Abstract summary: We propose a semi-supervised graph learning method for cases when there are no graphs available.
This method learns a graph as a convex combination of the unsupervised kNN graph and a supervised label-affinity graph.
Our experiments suggest that this approach gives close to or better performance (up to 1.5%), while being simpler and faster (up to 70x) to train, than state-of-the-art graph learning methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks have shown excellent performance on semi-supervised
classification tasks. However, they assume access to a graph, which is often
unavailable in practice. In the absence of any graph, constructing
k-Nearest Neighbor (kNN) graphs from the given data has been shown to give
improvements when used with GNNs over other semi-supervised methods. This paper
proposes a semi-supervised graph learning method for cases when there are no
graphs available. This method learns a graph as a convex combination of the
unsupervised kNN graph and a supervised label-affinity graph. The
label-affinity graph directly captures every node's label-affinity with the
labeled nodes, i.e., how likely each node is to share a label with the labeled nodes.
This affinity measure contrasts with the kNN graph where the metric measures
closeness in the feature space. Our experiments suggest that this approach
gives close to or better performance (up to 1.5%), while being simpler and
faster (up to 70x) to train, than state-of-the-art graph learning methods. We
also conduct several experiments to highlight the importance of individual
components and contrast them with state-of-the-art methods.
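The core idea in the abstract can be sketched in a few lines. The snippet below is a minimal illustration of combining an unsupervised kNN adjacency with a supervised label-affinity adjacency as a convex combination; it is not the paper's implementation. The function names, the softmax-over-labeled-nodes affinity, and the parameters `k`, `tau`, and `lam` are all hypothetical stand-ins for illustration.

```python
import numpy as np

def knn_graph(X, k):
    """Symmetric kNN adjacency built from pairwise Euclidean distances
    in feature space (the unsupervised component)."""
    n = X.shape[0]
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # exclude self-edges
    A = np.zeros((n, n))
    for i in range(n):
        A[i, np.argsort(d[i])[:k]] = 1.0
    return np.maximum(A, A.T)  # symmetrize

def label_affinity_graph(X, labeled_idx, tau=1.0):
    """Edges from every node to the labeled nodes, weighted by a softmax
    over feature similarities (a hypothetical stand-in for the paper's
    label-affinity measure)."""
    n = X.shape[0]
    sims = X @ X[labeled_idx].T / tau
    w = np.exp(sims - sims.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)
    A = np.zeros((n, n))
    for j, l in enumerate(labeled_idx):
        A[:, l] = w[:, j]
    np.fill_diagonal(A, 0.0)
    return np.maximum(A, A.T)

def glam_graph(X, labeled_idx, k=3, lam=0.5):
    """Learned graph as a convex combination of the two adjacencies:
    lam * kNN graph + (1 - lam) * label-affinity graph."""
    return lam * knn_graph(X, k) + (1.0 - lam) * label_affinity_graph(X, labeled_idx)
```

In the paper the mixing weight is learned jointly with the GNN rather than fixed; here `lam` is simply a constant to show the structure of the combination.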
Related papers
- Learning on Large Graphs using Intersecting Communities [13.053266613831447]
MPNNs iteratively update each node's representation in an input graph by aggregating messages from the node's neighbors.
MPNNs can quickly become prohibitive for large graphs unless they are very sparse.
We propose approximating the input graph as an intersecting community graph (ICG) -- a combination of intersecting cliques.
arXiv Detail & Related papers (2024-05-31T09:26:26Z) - Graph Mixup with Soft Alignments [49.61520432554505]
We study graph data augmentation by mixup, which has been used successfully on images.
We propose S-Mixup, a simple yet effective mixup method for graph classification by soft alignments.
arXiv Detail & Related papers (2023-06-11T22:04:28Z) - Semi-Supervised Hierarchical Graph Classification [54.25165160435073]
We study the node classification problem in the hierarchical graph where a 'node' is a graph instance.
We propose the Hierarchical Graph Mutual Information (HGMI) and present a way to compute HGMI with a theoretical guarantee.
We demonstrate the effectiveness of this hierarchical graph modeling and the proposed SEAL-CI method on text and social network data.
arXiv Detail & Related papers (2022-06-11T04:05:29Z) - Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structured data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z) - Imbalanced Graph Classification via Graph-of-Graph Neural Networks [16.589373163769853]
Graph Neural Networks (GNNs) have achieved unprecedented success in learning graph representations to identify categorical labels of graphs.
We introduce a novel framework, Graph-of-Graph Neural Networks (G$2$GNN), which alleviates the graph imbalance issue by deriving extra supervision globally from neighboring graphs and locally from graphs themselves.
Our proposed G$2$GNN outperforms numerous baselines by roughly 5% in both F1-macro and F1-micro scores.
arXiv Detail & Related papers (2021-12-01T02:25:47Z) - Label Propagation across Graphs: Node Classification using Graph Neural
Tangent Kernels [12.445026956430826]
Graph neural networks (GNNs) have achieved superior performance on node classification tasks.
Our work considers a challenging inductive setting where a set of labeled graphs are available for training while the unlabeled target graph is completely separate.
Under the implicit assumption that the testing and training graphs come from similar distributions, our goal is to develop a labeling function that generalizes to unobserved connectivity structures.
arXiv Detail & Related papers (2021-10-07T19:42:35Z) - Edge but not Least: Cross-View Graph Pooling [76.71497833616024]
This paper presents a cross-view graph pooling (Co-Pooling) method to better exploit crucial graph structure information.
Through cross-view interaction, edge-view pooling and node-view pooling seamlessly reinforce each other to learn more informative graph-level representations.
arXiv Detail & Related papers (2021-09-24T08:01:23Z) - GraphHop: An Enhanced Label Propagation Method for Node Classification [34.073791157290614]
A scalable semi-supervised node classification method, called GraphHop, is proposed in this work.
Experimental results show that GraphHop outperforms state-of-the-art graph learning methods on a wide range of tasks.
arXiv Detail & Related papers (2021-01-07T02:10:20Z) - Inverse Graph Identification: Can We Identify Node Labels Given Graph
Labels? [89.13567439679709]
Graph Identification (GI) has long been researched in graph learning and is essential in certain applications.
This paper defines a novel problem dubbed Inverse Graph Identification (IGI).
We propose a simple yet effective method that makes the node-level message passing process using Graph Attention Network (GAT) under the protocol of GI.
arXiv Detail & Related papers (2020-07-12T12:06:17Z) - Multilevel Graph Matching Networks for Deep Graph Similarity Learning [79.3213351477689]
We propose a multi-level graph matching network (MGMN) framework for computing the graph similarity between any pair of graph-structured objects.
To compensate for the lack of standard benchmark datasets, we have created and collected a set of datasets for both the graph-graph classification and graph-graph regression tasks.
Comprehensive experiments demonstrate that MGMN consistently outperforms state-of-the-art baseline models on both the graph-graph classification and graph-graph regression tasks.
arXiv Detail & Related papers (2020-07-08T19:48:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.