Neural Structured Prediction for Inductive Node Classification
- URL: http://arxiv.org/abs/2204.07524v1
- Date: Fri, 15 Apr 2022 15:50:27 GMT
- Title: Neural Structured Prediction for Inductive Node Classification
- Authors: Meng Qu, Huiyu Cai, Jian Tang
- Abstract summary: This paper studies node classification in the inductive setting, aiming to learn a model on labeled training graphs and generalize it to infer node labels on unlabeled test graphs.
We present a new approach called the Structured Proxy Network (SPN), which combines the advantages of graph neural networks and structured prediction methods such as conditional random fields.
- Score: 29.908759584092167
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies node classification in the inductive setting, i.e., aiming
to learn a model on labeled training graphs and generalize it to infer node
labels on unlabeled test graphs. This problem has been extensively studied with
graph neural networks (GNNs) by learning effective node representations, as
well as traditional structured prediction methods for modeling the structured
output of node labels, e.g., conditional random fields (CRFs). In this paper,
we present a new approach called the Structured Proxy Network (SPN), which
combines the advantages of both worlds. SPN defines flexible potential
functions of CRFs with GNNs. However, learning such a model is nontrivial as it
involves optimizing a maximin game with high-cost inference. Inspired by the
underlying connection between joint and marginal distributions defined by
Markov networks, we propose to solve an approximate version of the optimization
problem as a proxy, which yields a near-optimal solution, making learning more
efficient. Extensive experiments on two settings show that our approach
outperforms many competitive baselines.
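As a rough illustration of the abstract above, here is a minimal PyTorch sketch of a pairwise CRF whose node and edge potentials come from learned encoders, trained with a proxy objective that fits node labels and edge label pairs directly instead of solving the maximin game. Plain MLPs stand in for the paper's GNN potential functions; all names and shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructuredProxySketch(nn.Module):
    """Pairwise CRF with learned potentials (MLPs stand in for GNNs)."""
    def __init__(self, d, k):
        super().__init__()
        self.k = k
        self.node_enc = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, k))
        self.edge_enc = nn.Sequential(nn.Linear(2 * d, 64), nn.ReLU(), nn.Linear(64, k * k))

    def forward(self, x, edges):
        # x: (n, d) node features; edges: (2, m) src/dst indices
        theta_node = self.node_enc(x)                           # (n, k) node potentials
        pair = torch.cat([x[edges[0]], x[edges[1]]], dim=-1)    # (m, 2d) edge inputs
        theta_edge = self.edge_enc(pair)                        # (m, k*k) edge potentials
        return theta_node, theta_edge

def proxy_loss(model, x, edges, y):
    # Instead of the maximin CRF objective, fit node and edge label
    # distributions directly with cross-entropy, a paraphrase of the proxy idea.
    theta_node, theta_edge = model(x, edges)
    joint = y[edges[0]] * model.k + y[edges[1]]     # joint label id per edge
    return F.cross_entropy(theta_node, y) + F.cross_entropy(theta_edge, joint)
```

At test time, the learned potentials would be combined by an inference routine such as loopy belief propagation to produce the structured node labels.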
Related papers
- Sparse Decomposition of Graph Neural Networks [20.768412002413843]
We propose an approach to reduce the number of nodes that are included during aggregation.
We achieve this through a sparse decomposition, learning to approximate node representations using a weighted sum of linearly transformed features.
We demonstrate via extensive experiments that our method outperforms other baselines designed for inference speedup.
arXiv Detail & Related papers (2024-10-25T17:52:16Z)
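To make the entry above concrete, here is a tiny numpy sketch of approximating a node's representation as a weighted sum of linearly transformed features over a small support set. The support set, mixing weights, and transform are stand-ins for quantities the method would learn.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, h = 6, 8, 4
X = rng.normal(size=(n, d))           # raw node features
W = rng.normal(size=(d, h))           # shared linear transform (learned in practice)

support = [0, 2, 5]                   # sparse support set S(v) for node v = 0
alpha = np.array([0.5, 0.3, 0.2])     # mixing weights for S(v) (learned in practice)

z_v = alpha @ (X[support] @ W)        # (h,) approximation of node 0's representation
```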
- Large Scale Training of Graph Neural Networks for Optimal Markov-Chain Partitioning Using the Kemeny Constant [1.8606770727950463]
We propose several GNN-based architectures to tackle the graph partitioning problem for Markov chains described as kinetic networks.
This approach aims to minimize how much a proposed partitioning changes the Kemeny constant.
We show how simple GraphSAGE-based GNNs with linear layers can outperform much larger and more expressive attention-based models in this context.
arXiv Detail & Related papers (2023-12-22T17:19:50Z)
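A brief aside on the Kemeny constant used in the entry above: for an ergodic chain with transition matrix P and eigenvalues 1 = lambda_1, lambda_2, ..., lambda_n, one common convention (mean first-passage time from a state to itself taken as zero) gives K = sum over j >= 2 of 1 / (1 - lambda_j). A small numpy sketch:

```python
import numpy as np

def kemeny_constant(P):
    """K = sum_{j>=2} 1 / (1 - lambda_j) over the non-unit eigenvalues of P
    (convention: mean first-passage time from a state to itself is zero)."""
    lam = np.linalg.eigvals(P)
    lam = np.delete(lam, np.argmin(np.abs(lam - 1.0)))  # drop the eigenvalue at 1
    return float(np.sum(1.0 / (1.0 - lam)).real)

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])        # eigenvalues 1 and 0.7
print(kemeny_constant(P))         # 1 / (1 - 0.7) = 3.333...
```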
- A GAN Approach for Node Embedding in Heterogeneous Graphs Using Subgraph Sampling [35.94125831564648]
Our research addresses class imbalance issues in heterogeneous graphs using graph neural networks (GNNs).
We propose a novel method combining the strengths of Generative Adversarial Networks (GANs) with GNNs, creating synthetic nodes and edges that effectively balance the dataset.
arXiv Detail & Related papers (2023-12-11T16:52:20Z)
- How Expressive are Graph Neural Networks in Recommendation? [17.31401354442106]
Graph Neural Networks (GNNs) have demonstrated superior performance on various graph learning tasks, including recommendation.
Recent research has explored the expressiveness of GNNs in general, demonstrating that message passing GNNs are at most as powerful as the Weisfeiler-Lehman test.
We propose the topological closeness metric to evaluate GNNs' ability to capture the structural distance between nodes.
arXiv Detail & Related papers (2023-08-22T02:17:34Z)
- Relation Embedding based Graph Neural Networks for Handling Heterogeneous Graph [58.99478502486377]
We propose a simple yet efficient framework that gives homogeneous GNNs adequate ability to handle heterogeneous graphs.
Specifically, we propose Relation Embedding based Graph Neural Networks (RE-GNNs), which employ only one parameter per relation to embed the importance of edge type relations and self-loop connections.
arXiv Detail & Related papers (2022-09-23T05:24:18Z)
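A minimal sketch of the one-parameter-per-relation idea from the entry above: each edge type (and the self-loop) gets a single learned scalar that scales its messages before an otherwise homogeneous aggregation. The layer name and the aggregation scheme are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class RelationWeightedConv(nn.Module):
    def __init__(self, d, num_relations):
        super().__init__()
        self.rel_weight = nn.Parameter(torch.ones(num_relations))  # one scalar per relation
        self.self_weight = nn.Parameter(torch.ones(1))             # self-loop importance
        self.lin = nn.Linear(d, d)                                 # shared homogeneous transform

    def forward(self, x, edges, edge_type):
        # x: (n, d) features; edges: (2, m) src/dst indices; edge_type: (m,) relation ids
        msg = self.rel_weight[edge_type].unsqueeze(-1) * x[edges[0]]
        agg = torch.zeros_like(x).index_add_(0, edges[1], msg)     # sum messages per target
        return self.lin(self.self_weight * x + agg)
```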
- Deep Manifold Learning with Graph Mining [80.84145791017968]
We propose a novel graph deep model with a non-gradient decision layer for graph mining.
The proposed model achieves state-of-the-art performance compared to current models.
arXiv Detail & Related papers (2022-07-18T04:34:08Z)
- GPN: A Joint Structural Learning Framework for Graph Neural Networks [36.38529113603987]
We propose a GNN-based joint learning framework that simultaneously learns the graph structure and the downstream task.
Our method is the first GNN-based bilevel optimization framework for resolving this task.
arXiv Detail & Related papers (2022-05-12T09:06:04Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Contrastive Adaptive Propagation Graph Neural Networks for Efficient Graph Learning [65.08818785032719]
Graph Neural Networks (GNNs) have achieved great success in processing graph data by extracting and propagating structure-aware features.
Recently, the field has advanced from local propagation schemes that focus on local neighbors towards extended propagation schemes that can directly deal with extended neighbors consisting of both local and high-order neighbors.
Despite the impressive performance, existing approaches are still insufficient to build an efficient and learnable extended propagation scheme that can adaptively adjust the influence of local and high-order neighbors.
arXiv Detail & Related papers (2021-12-02T10:35:33Z)
- Cyclic Label Propagation for Graph Semi-supervised Learning [52.102251202186025]
We introduce a novel framework for graph semi-supervised learning called CycProp.
CycProp integrates GNNs into the process of label propagation in a cyclic and mutually reinforcing manner.
In particular, our proposed CycProp updates the node embeddings learned by the GNN module with information augmented by label propagation.
arXiv Detail & Related papers (2020-11-24T02:55:40Z)
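To illustrate the label-propagation half of the cycle described in the entry above, here is a small numpy sketch of classic clamped label propagation; the coupling back into the GNN embeddings is the paper's contribution and is only hinted at in the comments.

```python
import numpy as np

def label_propagation(A_norm, Y0, labeled_mask, alpha=0.9, iters=20):
    """A_norm: row-normalized adjacency; Y0: one-hot labels (zero rows where
    unlabeled); labeled_mask: boolean array, True where labels are known."""
    Y = Y0.copy()
    for _ in range(iters):
        Y = alpha * (A_norm @ Y) + (1 - alpha) * Y0   # diffuse labels over edges
        Y[labeled_mask] = Y0[labeled_mask]            # clamp the known labels
    # In a CycProp-style loop, these propagated pseudo-labels would now be fed
    # back to refine the GNN's node embeddings, and the cycle repeats.
    return Y
```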
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
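One standard way to realize the binary parameters mentioned in the entry above is sign binarization with a straight-through estimator; the sketch below shows that generic trick and is not the paper's exact scheme.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through gradient estimator."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)                        # values in {-1, 0, +1}

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()    # pass gradients only inside [-1, 1]

binarize = BinarizeSTE.apply
w = torch.randn(4, 4, requires_grad=True)           # real-valued "shadow" weights
loss = binarize(w).sum()
loss.backward()                                      # gradients reach w via the STE
```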
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.