Graph Ordering: Towards the Optimal by Learning
- URL: http://arxiv.org/abs/2001.06631v1
- Date: Sat, 18 Jan 2020 09:14:16 GMT
- Title: Graph Ordering: Towards the Optimal by Learning
- Authors: Kangfei Zhao, Yu Rong, Jeffrey Xu Yu, Junzhou Huang, Hao Zhang
- Abstract summary: Graph representation learning has achieved remarkable success in many graph-based applications, such as node classification, link prediction, and community detection.
However, some kinds of graph applications, such as graph compression and edge partitioning, are very hard to reduce to graph representation learning tasks.
In this paper, we propose to attack the graph ordering problem behind such applications with a novel learning approach.
- Score: 69.72656588714155
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph representation learning has achieved remarkable success in many
graph-based applications, such as node classification, link prediction, and
community detection. These models are usually designed to preserve vertex
information at different granularities and to reduce problems in discrete
space to machine learning tasks in continuous space. However, despite this
fruitful progress, some kinds of graph applications, such as graph compression
and edge partitioning, are very hard to reduce to graph representation
learning tasks. Moreover, these problems are closely related to reformulating
a global layout for a specific graph, which is an important NP-hard
combinatorial optimization problem: graph ordering. In this paper, we propose
to attack the graph ordering problem behind such applications with a novel
learning approach. In contrast to greedy algorithms based on predefined
heuristics, we propose a neural network model, the Deep Order Network (DON),
to capture the hidden locality structure from partial vertex order sets.
Supervised by sampled partial orders, DON can infer unseen combinations.
Furthermore, to alleviate the combinatorial explosion in the training space of
DON and to enable efficient partial vertex order sampling, we employ a
reinforcement learning model, the Policy Network, to automatically adjust the
partial order sampling probabilities during the training phase of DON. In this
way, the Policy Network improves training efficiency and guides DON towards a
more effective model. Comprehensive experiments on both synthetic and real
data validate that DON-RL consistently outperforms the current
state-of-the-art heuristic algorithm. Two case studies on graph compression
and edge partitioning demonstrate the potential power of DON-RL in real
applications.
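The abstract describes an interaction between two models: DON is supervised on sampled partial vertex orders, while the Policy Network uses the resulting reward signal to reshape the sampling distribution. The paper itself includes no code; the Python sketch below is a minimal illustration of that loop only. The class names (`DeepOrderNetwork`, `PolicyNetwork`), the toy locality reward, and all update rules are assumptions standing in for the unspecified architectures.

```python
# Minimal, illustrative sketch of the DON-RL training loop described in the
# abstract. All names and update rules are assumptions for illustration; the
# real DON and Policy Network architectures are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)

class DeepOrderNetwork:
    """Toy stand-in for DON: scores a candidate vertex given the set of
    already-placed vertices, using a single linear scoring table."""
    def __init__(self, n_vertices, lr=0.01):
        self.w = np.zeros((n_vertices, n_vertices))  # (placed, candidate)
        self.lr = lr

    def score(self, placed, candidate):
        return sum(self.w[p, candidate] for p in placed)

    def update(self, placed, candidate, target):
        # One SGD step on the squared error against the supervision target.
        err = self.score(placed, candidate) - target
        for p in placed:
            self.w[p, candidate] -= self.lr * err

class PolicyNetwork:
    """Toy stand-in for the Policy Network: keeps per-vertex sampling logits
    and nudges them with a REINFORCE-style update."""
    def __init__(self, n_vertices, lr=0.1):
        self.logits = np.zeros(n_vertices)
        self.lr = lr

    def probs(self):
        e = np.exp(self.logits - self.logits.max())
        return e / e.sum()

    def sample(self, k):
        return list(rng.choice(len(self.logits), size=k, replace=False,
                               p=self.probs()))

    def update(self, sampled, reward, baseline):
        # Raise the sampling probability of vertices whose partial orders
        # earned above-baseline reward (approximate score-function gradient;
        # the without-replacement sampling makes this a rough estimate).
        adv = reward - baseline
        grad = -len(sampled) * self.probs()
        np.add.at(grad, sampled, 1.0)
        self.logits += self.lr * adv * grad

def locality_reward(adj, order):
    # Toy objective: count edges between consecutive vertices in the partial
    # order, a crude proxy for the locality that graph ordering preserves.
    return sum(adj[u, v] for u, v in zip(order, order[1:]))

n = 20
adj = (rng.random((n, n)) < 0.2).astype(float)
adj = np.maximum(adj, adj.T)  # undirected toy graph

don, policy = DeepOrderNetwork(n), PolicyNetwork(n)
baseline = 0.0
for step in range(500):
    sampled = policy.sample(k=5)              # partial vertex order
    reward = locality_reward(adj, sampled)
    for i in range(1, len(sampled)):          # supervise DON on this sample
        don.update(sampled[:i], sampled[i], target=reward)
    policy.update(sampled, reward, baseline)  # adjust sampling probabilities
    baseline = 0.9 * baseline + 0.1 * reward  # running reward baseline
```

The design point the sketch mirrors is the feedback loop: the reward earned by a sampled partial order feeds back into the sampling distribution, so later DON training concentrates on more informative vertex combinations instead of enumerating the combinatorial training space.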
Related papers
- Do We Really Need Graph Convolution During Training? Light Post-Training Graph-ODE for Efficient Recommendation [34.93725892725111]
The efficiency and scalability of graph convolution networks (GCNs) in training recommender systems (RecSys) have been persistent concerns.
This paper presents a critical examination of the necessity of graph convolutions during the training phase.
We introduce an innovative alternative: the Light Post-Training Graph Ordinary-Differential-Equation (LightGODE).
arXiv Detail & Related papers (2024-07-26T17:59:32Z)
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z)
- A Topology-aware Graph Coarsening Framework for Continual Graph Learning [8.136809136959302]
Continual learning on graphs tackles the problem of training a graph neural network (GNN) where graph data arrive in a streaming fashion.
Traditional continual learning strategies such as Experience Replay can be adapted to streaming graphs.
We propose TA$\mathbb{CO}$, a (t)opology-(a)ware graph (co)arsening and (co)ntinual learning framework.
arXiv Detail & Related papers (2024-01-05T22:22:13Z)
- HetGPT: Harnessing the Power of Prompt Tuning in Pre-Trained Heterogeneous Graph Neural Networks [24.435068514392487]
HetGPT is a post-training prompting framework for graph neural networks.
It improves the performance of state-of-the-art HGNNs on semi-supervised node classification.
arXiv Detail & Related papers (2023-10-23T19:35:57Z)
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN); a toy sketch of such unrolled iterations appears after this list.
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model to further reduce the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by the data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph; a toy contrastive-loss sketch appears after this list.
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
- GIST: Distributed Training for Large-Scale Graph Convolutional Networks [18.964079367668262]
GIST is a hybrid layer and graph sampling method, which disjointly partitions the global model into several smaller sub-GCNs.
This distributed framework improves model performance and significantly decreases wall-clock training time.
GIST seeks to enable large-scale GCN experimentation with the goal of bridging the existing gap in scale between graph machine learning and deep learning.
arXiv Detail & Related papers (2021-02-20T19:25:38Z)
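As referenced in the GDN entry above, unrolling proximal gradient iterations turns an iterative solver into a fixed-depth network. The sketch below is a toy numpy illustration of that pattern under an assumed second-order polynomial forward model; in the actual GDN the per-layer step sizes and thresholds are learned end to end, whereas here they are fixed constants.

```python
# Toy unrolled proximal-gradient (ISTA) sketch in the spirit of GDN. The
# forward model A_obs ~ h0*I + h1*S + h2*S@S and all constants are
# assumptions for illustration, not the paper's exact formulation.
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of the L1 norm; promotes sparse latent edges.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_deconvolution(a_obs, h=(0.0, 1.0, 0.5), layers=10,
                           step=0.05, tau=0.01):
    """Recover a sparse latent adjacency S from its polynomial mixture via
    `layers` truncated ISTA steps (one network 'layer' per iteration)."""
    n = a_obs.shape[0]
    s = np.zeros_like(a_obs)
    for _ in range(layers):
        resid = h[0] * np.eye(n) + h[1] * s + h[2] * (s @ s) - a_obs
        # Gradient of 0.5 * ||prediction - A_obs||_F^2 with respect to S.
        grad = h[1] * resid + h[2] * (resid @ s.T + s.T @ resid)
        s = soft_threshold(s - step * grad, tau)
    return s

# Toy usage: plant a latent graph, observe its mixture, try to recover it.
rng = np.random.default_rng(1)
s_true = np.triu((rng.random((8, 8)) < 0.3).astype(float), 1)
s_true += s_true.T
a_obs = 1.0 * s_true + 0.5 * (s_true @ s_true)
s_hat = unrolled_deconvolution(a_obs)
```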
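Similarly, the unsupervised structure-learning entry pairs an "anchor graph" with the learned graph through a contrastive loss. Below is one plausible node-level instantiation (an InfoNCE-style objective over two one-hop aggregation views); the pairing scheme, aggregation, and temperature are assumptions, not the paper's exact construction.

```python
# Toy contrastive agreement between an anchor graph and a learned graph.
# The view construction and loss form are assumptions for illustration.
import numpy as np

def normalize_rows(z):
    return z / (np.linalg.norm(z, axis=1, keepdims=True) + 1e-8)

def contrastive_agreement(z_anchor, z_learned, temperature=0.5):
    """InfoNCE-style loss treating node i's embedding under the anchor graph
    and under the learned graph as a positive pair (one direction only)."""
    za, zl = normalize_rows(z_anchor), normalize_rows(z_learned)
    sim = za @ zl.T / temperature          # n x n cross-view similarities
    log_den = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(log_den - np.diag(sim)))

# Toy usage: one-hop sum aggregation under each graph gives the two views.
rng = np.random.default_rng(2)
x = rng.standard_normal((16, 8))                        # node features
a_anchor = (rng.random((16, 16)) < 0.2).astype(float)   # e.g. a kNN graph
a_learned = (rng.random((16, 16)) < 0.2).astype(float)  # learnable in practice
loss = contrastive_agreement(a_anchor @ x, a_learned @ x)
print(f"contrastive loss: {loss:.3f}")
```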