Scalable Deep Generative Modeling for Sparse Graphs
- URL: http://arxiv.org/abs/2006.15502v1
- Date: Sun, 28 Jun 2020 04:37:57 GMT
- Title: Scalable Deep Generative Modeling for Sparse Graphs
- Authors: Hanjun Dai, Azade Nazi, Yujia Li, Bo Dai, Dale Schuurmans
- Abstract summary: Existing deep neural methods require $\Omega(n^2)$ complexity by building up the adjacency matrix.
We develop a novel autoregressive model, named BiGG, that utilizes this sparsity to avoid generating the full adjacency matrix.
During training this autoregressive model can be parallelized with $O(\log n)$ synchronization stages.
- Score: 105.60961114312686
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning graph generative models is a challenging task for deep learning and
has wide applicability to a range of domains like chemistry, biology and social
science. However current deep neural methods suffer from limited scalability:
for a graph with $n$ nodes and $m$ edges, existing deep neural methods require
$\Omega(n^2)$ complexity by building up the adjacency matrix. On the other
hand, many real world graphs are actually sparse in the sense that $m\ll n^2$.
Based on this, we develop a novel autoregressive model, named BiGG, that
utilizes this sparsity to avoid generating the full adjacency matrix, and
importantly reduces the graph generation time complexity to $O((n + m)\log n)$.
Furthermore, during training this autoregressive model can be parallelized with
$O(\log n)$ synchronization stages, which makes it much more efficient than
other autoregressive models that require $\Omega(n)$. Experiments on several
benchmarks show that the proposed approach not only scales to orders of
magnitude larger graphs than previously possible with deep autoregressive graph
generative models, but also yields better graph generation quality.
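The complexity claim above is easiest to see in code. Below is a minimal, hypothetical sketch (not the paper's actual model) of the core idea: generate each node's neighbor set by recursively halving the candidate interval, so that each realized edge costs $O(\log n)$ binary decisions and empty regions are pruned in a single decision. The `has_edge` policy here is a toy stand-in for BiGG's learned autoregressive distribution.

```python
import random

def gen_neighbors(lo, hi, has_edge):
    """Emit the neighbors that fall in the half-open interval [lo, hi).

    has_edge(lo, hi) stands in for a learned Bernoulli decision: "does this
    interval contain at least one edge?". A whole empty interval is pruned
    with one decision; each realized edge costs O(log n) decisions.
    """
    if not has_edge(lo, hi):
        return []                      # prune the entire interval at once
    if hi - lo == 1:
        return [lo]                    # leaf: a concrete neighbor index
    mid = (lo + hi) // 2
    return gen_neighbors(lo, mid, has_edge) + gen_neighbors(mid, hi, has_edge)

def gen_graph(n, p=0.05, seed=0):
    """Toy generator: node u draws its neighbors among nodes 0..u-1."""
    rng = random.Random(seed)
    # Illustrative policy: an interval is "active" with the probability that
    # an Erdos-Renyi(p) graph would place at least one edge inside it.
    has_edge = lambda lo, hi: rng.random() < 1.0 - (1.0 - p) ** (hi - lo)
    return {u: gen_neighbors(0, u, has_edge) for u in range(1, n)}

print(gen_graph(16))
```

For a sparse graph this touches $O((n + m)\log n)$ decisions in total, which is the scaling the abstract reports; the paper's model additionally conditions each decision on tree and generation-history context.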
Related papers
- CliquePH: Higher-Order Information for Graph Neural Networks through Persistent Homology on Clique Graphs [15.044471983688249]
We introduce a novel method that extracts information about higher-order structures in the graph.
Our method can lead to up to $31\%$ improvements in test accuracy.
arXiv Detail & Related papers (2024-09-12T16:56:26Z)
- Two Heads Are Better Than One: Boosting Graph Sparse Training via Semantic and Topological Awareness [80.87683145376305]
Graph Neural Networks (GNNs) excel in various graph learning tasks but face computational challenges when applied to large-scale graphs.
We propose Graph Sparse Training (GST), which dynamically manipulates sparsity at the data level.
GST produces a sparse graph with maximum topological integrity and no performance degradation.
arXiv Detail & Related papers (2024-02-02T09:10:35Z)
- Improved High-Probability Regret for Adversarial Bandits with Time-Varying Feedback Graphs [62.52390282012508]
We study high-probability regret bounds for adversarial $K$-armed bandits with time-varying feedback graphs over $T$ rounds.
We develop an algorithm that achieves the optimal regret $\widetilde{\mathcal{O}}\big((\sum_{t=1}^{T}\alpha_t)^{1/2}+\max_{t\in[T]}\alpha_t\big)$ with high probability.
We also develop the first algorithm that achieves the optimal high-probability regret bound for weakly observable graphs.
arXiv Detail & Related papers (2022-10-04T04:36:15Z)
- DOTIN: Dropping Task-Irrelevant Nodes for GNNs [119.17997089267124]
Recent graph learning approaches have introduced the pooling strategy to reduce the size of graphs for learning.
We design a new approach called DOTIN (Dropping Task-Irrelevant Nodes) to reduce the size of graphs.
Our method speeds up GAT by about 50% on graph-level tasks including graph classification and graph edit distance.
arXiv Detail & Related papers (2022-04-28T12:00:39Z)
- AnchorGAE: General Data Clustering via $O(n)$ Bipartite Graph Convolution [79.44066256794187]
We show how to convert a non-graph dataset into a graph by introducing a generative graph model, which is then used to build graph convolution networks (GCNs).
A bipartite graph constructed from anchors is updated dynamically to exploit the high-level information underlying the data.
We theoretically prove that this simple update leads to degeneration, and a specific strategy is accordingly designed to avoid it.
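To make the $O(n)$ complexity in the title concrete, here is a hedged sketch of anchor-based bipartite graph convolution. The random anchor selection and Gaussian soft-assignment weights are illustrative assumptions, not the paper's dynamically updated construction; the point is only that propagating through $k \ll n$ anchors avoids ever forming an $n \times n$ affinity matrix.

```python
import numpy as np

def anchor_bipartite_conv(X, k=16, seed=0):
    """One feature-propagation step through an anchor bipartite graph.

    Rather than an n x n sample-sample affinity (O(n^2) time and memory),
    each of the n samples connects to k << n anchors, so propagation costs
    O(n * k). Anchor choice and weighting below are toy stand-ins.
    """
    rng = np.random.default_rng(seed)
    anchors = X[rng.choice(len(X), size=k, replace=False)]     # (k, d)
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)  # (n, k) sq. dists
    B = np.exp(-d2)                                            # bipartite weights
    B /= B.sum(axis=1, keepdims=True)                          # row-normalize
    # Propagate sample -> anchor -> sample: two thin matmuls, never n x n.
    return B @ (B.T @ X)

X = np.random.default_rng(1).normal(size=(200, 8))
print(anchor_bipartite_conv(X).shape)                          # (200, 8)
```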
arXiv Detail & Related papers (2021-11-12T07:08:13Z)
- Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction [123.20238648121445]
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT).
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z)
- Permute Me Softly: Learning Soft Permutations for Graph Representations [26.581982471005674]
Graph neural networks (GNNs) have emerged as a dominant paradigm for machine learning with graphs.
Research on GNNs has mainly focused on the family of message passing neural networks (MPNNs).
arXiv Detail & Related papers (2021-10-05T08:20:51Z)
- Order Matters: Probabilistic Modeling of Node Sequence for Graph Generation [18.03898476141173]
A graph generative model defines a distribution over graphs.
We derive the exact joint probability over the graph and the node ordering of the sequential process.
This joint probability induces a variational lower bound on the marginal likelihood of the graph; we train graph generative models by maximizing this bound, without using the ad-hoc node orderings of previous methods.
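For reference, this is the generic variational form such a bound takes, assuming a posterior $q(\pi \mid G)$ over node orderings $\pi$ (the paper's specific factorizations of $p$ and $q$ are not reproduced here):

```latex
\log p(G) \;=\; \log \sum_{\pi} p(G, \pi)
\;\ge\; \mathbb{E}_{q(\pi \mid G)}\bigl[\log p(G, \pi) - \log q(\pi \mid G)\bigr]
```

Maximizing the right-hand side trains the sequential generator without committing to a single ad-hoc ordering.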
arXiv Detail & Related papers (2021-06-11T06:37:52Z)
- A step towards neural genome assembly [0.0]
We train an MPNN model with a max-aggregator to execute several algorithms for graph simplification.
We show that the algorithms were learned successfully and scale to graphs up to 20 times larger than those used in training.
arXiv Detail & Related papers (2020-11-10T10:12:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.