FastGAE: Scalable Graph Autoencoders with Stochastic Subgraph Decoding
- URL: http://arxiv.org/abs/2002.01910v5
- Date: Tue, 13 Apr 2021 15:37:01 GMT
- Title: FastGAE: Scalable Graph Autoencoders with Stochastic Subgraph Decoding
- Authors: Guillaume Salha and Romain Hennequin and Jean-Baptiste Remy and Manuel Moussallam and Michalis Vazirgiannis
- Abstract summary: Graph autoencoders (AE) and variational autoencoders (VAE) are powerful node embedding methods, but suffer from scalability issues.
FastGAE is a general framework to scale graph AE and VAE to large graphs with millions of nodes and edges.
- Score: 22.114681053198453
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph autoencoders (AE) and variational autoencoders (VAE) are powerful node
embedding methods, but suffer from scalability issues. In this paper, we
introduce FastGAE, a general framework to scale graph AE and VAE to large
graphs with millions of nodes and edges. Our strategy, based on an effective
stochastic subgraph decoding scheme, significantly speeds up the training of
graph AE and VAE while preserving or even improving performance. We
demonstrate the effectiveness of FastGAE on various real-world graphs,
outperforming the few existing approaches to scale graph AE and VAE by a wide
margin.
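To make the strategy concrete: the encoder still embeds every node, but the decoder reconstructs only the adjacency of a small sampled subgraph at each training step, replacing the O(n^2) full decode. Below is a minimal, hypothetical PyTorch sketch of this stochastic subgraph decoding; the degree-based sampling distribution follows the paper's description, but all names and signatures are illustrative, not the authors' implementation.

```python
# Hypothetical sketch of FastGAE-style stochastic subgraph decoding.
# A one-layer linear GCN stands in for any graph AE encoder.
import torch
import torch.nn.functional as F

def fastgae_step(adj_norm, features, adj_label, weight, optimizer,
                 sample_probs, n_sample=1000):
    """One training step: encode all nodes, decode a sampled subgraph only.

    adj_norm     -- normalized sparse adjacency (n x n) used by the encoder
    adj_label    -- binary adjacency as a dense float tensor (n x n)
    sample_probs -- per-node sampling weights, e.g. degree**alpha as
                    described in the paper (hypothetical choice here)
    """
    optimizer.zero_grad()
    # Full-graph encoding stays cheap: one sparse matmul per layer.
    z = torch.sparse.mm(adj_norm, features @ weight)          # (n, d)
    # Stochastic subgraph decoding: reconstruct an n_s x n_s block, not n x n.
    idx = torch.multinomial(sample_probs, n_sample, replacement=False)
    z_sub = z[idx]
    logits = z_sub @ z_sub.t()                                # inner-product decoder
    labels = adj_label[idx][:, idx]
    loss = F.binary_cross_entropy_with_logits(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

With n_sample fixed, the decoding cost per step is independent of the graph size, which is what allows training on graphs with millions of nodes.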
Related papers
- GraphCroc: Cross-Correlation Autoencoder for Graph Structural Reconstruction [6.817416560637197]
Graph autoencoders (GAEs) reconstruct graph structures from node embeddings.
We introduce a cross-correlation mechanism that significantly enhances the representational capabilities of GAEs.
We also propose GraphCroc, a new GAE that supports flexible encoder architectures tailored for various downstream tasks.
arXiv Detail & Related papers (2024-10-04T12:59:45Z)
- A Scalable and Effective Alternative to Graph Transformers [19.018320937729264]
Graph Transformers (GTs) were introduced, utilizing a self-attention mechanism to model pairwise node relationships.
GTs suffer from quadratic complexity w.r.t. the number of nodes in the graph, hindering their applicability to large graphs.
We present Graph-Enhanced Contextual Operator (GECO), a scalable and effective alternative to GTs.
arXiv Detail & Related papers (2024-06-17T19:57:34Z)
- GraphMAE: Self-Supervised Masked Graph Autoencoders [52.06140191214428]
We present GraphMAE, a masked graph autoencoder that mitigates the issues of generative self-supervised graph learning.
We conduct extensive experiments on 21 public datasets for three different graph learning tasks.
The results show that GraphMAE, a simple graph autoencoder with careful designs, consistently outperforms both contrastive and generative state-of-the-art baselines.
arXiv Detail & Related papers (2022-05-22T11:57:08Z)
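As an illustration of the masking idea, here is a minimal hypothetical sketch: hide some node features behind a learnable mask token, encode, and reconstruct only the hidden features. The encoder/decoder and the plain cosine error are stand-ins; GraphMAE's actual design includes further refinements such as a scaled variant of this cosine error.

```python
# Hypothetical sketch of masked-feature reconstruction in the spirit of GraphMAE.
import torch
import torch.nn.functional as F

def masked_feature_loss(encoder, decoder, adj_norm, features,
                        mask_token, mask_rate=0.5):
    n = features.size(0)
    mask = torch.rand(n) < mask_rate        # nodes whose features are hidden
    x_in = features.clone()
    x_in[mask] = mask_token                 # learnable [MASK] vector, shape (d,)
    z = encoder(adj_norm, x_in)             # any GNN encoder (illustrative signature)
    x_rec = decoder(adj_norm, z)            # feature (not structure) decoder
    # Cosine-style reconstruction error, on masked nodes only.
    cos = F.cosine_similarity(x_rec[mask], features[mask], dim=-1)
    return (1.0 - cos).mean()
```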
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Efficient Dynamic Graph Representation Learning at Scale [66.62859857734104]
We propose Efficient Dynamic Graph lEarning (EDGE), which selectively expresses certain temporal dependencies via the training loss to improve parallelism in computation.
We show that EDGE can scale to dynamic graphs with millions of nodes and hundreds of millions of temporal events and achieve new state-of-the-art (SOTA) performance.
arXiv Detail & Related papers (2021-12-14T22:24:53Z)
- Evolving-Graph Gaussian Processes [20.065168755580558]
Existing Graph Gaussian Process (GGP) approaches have focused on static structures, whereas many real-world graphs are dynamic, limiting the applicability of GGPs.
We propose evolving-Graph Gaussian Processes (e-GGPs) to overcome this.
We demonstrate the benefits of e-GGPs over static graph Gaussian Process approaches.
arXiv Detail & Related papers (2021-06-29T07:16:04Z)
- GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings [51.82434518719011]
GNNAutoScale (GAS) is a framework for scaling arbitrary message-passing GNNs to large graphs.
GAS prunes entire sub-trees of the computation graph by utilizing historical embeddings from prior training iterations.
GAS reaches state-of-the-art performance on large-scale graphs.
arXiv Detail & Related papers (2021-06-10T09:26:56Z)
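The historical-embedding mechanism is easy to picture in code. Below is a hypothetical sketch of such a buffer (names are illustrative, not the GAS API): fresh embeddings are computed for in-batch nodes, while out-of-batch neighbors reuse stale embeddings from earlier iterations, so each batch stays cheap.

```python
# Hypothetical sketch of a historical-embedding buffer in the spirit of GAS.
import torch

class HistoricalEmbedding:
    def __init__(self, num_nodes, dim):
        # Embeddings remembered from prior training iterations.
        self.buffer = torch.zeros(num_nodes, dim)

    def push(self, node_ids, embeddings):
        # Store fresh in-batch embeddings; no gradients flow through history.
        self.buffer[node_ids] = embeddings.detach()

    def pull(self, node_ids):
        # Reuse (possibly stale) embeddings for out-of-batch neighbors,
        # instead of recomputing their whole message-passing sub-trees.
        return self.buffer[node_ids]
```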
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
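A hedged sketch of the general recipe: take a few gradient-ascent steps on a feature perturbation while accumulating model gradients across the ascent steps ("free" adversarial training). The model/loss signatures are hypothetical; FLAG's released implementation differs in details.

```python
# Hypothetical sketch of FLAG-style adversarial feature augmentation.
import torch

def flag_step(model, adj, features, labels, loss_fn, optimizer,
              step_size=1e-3, n_ascent=3):
    optimizer.zero_grad()
    delta = torch.zeros_like(features).uniform_(-step_size, step_size)
    delta.requires_grad_(True)
    for _ in range(n_ascent):
        # Dividing by n_ascent averages model gradients over the ascent steps.
        loss = loss_fn(model(adj, features + delta), labels) / n_ascent
        loss.backward()   # accumulates gradients in the model ("free" training)
        # Ascend on the perturbation via the sign of its gradient, then
        # rebuild it as a fresh leaf for the next forward pass.
        delta = (delta.detach() + step_size * delta.grad.sign()).requires_grad_(True)
    optimizer.step()
    return loss.item()
```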
- Dirichlet Graph Variational Autoencoder [65.94744123832338]
We present Dirichlet Graph Variational Autoencoder (DGVAE) with graph cluster memberships as latent factors.
Motivated by the low-pass characteristics of balanced graph cut, we propose Heatts, a new GNN variant that encodes the input graph into cluster memberships.
arXiv Detail & Related papers (2020-10-09T07:35:26Z)
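For intuition, one standard way to obtain a reparameterizable Dirichlet-like latent is a logistic-normal (Laplace) approximation: sample a Gaussian in logit space and softmax it into soft cluster memberships. The sketch below is illustrative only and not necessarily the paper's exact construction.

```python
# Hypothetical sketch of a Dirichlet-like latent via the logistic-normal
# (Laplace) approximation; mu and logvar come from any graph encoder.
import torch

def sample_cluster_memberships(mu, logvar):
    # Reparameterized Gaussian sample in logit space ...
    eps = torch.randn_like(mu)
    h = mu + torch.exp(0.5 * logvar) * eps
    # ... pushed through a softmax so each row sums to 1 (soft memberships).
    return torch.softmax(h, dim=-1)
```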
- Simple and Effective Graph Autoencoders with One-Hop Linear Models [25.37082257457257]
We show that graph convolutional network (GCN) encoders are unnecessarily complex for many applications.
We propose to replace them with significantly simpler and more interpretable linear models w.r.t. the direct neighborhood (one-hop) adjacency matrix of the graph.
arXiv Detail & Related papers (2020-01-21T15:33:12Z)
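Such an encoder reduces to a single multiplication with the normalized one-hop adjacency matrix. Below is a hypothetical NumPy/SciPy sketch of a one-hop linear graph AE encoder (illustrative names, not the authors' code).

```python
# Hypothetical sketch of a one-hop linear graph AE encoder: one sparse
# multiplication replaces a multi-layer GCN.
import numpy as np
import scipy.sparse as sp

def linear_gae_embeddings(adj, features, weight):
    """Z = A_norm @ X @ W with A_norm = D^-1/2 (A + I) D^-1/2."""
    a = adj + sp.identity(adj.shape[0])                       # add self-loops
    deg = np.asarray(a.sum(axis=1)).flatten()
    d_inv_sqrt = sp.diags(deg ** -0.5)
    a_norm = d_inv_sqrt @ a @ d_inv_sqrt                      # symmetric normalization
    return a_norm @ features @ weight                         # one-hop, linear, no activation

# Decoding is the usual inner product: A_hat = sigmoid(Z @ Z.T).
```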