Simple and Effective Graph Autoencoders with One-Hop Linear Models
- URL: http://arxiv.org/abs/2001.07614v3
- Date: Wed, 17 Jun 2020 09:54:33 GMT
- Title: Simple and Effective Graph Autoencoders with One-Hop Linear Models
- Authors: Guillaume Salha, Romain Hennequin, Michalis Vazirgiannis
- Abstract summary: We show that graph convolutional networks (GCN) encoders are unnecessarily complex for many applications.
We propose to replace them with significantly simpler and more interpretable linear models w.r.t. the direct neighborhood (one-hop) adjacency matrix of the graph.
- Score: 25.37082257457257
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the last few years, graph autoencoders (AE) and variational autoencoders
(VAE) emerged as powerful node embedding methods, with promising performances
on challenging tasks such as link prediction and node clustering. Graph AE, VAE
and most of their extensions rely on multi-layer graph convolutional networks
(GCN) encoders to learn vector space representations of nodes. In this paper,
we show that GCN encoders are actually unnecessarily complex for many
applications. We propose to replace them by significantly simpler and more
interpretable linear models w.r.t. the direct neighborhood (one-hop) adjacency
matrix of the graph, involving fewer operations, fewer parameters and no
activation function. For the two aforementioned tasks, we show that this
simpler approach consistently reaches competitive performances w.r.t. GCN-based
graph AE and VAE for numerous real-world graphs, including all benchmark
datasets commonly used to evaluate graph AE and VAE. Based on these results, we
also question the relevance of repeatedly using these datasets to compare
complex graph AE and VAE.
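The linear one-hop encoder described in the abstract can be sketched in a few lines. This is a hedged illustration using NumPy only: the symmetric normalization of the adjacency matrix and the inner-product decoder are assumptions based on standard graph AE formulations, and the names (`normalize_adjacency`, `linear_encoder`, `W`) are illustrative, not the paper's own.

```python
# Sketch of a linear one-hop graph autoencoder: the encoder is a single
# matrix product with the normalized adjacency matrix -- no stacked GCN
# layers, no activation function.
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalize A + I (one-hop adjacency with self-loops)."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def linear_encoder(A_norm, X, W):
    """Linear one-hop encoder: Z = A_norm @ X @ W (no nonlinearity)."""
    return A_norm @ X @ W

def inner_product_decoder(Z):
    """Reconstruct edge probabilities as sigmoid(Z Z^T)."""
    return 1.0 / (1.0 + np.exp(-(Z @ Z.T)))

# Toy example: 4-node path graph, identity features, random weights.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))  # embed nodes into 2 dimensions
Z = linear_encoder(normalize_adjacency(A), X, W)
A_hat = inner_product_decoder(Z)  # 4x4 matrix of edge probabilities
```

In practice `W` would be trained by gradient descent on a reconstruction loss; the point of the sketch is only that the encoder involves a single sparse matrix product, fewer parameters, and no activation function.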
Related papers
- Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network, that significantly reduces the computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z)
- MGAE: Masked Autoencoders for Self-Supervised Learning on Graphs [55.66953093401889]
We propose a masked graph autoencoder (MGAE) framework to perform effective learning on graph-structured data.
Taking insights from self-supervised learning, we randomly mask a large proportion of edges and try to reconstruct these missing edges during training.
arXiv Detail & Related papers (2022-01-07T16:48:07Z)
- Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction [123.20238648121445]
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT).
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z)
- Graph Contrastive Learning Automated [94.41860307845812]
Graph contrastive learning (GraphCL) has emerged with promising representation learning performance.
The effectiveness of GraphCL hinges on ad-hoc data augmentations, which have to be manually picked per dataset.
This paper proposes a unified bi-level optimization framework to automatically, adaptively and dynamically select data augmentations when performing GraphCL on specific graph data.
arXiv Detail & Related papers (2021-06-10T16:35:27Z)
- Scalable Graph Neural Networks for Heterogeneous Graphs [12.44278942365518]
Graph neural networks (GNNs) are a popular class of parametric model for learning over graph-structured data.
Recent work has argued that GNNs primarily use the graph for feature smoothing, and has shown competitive results on benchmark tasks.
In this work, we ask whether these results can be extended to heterogeneous graphs, which encode multiple types of relationship between different entities.
arXiv Detail & Related papers (2020-11-19T06:03:35Z)
- Dirichlet Graph Variational Autoencoder [65.94744123832338]
We present Dirichlet Graph Variational Autoencoder (DGVAE) with graph cluster memberships as latent factors.
Motivated by the low-pass characteristics of balanced graph cut, we propose a new variant of GNN named Heatts to encode the input graph into cluster memberships.
arXiv Detail & Related papers (2020-10-09T07:35:26Z)
- Self-Constructing Graph Convolutional Networks for Semantic Labeling [23.623276007011373]
We propose a novel architecture called the Self-Constructing Graph (SCG), which makes use of learnable latent variables to generate embeddings.
SCG can automatically obtain optimized non-local context graphs from complex-shaped objects in aerial imagery.
We demonstrate the effectiveness and flexibility of the proposed SCG on the publicly available ISPRS Vaihingen dataset.
arXiv Detail & Related papers (2020-03-15T21:55:24Z)
- Graph Deconvolutional Generation [3.5138314002170192]
We focus on the modern equivalent of the Erdős-Rényi random graph model: the graph variational autoencoder (GVAE).
GVAE has difficulty matching the training distribution and relies on an expensive graph matching procedure.
We improve this class of models by building a message passing neural network into GVAE's encoder and decoder.
arXiv Detail & Related papers (2020-02-14T04:37:14Z)
- FastGAE: Scalable Graph Autoencoders with Stochastic Subgraph Decoding [22.114681053198453]
Graph autoencoders (AE) and variational autoencoders (VAE) are powerful node embedding methods, but suffer from scalability issues.
FastGAE is a general framework to scale graph AE and VAE to large graphs with millions of nodes and edges.
arXiv Detail & Related papers (2020-02-05T18:27:39Z)
- Revisiting Graph based Collaborative Filtering: A Linear Residual Graph Convolutional Network Approach [55.44107800525776]
Graph Convolutional Networks (GCNs) are state-of-the-art graph based representation learning models.
In this paper, we revisit GCN-based Collaborative Filtering (CF) for Recommender Systems (RS).
We show that removing non-linearities would enhance recommendation performance, consistent with the theories in simple graph convolutional networks.
We propose a residual network structure that is specifically designed for CF with user-item interaction modeling.
arXiv Detail & Related papers (2020-01-28T04:41:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.