Graph Context Encoder: Graph Feature Inpainting for Graph Generation and
Self-supervised Pretraining
- URL: http://arxiv.org/abs/2106.10124v1
- Date: Fri, 18 Jun 2021 13:28:11 GMT
- Title: Graph Context Encoder: Graph Feature Inpainting for Graph Generation and
Self-supervised Pretraining
- Authors: Oriel Frigo, Rémy Brossard, David Dehaene
- Abstract summary: The Graph Context Encoder (GCE) is a simple but efficient approach for graph representation learning based on graph feature masking and reconstruction.
GCE models are trained to efficiently reconstruct input graphs similarly to a graph autoencoder where node and edge labels are masked.
We show that GCE can be used for novel graph generation, with applications for molecule generation.
- Score: 4.640835690336652
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose the Graph Context Encoder (GCE), a simple but efficient approach
for graph representation learning based on graph feature masking and
reconstruction.
GCE models are trained to efficiently reconstruct input graphs similarly to a
graph autoencoder where node and edge labels are masked. In particular, our
model is also allowed to change graph structures by masking and reconstructing
graphs augmented by random pseudo-edges.
We show that GCE can be used for novel graph generation, with applications
for molecule generation. Used as a pretraining method, we also show that GCE
improves baseline performances in supervised classification tasks tested on
multiple standard benchmark graph datasets.
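The masking scheme described in the abstract can be sketched in a few lines. This is an illustrative corruption step only (the function name, `MASK` sentinel, and rates are assumptions, not the paper's actual implementation): node and edge labels are hidden, random pseudo-edges are injected, and the original labels become reconstruction targets. Pseudo-edges get a "no edge" target, which is what lets a GCE-style model change graph structure during reconstruction.

```python
import random

MASK = -1  # hypothetical sentinel for a masked label

def mask_graph(node_labels, edge_labels, num_nodes,
               mask_rate=0.15, pseudo_edge_rate=0.1, seed=0):
    """Corrupt a labeled graph for GCE-style reconstruction training.

    node_labels: list of int labels, one per node
    edge_labels: dict {(u, v): label} with u < v
    Returns (masked_nodes, masked_edges, targets), where targets maps
    each masked position back to its original label; injected
    pseudo-edges get a 'no-edge' target of None.
    """
    rng = random.Random(seed)
    targets = {}

    # Mask a fraction of node labels.
    masked_nodes = list(node_labels)
    for i in range(len(masked_nodes)):
        if rng.random() < mask_rate:
            targets[("node", i)] = masked_nodes[i]
            masked_nodes[i] = MASK

    # Mask a fraction of edge labels (edges are kept, labels hidden).
    masked_edges = dict(edge_labels)
    for e in list(masked_edges):
        if rng.random() < mask_rate:
            targets[("edge", e)] = masked_edges[e]
            masked_edges[e] = MASK

    # Augment with random pseudo-edges; the model must learn to
    # reconstruct them as absent, enabling structure editing.
    n_pseudo = max(1, int(pseudo_edge_rate * len(edge_labels)))
    while n_pseudo > 0:
        u, v = rng.sample(range(num_nodes), 2)
        e = (min(u, v), max(u, v))
        if e not in masked_edges:
            masked_edges[e] = MASK
            targets[("edge", e)] = None  # target: no edge here
            n_pseudo -= 1

    return masked_nodes, masked_edges, targets
```

A model trained on pairs of (corrupted graph, targets) then plays the role of the graph autoencoder in the abstract, predicting the hidden labels and rejecting the pseudo-edges.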
Related papers
- Graph Attention with Random Rewiring [12.409982249220812]
This paper introduces Graph-Rewiring Attention with Structures (GRASS), a novel GNN architecture that combines the advantages of three paradigms.
GRASS rewires the input graph by superimposing a random regular graph, enhancing long-range information propagation.
It also employs a unique additive attention mechanism tailored for graph-structured data, providing a graph inductive bias while remaining computationally efficient.
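The rewiring step can be illustrated with a small sketch (random-walk of GRASS's actual architecture is not reproduced here; the function names are assumptions): sample a random d-regular graph on the same node set via the configuration model and superimpose its edges on the input graph, which adds shortcuts for long-range propagation.

```python
import random

def random_regular_edges(n, d, rng, max_tries=100):
    """Sample the edge set of a random d-regular graph on n nodes
    via the configuration model: pair up d stubs per node and retry
    whenever the pairing produces a self-loop or multi-edge."""
    assert n * d % 2 == 0 and d < n
    for _ in range(max_tries):
        stubs = [v for v in range(n) for _ in range(d)]
        rng.shuffle(stubs)
        edges = set()
        ok = True
        for u, v in zip(stubs[::2], stubs[1::2]):
            e = (min(u, v), max(u, v))
            if u == v or e in edges:
                ok = False
                break
            edges.add(e)
        if ok:
            return edges
    raise RuntimeError("failed to sample a simple regular graph")

def rewire(edges, n, d=3, seed=0):
    """Superimpose a random d-regular graph on the input edge set,
    shortening effective distances between far-apart nodes."""
    rng = random.Random(seed)
    return set(edges) | random_regular_edges(n, d, rng)
```

Because a random regular graph is an expander with high probability, the union graph has small diameter even when the input is a long path or chain-like molecule.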
arXiv Detail & Related papers (2024-07-08T06:21:56Z)
- An Accurate Graph Generative Model with Tunable Features [0.8192907805418583]
We propose a method to improve the accuracy of GraphTune by adding a new mechanism to feed back errors of graph features.
Experiments on a real-world graph dataset showed that the features in the generated graphs are accurately tuned compared with conventional models.
arXiv Detail & Related papers (2023-09-03T12:34:15Z)
- Similarity-aware Positive Instance Sampling for Graph Contrastive Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
Besides, we develop an adaptive node-level pre-training method to dynamically mask nodes to distribute them evenly in the graph.
arXiv Detail & Related papers (2022-06-23T20:12:51Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structured data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Learning Graphon Autoencoders for Generative Graph Modeling [91.32624399902755]
Graphon is a nonparametric model that generates graphs with arbitrary sizes and can be induced from graphs easily.
We propose a novel framework called the graphon autoencoder to build an interpretable and scalable graph generative model.
A linear graphon factorization model works as a decoder, leveraging the latent representations to reconstruct the induced graphons.
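The two directions mentioned above (inducing a graphon from a graph, and generating graphs of arbitrary size from a graphon) both have simple empirical forms. The sketch below uses the step-function graphon, not the paper's linear factorization model, and the function names are illustrative:

```python
import random

def induce_graphon(adj):
    """Induce the empirical step-function graphon from an adjacency
    matrix: W(x, y) = A[floor(n*x)][floor(n*y)] on [0, 1]^2."""
    n = len(adj)
    def W(x, y):
        return adj[min(int(x * n), n - 1)][min(int(y * n), n - 1)]
    return W

def sample_graph(W, m, seed=0):
    """Sample an m-node graph from graphon W: draw latent positions
    u_i ~ Uniform(0, 1), connect i and j with probability W(u_i, u_j)."""
    rng = random.Random(seed)
    u = [rng.random() for _ in range(m)]
    return {(i, j) for i in range(m) for j in range(i + 1, m)
            if rng.random() < W(u[i], u[j])}
```

Since m is a free parameter of `sample_graph`, one graphon generates graphs of any size, which is the property the summary highlights.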
arXiv Detail & Related papers (2021-05-29T08:11:40Z)
- Dirichlet Graph Variational Autoencoder [65.94744123832338]
We present Dirichlet Graph Variational Autoencoder (DGVAE) with graph cluster memberships as latent factors.
Motivated by the low-pass characteristics of balanced graph cuts, we propose a new variant of GNN named Heatts to encode the input graph into cluster memberships.
arXiv Detail & Related papers (2020-10-09T07:35:26Z)
- GraphCrop: Subgraph Cropping for Graph Classification [36.33477716380905]
We develop the GraphCrop (Subgraph Cropping) data augmentation method to simulate the real-world noise of sub-structure omission.
By preserving the valid structure contexts for graph classification, we encourage GNNs to understand the content of graph structures in a global sense.
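One common way to crop a connected, structurally valid subgraph is a random walk over the node set; the sketch below illustrates the augmentation idea, though GraphCrop's exact cropping procedure may differ:

```python
import random

def graph_crop(edges, num_nodes, keep_ratio=0.7, seed=0):
    """Crop a connected subgraph by random walk: keep the visited
    node set and its induced edges, simulating sub-structure omission."""
    rng = random.Random(seed)
    adj = {v: [] for v in range(num_nodes)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    target = max(1, int(keep_ratio * num_nodes))
    cur = rng.randrange(num_nodes)
    kept = {cur}
    # Cap the walk length so small components terminate cleanly.
    for _ in range(10 * num_nodes):
        if len(kept) >= target or not adj[cur]:
            break
        cur = rng.choice(adj[cur])
        kept.add(cur)
    induced = {(u, v) for u, v in edges if u in kept and v in kept}
    return kept, induced
```

Training on cropped views alongside full graphs pushes a GNN classifier to recognize a class from partial structural context rather than memorizing whole-graph patterns.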
arXiv Detail & Related papers (2020-09-22T14:05:41Z)
- Heuristic Semi-Supervised Learning for Graph Generation Inspired by Electoral College [80.67842220664231]
We propose a novel pre-processing technique, namely ELectoral COllege (ELCO), which automatically expands new nodes and edges to refine the label similarity within a dense subgraph.
In all setups tested, our method boosts the average score of base models by a large margin of 4.7 points, as well as consistently outperforms the state-of-the-art.
arXiv Detail & Related papers (2020-06-10T14:48:48Z)
- Self-Constructing Graph Convolutional Networks for Semantic Labeling [23.623276007011373]
We propose a novel architecture called the Self-Constructing Graph (SCG), which makes use of learnable latent variables to generate embeddings.
SCG can automatically obtain optimized non-local context graphs from complex-shaped objects in aerial imagery.
We demonstrate the effectiveness and flexibility of the proposed SCG on the publicly available ISPRS Vaihingen dataset.
arXiv Detail & Related papers (2020-03-15T21:55:24Z)
- Unsupervised Graph Embedding via Adaptive Graph Learning [85.28555417981063]
Graph autoencoders (GAEs) are powerful tools in representation learning for graph embedding.
In this paper, we propose two novel unsupervised graph embedding methods: unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE).
Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin in node clustering, node classification, and graph visualization tasks.
arXiv Detail & Related papers (2020-03-10T02:33:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.