The quest for the GRAph Level autoEncoder (GRALE)
- URL: http://arxiv.org/abs/2505.22109v1
- Date: Wed, 28 May 2025 08:37:33 GMT
- Title: The quest for the GRAph Level autoEncoder (GRALE)
- Authors: Paul Krzakala, Gabriel Melo, Charlotte Laclau, Florence d'Alché-Buc, Rémi Flamary
- Abstract summary: We introduce GRALE, a novel graph autoencoder that encodes and decodes graphs of varying sizes into a shared embedding space. We show, in numerical experiments on simulated and molecular data, that GRALE enables a highly general form of pre-training, applicable to a wide range of downstream tasks.
- Score: 14.343226956164782
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although graph-based learning has attracted a lot of attention, graph representation learning is still a challenging task whose resolution may impact key application fields such as chemistry or biology. To this end, we introduce GRALE, a novel graph autoencoder that encodes and decodes graphs of varying sizes into a shared embedding space. GRALE is trained using an Optimal Transport-inspired loss that compares the original and reconstructed graphs and leverages a differentiable node matching module, which is trained jointly with the encoder and decoder. The proposed attention-based architecture relies on Evoformer, the core component of AlphaFold, which we extend to support both graph encoding and decoding. We show, in numerical experiments on simulated and molecular data, that GRALE enables a highly general form of pre-training, applicable to a wide range of downstream tasks, from classification and regression to more complex tasks such as graph interpolation, editing, matching, and prediction.
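The abstract names two mechanisms worth unpacking: a differentiable node-matching module trained jointly with the autoencoder, and an Optimal Transport-inspired loss between the input and reconstructed graphs. The paper's exact formulation is not reproduced here; the following is a minimal PyTorch sketch of that general pattern, in which the Sinkhorn relaxation, the cost terms, and all function names are illustrative assumptions rather than GRALE's actual implementation.

```python
import torch

def sinkhorn(cost, n_iters=20, eps=0.1):
    """Differentiable soft matching: relax a node-to-node cost matrix
    into a (nearly) doubly-stochastic transport plan via entropic OT."""
    log_p = -cost / eps
    for _ in range(n_iters):
        log_p = log_p - torch.logsumexp(log_p, dim=1, keepdim=True)  # rows sum to 1
        log_p = log_p - torch.logsumexp(log_p, dim=0, keepdim=True)  # cols sum to 1
    return log_p.exp()

def ot_reconstruction_loss(x, adj, x_hat, adj_hat):
    """Compare an input graph (node features x, adjacency adj) with its
    reconstruction (x_hat, adj_hat) under a soft node matching."""
    cost = torch.cdist(x, x_hat)               # pairwise node-feature cost
    plan = sinkhorn(cost)                      # soft permutation matrix
    feature_loss = (plan * cost).sum()
    adj_aligned = plan @ adj_hat @ plan.T      # reorder the reconstruction
    structure_loss = ((adj - adj_aligned) ** 2).mean()
    return feature_loss + structure_loss
```

As `eps` shrinks, the transport plan approaches a hard permutation; keeping it soft is what lets gradients flow through the matching into the encoder and decoder.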
Related papers
- Hierarchical Vector Quantized Graph Autoencoder with Annealing-Based Code Selection [13.731120424653705]
The Vector Quantized Variational Autoencoder (VQ-VAE) is a powerful autoencoder extensively used in fields such as computer vision. In this paper, we provide an empirical analysis of vector quantization in the context of graph autoencoders. We identify two key challenges associated with vector quantization when applied to graph data: codebook underutilization and codebook space sparsity. (A generic quantization sketch follows this entry.)
arXiv Detail & Related papers (2025-04-17T07:43:52Z)
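Both challenges named in the entry above are visible in a generic vector-quantization bottleneck. This sketch follows standard VQ-VAE conventions (the straight-through estimator and commitment loss are not specific to this paper, and `vector_quantize` is a hypothetical helper): codebook underutilization shows up as a heavily skewed histogram of the returned `idx`.

```python
import torch

def vector_quantize(z, codebook):
    """Nearest-codeword lookup with a straight-through gradient, the core
    operation a graph VQ-VAE applies to its latent embeddings."""
    d = torch.cdist(z, codebook)               # (n, K) distances to codewords
    idx = d.argmin(dim=1)                      # hard assignment per embedding
    z_q = codebook[idx]
    commit_loss = ((z_q.detach() - z) ** 2).mean()
    z_q = z + (z_q - z).detach()               # straight-through estimator
    return z_q, idx, commit_loss
```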
- Auto-encoding Molecules: Graph-Matching Capabilities Matter [0.0]
We show the effect of graph matching precision on the training behavior and generation capabilities of a Variational Autoencoder (VAE). We propose a transformer-based message-passing graph decoder as an alternative to a graph neural network decoder. We show that the precision of graph matching has a significant impact on training behavior and is essential for effective de novo (molecular) graph generation. (An exact-matching sketch follows this entry.)
arXiv Detail & Related papers (2025-03-01T10:00:37Z)
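Where the first sketch above relaxed node matching, this entry argues for precision: matching decoded nodes to target nodes as exactly as possible. A minimal version of exact matching, assuming a node-feature-only cost (real molecular matching would also score bonds/edges), is the Hungarian algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_nodes(x_true, x_pred):
    """Exact node matching between a target graph and a decoded graph:
    the Hungarian algorithm on a node-feature cost matrix."""
    cost = np.linalg.norm(x_true[:, None, :] - x_pred[None, :, :], axis=-1)
    row, col = linear_sum_assignment(cost)     # optimal permutation
    return col, cost[row, col].sum()
```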
- What Improves the Generalization of Graph Transformers? A Theoretical Dive into the Self-attention and Positional Encoding [67.59552859593985]
Graph Transformers, which incorporate self-attention and positional encoding, have emerged as a powerful architecture for various graph learning tasks. This paper introduces the first theoretical investigation of a shallow Graph Transformer for semi-supervised classification. (A common positional-encoding recipe is sketched below.)
arXiv Detail & Related papers (2024-06-04T05:30:16Z)
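Of the two ingredients studied in this entry, positional encoding is the graph-specific one: nodes have no canonical order, so positions are usually derived from the topology. Below is one widely used recipe, Laplacian eigenvector encodings, as a generic illustration; it is not necessarily the construction analyzed in the paper.

```python
import numpy as np

def laplacian_positional_encoding(adj, k):
    """Node positional encodings: the k smallest non-trivial eigenvectors
    of the symmetric normalized graph Laplacian (adj: float (n, n))."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    _, eigvecs = np.linalg.eigh(lap)           # eigenvalues in ascending order
    return eigvecs[:, 1:k + 1]                 # drop the trivial eigenvector
```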
- Learning Network Representations with Disentangled Graph Auto-Encoder [1.671868610555776]
We introduce the Disentangled Graph Auto-Encoder (DGA) and the Disentangled Variational Graph Auto-Encoder (DVGA). DGA uses a convolutional network with multi-channel message-passing layers as its encoder. DVGA adds a factor-wise decoder that takes the characteristics of disentangled representations into account. (A multi-channel layer is sketched after this entry.)
arXiv Detail & Related papers (2024-02-02T04:52:52Z)
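A hypothetical sketch of the multi-channel idea (not DGA's actual layer): each channel propagates messages with its own weights and never mixes with the others, so each channel is free to specialize to one latent factor.

```python
import torch
import torch.nn as nn

class MultiChannelLayer(nn.Module):
    """Message passing with C independent channels, one per latent factor,
    the kind of encoder layer a disentangled graph autoencoder uses."""
    def __init__(self, dim, channels):
        super().__init__()
        self.lins = nn.ModuleList([nn.Linear(dim, dim) for _ in range(channels)])

    def forward(self, x, adj):
        # x: (n, C, dim) per-channel node states; adj: (n, n) dense adjacency
        out = [torch.relu(adj @ lin(x[:, c])) for c, lin in enumerate(self.lins)]
        return torch.stack(out, dim=1)         # channels never exchange messages
```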
- GiGaMAE: Generalizable Graph Masked Autoencoder via Collaborative Latent Space Reconstruction [76.35904458027694]
Masked autoencoder models lack good generalization ability on graph data. We propose a novel graph masked autoencoder framework called GiGaMAE. Our results shed light on the design of foundation models for graph-structured data.
arXiv Detail & Related papers (2023-08-18T16:30:51Z)
- GraphMAE: Self-Supervised Masked Graph Autoencoders [52.06140191214428]
We present GraphMAE, a masked graph autoencoder that mitigates common issues in generative self-supervised graph learning. We conduct extensive experiments on 21 public datasets for three different graph learning tasks. The results show that GraphMAE, a simple graph autoencoder with our careful designs, consistently outperforms both contrastive and generative state-of-the-art baselines. (Its masked-reconstruction loss is sketched below.)
arXiv Detail & Related papers (2022-05-22T11:57:08Z)
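One of GraphMAE's careful designs is the reconstruction criterion: instead of mean squared error on raw features, it uses a scaled cosine error computed only on masked nodes. A simplified sketch, with mask construction and the learnable [MASK] token the encoder sees both omitted:

```python
import torch
import torch.nn.functional as F

def masked_cosine_loss(x, x_hat, mask, gamma=2.0):
    """Scaled cosine error on masked nodes only: small angular errors are
    down-weighted by the exponent gamma, sharpening the training signal."""
    cos = F.cosine_similarity(x[mask], x_hat[mask], dim=-1)
    return ((1.0 - cos) ** gamma).mean()

# usage: reconstruct the features of the 50% of nodes that were hidden
x = torch.randn(100, 32)
x_hat = torch.randn(100, 32)
loss = masked_cosine_loss(x, x_hat, mask=torch.rand(100) < 0.5)
```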
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations. This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue. (The standard contrastive objective it builds on is sketched after this entry.)
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
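For reference, the contrastive objective that GCL methods, GraphCoCo included, typically build on is InfoNCE over two views of each graph in a batch; the complementary-view construction that distinguishes GraphCoCo is not shown here.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.5):
    """Contrastive objective over a batch of graphs: the two views of the
    same graph attract; every other pair in the batch repels."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / tau                   # (B, B) cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```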
- Multi-Level Graph Contrastive Learning [38.022118893733804]
We propose a Multi-Level Graph Contrastive Learning (MLGCL) framework for learning robust representations of graph data by contrasting space views of graphs. The original graph is a first-order approximation structure that contains uncertainty or error, while the $k$NN graph generated from encoded features preserves high-order proximity. Extensive experiments indicate that MLGCL achieves promising results compared with existing state-of-the-art graph representation learning methods on seven datasets. (The $k$NN-graph view is sketched below.)
arXiv Detail & Related papers (2021-07-06T14:24:43Z)
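The second space view mentioned above is straightforward to construct. A dense NumPy sketch, fine for small graphs (real implementations use approximate nearest-neighbor search):

```python
import numpy as np

def knn_graph(features, k):
    """The second space view used for contrast: connect each node to its
    k nearest neighbors in (encoded) feature space."""
    d = np.linalg.norm(features[:, None] - features[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)                # exclude self-loops
    nbrs = np.argsort(d, axis=1)[:, :k]
    adj = np.zeros_like(d)
    np.put_along_axis(adj, nbrs, 1.0, axis=1)
    return np.maximum(adj, adj.T)              # symmetrize
```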
- Learning Graphon Autoencoders for Generative Graph Modeling [91.32624399902755]
A graphon is a nonparametric model that generates graphs of arbitrary size and can be induced from graphs easily. We propose a novel framework called graphon autoencoder to build an interpretable and scalable graph generative model. A linear graphon factorization model works as a decoder, leveraging the latent representations to reconstruct the induced graphons. (Graphon-based sampling is sketched after this entry.)
arXiv Detail & Related papers (2021-05-29T08:11:40Z)
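A graphon is a symmetric function w: [0,1]^2 -> [0,1], and sampling from it makes the arbitrary-size claim concrete: only the number of nodes changes between draws. A minimal sketch:

```python
import numpy as np

def sample_from_graphon(w, n, rng=None):
    """Draw an n-node graph from a graphon w: [0,1]^2 -> [0,1]."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(size=n)                    # latent node positions
    probs = w(u[:, None], u[None, :])          # pairwise edge probabilities
    upper = np.triu(rng.uniform(size=(n, n)) < probs, 1)
    return (upper | upper.T).astype(float)     # simple undirected graph

# example: a smooth decaying graphon
adj = sample_from_graphon(lambda a, b: np.exp(-3 * a * b), 50)
```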
- Graph-Aware Transformer: Is Attention All Graphs Need? [5.240000443825077]
The GRaph-Aware Transformer (GRAT) is the first Transformer-based model that can encode and decode whole graphs in an end-to-end fashion. GRAT has shown very promising results, including state-of-the-art performance on four regression tasks in the QM9 benchmark.
arXiv Detail & Related papers (2020-06-09T12:13:56Z)
- Tensor Graph Convolutional Networks for Multi-relational and Robust Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs that are represented by a tensor. The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance on protein-to-protein interaction networks. (A relation-wise convolution layer is sketched after the list.)
arXiv Detail & Related papers (2020-03-15T02:33:21Z)
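To close the list, the basic move behind a tensor-graph convolution can be sketched in a few lines (this is an illustrative layer, not the paper's exact architecture): one propagation per relation slice of the adjacency tensor, aggregated across relations.

```python
import torch
import torch.nn as nn

class TensorGCNLayer(nn.Module):
    """One relation-aware graph convolution: each relation r has its own
    adjacency slice A_r and weight W_r, summed into a single output."""
    def __init__(self, in_dim, out_dim, n_relations):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(n_relations, in_dim, out_dim))

    def forward(self, adj, x):
        # adj: (R, n, n) adjacency tensor, x: (n, in_dim) node features
        return torch.relu(torch.einsum('rij,jk,rkl->il', adj, x, self.weight))
```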