Recovering Barab\'asi-Albert Parameters of Graphs through
Disentanglement
- URL: http://arxiv.org/abs/2105.00997v2
- Date: Tue, 4 May 2021 10:40:07 GMT
- Title: Recovering Barab\'asi-Albert Parameters of Graphs through
Disentanglement
- Authors: Cristina Guzman, Daphna Keidar, Tristan Meynier, Andreas Opedal,
Niklas Stoehr
- Abstract summary: Graph modeling approaches such as Erd\H{o}s R\'enyi (ER) random graphs or Barab\'asi-Albert (BA) graphs aim to reproduce properties of real-world graphs in an interpretable way.
Previous work by Stoehr et al. addresses these issues by learning the generation process from graph data.
We focus on recovering the generative parameters of BA graphs by replacing their $\beta$-VAE decoder with a sequential one.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Classical graph modeling approaches such as Erd\H{o}s R\'{e}nyi (ER) random
graphs or Barab\'asi-Albert (BA) graphs, here referred to as stylized models,
aim to reproduce properties of real-world graphs in an interpretable way. While
useful, graph generation with stylized models requires domain knowledge and
iterative trial and error simulation. Previous work by Stoehr et al. (2019)
addresses these issues by learning the generation process from graph data,
using a disentanglement-focused deep autoencoding framework, more specifically,
a $\beta$-Variational Autoencoder ($\beta$-VAE). While they successfully
recover the generative parameters of ER graphs through the model's latent
variables, their model performs badly on sequentially generated graphs such as
BA graphs, due to their oversimplified decoder. We focus on recovering the
generative parameters of BA graphs by replacing their $\beta$-VAE decoder with
a sequential one. We first learn the generative BA parameters in a supervised
fashion using a Graph Neural Network (GNN) and a Random Forest Regressor, by
minimizing the squared loss between the true generative parameters and the
latent variables. Next, we train a $\beta$-VAE model, combining the GNN encoder
from the first stage with an LSTM-based decoder with a customized loss.
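The abstract leaves the customized loss unspecified; the standard $\beta$-VAE objective it presumably builds on weights the KL term of the usual ELBO by $\beta$: $\mathcal{L} = \mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] - \beta \, D_{KL}(q_\phi(z|x) \,\|\, p(z))$.
For the first, supervised stage, a minimal sketch in the same spirit follows, assuming networkx and scikit-learn. The graph sizes, the parameter range, and the hand-crafted degree statistics (standing in for the paper's learned GNN embedding) are illustrative choices, not the authors' setup:

import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def degree_features(g):
    # Hand-crafted stand-in for a learned GNN embedding.
    deg = np.array([d for _, d in g.degree()], dtype=float)
    return [deg.mean(), deg.std(), deg.max(),
            nx.density(g), nx.average_clustering(g)]

rng = np.random.default_rng(0)
X, y = [], []
for _ in range(500):
    m = int(rng.integers(1, 6))  # true BA attachment parameter, in {1,...,5}
    g = nx.barabasi_albert_graph(n=100, m=m, seed=int(rng.integers(10**6)))
    X.append(degree_features(g))
    y.append(m)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X_tr, y_tr)
mse = np.mean((rf.predict(X_te) - np.array(y_te)) ** 2)
print("squared loss between true and predicted parameters:", mse)

Since each new node of a BA graph attaches with $m$ edges, the mean degree is roughly $2m$, so even these simple degree statistics carry a strong signal about the generative parameter.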
Related papers
- GraphGPT: Graph Learning with Generative Pre-trained Transformers [9.862004020075126]
We introduce GraphGPT, a novel model for graph learning via self-supervised generative pre-training of transformers.
Our model transforms each graph or sampled subgraph into a sequence of tokens representing the node, edge and attributes.
The generative pre-training enables us to train GraphGPT up to 400M+ parameters with consistently increasing performance.
arXiv Detail & Related papers (2023-12-31T16:19:30Z)
- Deep Prompt Tuning for Graph Transformers [55.2480439325792]
Fine-tuning is resource-intensive and requires storing multiple copies of large models.
We propose a novel approach called deep graph prompt tuning as an alternative to fine-tuning.
By freezing the pre-trained parameters and only updating the added tokens, our approach reduces the number of free parameters and eliminates the need for multiple model copies.
arXiv Detail & Related papers (2023-09-18T20:12:17Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Learning Graphon Autoencoders for Generative Graph Modeling [91.32624399902755]
A graphon is a nonparametric model that generates graphs of arbitrary size and can be induced from graphs easily.
We propose a novel framework called graphon autoencoder to build an interpretable and scalable graph generative model.
A linear graphon factorization model works as a decoder, leveraging the latent representations to reconstruct the induced graphons.
arXiv Detail & Related papers (2021-05-29T08:11:40Z)
- Dirichlet Graph Variational Autoencoder [65.94744123832338]
We present Dirichlet Graph Variational Autoencoder (DGVAE) with graph cluster memberships as latent factors.
Motivated by the low-pass characteristics of the balanced graph cut, we propose a new variant of GNN named Heatts to encode the input graph into cluster memberships.
arXiv Detail & Related papers (2020-10-09T07:35:26Z)
- Scalable Deep Generative Modeling for Sparse Graphs [105.60961114312686]
Existing deep neural methods require $\Omega(n^2)$ complexity by building up the adjacency matrix.
We develop a novel autoregressive model, named BiGG, that utilizes this sparsity to avoid generating the full adjacency matrix (a minimal illustration of the sparsity saving appears after this list).
During training, this autoregressive model can be parallelized with $O(\log n)$ synchronization stages.
arXiv Detail & Related papers (2020-06-28T04:37:57Z)
- SHADOWCAST: Controllable Graph Generation [28.839854765853953]
We introduce the controllable graph generation problem, formulated as controlling graph attributes during the generative process to produce desired graphs.
Using a transparent and straightforward Markov model to guide this generative process, practitioners can shape and understand the generated graphs.
We show its effective controllability by directing SHADOWCAST to generate hypothetical scenarios with different graph structures.
arXiv Detail & Related papers (2020-06-06T03:43:19Z)
- Graph Deconvolutional Generation [3.5138314002170192]
We focus on the modern equivalent of the Erd\H{o}s R\'enyi random graph model: the graph variational autoencoder (GVAE).
GVAE has difficulty matching the training distribution and relies on an expensive graph matching procedure.
We improve this class of models by building a message passing neural network into GVAE's encoder and decoder.
arXiv Detail & Related papers (2020-02-14T04:37:14Z)
- Revisiting Graph based Collaborative Filtering: A Linear Residual Graph Convolutional Network Approach [55.44107800525776]
Graph Convolutional Networks (GCNs) are state-of-the-art graph based representation learning models.
In this paper, we revisit GCN-based Collaborative Filtering (CF) for Recommender Systems (RS).
We show that removing non-linearities would enhance recommendation performance, consistent with the theories in simple graph convolutional networks.
We propose a residual network structure that is specifically designed for CF with user-item interaction modeling.
arXiv Detail & Related papers (2020-01-28T04:41:25Z)
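As referenced in the BiGG entry above, here is a minimal illustration of the sparsity saving, assuming networkx and numpy (this is not BiGG's algorithm, only the storage argument): an edge list of a sparse graph holds $O(|E|)$ entries, while the dense adjacency matrix always holds $n^2$:

import numpy as np
import networkx as nx

# A sparse graph: for m = 2, a networkx BA graph has |E| = 2(n - 2) edges.
g = nx.barabasi_albert_graph(n=2000, m=2, seed=0)

edges = np.array(list(g.edges()), dtype=np.int32)  # O(|E|) entries
dense = nx.to_numpy_array(g, dtype=np.int8)        # n^2 entries

print("edge-list entries:", edges.size)   # 7,992
print("adjacency entries:", dense.size)   # 4,000,000

An autoregressive model that emits only the nonzero entries, as BiGG does, therefore scales with $|E|$ rather than $n^2$.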