Directed Graph Auto-Encoders
- URL: http://arxiv.org/abs/2202.12449v1
- Date: Fri, 25 Feb 2022 01:19:47 GMT
- Title: Directed Graph Auto-Encoders
- Authors: Georgios Kollias, Vasileios Kalantzis, Tsuyoshi Idé, Aurélie Lozano, Naoki Abe
- Abstract summary: We introduce a new class of auto-encoders for directed graphs motivated by a direct extension of the Weisfeiler-Leman algorithm to pairs of node labels.
We demonstrate the ability of the proposed model to learn meaningful latent embeddings and achieve superior performance on the directed link prediction task.
- Score: 3.2873782624127843
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a new class of auto-encoders for directed graphs, motivated by a
direct extension of the Weisfeiler-Leman algorithm to pairs of node labels. The
proposed model learns pairs of interpretable latent representations for the
nodes of directed graphs, and uses parameterized graph convolutional network
(GCN) layers for its encoder and an asymmetric inner product decoder.
Parameters in the encoder control the weighting of representations exchanged
between neighboring nodes. We demonstrate the ability of the proposed model to
learn meaningful latent embeddings and achieve superior performance on the
directed link prediction task on several popular network datasets.
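The abstract describes the architecture precisely enough for a small illustrative sketch. The NumPy code below is a hedged reading of it, not the authors' implementation: one GCN-style propagation step per edge direction produces a pair of representations per node, and an asymmetric inner-product decoder scores directed edges. Layer sizes, the normalization, and all names are assumptions.
```python
import numpy as np

def row_normalize(A):
    """Row-normalize an adjacency matrix (simple stand-in for GCN normalization)."""
    deg = A.sum(axis=1, keepdims=True)
    return A / np.maximum(deg, 1.0)

def encode(A, X, W_s, W_t):
    S = np.tanh(row_normalize(A) @ X @ W_s)    # "source"-role representations (out-edges)
    T = np.tanh(row_normalize(A.T) @ X @ W_t)  # "target"-role representations (in-edges)
    return S, T

def decode(S, T):
    logits = S @ T.T                            # asymmetric: score(i -> j) != score(j -> i)
    return 1.0 / (1.0 + np.exp(-logits))

# Toy directed graph: 0 -> 1 -> 2, 2 -> 0
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
X = np.eye(3)                                   # one-hot node features
rng = np.random.default_rng(0)
W_s, W_t = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
S, T = encode(A, X, W_s, W_t)
print(decode(S, T).round(2))                    # predicted directed-edge probabilities
```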
Related papers
- Scalable Weibull Graph Attention Autoencoder for Modeling Document Networks [50.42343781348247]
We develop a graph Poisson factor analysis (GPFA) which provides analytic conditional posteriors to improve the inference accuracy.
We also extend GPFA to a multi-stochastic-layer version named graph Poisson gamma belief network (GPGBN) to capture the hierarchical document relationships at multiple semantic levels.
Our models can extract high-quality hierarchical latent document representations and achieve promising performance on various graph analytic tasks.
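As a rough illustration of the graph Poisson factorization idea behind GPFA (not the paper's inference procedure or its gamma belief network extension), an adjacency matrix can be generated from nonnegative node factors; all shapes and names below are assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_factors = 5, 3
Theta = rng.gamma(shape=1.0, scale=0.5, size=(n_nodes, n_factors))  # nonnegative node factors

rate = Theta @ Theta.T          # Poisson rate for each node pair
A = rng.poisson(rate)           # sampled (count-valued) adjacency
np.fill_diagonal(A, 0)          # drop self-loops
print(A)
```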
arXiv Detail & Related papers (2024-10-13T02:22:14Z)
- Signed Graph Autoencoder for Explainable and Polarization-Aware Network Embeddings [20.77134976354226]
We propose the Signed Graph Archetypal Autoencoder (SGAAE), a framework designed for signed networks.
SGAAE extracts node-level representations that express node memberships over distinct extreme profiles.
The model achieves high performance on signed link prediction tasks across four real-world datasets.
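A toy sketch of the archetype-membership idea (each node expressed as a convex combination of a few extreme profiles); it is illustrative only and does not model edge signs or the SGAAE decoder.
```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_archetypes, dim = 6, 3, 2
archetypes = rng.normal(size=(n_archetypes, dim))                         # "extreme profiles"
logits = rng.normal(size=(n_nodes, n_archetypes))
memberships = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # rows on the simplex
Z = memberships @ archetypes                                              # node representations
print(memberships.round(2))
```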
arXiv Detail & Related papers (2024-09-16T16:40:40Z)
- Graph Transformer GANs with Graph Masked Modeling for Architectural Layout Generation [153.92387500677023]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations.
The proposed graph Transformer encoder combines graph convolutions and self-attentions in a Transformer to model both local and global interactions.
We also propose a novel self-guided pre-training method for graph representation learning.
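A hedged sketch of one encoder layer that combines a local, adjacency-masked graph-convolution branch with a global self-attention branch; dimensions, weights, and the way the branches are merged are assumptions, not the GTGAN architecture.
```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def conv_plus_attention(A, X, Wg, Wq, Wk, Wv):
    A_norm = A / np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    local = A_norm @ X @ Wg                              # graph convolution: local interactions
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    attn = softmax(Q @ K.T / np.sqrt(K.shape[1]))        # self-attention: global interactions
    return np.tanh(local + attn @ V)

rng = np.random.default_rng(0)
A = (rng.random((5, 5)) < 0.4).astype(float)
np.fill_diagonal(A, 0)
X = rng.normal(size=(5, 8))
weights = [0.1 * rng.normal(size=(8, 8)) for _ in range(4)]
print(conv_plus_attention(A, X, *weights).shape)
```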
arXiv Detail & Related papers (2024-01-15T14:36:38Z)
- Network Alignment with Transferable Graph Autoencoders [79.89704126746204]
We propose a novel graph autoencoder architecture designed to extract powerful and robust node embeddings.
We prove that the generated embeddings are associated with the eigenvalues and eigenvectors of the graphs.
Our proposed framework also leverages transfer learning and data augmentation to achieve efficient network alignment at a very large scale without retraining.
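As a loose illustration of the spectral connection (embeddings tied to eigenvalues and eigenvectors), the snippet below simply eigendecomposes a symmetric adjacency matrix; it is not the paper's trained autoencoder or its alignment procedure.
```python
import numpy as np

rng = np.random.default_rng(0)
A = (rng.random((6, 6)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T                         # symmetric adjacency, no self-loops

vals, vecs = np.linalg.eigh(A)      # eigenpairs, ascending eigenvalues
k = 3
Z = vecs[:, -k:] * vals[-k:]        # embeddings from the top-k eigenpairs
print(Z.round(2))
```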
arXiv Detail & Related papers (2023-10-05T02:58:29Z)
- Interpretable Node Representation with Attribute Decoding [20.591882093727413]
We show that attribute decoding is important for node representation learning.
We propose a new learning model, interpretable NOde Representation with Attribute Decoding (NORAD).
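A minimal sketch of the "decode the attributes too" idea: from a single embedding matrix, reconstruct both the adjacency and the node attributes. The linear decoders, shapes, and names are assumptions, not the NORAD model.
```python
import numpy as np

rng = np.random.default_rng(0)
n, d, f = 6, 4, 5
Z = rng.normal(size=(n, d))                   # node embeddings (random stand-in for learned ones)
W_attr = rng.normal(size=(d, f))

A_hat = 1.0 / (1.0 + np.exp(-(Z @ Z.T)))      # structure (link) decoder
X_hat = Z @ W_attr                            # attribute decoder
# Training would minimize a link-reconstruction loss on A_hat plus an
# attribute-reconstruction loss on X_hat.
print(A_hat.shape, X_hat.shape)
```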
arXiv Detail & Related papers (2022-12-03T20:20:24Z)
- Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network that significantly reduces the computational complexity.
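A rough sketch of the complexity argument: rather than passing messages over a fully connected graph (quadratic in the number of nodes), each node aggregates messages from a small sampled set of partners. The uniform sampling and the single linear message function are assumptions, not the paper's dynamic mechanism.
```python
import numpy as np

rng = np.random.default_rng(0)
N, D, K = 100, 16, 8                    # N nodes, D features, K sampled partners per node
X = rng.normal(size=(N, D))
W = 0.1 * rng.normal(size=(D, D))

idx = np.stack([rng.choice(N, size=K, replace=False) for _ in range(N)])  # (N, K) partner ids
messages = X[idx] @ W                   # (N, K, D): messages only from sampled partners
H = np.tanh(X + messages.mean(axis=1))  # aggregate K messages per node instead of N
print(H.shape)
```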
arXiv Detail & Related papers (2022-09-20T14:41:37Z)
- Barlow Graph Auto-Encoder for Unsupervised Network Embedding [6.900303913555705]
We propose Barlow Graph Auto-Encoder, a simple yet effective architecture for learning network embedding.
It aims to maximize the similarity between the embedding vectors of immediate and larger neighborhoods of a node, while minimizing the redundancy between the components of these projections.
Our approach yields promising results for inductive link prediction and is on par with the state of the art for clustering and downstream node classification.
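A hedged sketch of the stated objective in the style of Barlow Twins: push the cross-correlation between two embeddings of the same node (e.g. from its immediate vs. larger neighborhood) toward the identity matrix. The random placeholder embeddings, the exact loss form, and the trade-off weight are assumptions.
```python
import numpy as np

def redundancy_reduction_loss(Z1, Z2, lam=5e-3):
    Z1 = (Z1 - Z1.mean(axis=0)) / (Z1.std(axis=0) + 1e-8)   # standardize each dimension
    Z2 = (Z2 - Z2.mean(axis=0)) / (Z2.std(axis=0) + 1e-8)
    C = Z1.T @ Z2 / Z1.shape[0]                             # cross-correlation matrix
    invariance = ((np.diag(C) - 1.0) ** 2).sum()            # similarity of the two views
    redundancy = (C ** 2).sum() - (np.diag(C) ** 2).sum()   # off-diagonal decorrelation
    return invariance + lam * redundancy

rng = np.random.default_rng(0)
Z_near, Z_far = rng.normal(size=(32, 16)), rng.normal(size=(32, 16))
print(redundancy_reduction_loss(Z_near, Z_far))
```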
arXiv Detail & Related papers (2021-10-29T12:30:43Z)
- Adaptive Multi-layer Contrastive Graph Neural Networks [11.44053611893603]
We present Adaptive Multi-layer Contrastive Graph Neural Networks (AMC-GNN), a self-supervised learning framework for Graph Neural Networks.
AMC-GNN generates two graph views by data augmentation and contrasts the output embeddings from different layers of the Graph Neural Network encoders to obtain feature representations.
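A sketch of the view-contrast step using a standard InfoNCE-style loss: the two views' embeddings of the same node are pulled together, and all other nodes act as negatives. The adaptive per-layer weighting of AMC-GNN is not modeled; the temperature and shapes are assumptions.
```python
import numpy as np

def info_nce(Z1, Z2, tau=0.5):
    Z1 = Z1 / np.linalg.norm(Z1, axis=1, keepdims=True)
    Z2 = Z2 / np.linalg.norm(Z2, axis=1, keepdims=True)
    sim = Z1 @ Z2.T / tau                                          # all-pairs similarities
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                             # positives on the diagonal

rng = np.random.default_rng(0)
view1, view2 = rng.normal(size=(16, 8)), rng.normal(size=(16, 8))
print(info_nce(view1, view2))
```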
arXiv Detail & Related papers (2021-09-29T03:00:14Z)
- Data-Driven Learning of Geometric Scattering Networks [74.3283600072357]
We propose a new graph neural network (GNN) module based on relaxations of recently proposed geometric scattering transforms.
Our learnable geometric scattering (LEGS) module enables adaptive tuning of the wavelets to encourage band-pass features to emerge in learned representations.
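A sketch of the diffusion-wavelet building block that geometric scattering rests on: band-pass filters formed as differences of powers of a lazy random-walk operator. LEGS learns which scales to combine; the fixed dyadic scales below are an assumption used only to show the band-pass structure.
```python
import numpy as np

rng = np.random.default_rng(0)
A = (rng.random((8, 8)) < 0.3).astype(float)
A = np.triu(A, 1)
A = A + A.T
D_inv = np.diag(1.0 / np.maximum(A.sum(axis=1), 1.0))
P = 0.5 * (np.eye(8) + A @ D_inv)                # lazy random-walk diffusion operator

x = rng.normal(size=(8, 1))                      # a graph signal
feats = []
for j in range(1, 4):                            # dyadic scales
    Psi_j = np.linalg.matrix_power(P, 2 ** (j - 1)) - np.linalg.matrix_power(P, 2 ** j)
    feats.append(np.abs(Psi_j @ x))              # first-order scattering coefficients
print(np.hstack(feats).shape)
```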
arXiv Detail & Related papers (2020-10-06T01:20:27Z)
- AEGCN: An Autoencoder-Constrained Graph Convolutional Network [5.023274927781062]
We propose a novel neural network architecture, called autoencoder-constrained graph convolutional network.
The core of this model is a convolutional network operating directly on graphs, whose hidden layers are constrained by an autoencoder.
We show that adding autoencoder constraints significantly improves the performance of graph convolutional networks.
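An illustrative sketch of the constraint: a GCN hidden layer is additionally passed through a small autoencoder, and the reconstruction error on that layer is added to the task loss. Shapes, activations, and the loss weighting are assumptions, not the AEGCN specification.
```python
import numpy as np

rng = np.random.default_rng(0)
n, f, h, b = 6, 5, 4, 2
A = (rng.random((n, n)) < 0.4).astype(float)
np.fill_diagonal(A, 1.0)                          # self-loops
A_norm = A / A.sum(axis=1, keepdims=True)
X = rng.normal(size=(n, f))

W1 = 0.1 * rng.normal(size=(f, h))
H = np.tanh(A_norm @ X @ W1)                      # GCN hidden layer

W_enc, W_dec = 0.1 * rng.normal(size=(h, b)), 0.1 * rng.normal(size=(b, h))
H_rec = np.tanh(H @ W_enc) @ W_dec                # autoencoder over the hidden layer
ae_penalty = np.mean((H - H_rec) ** 2)            # would be added to the task loss
print(ae_penalty)
```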
arXiv Detail & Related papers (2020-07-03T16:42:55Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
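A back-of-the-envelope sketch of why binary codes pay off in space (and, when packed, in time): real-valued embeddings are replaced by ±1 codes. The sign-based binarization below is a generic stand-in, not BGN's training scheme.
```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(1000, 64))                   # real-valued node embeddings
B = np.sign(Z).astype(np.int8)                    # +/-1 binary codes
packed = np.packbits(B == 1, axis=1)              # 8 bytes per node instead of 512

sim = B.astype(np.int32) @ B.T.astype(np.int32)   # agreement counts between code pairs
print(Z.nbytes, B.nbytes, packed.nbytes, sim.shape)
```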
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.