Variational Graph Normalized Auto-Encoders
- URL: http://arxiv.org/abs/2108.08046v1
- Date: Wed, 18 Aug 2021 08:56:04 GMT
- Title: Variational Graph Normalized Auto-Encoders
- Authors: Seong Jin Ahn, Myoung Ho Kim
- Abstract summary: We show that graph autoencoders (GAEs) and variational graph autoencoders (VGAEs) do not work well in link prediction when a node whose degree is zero is involved.
We have found that GAEs/VGAEs make embeddings of isolated nodes close to zero regardless of their content features.
In this paper, we propose a novel Variational Graph Normalized AutoEncoder (VGNAE) that utilizes $L_2$-normalization to derive better embeddings for isolated nodes.
- Score: 4.416484585765027
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Link prediction is one of the key problems for graph-structured data. With
the advancement of graph neural networks, graph autoencoders (GAEs) and
variational graph autoencoders (VGAEs) have been proposed to learn graph
embeddings in an unsupervised way. It has been shown that these methods are
effective for link prediction tasks. However, they do not work well in link
prediction when a node whose degree is zero (i.e., an isolated node) is involved.
We have found that GAEs/VGAEs make embeddings of isolated nodes close to zero
regardless of their content features. In this paper, we propose a novel
Variational Graph Normalized AutoEncoder (VGNAE) that utilizes
$L_2$-normalization to derive better embeddings for isolated nodes. We show
that our VGNAEs outperform the existing state-of-the-art models for link
prediction tasks. The code is available at
https://github.com/SeongJinAhn/VGNAE.
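The core idea of the abstract — L2-normalizing node features before propagation so that isolated nodes keep content-driven, non-vanishing embeddings — can be illustrated with a minimal numpy sketch. This is a hypothetical simplification for intuition, not the paper's actual PyTorch encoder; the function name and the `scale` parameter are illustrative.

```python
import numpy as np

def normalized_propagate(x, adj, scale=1.8):
    """Sketch of the L2-normalization idea: rescale every node's
    feature vector to a fixed norm before one aggregation step, so
    an isolated node (all-zero adjacency row) retains a fixed-norm
    embedding determined purely by its content features."""
    # L2-normalize each row, guarding against zero rows.
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    x_n = scale * x / np.maximum(norms, 1e-12)
    # One propagation step: isolated nodes receive no neighbor
    # signal and simply keep their normalized features.
    return x_n + adj @ x_n

# Toy graph: nodes 0 and 1 are connected; node 2 is isolated.
adj = np.array([[0., 1., 0.],
                [1., 0., 0.],
                [0., 0., 0.]])
x = np.array([[1., 0.],
              [0., 1.],
              [0.2, 0.1]])  # node 2 has small raw features
z = normalized_propagate(x, adj)
print(np.linalg.norm(z[2]))  # node 2's embedding has norm `scale`, not ~0
```

Without the normalization step, node 2's embedding would inherit the small magnitude of its raw features; with it, every isolated node sits at the same scale as connected nodes, which is the failure mode the paper addresses.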
Related papers
- ADA-GAD: Anomaly-Denoised Autoencoders for Graph Anomaly Detection [84.0718034981805]
We introduce a novel framework called Anomaly-Denoised Autoencoders for Graph Anomaly Detection (ADA-GAD)
In the first stage, we design a learning-free anomaly-denoised augmentation method to generate graphs with reduced anomaly levels.
In the next stage, the decoders are retrained for detection on the original graph.
arXiv Detail & Related papers (2023-12-22T09:02:01Z)
- Learning on Graphs with Out-of-Distribution Nodes [33.141867473074264]
Graph Neural Networks (GNNs) are state-of-the-art models for performing prediction tasks on graphs.
This work defines the problem of graph learning with out-of-distribution nodes.
We propose Out-of-Distribution Graph Attention Network (OODGAT), a novel GNN model which explicitly models the interaction between different kinds of nodes.
arXiv Detail & Related papers (2023-08-13T08:10:23Z)
- Self-attention Dual Embedding for Graphs with Heterophily [6.803108335002346]
A number of real-world graphs are heterophilic, and this leads to much lower classification accuracy using standard GNNs.
We design a novel GNN which is effective for both heterophilic and homophilic graphs.
We evaluate our algorithm on real-world graphs containing thousands to millions of nodes and show that we achieve state-of-the-art results.
arXiv Detail & Related papers (2023-05-28T09:38:28Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity at modeling structured data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Source Free Unsupervised Graph Domain Adaptation [60.901775859601685]
Unsupervised Graph Domain Adaptation (UGDA) shows practical value in reducing the labeling cost for node classification.
Most existing UGDA methods heavily rely on the labeled graph in the source domain.
In some real-world scenarios, the source graph is inaccessible because of privacy issues.
We propose a novel scenario named Source Free Unsupervised Graph Domain Adaptation (SFUGDA)
arXiv Detail & Related papers (2021-12-02T03:18:18Z)
- Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction [123.20238648121445]
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT)
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z)
- Line Graph Neural Networks for Link Prediction [71.00689542259052]
We consider the graph link prediction task, which is a classic graph analytical problem with many real-world applications.
A common practice extracts an enclosing subgraph around each target link; in this formalism, a link prediction problem is converted to a graph classification task.
We propose to seek a radically different and novel path by making use of the line graphs in graph theory.
In particular, each node in a line graph corresponds to a unique edge in the original graph. Therefore, link prediction problems in the original graph can be equivalently solved as a node classification problem in its corresponding line graph, instead of a graph classification task.
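The line-graph construction described above can be sketched in a few lines of stdlib Python. This is an illustrative helper, not code from the paper: each original edge becomes a line-graph node, and two line-graph nodes are adjacent iff the corresponding edges share an endpoint.

```python
from itertools import combinations

def line_graph(edges):
    """Build the line graph of an undirected graph given as an edge
    list. Returns (nodes, lg_edges): nodes are the original edges,
    and lg_edges connects pairs of edges sharing an endpoint."""
    nodes = [tuple(sorted(e)) for e in edges]
    lg_edges = set()
    for e1, e2 in combinations(nodes, 2):
        if set(e1) & set(e2):  # edges share an endpoint
            lg_edges.add((e1, e2))
    return nodes, sorted(lg_edges)

# Triangle 0-1-2 plus a pendant edge 2-3: 4 edges become 4 line-graph nodes.
nodes, lg = line_graph([(0, 1), (1, 2), (0, 2), (2, 3)])
print(len(nodes), len(lg))
```

Predicting whether an edge exists in the original graph then amounts to classifying the corresponding node in this line graph, which is the reformulation the paper exploits.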
arXiv Detail & Related papers (2020-10-20T05:54:31Z)
- Dirichlet Graph Variational Autoencoder [65.94744123832338]
We present Dirichlet Graph Variational Autoencoder (DGVAE) with graph cluster memberships as latent factors.
Motivated by the low pass characteristics in balanced graph cut, we propose a new variant of GNN named Heatts to encode the input graph into cluster memberships.
arXiv Detail & Related papers (2020-10-09T07:35:26Z)
- A comparative study of similarity-based and GNN-based link prediction approaches [1.0441880303257467]
The graph neural network (GNN) is able to learn hidden features from graphs, which can be used for the link prediction task in graphs.
This paper studies some similarity and GNN-based link prediction approaches in the domain of homogeneous graphs.
arXiv Detail & Related papers (2020-08-20T10:41:53Z)
- Graph Deconvolutional Generation [3.5138314002170192]
We focus on the modern equivalent of the Erdos-Renyi random graph model: the graph variational autoencoder (GVAE)
GVAE has difficulty matching the training distribution and relies on an expensive graph matching procedure.
We improve this class of models by building a message passing neural network into GVAE's encoder and decoder.
arXiv Detail & Related papers (2020-02-14T04:37:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.