Self-supervised Heterogeneous Graph Variational Autoencoders
- URL: http://arxiv.org/abs/2311.07929v1
- Date: Tue, 14 Nov 2023 06:15:16 GMT
- Title: Self-supervised Heterogeneous Graph Variational Autoencoders
- Authors: Yige Zhao, Jianxiang Yu, Yao Cheng, Chengcheng Yu, Yiding Liu, Xiang
Li, Shuaiqiang Wang
- Abstract summary: Heterogeneous Information Networks (HINs) have recently demonstrated excellent performance in graph mining.
Most existing heterogeneous graph neural networks (HGNNs) ignore the problems of missing attributes, inaccurate attributes, and scarce node labels.
We propose a generative self-supervised model SHAVA to address these issues simultaneously.
- Score: 11.995393209449357
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Heterogeneous Information Networks (HINs), which consist of various types of
nodes and edges, have recently demonstrated excellent performance in graph
mining. However, most existing heterogeneous graph neural networks (HGNNs)
ignore the problems of missing attributes, inaccurate attributes and scarce
labels for nodes, which limits their expressiveness. In this paper, we propose
a generative self-supervised model SHAVA to address these issues
simultaneously. Specifically, SHAVA first initializes all the nodes in the
graph with a low-dimensional representation matrix. After that, based on the
variational graph autoencoder framework, SHAVA learns both node-level and
attribute-level embeddings in the encoder, which can provide fine-grained
semantic information to construct node attributes. In the decoder, SHAVA
reconstructs both links and attributes. Instead of directly reconstructing raw
features for attributed nodes, SHAVA generates the initial low-dimensional
representation matrix for all the nodes, based on which raw features of
attributed nodes are further reconstructed to leverage accurate attributes. In
this way, SHAVA can not only complete informative features for non-attributed
nodes, but also rectify inaccurate ones for attributed nodes. Finally, we conduct
extensive experiments to show the superiority of SHAVA in tackling HINs with
missing and inaccurate attributes.
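The variational-graph-autoencoder pipeline the abstract describes (encode node representations into latent means and variances, sample, then decode both links and a low-dimensional representation matrix) can be illustrated with a minimal NumPy sketch. This is not the authors' SHAVA implementation; the one-layer encoder, weight names (`W_mu`, `W_logvar`, `W_dec`), and toy graph are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_adj(A):
    """Symmetrically normalize an adjacency matrix with self-loops added."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def encode(A_norm, X, W_mu, W_logvar):
    """One graph-convolution layer producing latent means and log-variances."""
    return A_norm @ X @ W_mu, A_norm @ X @ W_logvar

def reparameterize(mu, logvar):
    """Sample z ~ N(mu, sigma^2) via the reparameterization trick."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z, W_dec):
    """Reconstruct links (inner-product decoder) and the low-dim representation matrix."""
    A_rec = 1.0 / (1.0 + np.exp(-(z @ z.T)))  # edge probabilities in (0, 1)
    X_rec = z @ W_dec                          # reconstructed representations
    return A_rec, X_rec

# Toy graph: 5 nodes, 4-dim initial representations, 2-dim latent space.
n, f, d = 5, 4, 2
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
X = rng.standard_normal((n, f))
W_mu = rng.standard_normal((f, d))
W_logvar = rng.standard_normal((f, d))
W_dec = rng.standard_normal((d, f))

mu, logvar = encode(normalize_adj(A), X, W_mu, W_logvar)
z = reparameterize(mu, logvar)
A_rec, X_rec = decode(z, W_dec)
print(A_rec.shape, X_rec.shape)  # (5, 5) (5, 4)
```

In SHAVA's two-step scheme, `X_rec` plays the role of the reconstructed low-dimensional matrix from which raw features of attributed nodes would then be recovered; training would minimize the usual VGAE reconstruction losses plus a KL term.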
Related papers
- AGHINT: Attribute-Guided Representation Learning on Heterogeneous Information Networks with Transformer [4.01252998015631]
We investigate the impact of inter-node attribute disparities on HGNN performance within a benchmark task.
We propose a novel Attribute-Guided heterogeneous Information Networks representation learning model with Transformer (AGHINT)
AGHINT transcends the constraints of the original graph structure by directly integrating higher-order similar neighbor features into the learning process.
arXiv Detail & Related papers (2024-04-16T10:30:48Z)
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z)
- Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned, separately, for the nodes in each group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
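The per-group weight idea above can be sketched in a few lines of NumPy: bucket nodes by degree, project each bucket with its own weight matrix, then aggregate over neighbors. This is a hedged illustration of the general technique, not the paper's implementation; the threshold value and two-group split are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def degree_stratified_layer(A, X, weights, thresholds):
    """Apply a separate weight matrix to each degree-based node group,
    then aggregate the projected features over graph neighbors."""
    deg = A.sum(axis=1)
    groups = np.digitize(deg, thresholds)  # bucket index per node
    H = np.empty((X.shape[0], weights[0].shape[1]))
    for g, W in enumerate(weights):
        mask = groups == g
        H[mask] = X[mask] @ W              # group-specific projection
    return A @ H                           # simple neighborhood aggregation

A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = rng.standard_normal((4, 3))
weights = [rng.standard_normal((3, 2)) for _ in range(2)]  # low-/high-degree groups
H = degree_stratified_layer(A, X, weights, thresholds=[2.0])
print(H.shape)  # (4, 2)
```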
arXiv Detail & Related papers (2023-12-16T14:09:23Z)
- GraphPatcher: Mitigating Degree Bias for Graph Neural Networks via Test-time Augmentation [48.88356355021239]
Graph neural networks (GNNs) usually perform satisfactorily on high-degree nodes with rich neighbor information but struggle with low-degree nodes.
We propose a test-time augmentation framework, namely GraphPatcher, to enhance test-time generalization of any GNNs on low-degree nodes.
GraphPatcher consistently enhances common GNNs' overall performance by up to 3.6% and low-degree performance by up to 6.5%, significantly outperforming state-of-the-art baselines.
arXiv Detail & Related papers (2023-10-01T21:50:03Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
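The Gumbel-Softmax operator mentioned above gives a differentiable approximation to discrete edge sampling: Gumbel noise is added to the logits, then a temperature-scaled softmax is applied. The sketch below shows the standard (non-kernelized) operator, not NodeFormer's scalable variant; the logits and temperature are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=0.5):
    """Differentiable sample from a categorical distribution: perturb logits
    with Gumbel(0, 1) noise, then apply a temperature-scaled softmax."""
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel) / tau
    y = y - y.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)

# Logits for one node attending over 4 candidate neighbors.
logits = np.array([2.0, 0.5, 0.1, -1.0])
weights = gumbel_softmax(logits, tau=0.5)
print(weights.shape)  # (4,)
```

As the temperature `tau` approaches zero the output approaches a one-hot edge selection, while remaining differentiable for gradient-based structure learning.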
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from Heterogeneous Graph Benchmark (HGB) and Open Graph Benchmark (OGB)
arXiv Detail & Related papers (2023-05-18T07:27:18Z)
- Self-supervised Guided Hypergraph Feature Propagation for Semi-supervised Classification with Missing Node Features [9.684903457117917]
We propose a self-supervised guided hypergraph feature propagation (SGHFP) framework.
Specifically, the feature hypergraph is first generated according to the node features with missing information.
Then, the reconstructed node features are fed into a two-layer GNN to construct a pseudo-label hypergraph.
Extensive experiments demonstrate that the proposed SGHFP outperforms existing methods for semi-supervised classification with missing node features.
arXiv Detail & Related papers (2023-02-16T12:13:46Z)
- Data Augmentation for Graph Convolutional Network on Semi-Supervised Classification [6.619370466850894]
We study the problem of graph data augmentation for Graph Convolutional Network (GCN)
Specifically, we conduct cosine similarity based cross operation on the original features to create new graph features, including new node attributes.
We also propose an attentional integrating model that computes a weighted sum of the hidden node embeddings encoded by these GCNs to form the final node embeddings.
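The cosine-similarity-based cross operation described above can be sketched as follows: for each node, find its most similar other node by cosine similarity and mix the two feature vectors into a new attribute matrix. This is a hedged illustration under assumptions (one nearest neighbor, a fixed mixing coefficient `alpha`), not the paper's exact augmentation.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_cross_features(X, alpha=0.5):
    """Create augmented node attributes by mixing each node's feature vector
    with that of its most cosine-similar other node."""
    X_unit = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = X_unit @ X_unit.T           # pairwise cosine similarity
    np.fill_diagonal(S, -np.inf)    # exclude self-similarity
    nearest = S.argmax(axis=1)      # most similar other node per row
    return alpha * X + (1 - alpha) * X[nearest]

X = rng.standard_normal((6, 3))
X_aug = cosine_cross_features(X)
print(X_aug.shape)  # (6, 3)
```

The augmented matrix `X_aug` would then be fed to a parallel GCN alongside the original features, with the attentional model combining the two sets of hidden embeddings.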
arXiv Detail & Related papers (2021-06-16T15:13:51Z)
- Higher-Order Attribute-Enhancing Heterogeneous Graph Neural Networks [67.25782890241496]
We propose a higher-order Attribute-Enhancing Graph Neural Network (HAEGNN) for heterogeneous network representation learning.
HAEGNN simultaneously incorporates meta-paths and meta-graphs for rich, heterogeneous semantics.
It shows superior performance against the state-of-the-art methods in node classification, node clustering, and visualization.
arXiv Detail & Related papers (2021-04-16T04:56:38Z)
- Wasserstein diffusion on graphs with missing attributes [38.153052525001264]
We propose an innovative node representation learning framework, Wasserstein graph diffusion (WGD), to mitigate the problem.
Instead of feature imputation, our method directly learns node representations from the missing-attribute graphs.
arXiv Detail & Related papers (2021-02-06T00:06:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.