Self-supervised Heterogeneous Graph Variational Autoencoders
- URL: http://arxiv.org/abs/2311.07929v1
- Date: Tue, 14 Nov 2023 06:15:16 GMT
- Title: Self-supervised Heterogeneous Graph Variational Autoencoders
- Authors: Yige Zhao, Jianxiang Yu, Yao Cheng, Chengcheng Yu, Yiding Liu, Xiang
Li, Shuaiqiang Wang
- Abstract summary: Heterogeneous Information Networks (HINs) have recently demonstrated excellent performance in graph mining.
Most existing heterogeneous graph neural networks (HGNNs) ignore the problems of missing attributes, inaccurate attributes and scarce labels for nodes.
We propose a generative self-supervised model SHAVA to address these issues simultaneously.
- Score: 11.995393209449357
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Heterogeneous Information Networks (HINs), which consist of various types of
nodes and edges, have recently demonstrated excellent performance in graph
mining. However, most existing heterogeneous graph neural networks (HGNNs)
ignore the problems of missing attributes, inaccurate attributes and scarce
labels for nodes, which limits their expressiveness. In this paper, we propose
a generative self-supervised model SHAVA to address these issues
simultaneously. Specifically, SHAVA first initializes all the nodes in the
graph with a low-dimensional representation matrix. After that, based on the
variational graph autoencoder framework, SHAVA learns both node-level and
attribute-level embeddings in the encoder, which can provide fine-grained
semantic information to construct node attributes. In the decoder, SHAVA
reconstructs both links and attributes. Instead of directly reconstructing raw
features for attributed nodes, SHAVA generates the initial low-dimensional
representation matrix for all the nodes, based on which raw features of
attributed nodes are further reconstructed to leverage accurate attributes. In
this way, SHAVA can not only complete informative features for non-attributed
nodes, but also rectify inaccurate ones for attributed nodes. Finally, we conduct
extensive experiments to show the superiority of SHAVA in tackling HINs with
missing and inaccurate attributes.
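As a rough illustration of the pipeline the abstract describes, here is a minimal sketch in PyTorch: a learnable low-dimensional matrix initializes all nodes, a variational encoder produces node embeddings, and the decoder reconstructs links, the low-dimensional matrix, and (for attributed nodes only) raw features. All names and dimensions are hypothetical, and the attribute-level embeddings and heterogeneous message passing of the actual model are omitted.

```python
# Minimal sketch of the pipeline in the abstract (hypothetical names/shapes;
# not the authors' code). All nodes share a learnable low-dimensional matrix;
# a VGAE encoder yields node embeddings; the decoder reconstructs links and
# the low-dimensional matrix, from which raw features of attributed nodes are
# further reconstructed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShavaLikeVGAE(nn.Module):
    def __init__(self, n_nodes, d_low, d_hid, d_raw):
        super().__init__()
        self.z0 = nn.Parameter(torch.randn(n_nodes, d_low) * 0.01)  # low-dim init for all nodes
        self.enc = nn.Linear(d_low, d_hid)
        self.mu = nn.Linear(d_hid, d_hid)
        self.logvar = nn.Linear(d_hid, d_hid)
        self.dec_low = nn.Linear(d_hid, d_low)   # reconstruct low-dim matrix
        self.dec_raw = nn.Linear(d_low, d_raw)   # lift to raw features (attributed nodes)

    def forward(self, adj_norm):
        h = F.relu(self.enc(adj_norm @ self.z0))              # one message-passing step
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        adj_logits = z @ z.t()                                # link reconstruction
        z0_hat = self.dec_low(z)                              # low-dim reconstruction
        x_hat = self.dec_raw(z0_hat)                          # raw-feature reconstruction
        return adj_logits, z0_hat, x_hat, mu, logvar

def loss(adj_logits, adj, z0_hat, z0, x_hat, x, mask, mu, logvar):
    # mask selects attributed nodes: only they get a raw-feature loss term
    rec_a = F.binary_cross_entropy_with_logits(adj_logits, adj)
    rec_z = F.mse_loss(z0_hat, z0)
    rec_x = F.mse_loss(x_hat[mask], x[mask])
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec_a + rec_z + rec_x + kld
```

The mask in the loss reflects the abstract's asymmetry: only attributed nodes contribute a raw-feature term (letting the model rectify inaccurate attributes), while every node, attributed or not, contributes to the link and low-dimensional terms (letting the model complete missing ones).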
Related papers
- Divide-Then-Rule: A Cluster-Driven Hierarchical Interpolator for Attribute-Missing Graphs [51.13363550716544]
Deep graph clustering is an unsupervised task aimed at partitioning nodes with incomplete attributes into distinct clusters.
Existing imputation methods for attribute-missing graphs often fail to account for the varying amounts of information available across node neighborhoods.
We propose Divide-Then-Rule Graph Completion (DTRGC) to address this issue.
arXiv Detail & Related papers (2025-07-12T03:33:19Z)
- Discrepancy-Aware Graph Mask Auto-Encoder [14.452589880736523]
Masked Graph Auto-Encoder, a powerful graph self-supervised training paradigm, has recently shown superior performance in graph representation learning.
We propose a Discrepancy-Aware Graph Mask Auto-Encoder (DGMAE).
It obtains more distinguishable node representations by reconstructing the discrepancy information of neighboring nodes during the masking process.
arXiv Detail & Related papers (2025-06-24T06:15:44Z)
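A hypothetical sketch of the discrepancy target this summary points at, assuming discrepancy means a node's deviation from its neighborhood mean (the paper may define it differently):

```python
# Hypothetical reading of DGMAE's "neighbor discrepancy" target: instead of
# reconstructing masked raw features, reconstruct how much each node deviates
# from its neighborhood mean. Not the paper's code.
import torch

def neighbor_discrepancy(x, adj):
    # x: (n, d) node features; adj: (n, n) 0/1 adjacency (no self-loops)
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)   # avoid divide-by-zero
    neigh_mean = (adj @ x) / deg                      # mean of neighbor features
    return x - neigh_mean                             # per-node discrepancy

# A masked auto-encoder would then mask a subset of nodes, encode the rest,
# and train the decoder to predict neighbor_discrepancy(x, adj)[mask].
```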
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z)
- Saliency-Aware Regularized Graph Neural Network [39.82009838086267]
We propose the Saliency-Aware Regularized Graph Neural Network (SAR-GNN) for graph classification.
We first estimate the global node saliency by measuring the semantic similarity between the compact graph representation and node features.
Then the learned saliency distribution is leveraged to regularize the neighborhood aggregation of the backbone.
arXiv Detail & Related papers (2024-01-01T13:44:16Z)
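A rough sketch of the saliency mechanism as summarized above; the helper names and the cosine choice are assumptions, not the authors' API:

```python
# Illustrative sketch of SAR-GNN's summary: node saliency is the similarity
# between each node's features and a compact graph-level representation, and
# the resulting distribution re-weights neighborhood aggregation.
import torch
import torch.nn.functional as F

def node_saliency(h, graph_repr):
    # h: (n, d) node features; graph_repr: (d,) compact graph representation
    sim = F.cosine_similarity(h, graph_repr.unsqueeze(0), dim=1)  # (n,)
    return torch.softmax(sim, dim=0)                              # saliency distribution

def saliency_weighted_aggregation(h, adj, saliency):
    # Scale each neighbor's message by its saliency before aggregating.
    weighted = adj * saliency.unsqueeze(0)                        # (n, n) edge weights
    weighted = weighted / weighted.sum(1, keepdim=True).clamp(min=1e-9)
    return weighted @ h
```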
- Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned, separately, for the nodes in each group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
arXiv Detail & Related papers (2023-12-16T14:09:23Z)
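A minimal sketch of the per-degree-group weights idea above, with assumed bucket boundaries and names:

```python
# Sketch (assumed names) of per-degree-group weights: nodes are bucketed by
# degree and each bucket gets its own linear transform, as the summary
# describes for the modified GNN layer.
import torch
import torch.nn as nn

class DegreeStratifiedLinear(nn.Module):
    def __init__(self, d_in, d_out, boundaries=(2, 8, 32)):
        super().__init__()
        self.register_buffer("boundaries", torch.tensor(boundaries))  # bucket edges
        self.linears = nn.ModuleList(
            nn.Linear(d_in, d_out) for _ in range(len(boundaries) + 1)
        )

    def forward(self, h, deg):
        # h: (n, d_in) node states; deg: (n,) integer node degrees
        bucket = torch.bucketize(deg, self.boundaries)        # group id per node
        out = torch.zeros(h.size(0), self.linears[0].out_features, device=h.device)
        for g, lin in enumerate(self.linears):
            m = bucket == g
            if m.any():
                out[m] = lin(h[m])                            # group-specific weights
        return out
```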
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
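A simplified, quadratic-cost sketch of all-pair message passing with a Gumbel-Softmax over candidate edges; NodeFormer's kernelized operator, which reduces this to linear complexity, is omitted, and the names are illustrative:

```python
# Simplified all-pair message passing: score every node pair, sample a soft
# adjacency with Gumbel-Softmax, and propagate over the sampled graph.
# (Quadratic version; the kernel trick that makes NodeFormer linear is omitted.)
import torch
import torch.nn.functional as F

def all_pair_message_passing(q, k, v, tau=0.5):
    # q, k, v: (n, d) projections of node features
    logits = q @ k.t() / q.size(1) ** 0.5              # (n, n) latent edge scores
    soft_adj = F.gumbel_softmax(logits, tau=tau, dim=1)  # sampled soft edges per row
    return soft_adj @ v                                 # propagate node signals
```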
- Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from the Heterogeneous Graph Benchmark (HGB) and the Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z)
- Local Graph Embeddings Based on Neighbors Degree Frequency of Nodes [0.0]
We propose a strategy for graph machine learning and network analysis by defining certain local features and vector representations of nodes.
By extending the notion of the degree of a node via Breadth-First Search, a general family of centrality functions is defined.
We show that centrality and closeness can be learned by applying deep learning to these local features.
arXiv Detail & Related papers (2022-07-30T07:07:30Z)
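A toy sketch of the neighbors-degree-frequency idea above: representing each node by a histogram of its neighbors' degrees. The BFS generalization the paper describes is not shown, and the helper below is hypothetical:

```python
# Each node gets a local feature vector: a histogram of its neighbors'
# degrees, one bin per degree value up to a cap. Illustrative only.
import networkx as nx
import numpy as np

def ndf_vector(G, node, max_degree):
    hist = np.zeros(max_degree + 1)
    for nb in G.neighbors(node):
        hist[min(G.degree(nb), max_degree)] += 1  # cap very high degrees
    return hist

G = nx.karate_club_graph()
feats = np.stack([ndf_vector(G, v, max_degree=20) for v in G.nodes()])
print(feats.shape)  # (34, 21): one degree-frequency vector per node
```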
- Position-based Hash Embeddings For Scaling Graph Neural Networks [8.87527266373087]
Graph Neural Networks (GNNs) compute node representations by taking into account the topology of the node's ego-network and the features of the ego-network's nodes.
When the nodes do not have high-quality features, GNNs learn an embedding layer to compute node embeddings and use them as input features.
To reduce the memory associated with this embedding layer, hashing-based approaches, commonly used in applications like NLP and recommender systems, can potentially be used.
We present approaches that take advantage of the nodes' position in the graph to dramatically reduce the memory required.
arXiv Detail & Related papers (2021-08-31T22:42:25Z)
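For context, a generic sketch of the hashing trick the summary above builds on; the paper's position-based refinement is not shown, and the hash functions here are illustrative:

```python
# Generic hashed embedding table: each node id is hashed into a small shared
# table by several hash functions and the resulting rows are summed, so memory
# no longer grows with the node count. (Not the paper's position-based scheme.)
import torch
import torch.nn as nn

class HashedEmbedding(nn.Module):
    def __init__(self, table_size, dim, n_hashes=2):
        super().__init__()
        self.table = nn.Embedding(table_size, dim)      # shared, small table
        self.seeds = [(1 + 2 * i, 7 + i) for i in range(n_hashes)]
        self.table_size = table_size

    def forward(self, node_ids):
        # node_ids: (n,) integer ids; sum of n_hashes table lookups per id
        out = 0
        for a, b in self.seeds:
            idx = (a * node_ids + b) % self.table_size  # cheap universal-style hash
            out = out + self.table(idx)
        return out
```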
- Data Augmentation for Graph Convolutional Network on Semi-Supervised Classification [6.619370466850894]
We study the problem of graph data augmentation for the Graph Convolutional Network (GCN).
Specifically, we conduct a cosine-similarity-based cross operation on the original features to create new graph features, including new node attributes.
We also propose an attentional integration model that computes a weighted sum of the hidden node embeddings encoded by these GCNs to obtain the final node embeddings.
arXiv Detail & Related papers (2021-06-16T15:13:51Z)
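One plausible reading of the cosine-similarity cross operation above (the paper's exact operation may differ): pair each node with its most similar other node and blend their features to form the augmented attributes:

```python
# Hedged sketch of a cosine-similarity cross over node features: find each
# node's most similar other node and blend their attributes.
import torch
import torch.nn.functional as F

def cosine_cross_features(x, alpha=0.5):
    # x: (n, d) original node features
    sim = F.cosine_similarity(x.unsqueeze(1), x.unsqueeze(0), dim=2)  # (n, n)
    sim.fill_diagonal_(-1.0)                     # ignore self-similarity
    partner = sim.argmax(dim=1)                  # most similar other node
    return alpha * x + (1 - alpha) * x[partner]  # blended augmented features
```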
- Higher-Order Attribute-Enhancing Heterogeneous Graph Neural Networks [67.25782890241496]
We propose a higher-order Attribute-Enhancing Graph Neural Network (HAEGNN) for heterogeneous network representation learning.
HAEGNN simultaneously incorporates meta-paths and meta-graphs for rich, heterogeneous semantics.
It shows superior performance against the state-of-the-art methods in node classification, node clustering, and visualization.
arXiv Detail & Related papers (2021-04-16T04:56:38Z)
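As a small illustration of the meta-path semantics HAEGNN exploits, composing typed adjacency matrices yields higher-order neighbors; for instance, an author-paper-author (APA) meta-path links authors who share a paper (toy example, not the authors' code):

```python
# Composing typed adjacency matrices realizes a meta-path: here APA, which
# connects authors through co-authored papers.
import numpy as np

A_ap = np.array([[1, 1, 0],    # 2 authors x 3 papers incidence
                 [0, 1, 1]])
A_apa = A_ap @ A_ap.T          # meta-path APA: co-authorship counts
print(A_apa)                   # [[2 1] [1 2]]: off-diagonal = shared papers
```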
- Wasserstein diffusion on graphs with missing attributes [38.153052525001264]
We propose an innovative node representation learning framework, Wasserstein graph diffusion (WGD), to mitigate the problem.
Instead of feature imputation, our method directly learns node representations from the missing-attribute graphs.
arXiv Detail & Related papers (2021-02-06T00:06:51Z)
- Modeling Graph Structure via Relative Position for Text Generation from Knowledge Graphs [54.176285420428776]
We present Graformer, a novel Transformer-based encoder-decoder architecture for graph-to-text generation.
With our novel graph self-attention, the encoding of a node relies on all nodes in the input graph, not only its direct neighbors, facilitating the detection of global patterns.
Graformer learns to weight these node-node relations differently for different attention heads, thus virtually learning differently connected views of the input graph.
arXiv Detail & Related papers (2020-06-16T15:20:04Z)
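A rough sketch, under assumed design choices, of self-attention biased by graph relative position as the Graformer summary describes: all node pairs attend, with a learned per-head bias looked up from their shortest-path distance:

```python
# Illustrative relative-position attention: a learned per-head, per-distance
# bias is added to the attention scores of every node pair.
import torch
import torch.nn as nn

class RelPosGraphAttention(nn.Module):
    def __init__(self, dim, n_heads=4, max_dist=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.dist_bias = nn.Embedding(max_dist + 1, n_heads)  # bias per distance, per head

    def forward(self, h, spd):
        # h: (1, n, dim) node states; spd: (n, n) long tensor of shortest-path
        # distances, clamped to max_dist
        bias = self.dist_bias(spd).permute(2, 0, 1)    # (n_heads, n, n)
        out, _ = self.attn(h, h, h, attn_mask=bias)    # additive attention bias
        return out
```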
This list is automatically generated from the titles and abstracts of the papers on this site.