Wasserstein diffusion on graphs with missing attributes
- URL: http://arxiv.org/abs/2102.03450v1
- Date: Sat, 6 Feb 2021 00:06:51 GMT
- Title: Wasserstein diffusion on graphs with missing attributes
- Authors: Zhixian Chen, Tengfei Ma, Yangqiu Song, Yang Wang
- Abstract summary: We propose an innovative node representation learning framework, Wasserstein graph diffusion (WGD), to mitigate the problem.
Instead of feature imputation, our method directly learns node representations from the missing-attribute graphs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Missing node attributes are a common problem in real-world graphs.
Graph neural networks (GNNs) have proven powerful for graph representation
learning, but they rely heavily on the completeness of graph information. Few
of them account for incomplete node attributes, which can severely degrade
performance in practice. In this paper, we propose an innovative node
representation learning framework, Wasserstein graph diffusion (WGD), to
mitigate this problem. Instead of imputing features, our method learns node
representations directly from attribute-missing graphs. Specifically, we
extend the message passing schema of general graph neural networks to a
Wasserstein space derived from the decomposition of the attribute matrices. We
test WGD on node classification tasks under two settings: all attributes
missing on some nodes, and partial attributes missing on all nodes. In
addition, we find that WGD is well suited to recovering missing values, and we
adapt it to tackle matrix completion problems on user-item graphs.
Experimental results on both tasks demonstrate the superiority of our method.
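The core idea of passing messages over distributions rather than raw feature vectors can be illustrated with a minimal sketch. This is not the authors' actual WGD implementation; the quantile-averaging aggregator, the function name, and the treatment of empty nodes are all illustrative assumptions. It relies on the fact that, for one-dimensional empirical measures, the W2 barycenter is obtained by averaging quantile functions.

```python
import numpy as np

def wasserstein_aggregate(node_feats, adj, mask, num_quantiles=None):
    """Toy 1-D Wasserstein-style neighborhood aggregation with missing attributes.

    node_feats: (N, D) attribute matrix with NaN in missing entries
    adj:        (N, N) binary adjacency matrix (include self-loops if desired)
    mask:       (N, D) boolean, True where the attribute is observed

    Each node is viewed as an empirical distribution over its *observed*
    attribute values, so missing entries are simply ignored rather than
    imputed. Neighbors are combined by averaging quantile vectors, which
    is the exact W2 barycenter for 1-D empirical measures.
    """
    N, D = node_feats.shape
    K = num_quantiles or D
    qs = np.linspace(0.0, 1.0, K)

    # Per-node quantile vector, computed from observed values only.
    quant = np.zeros((N, K))
    for i in range(N):
        vals = node_feats[i][mask[i]]
        if vals.size == 0:
            vals = np.zeros(1)  # fully missing node: degenerate distribution at 0
        quant[i] = np.quantile(vals, qs)

    # Aggregate: mean of neighbors' quantile vectors (1-D W2 barycenter).
    out = np.zeros((N, K))
    for i in range(N):
        nbrs = np.nonzero(adj[i])[0]
        out[i] = quant[nbrs].mean(axis=0)
    return out
```

Because the aggregation operates on quantile vectors, nodes with different numbers of observed attributes are still compared on a common support, which is what makes the distributional view attractive for missing data.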
Related papers
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z)
- Saliency-Aware Regularized Graph Neural Network [39.82009838086267]
We propose the Saliency-Aware Regularized Graph Neural Network (SAR-GNN) for graph classification.
We first estimate the global node saliency by measuring the semantic similarity between the compact graph representation and node features.
Then the learned saliency distribution is leveraged to regularize the neighborhood aggregation of the backbone.
arXiv Detail & Related papers (2024-01-01T13:44:16Z)
- Self-supervised Heterogeneous Graph Variational Autoencoders [11.995393209449357]
Heterogeneous Information Networks (HINs) have recently demonstrated excellent performance in graph mining.
Most existing heterogeneous graph neural networks (HGNNs) ignore the problems of missing attributes, inaccurate attributes and scarce labels for nodes.
We propose a generative self-supervised model SHAVA to address these issues simultaneously.
arXiv Detail & Related papers (2023-11-14T06:15:16Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Graph Decipher: A transparent dual-attention graph neural network to understand the message-passing mechanism for node classification [2.0047096160313456]
We propose a new transparent network called Graph Decipher to investigate the message-passing mechanism.
Our algorithm achieves state-of-the-art performance while imposing a substantially lower computational burden on the node classification task.
arXiv Detail & Related papers (2022-01-04T23:24:00Z)
- Graph Entropy Guided Node Embedding Dimension Selection for Graph Neural Networks [74.26734952400925]
We propose a novel Minimum Graph Entropy (MinGE) algorithm for Node Embedding Dimension Selection (NEDS).
MinGE considers both feature entropy and structure entropy on graphs, which are carefully designed according to the characteristics of the rich information in them.
Experiments with popular Graph Neural Networks (GNNs) on benchmark datasets demonstrate the effectiveness and generalizability of our proposed MinGE.
arXiv Detail & Related papers (2021-05-07T11:40:29Z)
- Learning on Attribute-Missing Graphs [66.76561524848304]
We consider graphs where attributes are available for only some nodes and entirely missing for the others.
Existing graph learning methods, including popular GNNs, cannot provide satisfactory learning performance in this setting.
We develop a novel distribution matching based GNN called structure-attribute transformer (SAT) for attribute-missing graphs.
arXiv Detail & Related papers (2020-11-03T11:09:52Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
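Proximity-driven pooling of the kind described in the last entry can be sketched roughly as follows. This is a simplified illustration, not the paper's actual method: the two-hop proximity score, the `ratio` parameter, and the top-k selection are all assumptions made for the example.

```python
import numpy as np

def proximity_pool(adj, feats, ratio=0.5):
    """Keep the top-k nodes ranked by a simple multi-hop proximity score.

    adj:   (N, N) binary adjacency matrix
    feats: (N, D) node feature matrix
    ratio: fraction of nodes to retain

    The score sums each node's one-hop and two-hop connectivity; the
    pooled graph is the induced subgraph on the highest-scoring nodes.
    """
    two_hop = adj @ adj                     # counts of length-2 walks
    score = adj.sum(axis=1) + two_hop.sum(axis=1)
    k = max(1, int(round(ratio * adj.shape[0])))
    keep = np.argsort(-score)[:k]           # indices of the top-k nodes
    keep.sort()                             # preserve original node order
    return adj[np.ix_(keep, keep)], feats[keep], keep
```

On a 4-node path graph this keeps the two interior nodes, since they score highest on both one-hop and two-hop connectivity; a learned method would replace this fixed score with a trainable proximity measure.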
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.