Self-Supervised Deep Graph Embedding with High-Order Information Fusion
for Community Discovery
- URL: http://arxiv.org/abs/2102.03302v2
- Date: Mon, 8 Feb 2021 06:38:49 GMT
- Title: Self-Supervised Deep Graph Embedding with High-Order Information Fusion
for Community Discovery
- Authors: Shuliang Xu, Shenglan Liu, Lin Feng
- Abstract summary: The proposed algorithm uses a self-supervised mechanism and different high-order information of the graph to train multiple deep graph convolutional neural networks.
The outputs of the multiple networks are fused to extract node representations that capture both the attribute and structure information of the graph.
- Score: 3.6002285517472767
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep graph embedding is an important approach for community
discovery. A deep graph neural network with a self-supervised mechanism can
obtain low-dimensional embedding vectors of nodes from unlabeled and
unstructured graph data. The high-order information of a graph can provide
richer structural information for the representation learning of nodes.
However, most self-supervised graph neural networks use only the adjacency
matrix as the input topology of the graph and cannot capture very high-order
information, since the number of layers in a graph neural network is fairly
limited; with too many layers, the phenomenon of over-smoothing appears. How
to obtain and fuse high-order information of a graph with a shallow graph
neural network is therefore an important problem. In this paper, a deep graph
embedding algorithm with a self-supervised mechanism for community discovery
is proposed. The algorithm uses the self-supervised mechanism and different
high-order information of the graph to train multiple deep graph
convolutional neural networks, whose outputs are fused to extract node
representations that capture both the attribute and structure information of
the graph. In addition, data augmentation and negative sampling are
introduced into the training process to improve the embedding results. The
proposed algorithm and the comparison algorithms are evaluated on five
experimental data sets. The results show that the proposed algorithm
outperforms the comparison algorithms on most of the data sets, demonstrating
that it is an effective algorithm for community discovery.
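The abstract describes training one network per order of structural information and fusing the outputs. The paper's exact architecture and losses are not reproduced here; the following is a minimal NumPy sketch of that fusion idea under assumed choices (normalized k-hop reachability matrices as the high-order inputs, a single ReLU graph-convolution layer per view, and simple averaging as the fusion step — all illustrative, not the authors' implementation):

```python
import numpy as np

def normalize_adj(A):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def high_order_adjs(A, max_order):
    """Normalized k-hop reachability matrices for k = 1..max_order
    (one assumed form of 'high-order information')."""
    return [normalize_adj((np.linalg.matrix_power(A, k) > 0).astype(float))
            for k in range(1, max_order + 1)]

def gcn_layer(A_norm, X, W):
    """A single graph-convolution layer: ReLU(A_norm @ X @ W)."""
    return np.maximum(A_norm @ X @ W, 0.0)

def fused_embedding(A, X, weights):
    """Run one (here: one-layer) GCN per high-order view of the graph,
    then fuse the per-view outputs by simple averaging."""
    views = high_order_adjs(A, max_order=len(weights))
    outs = [gcn_layer(A_k, X, W) for A_k, W in zip(views, weights)]
    return np.mean(outs, axis=0)

# Toy path graph 0-1-2-3 with 2-d node features and three views (1-3 hops).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)[:, :2]
rng = np.random.default_rng(0)
weights = [rng.standard_normal((2, 2)) for _ in range(3)]
Z = fused_embedding(A, X, weights)
print(Z.shape)  # one fused 2-d embedding per node
```

In the paper the fused representations are then clustered to discover communities; the self-supervised losses, data augmentation, and negative sampling mentioned in the abstract are omitted from this sketch.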
Related papers
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z) - NodeFormer: A Scalable Graph Structure Learning Transformer for Node
Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
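NodeFormer's kernelization is beyond a short snippet, but the Gumbel-Softmax relaxation it builds on — a differentiable stand-in for sampling discrete edges — can be sketched as follows (the function name, temperature, and scores are illustrative, not NodeFormer's implementation):

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=None):
    """Relaxed, differentiable categorical sample:
    softmax((logits + Gumbel noise) / tau)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    u = rng.uniform(size=np.shape(logits))
    g = -np.log(-np.log(u + 1e-20) + 1e-20)  # Gumbel(0, 1) noise
    y = (np.asarray(logits) + g) / tau
    y = y - y.max(axis=-1, keepdims=True)    # numerical stability
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)

# Scores for three candidate edges of one node; the output is a soft,
# near-one-hot selection that still admits gradients w.r.t. the scores.
probs = gumbel_softmax(np.array([2.0, 0.5, -1.0]), tau=0.5)
print(probs)  # non-negative entries that sum to 1
```

Lower temperatures push the output closer to a hard one-hot choice; higher temperatures make it smoother and easier to train through.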
arXiv Detail & Related papers (2023-06-14T09:21:15Z) - Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN)
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z) - Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z) - Graph Decipher: A transparent dual-attention graph neural network to
understand the message-passing mechanism for the node classification [2.0047096160313456]
We propose a new transparent network called Graph Decipher to investigate the message-passing mechanism.
Our algorithm achieves state-of-the-art performance while imposing a substantially lower burden under the node classification task.
arXiv Detail & Related papers (2022-01-04T23:24:00Z) - Co-embedding of Nodes and Edges with Graph Neural Networks [13.020745622327894]
Graph embedding is a way to transform and encode graph-structured data from a high-dimensional, non-Euclidean feature space.
CensNet is a general graph embedding framework, which embeds both nodes and edges to a latent feature space.
Our approach achieves or matches the state-of-the-art performance in four graph learning tasks.
arXiv Detail & Related papers (2020-10-25T22:39:31Z) - Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute the performance deterioration of deeper models to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
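Over-smoothing — the issue DAGNN targets, and the same phenomenon the main abstract cites as the reason deep stacks fail — is easy to demonstrate numerically: repeated neighborhood averaging drives all node features toward a common value. A tiny sketch (the graph and feature values are arbitrary toy choices):

```python
import numpy as np

# Toy graph: triangle 0-1-2 plus a pendant node 3 attached to node 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                         # add self-loops
P = A_hat / A_hat.sum(axis=1, keepdims=True)  # row-stochastic propagation

X = np.array([[1.0], [0.0], [0.0], [-1.0]])   # scalar node features
for k in (1, 2, 10, 50):
    Xk = np.linalg.matrix_power(P, k) @ X
    print(k, float(Xk.max() - Xk.min()))      # the spread shrinks with depth
```

After many propagation steps the feature spread collapses toward zero, so nodes become indistinguishable — which is why shallow networks with explicit high-order inputs, or adaptive aggregation as in DAGNN, are used instead of simply stacking layers.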
arXiv Detail & Related papers (2020-07-18T01:11:14Z) - Graph Pooling with Node Proximity for Hierarchical Representation
Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z) - Unsupervised Graph Representation by Periphery and Hierarchical
Information Maximization [18.7475578342125]
The invention of graph neural networks has improved the state of the art for representing both nodes and entire graphs in a vector space.
For entire-graph representation, most existing graph neural networks are trained with a graph classification loss in a supervised way.
We propose an unsupervised graph neural network to generate a vector representation of an entire graph in this paper.
arXiv Detail & Related papers (2020-06-08T15:50:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.