Self-Supervised Graph Representation Learning via Topology
Transformations
- URL: http://arxiv.org/abs/2105.11689v1
- Date: Tue, 25 May 2021 06:11:03 GMT
- Title: Self-Supervised Graph Representation Learning via Topology
Transformations
- Authors: Xiang Gao, Wei Hu, Guo-Jun Qi
- Abstract summary: We present the Topology Transformation Equivariant Representation learning, a general paradigm of self-supervised learning for node representations of graph data.
In experiments, we apply the proposed model to the downstream node and graph classification tasks, and results show that the proposed method outperforms the state-of-the-art unsupervised approaches.
- Score: 61.870882736758624
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present the Topology Transformation Equivariant Representation learning, a
general paradigm of self-supervised learning for node representations of graph
data to enable the wide applicability of Graph Convolutional Neural Networks
(GCNNs). We formalize the proposed model from an information-theoretic
perspective, by maximizing the mutual information between topology
transformations and node representations before and after the transformations.
We derive that maximizing such mutual information can be relaxed to minimizing
the cross entropy between the applied topology transformation and its
estimation from node representations. In particular, we seek to sample a subset
of node pairs from the original graph and flip the edge connectivity between
each pair to transform the graph topology. Then, we self-train a representation
encoder to learn node representations by reconstructing the topology
transformations from the feature representations of the original and
transformed graphs. In experiments, we apply the proposed model to the
downstream node and graph classification tasks, and results show that the
proposed method outperforms the state-of-the-art unsupervised approaches.
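To make the pretext task concrete, below is a minimal PyTorch sketch of the edge-flip procedure the abstract describes. The names (GCNEncoder, flip_edges, pretext_step), the one-layer encoder, and the 2-way flip classifier are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the edge-flip pretext task; names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNEncoder(nn.Module):
    """One-layer GCN: H = ReLU(A_hat @ X @ W)."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, a_hat, x):
        return F.relu(a_hat @ self.lin(x))

def normalize(adj):
    """Symmetrically normalized adjacency with self-loops."""
    a = adj + torch.eye(adj.size(0))
    d = a.sum(dim=1).rsqrt()
    return d[:, None] * a * d[None, :]

def flip_edges(adj, num_pairs):
    """Sample node pairs and flip their connectivity (0 <-> 1)."""
    n = adj.size(0)
    idx = torch.randint(0, n, (num_pairs, 2))
    labels = adj[idx[:, 0], idx[:, 1]].long()   # 1: edge removed, 0: edge added
    adj_t = adj.clone()
    adj_t[idx[:, 0], idx[:, 1]] = 1.0 - labels.float()
    adj_t[idx[:, 1], idx[:, 0]] = 1.0 - labels.float()
    return adj_t, idx, labels

def pretext_step(encoder, classifier, adj, x, num_pairs=256):
    """Predict the applied flip from representations of the original and
    transformed graphs: the cross-entropy relaxation of the MI objective."""
    adj_t, idx, labels = flip_edges(adj, num_pairs)
    h = encoder(normalize(adj), x)
    h_t = encoder(normalize(adj_t), x)
    pair = torch.cat([h[idx[:, 0]], h[idx[:, 1]],
                      h_t[idx[:, 0]], h_t[idx[:, 1]]], dim=-1)
    return F.cross_entropy(classifier(pair), labels)
```

Here classifier can be as simple as nn.Linear(4 * hid_dim, 2); after self-training, the encoder's node representations feed the downstream node or graph classifiers.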
Related papers
- Creating generalizable downstream graph models with random projections [22.690120515637854]
We investigate graph representation learning approaches that enable models to generalize across graphs.
We show that using random projections to estimate multiple powers of the transition matrix allows us to build a set of isomorphism-invariant features.
The resulting features can be used to recover enough information about the local neighborhood of a node to enable inference with relevance competitive to other approaches.
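As a rough illustration of the core idea, the numpy sketch below projects successive powers of the random-walk transition matrix onto a shared random basis. The function name, dimensions, and plain Gaussian projection are assumptions; the paper's construction adds the machinery needed for isomorphism invariance.

```python
# Sketch: node features from random projections of transition-matrix powers.
import numpy as np

def transition_power_features(adj, k_max=3, dim=64, seed=0):
    rng = np.random.default_rng(seed)
    deg = adj.sum(axis=1, keepdims=True)
    t = adj / np.maximum(deg, 1)                   # row-stochastic transition matrix T
    r = rng.standard_normal((adj.shape[0], dim)) / np.sqrt(dim)
    feats, x = [], r
    for _ in range(k_max):
        x = t @ x                                  # iteratively computes T^k @ R
        feats.append(x)
    return np.concatenate(feats, axis=1)           # (n_nodes, k_max * dim)
```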
arXiv Detail & Related papers (2023-02-17T14:27:00Z)
- GrannGAN: Graph annotation generative adversarial networks [72.66289932625742]
We consider the problem of modelling high-dimensional distributions and generating new examples of data with complex relational feature structure coherent with a graph skeleton.
The model we propose tackles the problem of generating the data features constrained by the specific graph structure of each data point by splitting the task into two phases.
In the first phase, it models the distribution of features associated with the nodes of the given graph; in the second, it complements them with edge features conditioned on the node features.
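A schematic of the two-phase split might look like the following module. The class, dimensions, and MLP generators are assumptions, and the adversarial training that gives GrannGAN its name is omitted.

```python
# Phase 1: node features given the graph skeleton; phase 2: edge features
# conditioned on the endpoints' node features. Simplified, non-adversarial sketch.
import torch
import torch.nn as nn

class TwoPhaseGenerator(nn.Module):
    def __init__(self, z_dim, node_dim, edge_dim, hid=128):
        super().__init__()
        self.z_dim = z_dim
        self.node_gen = nn.Sequential(nn.Linear(z_dim, hid), nn.ReLU(),
                                      nn.Linear(hid, node_dim))
        self.edge_gen = nn.Sequential(nn.Linear(2 * node_dim + z_dim, hid), nn.ReLU(),
                                      nn.Linear(hid, edge_dim))

    def forward(self, edge_index, num_nodes):
        x = self.node_gen(torch.randn(num_nodes, self.z_dim))        # phase 1
        src, dst = edge_index
        z_e = torch.randn(src.size(0), self.z_dim)
        e = self.edge_gen(torch.cat([x[src], x[dst], z_e], dim=-1))  # phase 2
        return x, e
```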
arXiv Detail & Related papers (2022-12-01T11:49:07Z)
- Interpolation-based Correlation Reduction Network for Semi-Supervised Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN)
In our method, we improve the discriminative capability of the latent feature by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
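The summary does not spell out the loss, but a generic correlation-reduction term between two views, sketched below, conveys the flavor. This Barlow-Twins-style penalty is an analogue of my own choosing, not the ICRN objective, which additionally exploits interpolation.

```python
# Generic cross-view correlation-reduction loss (rough analogue only).
import torch

def correlation_reduction_loss(h1, h2, off_weight=0.005, eps=1e-8):
    """Push the cross-view correlation matrix toward the identity."""
    h1 = (h1 - h1.mean(0)) / (h1.std(0) + eps)
    h2 = (h2 - h2.mean(0)) / (h2.std(0) + eps)
    c = (h1.T @ h2) / h1.size(0)                      # (d, d) correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()    # keep matched dims correlated
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + off_weight * off_diag
```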
arXiv Detail & Related papers (2022-06-06T14:26:34Z)
- Revisiting Transformation Invariant Geometric Deep Learning: Are Initial Representations All You Need? [80.86819657126041]
We show that transformation-invariant and distance-preserving initial representations are sufficient to achieve transformation invariance.
Specifically, we realize transformation-invariant and distance-preserving initial point representations by modifying multi-dimensional scaling.
We prove that the proposed TinvNN strictly guarantees transformation invariance and is general and flexible enough to be combined with existing neural networks.
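Classical multi-dimensional scaling, which the paper modifies, already yields coordinates that depend only on pairwise distances and are therefore invariant to rotations and translations of the input points, up to an orthogonal ambiguity. A plain numpy version, as a point of reference rather than the paper's modified variant:

```python
# Classical MDS: recover coordinates from an (n, n) Euclidean distance matrix.
import numpy as np

def mds_embedding(dist, dim):
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ (dist ** 2) @ j             # double-centered Gram matrix
    w, v = np.linalg.eigh(b)                   # eigenvalues in ascending order
    w, v = w[::-1][:dim], v[:, ::-1][:, :dim]  # keep the top-dim components
    return v * np.sqrt(np.maximum(w, 0))       # coordinates, one row per point
```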
arXiv Detail & Related papers (2021-12-23T03:52:33Z)
- Heterogeneous Graph Neural Network with Multi-view Representation Learning [16.31723570596291]
We propose a Heterogeneous Graph Neural Network with Multi-View Representation Learning (MV-HetGNN) for heterogeneous graph embedding.
The proposed model consists of node feature transformation, view-specific ego graph encoding and auto multi-view fusion to thoroughly learn complex structural and semantic information for generating comprehensive node representations.
Extensive experiments on three real-world heterogeneous graph datasets show that the proposed MV-HetGNN model consistently outperforms all the state-of-the-art GNN baselines in various downstream tasks.
arXiv Detail & Related papers (2021-08-31T07:18:48Z)
- Equivariance-bridged SO(2)-Invariant Representation Learning using Graph Convolutional Network [0.1657441317977376]
Training a Convolutional Neural Network (CNN) to be robust against rotation has mostly been done with data augmentation.
This paper aims to reduce dependence on data augmentation by achieving structural rotational invariance of the network itself.
Our method achieves the state-of-the-art image classification performance on rotated MNIST and CIFAR-10 images.
arXiv Detail & Related papers (2021-06-18T08:37:45Z)
- Heterogeneous Graph Representation Learning with Relation Awareness [45.14314180743549]
We propose a Relation-aware Heterogeneous Graph Neural Network, namely R-HGNN, to learn node representations on heterogeneous graphs at a fine-grained level.
A dedicated graph convolution component is first designed to learn unique node representations from each relation-specific graph.
A cross-relation message passing module is developed to improve the interactions of node representations across different relations.
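The two components might be combined roughly as in the sketch below: one convolution per relation-specific adjacency, then attention-based fusion across the relation views. The class name, dense adjacencies, and single-layer design are simplifying assumptions, not R-HGNN's actual modules.

```python
# Per-relation graph convolution plus cross-relation attention fusion (sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationAwareEncoder(nn.Module):
    def __init__(self, num_relations, in_dim, hid_dim):
        super().__init__()
        self.rel_lin = nn.ModuleList([nn.Linear(in_dim, hid_dim)
                                      for _ in range(num_relations)])
        self.att = nn.Linear(hid_dim, 1)

    def forward(self, adjs, x):
        # Unique node representations from each relation-specific graph.
        h = torch.stack([F.relu(a @ lin(x))
                         for a, lin in zip(adjs, self.rel_lin)])  # (R, N, d)
        # Cross-relation fusion: attention weights over the relation views.
        alpha = torch.softmax(self.att(h), dim=0)                 # (R, N, 1)
        return (alpha * h).sum(dim=0)                             # (N, d)
```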
arXiv Detail & Related papers (2021-05-24T07:01:41Z)
- Topological Regularization for Graph Neural Networks Augmentation [12.190045459064413]
We propose a feature augmentation method for graph nodes based on topological regularization.
Extensive experiments on a large number of datasets demonstrate the effectiveness of our model.
arXiv Detail & Related papers (2021-04-03T01:37:44Z)
- Joint Network Topology Inference via Structured Fusion Regularization [70.30364652829164]
Joint network topology inference represents a canonical problem of learning multiple graph Laplacian matrices from heterogeneous graph signals.
We propose a general graph estimator based on a novel structured fusion regularization.
We show that the proposed graph estimator enjoys both high computational efficiency and rigorous theoretical guarantee.
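For orientation, a common template for this problem class is the penalized maximum-likelihood form below; the paper's structured fusion penalty is more refined than the pairwise Frobenius term shown here, so treat this only as the generic shape of the objective.

```latex
% Generic joint-Laplacian estimation with a fusion penalty (illustrative):
\min_{\{L_k\}_{k=1}^{K}} \; \sum_{k=1}^{K}
  \Big( \operatorname{tr}(S_k L_k)
        - \log\det\!\big(L_k + \tfrac{1}{n}\mathbf{1}\mathbf{1}^{\top}\big) \Big)
  \; + \; \lambda \sum_{k<l} \lVert L_k - L_l \rVert_F^2
\qquad \text{s.t. } L_k \in \mathcal{L}
```

Here S_k is the sample covariance of the k-th class of graph signals, the rank-one correction handles the Laplacian's null space, and the constraint set collects valid graph Laplacians.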
arXiv Detail & Related papers (2021-03-05T04:42:32Z)
- Building powerful and equivariant graph neural networks with structural message-passing [74.93169425144755]
We propose a powerful and equivariant message-passing framework based on two ideas.
First, we propagate a one-hot encoding of the nodes, in addition to the features, in order to learn a local context matrix around each node.
Second, we propose methods for the parametrization of the message and update functions that ensure permutation equivariance.
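A flattened toy version of the first idea, propagating a one-hot identity alongside the node features, is sketched below; in the actual framework each node carries a full local context matrix, and the names here are assumptions.

```python
# Propagate one-hot node identities with the features (flattened sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructuralMP(nn.Module):
    def __init__(self, in_dim, hid_dim, num_nodes):
        super().__init__()
        self.msg = nn.Linear(num_nodes + in_dim, hid_dim)

    def forward(self, adj, x):
        n = adj.size(0)
        u = torch.cat([torch.eye(n), x], dim=-1)  # identity one-hots + features
        # adj @ u aggregates neighbors' identities and features, giving each
        # node a view of its local context before the learned update.
        return F.relu(self.msg(adj @ u))
```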
arXiv Detail & Related papers (2020-06-26T17:15:16Z)