Graph Contrastive Learning with Generative Adversarial Network
- URL: http://arxiv.org/abs/2308.00535v1
- Date: Tue, 1 Aug 2023 13:28:24 GMT
- Title: Graph Contrastive Learning with Generative Adversarial Network
- Authors: Cheng Wu, Chaokun Wang, Jingcao Xu, Ziyang Liu, Kai Zheng, Xiaowei
Wang, Yang Song, Kun Gai
- Abstract summary: We incorporate graph generative adversarial networks (GANs) to learn the distribution of views for Graph Contrastive Learning (GCL).
We present GACN, a novel Generative Adversarial Contrastive learning Network for graph representation learning.
We show that GACN is able to generate high-quality augmented views for GCL and is superior to twelve state-of-the-art baseline methods.
- Score: 35.564028359355596
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Graph Neural Networks (GNNs) have demonstrated promising results on
exploiting node representations for many downstream tasks through supervised
end-to-end training. To deal with the widespread label scarcity issue in
real-world applications, Graph Contrastive Learning (GCL) is leveraged to train
GNNs with limited or even no labels by maximizing the mutual information
between nodes across augmented views generated from the original graph.
However, most existing literature does not consider the distribution of graphs
during view generation and therefore ignores unseen edges, which our
experiments empirically show can improve GCL's performance. To this end, we
propose to incorporate graph generative
adversarial networks (GANs) to learn the distribution of views for GCL, in
order to i) automatically capture the characteristics of graphs for
augmentations, and ii) jointly train the graph GAN model and the GCL model.
Specifically, we present GACN, a novel Generative Adversarial Contrastive
learning Network for graph representation learning. GACN develops a view
generator and a view discriminator to generate augmented views automatically in
an adversarial style. Then, GACN leverages these views to train a GNN encoder
with two carefully designed self-supervised learning losses, including the
graph contrastive loss and the Bayesian personalized ranking (BPR) loss. Furthermore,
we design an optimization framework to train all GACN modules jointly.
Extensive experiments on seven real-world datasets show that GACN is able to
generate high-quality augmented views for GCL and is superior to twelve
state-of-the-art baseline methods. Notably, GACN surprisingly discovers that
the views generated for data augmentation eventually conform to the well-known
preferential attachment rule in online networks.
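To make the training signals described in the abstract concrete, below is a minimal, self-contained PyTorch sketch of the named ingredients: a learnable view generator that can propose (possibly unseen) edges, a simple GNN encoder, an InfoNCE-style graph contrastive loss between two generated views, and a Bayesian personalized ranking (BPR) loss over observed versus unobserved edges. This is not the authors' implementation; the module names, layer sizes, Gumbel-sigmoid edge relaxation, and the toy graph are illustrative assumptions, and the view discriminator and joint optimization schedule are omitted for brevity.

```python
# Illustrative sketch only: generic contrastive + BPR losses on generated views.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ViewGenerator(nn.Module):
    """Scores every candidate edge and samples a soft augmented adjacency."""
    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, z, temperature=0.5):
        n = z.size(0)
        pair = torch.cat([z.unsqueeze(1).expand(n, n, -1),
                          z.unsqueeze(0).expand(n, n, -1)], dim=-1)
        logits = self.scorer(pair).squeeze(-1)                 # (n, n) edge logits
        # Gumbel-sigmoid style relaxation keeps edge sampling differentiable.
        noise = -torch.log(-torch.log(torch.rand_like(logits) + 1e-9) + 1e-9)
        return torch.sigmoid((logits + noise) / temperature)   # soft adjacency in [0, 1]


class GCNEncoder(nn.Module):
    """One-layer graph convolution: row-normalized adjacency times features."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        adj = adj + torch.eye(adj.size(0), device=adj.device)  # add self-loops
        adj = adj / adj.sum(dim=1, keepdim=True).clamp(min=1e-9)
        return self.lin(adj @ x)


def contrastive_loss(z1, z2, tau=0.5):
    """InfoNCE over two views: the same node in both views is the positive pair."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau
    targets = torch.arange(z1.size(0), device=z1.device)
    return 0.5 * (F.cross_entropy(sim, targets) + F.cross_entropy(sim.t(), targets))


def bpr_loss(z, pos_edges, neg_edges):
    """BPR: observed edges should score higher than sampled unobserved ones."""
    pos = (z[pos_edges[0]] * z[pos_edges[1]]).sum(dim=1)
    neg = (z[neg_edges[0]] * z[neg_edges[1]]).sum(dim=1)
    return -F.logsigmoid(pos - neg).mean()


# Toy usage on a random graph with 8 nodes and 16-dimensional features.
torch.manual_seed(0)
x = torch.randn(8, 16)
generator, encoder = ViewGenerator(16), GCNEncoder(16, 32)
adj1, adj2 = generator(x), generator(x)                        # two augmented views
z1, z2 = encoder(x, adj1), encoder(x, adj2)
pos = torch.tensor([[0, 1, 2], [1, 2, 3]])                     # observed edges (assumed)
neg = torch.tensor([[0, 1, 2], [5, 6, 7]])                     # sampled non-edges
loss = contrastive_loss(z1, z2) + bpr_loss(z1, pos, neg)
loss.backward()
```

In the full GACN framework described above, the two generated views would additionally be judged by a view discriminator and all modules would be trained jointly; this sketch only shows how the contrastive and BPR losses can be computed on views produced by a generator.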
Related papers
- GRE^2-MDCL: Graph Representation Embedding Enhanced via Multidimensional Contrastive Learning [0.0]
Graph representation learning has emerged as a powerful tool for preserving graph topology when mapping nodes to vector representations.
Current graph neural network models face the challenge of requiring extensive labeled data.
We propose Graph Representation Embedding Enhanced via Multidimensional Contrastive Learning.
arXiv Detail & Related papers (2024-09-12T03:09:05Z) - Contrastive Graph Representation Learning with Adversarial Cross-view Reconstruction and Information Bottleneck [5.707725771108279]
We propose an effective Contrastive Graph Representation Learning with Adversarial Cross-view Reconstruction and Information Bottleneck (CGRL) for node classification.
Our method significantly outperforms existing state-of-the-art algorithms.
arXiv Detail & Related papers (2024-08-01T05:45:21Z) - Self-Attention Empowered Graph Convolutional Network for Structure
Learning and Node Embedding [5.164875580197953]
In representation learning on graph-structured data, many popular graph neural networks (GNNs) fail to capture long-range dependencies.
This paper proposes a novel graph learning framework called the graph convolutional network with self-attention (GCN-SA).
The proposed scheme exhibits an exceptional generalization capability in node-level representation learning.
arXiv Detail & Related papers (2024-03-06T05:00:31Z) - DGNN: Decoupled Graph Neural Networks with Structural Consistency
between Attribute and Graph Embedding Representations [62.04558318166396]
Graph neural networks (GNNs) demonstrate a robust capability for representation learning on graphs with complex structures.
A novel GNN framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced to obtain a more comprehensive embedding representation of nodes.
Experimental results conducted on several graph benchmark datasets verify DGNN's superiority in the node classification task.
arXiv Detail & Related papers (2024-01-28T06:43:13Z) - MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z) - Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
arXiv Detail & Related papers (2022-01-17T11:57:29Z) - Adversarial Graph Augmentation to Improve Graph Contrastive Learning [21.54343383921459]
We propose a novel principle, termed adversarial-GCL (AD-GCL), which enables GNNs to avoid capturing redundant information during the training.
We experimentally validate AD-GCL by comparing it with state-of-the-art GCL methods and achieve performance gains of up to 14% in unsupervised, 6% in transfer, and 3% in semi-supervised learning settings.
arXiv Detail & Related papers (2021-06-10T15:34:26Z) - Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet can improve the performance of GNNs significantly and the performance gain becomes larger for more noisy datasets.
arXiv Detail & Related papers (2020-11-13T18:53:21Z) - Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-22T20:13:43Z) - Contrastive and Generative Graph Convolutional Networks for Graph-based
Semi-Supervised Learning [64.98816284854067]
Graph-based Semi-Supervised Learning (SSL) aims to transfer the labels of a handful of labeled data to the remaining massive unlabeled data via a graph.
A novel GCN-based SSL algorithm is presented in this paper to enrich the supervision signals by utilizing both data similarities and graph structure.
arXiv Detail & Related papers (2020-09-15T13:59:28Z) - Self-Constructing Graph Convolutional Networks for Semantic Labeling [23.623276007011373]
We propose a novel architecture called the Self-Constructing Graph (SCG), which makes use of learnable latent variables to generate embeddings.
SCG can automatically obtain optimized non-local context graphs from complex-shaped objects in aerial imagery.
We demonstrate the effectiveness and flexibility of the proposed SCG on the publicly available ISPRS Vaihingen dataset.
arXiv Detail & Related papers (2020-03-15T21:55:24Z)