Adversarial Cross-View Disentangled Graph Contrastive Learning
- URL: http://arxiv.org/abs/2209.07699v1
- Date: Fri, 16 Sep 2022 03:48:39 GMT
- Title: Adversarial Cross-View Disentangled Graph Contrastive Learning
- Authors: Qianlong Wen, Zhongyu Ouyang, Chunhui Zhang, Yiyue Qian, Yanfang Ye,
Chuxu Zhang
- Abstract summary: We introduce ACDGCL, which follows the information bottleneck principle to learn minimal yet sufficient representations from graph data.
We empirically demonstrate that our proposed model outperforms state-of-the-art methods on the graph classification task over multiple benchmark datasets.
- Score: 30.97720522293301
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph contrastive learning (GCL) is a prevalent approach to tackling
the supervision shortage in graph learning tasks. Many recent GCL methods have
been proposed with various manually designed augmentation techniques, aiming to
apply challenging augmentations to the original graph to yield robust
representations. Although many of them achieve remarkable performance, existing
GCL methods still struggle to improve model robustness without risking the loss
of task-relevant information, because they ignore the fact that
augmentation-induced latent factors can be highly entangled with the original
graph, making it difficult to discriminate task-relevant information from
irrelevant information. Consequently, the learned representation is either
brittle or unilluminating. In light of this, we introduce Adversarial
Cross-View Disentangled Graph Contrastive Learning (ACDGCL), which follows the
information bottleneck principle to learn minimal yet sufficient
representations from graph data. Specifically, our proposed model elicits the
augmentation-invariant and augmentation-dependent factors separately. Beyond
the conventional contrastive loss, which guarantees the consistency and
sufficiency of the representations across different contrastive views, we
introduce a cross-view reconstruction mechanism to pursue representation
disentanglement. In addition, an adversarial view is added as a third
contrastive view to enhance model robustness. We empirically demonstrate that
our proposed model outperforms state-of-the-art methods on the graph
classification task over multiple benchmark datasets.
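For intuition, here is a minimal PyTorch-style sketch of the three-view objective the abstract describes. The names (`encoder`, `recon_head`), the even split of each representation into invariant/dependent halves, and the MSE reconstruction target are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.5):
    """Standard InfoNCE loss between two batches of view embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                            # [B, B] similarities
    labels = torch.arange(z1.size(0), device=z1.device)   # diagonal positives
    return F.cross_entropy(logits, labels)

def acdgcl_style_loss(encoder, recon_head, g1, g2, g_adv, alpha=1.0):
    # Split each encoder output into an augmentation-invariant half and an
    # augmentation-dependent half (the 50/50 split is an assumption).
    def split(g):
        h = encoder(g)
        d = h.size(1) // 2
        return h[:, :d], h[:, d:]

    inv1, dep1 = split(g1)
    inv2, dep2 = split(g2)
    inv_a, _ = split(g_adv)

    # Contrastive consistency across the two augmented views and the
    # adversarial third view, enforced on the invariant factors only.
    l_con = info_nce(inv1, inv2) + info_nce(inv1, inv_a) + info_nce(inv2, inv_a)

    # Cross-view reconstruction: rebuild each view's full representation from
    # the OTHER view's invariant factor plus its own dependent factor, which
    # pressures the two factors to carry disjoint information.
    rec1 = recon_head(torch.cat([inv2, dep1], dim=1))
    rec2 = recon_head(torch.cat([inv1, dep2], dim=1))
    l_rec = F.mse_loss(rec1, torch.cat([inv1, dep1], dim=1)) \
          + F.mse_loss(rec2, torch.cat([inv2, dep2], dim=1))

    return l_con + alpha * l_rec
```

Here g1 and g2 are two augmented views of the same batch and g_adv is an adversarially perturbed view; the sketch under the ARIEL entry below shows one generic way such a view can be constructed.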
Related papers
- Uncovering Capabilities of Model Pruning in Graph Contrastive Learning [0.0]
We reformulate the problem of graph contrastive learning via contrasting different model versions rather than augmented views.
We extensively validate our method on a range of graph classification benchmarks under unsupervised and transfer learning settings.
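One way to read that reformulation: contrast embeddings of the same un-augmented graph produced by the full encoder and a pruned copy. A hedged sketch, in which the pruning ratio, the L1 criterion, and the reuse of the `info_nce` helper from the sketch above are all assumptions:

```python
import copy
import torch
import torch.nn.utils.prune as prune

def pruned_view(encoder, amount=0.2):
    """Return a deep copy of the encoder with `amount` of each Linear
    layer's weights removed by L1 magnitude: a second 'model version'
    to contrast against, with no graph augmentation involved."""
    twin = copy.deepcopy(encoder)
    for m in twin.modules():
        if isinstance(m, torch.nn.Linear):
            prune.l1_unstructured(m, name="weight", amount=amount)
    return twin

# The same un-augmented graph g is embedded by both model versions, and the
# two embeddings are pulled together with the usual contrastive loss, e.g.:
#   loss = info_nce(encoder(g), pruned_view(encoder)(g))
```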
arXiv Detail & Related papers (2024-10-27T07:09:31Z)
- Dual Adversarial Perturbators Generate rich Views for Recommendation [16.284670207195056]
AvoGCL emulates curriculum learning by applying adversarial training to graph structures and embedding perturbations.
Experiments on three real-world datasets demonstrate that AvoGCL significantly outperforms the state-of-the-art competitors.
arXiv Detail & Related papers (2024-08-26T15:19:35Z)
- Disentangled Generative Graph Representation Learning [51.59824683232925]
This paper introduces DiGGR (Disentangled Generative Graph Representation Learning), a self-supervised learning framework.
It aims to learn latent disentangled factors and utilize them to guide graph mask modeling.
Experiments on 11 public datasets for two different graph learning tasks demonstrate that DiGGR consistently outperforms many previous self-supervised methods.
arXiv Detail & Related papers (2024-08-24T05:13:02Z)
- Multi-Task Curriculum Graph Contrastive Learning with Clustering Entropy Guidance [25.5510013711661]
We propose the Clustering-guided Curriculum Graph contrastive Learning (CCGL) framework.
CCGL uses clustering entropy as guidance for the subsequent graph augmentation and contrastive learning.
Experimental results demonstrate that CCGL has achieved excellent performance compared to state-of-the-art competitors.
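As a rough illustration of clustering entropy as a guidance signal (one plausible definition; the paper's exact formulation may differ):

```python
import torch

def clustering_entropy(soft_assign, eps=1e-12):
    """soft_assign: [N, K] soft cluster memberships (rows sum to 1).
    Returns per-sample entropy: low values mean confidently clustered
    samples, which a curriculum can schedule earlier; high values flag
    samples whose views may need gentler augmentation."""
    p = soft_assign.clamp_min(eps)
    return -(p * p.log()).sum(dim=1)
```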
arXiv Detail & Related papers (2024-08-22T02:18:47Z)
- GPS: Graph Contrastive Learning via Multi-scale Augmented Views from Adversarial Pooling [23.450755275125577]
Self-supervised graph representation learning has recently shown considerable promise in a range of fields, including bioinformatics and social networks.
We present a novel approach named Graph Pooling ContraSt (GPS) to address these issues.
Motivated by the fact that graph pooling can adaptively coarsen the graph with the removal of redundancy, we rethink graph pooling and leverage it to automatically generate multi-scale positive views.
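The pooling-as-augmentation idea can be sketched with a generic score-based top-k pooling step (illustrative only; GPS's adversarial pooling and multi-scale pairing are more involved, and edge subsetting is omitted here):

```python
import torch

def topk_view(x, scores, ratio):
    """Keep the top `ratio` fraction of nodes by learned score, gating the
    survivors by their sigmoid scores. x: [N, F] node features; scores: [N].
    The result is a coarsened node-level 'view' of the graph."""
    k = max(1, int(ratio * x.size(0)))
    idx = scores.topk(k).indices
    return x[idx] * torch.sigmoid(scores[idx]).unsqueeze(1)

# Multi-scale positives: the same graph pooled at several ratios, each
# treated as a positive of the original in the contrastive objective:
#   views = [topk_view(x, scores, r) for r in (0.9, 0.7, 0.5)]
```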
arXiv Detail & Related papers (2024-01-29T10:00:53Z)
- GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation in the deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
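That estimate builds on the classical influence-function approximation; in its standard form (GIF's graph-specific correction terms for propagated neighbors are omitted here), up-weighting the deleted set $Z_{\mathrm{del}}$ by $\epsilon$ in the training loss and expanding to first order around the trained parameters $\hat{\theta}$ gives:

```latex
% First-order influence-function estimate of the parameter change when the
% deleted set Z_del is up-weighted by epsilon in the training loss; choosing
% epsilon = -1/n approximates removing Z_del without retraining.
\hat{\theta}_{\epsilon} - \hat{\theta}
  \;\approx\; -\,\epsilon\, H_{\hat{\theta}}^{-1}
  \sum_{z \in Z_{\mathrm{del}}} \nabla_{\theta} L(z, \hat{\theta}),
\qquad
H_{\hat{\theta}} \;=\; \frac{1}{n} \sum_{i=1}^{n}
  \nabla_{\theta}^{2} L(z_i, \hat{\theta})
```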
arXiv Detail & Related papers (2023-04-06T03:02:54Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- ACTIVE: Augmentation-Free Graph Contrastive Learning for Partial Multi-View Clustering [52.491074276133325]
We propose an augmentation-free graph contrastive learning framework to solve the problem of partial multi-view clustering.
The proposed approach elevates instance-level contrastive learning and missing data inference to the cluster-level, effectively mitigating the impact of individual missing data on clustering.
arXiv Detail & Related papers (2022-03-01T02:32:25Z)
- Adversarial Graph Contrastive Learning with Information Regularization [51.14695794459399]
Contrastive learning is an effective method in graph representation learning.
Data augmentation on graphs is far less intuitive, and it is much harder to provide high-quality contrastive samples.
We propose a simple but effective method, Adversarial Graph Contrastive Learning (ARIEL).
It consistently outperforms the current graph contrastive learning methods in the node classification task over various real-world datasets.
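A generic way to build an adversarial contrastive view is one gradient-ascent step on the contrastive loss itself; the sketch below is an FGSM-style perturbation of node features (ARIEL's full attack, structure perturbation, and information regularization are not reproduced, and `info_nce` is the helper defined in the first sketch above):

```python
import torch

def adversarial_view(encoder, anchor_emb, x, step=0.01):
    """One FGSM-style step on node features x: perturb in the direction
    that INCREASES the contrastive loss against the (detached) clean
    anchor embedding, yielding a hard positive view."""
    delta = torch.zeros_like(x, requires_grad=True)
    loss = info_nce(anchor_emb.detach(), encoder(x + delta))
    loss.backward()
    return (x + step * delta.grad.sign()).detach()
```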
arXiv Detail & Related papers (2022-02-14T05:54:48Z)
- Diversified Multiscale Graph Learning with Graph Self-Correction [55.43696999424127]
We propose a diversified multiscale graph learning model equipped with two core ingredients: a graph self-correction (GSC) mechanism to generate informative embedded graphs, and a diversity boosting regularizer (DBR) to achieve a comprehensive characterization of the input graph.
Experiments on popular graph classification benchmarks show that the proposed GSC mechanism leads to significant improvements over state-of-the-art graph pooling methods.
arXiv Detail & Related papers (2021-03-17T16:22:24Z)
- Graph Contrastive Learning with Adaptive Augmentation [23.37786673825192]
We propose a novel graph contrastive representation learning method with adaptive augmentation.
Specifically, we design augmentation schemes based on node centrality measures to highlight important connective structures.
Our proposed method consistently outperforms existing state-of-the-art baselines and even surpasses some supervised counterparts.
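A centrality-adaptive edge-dropping step in the spirit of that scheme might look like the following (the log-scaling and normalization are one simple choice; the paper's exact drop probabilities differ):

```python
import torch

def adaptive_edge_mask(edge_index, centrality, p_max=0.7, eps=1e-12):
    """edge_index: [2, E] endpoints; centrality: [N] float node centrality
    scores (e.g., degree or PageRank). Edges whose endpoints have low
    centrality are dropped with higher probability, so important
    connective structure tends to survive augmentation."""
    s = centrality.log1p()
    w = (s[edge_index[0]] + s[edge_index[1]]) / 2            # edge importance
    drop_p = ((w.max() - w) / (w.max() - w.mean() + eps)).clamp(max=p_max)
    keep = torch.rand_like(drop_p) >= drop_p
    return edge_index[:, keep]
```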
arXiv Detail & Related papers (2020-10-27T15:12:21Z)