GPS: Graph Contrastive Learning via Multi-scale Augmented Views from
Adversarial Pooling
- URL: http://arxiv.org/abs/2401.16011v1
- Date: Mon, 29 Jan 2024 10:00:53 GMT
- Title: GPS: Graph Contrastive Learning via Multi-scale Augmented Views from
Adversarial Pooling
- Authors: Wei Ju, Yiyang Gu, Zhengyang Mao, Ziyue Qiao, Yifang Qin, Xiao Luo,
Hui Xiong, and Ming Zhang
- Abstract summary: Self-supervised graph representation learning has recently shown considerable promise in a range of fields, including bioinformatics and social networks.
We present a novel approach named Graph Pooling ContraSt (GPS) to address these issues.
Motivated by the fact that graph pooling can adaptively coarsen the graph with the removal of redundancy, we rethink graph pooling and leverage it to automatically generate multi-scale positive views.
- Score: 23.450755275125577
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised graph representation learning has recently shown considerable
promise in a range of fields, including bioinformatics and social networks. A
large number of graph contrastive learning approaches have shown promising
performance for representation learning on graphs, which train models by
maximizing agreement between original graphs and their augmented views (i.e.,
positive views). Unfortunately, these methods usually involve pre-defined
augmentation strategies based on the knowledge of human experts. Moreover,
these strategies may fail to generate challenging positive views to provide
sufficient supervision signals. In this paper, we present a novel approach
named Graph Pooling ContraSt (GPS) to address these issues. Motivated by the
fact that graph pooling can adaptively coarsen the graph with the removal of
redundancy, we rethink graph pooling and leverage it to automatically generate
multi-scale positive views with varying emphasis on providing challenging
positives and preserving semantics, i.e., strongly-augmented view and
weakly-augmented view. Then, we incorporate both views into a joint contrastive
learning framework with similarity learning and consistency learning, where our
pooling module is adversarially trained with respect to the encoder for
adversarial robustness. Experiments on twelve datasets on both graph
classification and transfer learning tasks verify the superiority of the
proposed method over its counterparts.
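The "maximizing agreement between views" objective described in the abstract is commonly instantiated as an InfoNCE/NT-Xent-style loss between the two augmented views. The following is a minimal NumPy sketch of such a loss, not the authors' implementation; the temperature value and in-batch negative sampling are illustrative assumptions:

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent-style contrastive loss between two batches of view embeddings.

    z1, z2: (n, d) graph-level embeddings of the weakly- and
    strongly-augmented views; row i of z1 and row i of z2 form a
    positive pair, and all other rows in the batch act as negatives.
    """
    # L2-normalize so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                     # (n, n) scaled similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # -log p(positive | anchor)
```

Under this loss, embeddings of matched views are pulled together while mismatched pairs in the batch are pushed apart; harder (more strongly augmented) positives yield a larger training signal.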
Related papers
- Disentangled Generative Graph Representation Learning [51.59824683232925]
This paper introduces DiGGR (Disentangled Generative Graph Representation Learning), a self-supervised learning framework.
It aims to learn latent disentangled factors and utilize them to guide graph mask modeling.
Experiments on 11 public datasets for two different graph learning tasks demonstrate that DiGGR consistently outperforms many previous self-supervised methods.
arXiv Detail & Related papers (2024-08-24T05:13:02Z)
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- Adversarial Graph Contrastive Learning with Information Regularization [51.14695794459399]
Contrastive learning is an effective method in graph representation learning.
Data augmentation on graphs is far less intuitive, making it much harder to provide high-quality contrastive samples.
We propose a simple but effective method, Adversarial Graph Contrastive Learning (ARIEL).
It consistently outperforms the current graph contrastive learning methods in the node classification task over various real-world datasets.
arXiv Detail & Related papers (2022-02-14T05:54:48Z)
- Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined as Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
arXiv Detail & Related papers (2022-02-07T08:04:59Z)
- Learning Robust Representation through Graph Adversarial Contrastive Learning [6.332560610460623]
Existing studies show that node representations generated by graph neural networks (GNNs) are vulnerable to adversarial attacks.
We propose a novel Graph Adversarial Contrastive Learning framework (GraphACL) by introducing adversarial augmentations into graph self-supervised learning.
arXiv Detail & Related papers (2022-01-31T07:07:51Z)
- Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation [18.671374133506838]
We propose a novel unsupervised gradient-based adversarial attack that does not rely on labels for graph contrastive learning.
Our attack outperforms unsupervised baseline attacks and has comparable performance with supervised attacks in multiple downstream tasks.
arXiv Detail & Related papers (2022-01-20T03:32:21Z)
- Dual Space Graph Contrastive Learning [82.81372024482202]
We propose a novel graph contrastive learning method, namely Dual Space Graph Contrastive (DSGC) Learning.
Since both spaces have their own advantages to represent graph data in the embedding spaces, we hope to utilize graph contrastive learning to bridge the spaces and leverage advantages from both sides.
arXiv Detail & Related papers (2022-01-19T04:10:29Z)
- Cross-view Self-Supervised Learning on Heterogeneous Graph Neural Network via Bootstrapping [0.0]
Heterogeneous graph neural networks can effectively represent the information in heterogeneous graphs.
In this paper, we introduce a method that can generate good representations without generating a large number of pairs.
The proposed model showed state-of-the-art performance compared with other methods on various real-world datasets.
arXiv Detail & Related papers (2022-01-10T13:36:05Z)
- Self-Supervised Graph Learning with Proximity-based Views and Channel Contrast [4.761137180081091]
Graph neural networks (GNNs) use neighborhood aggregation as a core component that results in feature smoothing among nodes in proximity.
To tackle this problem, we strengthen the graph with two additional graph views, in which nodes are directly linked to those with the most similar features or local structures.
We propose a method that aims to maximize the agreement between representations across generated views and the original graph.
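The feature-similarity view described above links each node directly to the nodes whose features are most similar. A minimal NumPy sketch of such a view construction follows; the use of cosine similarity and the value of k are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def knn_feature_view(X, k=2):
    """Build an adjacency matrix linking each node to its k most
    feature-similar nodes (cosine similarity), as an extra graph view.

    X: (n, d) node-feature matrix. Returns a symmetric (n, n) 0/1 matrix
    with no self-loops.
    """
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = Xn @ Xn.T                       # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)        # exclude self-loops
    n = X.shape[0]
    A = np.zeros((n, n), dtype=int)
    idx = np.argsort(-sim, axis=1)[:, :k] # top-k neighbors per node
    rows = np.repeat(np.arange(n), k)
    A[rows, idx.ravel()] = 1
    return np.maximum(A, A.T)             # symmetrize the edge set
```

An analogous view over local structures could replace the feature matrix with structural descriptors (e.g., degree statistics) before the same kNN step.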
arXiv Detail & Related papers (2021-06-07T15:38:36Z)
- Model-Agnostic Graph Regularization for Few-Shot Learning [60.64531995451357]
We present a comprehensive study on graph embedded few-shot learning.
We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels.
Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS.
arXiv Detail & Related papers (2021-02-14T05:28:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.