Uncovering Capabilities of Model Pruning in Graph Contrastive Learning
- URL: http://arxiv.org/abs/2410.20356v1
- Date: Sun, 27 Oct 2024 07:09:31 GMT
- Title: Uncovering Capabilities of Model Pruning in Graph Contrastive Learning
- Authors: Junran Wu, Xueyuan Chen, Shangzhe Li
- Abstract summary: We reformulate the problem of graph contrastive learning via contrasting different model versions rather than augmented views.
We extensively validate our method on various benchmarks regarding graph classification via unsupervised and transfer learning.
- Abstract: Graph contrastive learning has achieved great success in pre-training graph neural networks without ground-truth labels. Leading graph contrastive learning methods follow the classical scheme of contrastive learning, forcing the model to identify the essential information from augmented views. However, general augmented views are produced via random corruption or learning, which inevitably alters the semantics. Although domain-knowledge-guided augmentations alleviate this issue, the generated views are domain-specific and undermine generalization. In this work, motivated by the strong representation ability of sparse models obtained from pruning, we reformulate the problem of graph contrastive learning via contrasting different model versions rather than augmented views. We first theoretically reveal the superiority of model pruning over data augmentations. In practice, we take the original graph as input and dynamically generate a perturbed graph encoder to contrast with the original encoder by pruning its transformation weights. Furthermore, since node embeddings remain intact in our method, we can develop a local contrastive loss to tackle the hard negative samples that disturb model training. We extensively validate our method on various benchmarks for graph classification via unsupervised and transfer learning. Compared to state-of-the-art (SOTA) works, the proposed method consistently achieves better performance.
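To make the scheme concrete, here is a minimal PyTorch sketch of the idea the abstract describes: contrasting an encoder with a dynamically pruned copy of itself on the same, unaugmented input. It is an illustration under stated assumptions, not the authors' implementation; `prune_encoder` and `contrastive_loss` are hypothetical names, and global magnitude pruning of linear weights plus an NT-Xent-style objective stand in for the paper's exact pruning scheme and loss.

```python
# Sketch only (hypothetical names): contrast a GNN encoder with a pruned
# copy of itself, so the "perturbed view" lives in model space, not data space.
import copy
import torch
import torch.nn.functional as F

def prune_encoder(encoder: torch.nn.Module, ratio: float = 0.2) -> torch.nn.Module:
    """Return a copy of `encoder` with the smallest-magnitude weights of
    every linear transformation zeroed out (simple magnitude pruning)."""
    pruned = copy.deepcopy(encoder)
    for module in pruned.modules():
        if isinstance(module, torch.nn.Linear):
            w = module.weight.data
            k = int(w.numel() * ratio)
            if k > 0:
                threshold = w.abs().flatten().kthvalue(k).values
                w[w.abs() <= threshold] = 0.0
    return pruned

def contrastive_loss(z_orig: torch.Tensor, z_pruned: torch.Tensor,
                     tau: float = 0.5) -> torch.Tensor:
    """NT-Xent-style loss: embeddings of the same graph produced by the
    original and the pruned encoder form the positive pair."""
    z1 = F.normalize(z_orig, dim=1)
    z2 = F.normalize(z_pruned, dim=1)
    sim = z1 @ z2.t() / tau                               # (B, B) similarities
    labels = torch.arange(z1.size(0), device=sim.device)  # positives on diagonal
    return F.cross_entropy(sim, labels)

# Per-step usage (sketch): the pruned copy is regenerated each iteration.
#   z_orig   = encoder(batch)                 # original, unaugmented graphs
#   z_pruned = prune_encoder(encoder)(batch)  # pruned-model view of the same graphs
#   loss     = contrastive_loss(z_orig, z_pruned)
```

Because both encoders see the same unaugmented graph, node embeddings stay intact, which is what makes a local contrastive loss over hard negative samples feasible in this setup.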
Related papers
- GPS: Graph Contrastive Learning via Multi-scale Augmented Views from Adversarial Pooling [23.450755275125577]
Self-supervised graph representation learning has recently shown considerable promise in a range of fields, including bioinformatics and social networks.
We present a novel approach named Graph Pooling ContraSt (GPS) to address the issues of existing augmentation strategies.
Motivated by the fact that graph pooling can adaptively coarsen the graph with the removal of redundancy, we rethink graph pooling and leverage it to automatically generate multi-scale positive views.
arXiv Detail & Related papers (2024-01-29T10:00:53Z) - Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z) - Coarse-to-Fine Contrastive Learning on Graphs [38.41992365090377]
A variety of graph augmentation strategies have been employed to learn node representations in a self-supervised manner.
We introduce a self-ranking paradigm to ensure that the discriminative information among different nodes can be maintained.
Experiment results on various benchmark datasets verify the effectiveness of our algorithm.
arXiv Detail & Related papers (2022-12-13T08:17:20Z) - ARIEL: Adversarial Graph Contrastive Learning [51.14695794459399]
ARIEL consistently outperforms the current graph contrastive learning methods for both node-level and graph-level classification tasks.
ARIEL is more robust in the face of adversarial attacks.
arXiv Detail & Related papers (2022-08-15T01:24:42Z) - Latent Augmentation For Better Graph Self-Supervised Learning [20.082614919182692]
We argue that predictive models equipped with latent augmentations and a powerful decoder could achieve comparable or even better representation power than contrastive models.
A novel graph decoder named Wiener Graph Deconvolutional Network is correspondingly designed to perform information reconstruction from augmented latent representations.
arXiv Detail & Related papers (2022-06-26T17:41:59Z) - Similarity-aware Positive Instance Sampling for Graph Contrastive Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
In addition, we develop an adaptive node-level pre-training method that dynamically masks nodes to distribute them evenly in the graph.
arXiv Detail & Related papers (2022-06-23T20:12:51Z) - Adversarial Graph Contrastive Learning with Information Regularization [51.14695794459399]
Contrastive learning is an effective method in graph representation learning.
Data augmentation on graphs, however, is far less intuitive, and it is much harder to produce high-quality contrastive samples.
We propose a simple but effective method, Adversarial Graph Contrastive Learning (ARIEL).
It consistently outperforms the current graph contrastive learning methods in the node classification task over various real-world datasets.
arXiv Detail & Related papers (2022-02-14T05:54:48Z) - Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
arXiv Detail & Related papers (2022-02-07T08:04:59Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning, showing that random augmentations naturally induce stochastic encoders.
Our proposed method represents each node by a distribution in the latent space, in contrast to existing techniques which embed each node as a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z)