Why Does Dropping Edges Usually Outperform Adding Edges in Graph Contrastive Learning?
- URL: http://arxiv.org/abs/2412.08128v4
- Date: Wed, 08 Jan 2025 06:35:45 GMT
- Title: Why Does Dropping Edges Usually Outperform Adding Edges in Graph Contrastive Learning?
- Authors: Yanchen Xu, Siqi Huang, Hongyuan Zhang, Xuelong Li
- Abstract summary: We introduce a new metric, namely Error Passing Rate (EPR), to quantify how well a graph fits the network.
Inspired by the theoretical conclusions and the idea of positive-incentive noise, we propose a novel GCL algorithm, Error-PAssing-based Graph Contrastive Learning (EPAGCL).
We generate views by adding and dropping edges based on the weights derived from EPR.
- Score: 54.44813218411879
- Abstract: Graph contrastive learning (GCL) has been widely used as an effective self-supervised learning method for graph representation learning. However, how to apply adequate and stable graph augmentation to generate proper views for contrastive learning remains an essential problem. Dropping edges is a primary augmentation in GCL, while adding edges is not a common method due to its unstable performance. To the best of our knowledge, there is no theoretical analysis of why dropping edges usually outperforms adding edges. To answer this question, we introduce a new metric, namely Error Passing Rate (EPR), to quantify how well a graph fits the network. Inspired by the theoretical conclusions and the idea of positive-incentive noise, we propose a novel GCL algorithm, Error-PAssing-based Graph Contrastive Learning (EPAGCL), which uses both edge adding and edge dropping as its augmentations. Specifically, we generate views by adding and dropping edges based on the weights derived from EPR. Extensive experiments on various real-world datasets are conducted to validate the correctness of our theoretical analysis and the effectiveness of our proposed algorithm. Our code is available at: https://github.com/hyzhang98/EPAGCL.
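The abstract describes the view-generation step concretely enough to sketch, even though the EPR computation itself is paper-specific. The following is a minimal, hypothetical NumPy sketch of weight-guided edge dropping and adding; the function name, the candidate-edge input, and the rate-scaling scheme are illustrative assumptions rather than the authors' implementation, and in EPAGCL the per-edge weights would be derived from EPR.

```python
import numpy as np

def generate_view(edge_index, drop_weights, add_candidates, add_weights,
                  drop_rate=0.2, add_rate=0.1, rng=None):
    """Sample one augmented view by dropping existing edges and adding
    candidate edges in proportion to per-edge weights.

    edge_index:     (2, E) array of existing edges.
    drop_weights:   (E,) scores; higher means more likely to be dropped.
    add_candidates: (2, C) array of non-edges considered for insertion.
    add_weights:    (C,) scores; higher means more likely to be added.
    In EPAGCL both weight vectors would come from the EPR analysis;
    here they are opaque inputs (an assumption, not the paper's code).
    """
    rng = rng if rng is not None else np.random.default_rng()

    # Rescale scores into per-edge probabilities whose mean matches the
    # target rate: roughly drop_rate * E edges removed, add_rate * C added.
    drop_p = np.clip(drop_rate * drop_weights / drop_weights.mean(), 0.0, 1.0)
    add_p = np.clip(add_rate * add_weights / add_weights.mean(), 0.0, 1.0)

    kept = edge_index[:, rng.random(edge_index.shape[1]) >= drop_p]
    added = add_candidates[:, rng.random(add_candidates.shape[1]) < add_p]
    return np.concatenate([kept, added], axis=1)
```

Calling generate_view twice with independent randomness yields the two correlated views that a contrastive objective then pulls together per node.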
Related papers
- Edge Contrastive Learning: An Augmentation-Free Graph Contrastive Learning Model [18.02317423788033]
Graph contrastive learning (GCL) aims to learn representations from unlabeled graph data in a self-supervised manner.
One of the primary obstacles of edge-based GCL is its heavy computational burden.
We propose Augmentation-Free Edge Contrastive Learning (AFECL) to achieve edge-edge contrast.
arXiv Detail & Related papers (2024-12-15T06:16:01Z) - ADEdgeDrop: Adversarial Edge Dropping for Robust Graph Neural Networks [53.41164429486268]
Graph Neural Networks (GNNs) have exhibited the powerful ability to gather graph-structured information from neighborhood nodes.
The performance of GNNs is limited by poor generalization and fragile robustness caused by noisy and redundant graph data.
We propose a novel adversarial edge-dropping method (ADEdgeDrop) that leverages an adversarial edge predictor to guide the removal of edges.
arXiv Detail & Related papers (2024-03-14T08:31:39Z) - Adversarial Learning Data Augmentation for Graph Contrastive Learning in
Recommendation [56.10351068286499]
We propose Learnable Data Augmentation for Graph Contrastive Learning (LDA-GCL).
Our methods include data augmentation learning and graph contrastive learning, which follow the InfoMin and InfoMax principles, respectively (a minimal sketch of the underlying contrastive objective appears after this list).
In implementation, our methods optimize the adversarial loss function to learn data augmentation and effective representations of users and items.
arXiv Detail & Related papers (2023-02-05T06:55:51Z) - Are All Edges Necessary? A Unified Framework for Graph Purification [6.795209119198288]
Not all edges in a graph are necessary for the training of machine learning models.
In this paper, we provide a method that drops edges in order to purify the graph data from a new perspective.
arXiv Detail & Related papers (2022-11-09T20:28:25Z) - Graph Contrastive Learning with Implicit Augmentations [36.57536688367965]
Implicit Graph Contrastive Learning (iGCL) uses augmentations in a latent space learned by a Variational Graph Auto-Encoder that reconstructs the graph's topological structure.
Experimental results on both graph-level and node-level tasks show that the proposed method achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-11-07T17:34:07Z) - Reinforced Causal Explainer for Graph Neural Networks [112.57265240212001]
Explainability is crucial for probing graph neural networks (GNNs).
We propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer).
RC-Explainer generates faithful and concise explanations and generalizes better to unseen graphs.
arXiv Detail & Related papers (2022-04-23T09:13:25Z) - Training Robust Graph Neural Networks with Topology Adaptive Edge
Dropping [116.26579152942162]
Graph neural networks (GNNs) are processing architectures that exploit graph structural information to model representations from network data.
Despite their success, GNNs suffer from sub-optimal generalization performance given limited training data.
This paper proposes Topology Adaptive Edge Dropping to improve generalization performance and learn robust GNN models.
arXiv Detail & Related papers (2021-06-05T13:20:36Z) - Unsupervised Graph Embedding via Adaptive Graph Learning [85.28555417981063]
Graph autoencoders (GAEs) are powerful tools in representation learning for graph embedding.
This paper proposes two novel unsupervised graph embedding methods: unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE).
Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin in node clustering, node classification, and graph visualization tasks.
arXiv Detail & Related papers (2020-03-10T02:33:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences arising from its use.