Simple and Asymmetric Graph Contrastive Learning without Augmentations
- URL: http://arxiv.org/abs/2310.18884v3
- Date: Sat, 24 Feb 2024 18:08:28 GMT
- Title: Simple and Asymmetric Graph Contrastive Learning without Augmentations
- Authors: Teng Xiao, Huaisheng Zhu, Zhengyu Chen, Suhang Wang
- Abstract summary: Asymmetric Contrastive Learning for Graphs (GraphACL) is easy to implement and does not rely on graph augmentations or homophily assumptions.
Experimental results show that the simple GraphACL significantly outperforms state-of-the-art graph contrastive learning and self-supervised learning methods on homophilic and heterophilic graphs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Contrastive Learning (GCL) has shown superior performance in
representation learning on graph-structured data. Despite this success, most
existing GCL methods rely on prefabricated graph augmentations and homophily
assumptions. Thus, they fail to generalize well to heterophilic graphs where
connected nodes may have different class labels and dissimilar features. In
this paper, we study the problem of conducting contrastive learning on
homophilic and heterophilic graphs. We find that we can achieve promising
performance simply by considering an asymmetric view of the neighboring nodes.
The resulting simple algorithm, Asymmetric Contrastive Learning for Graphs
(GraphACL), is easy to implement and does not rely on graph augmentations or
homophily assumptions. We provide theoretical and empirical evidence that
GraphACL can capture one-hop local neighborhood information and two-hop
monophily similarity, which are both important for modeling heterophilic
graphs. Experimental results show that the simple GraphACL significantly
outperforms state-of-the-art graph contrastive learning and self-supervised
learning methods on homophilic and heterophilic graphs. The code of GraphACL is
available at https://github.com/tengxiao1/GraphACL.
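To make the asymmetric idea concrete, below is a minimal, hedged sketch of a neighbor-predictive contrastive loss in the spirit of the abstract: each node, passed through an extra predictor network (the asymmetric part), should match its one-hop neighbors, with all nodes serving as contrast candidates. The predictor architecture, temperature, and toy data are illustrative assumptions; the linked repository contains the authors' reference implementation.

```python
# Hedged sketch of an asymmetric, augmentation-free contrastive loss in
# the spirit of GraphACL. The predictor architecture, temperature, and
# toy data are assumptions; see the official repository for the authors'
# exact implementation.
import torch
import torch.nn.functional as F
from torch import nn

class AsymmetricNeighborLoss(nn.Module):
    def __init__(self, dim: int, tau: float = 0.5):
        super().__init__()
        # Asymmetry: only the anchor side passes through this predictor.
        self.predictor = nn.Sequential(
            nn.Linear(dim, dim), nn.PReLU(), nn.Linear(dim, dim))
        self.tau = tau

    def forward(self, z: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # z: [N, d] embeddings from any encoder, single view, no augmentation.
        # edge_index: [2, E] edges (anchor, neighbor).
        p = F.normalize(self.predictor(z), dim=-1)  # predicted (anchor) view
        t = F.normalize(z, dim=-1)                  # target view, no predictor
        src, dst = edge_index
        logits = p[src] @ t.T / self.tau            # [E, N] similarities
        # The one-hop neighbor is the positive; all nodes are candidates.
        return F.cross_entropy(logits, dst)

# Toy usage: a 4-node cycle.
z = torch.randn(4, 16, requires_grad=True)
edges = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
loss = AsymmetricNeighborLoss(16)(z, edges)
loss.backward()
```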
Related papers
- Robust Graph Structure Learning under Heterophily [12.557639223778722]
We propose a novel robust graph structure learning method to learn a high-quality graph from heterophilic data for downstream tasks.
We first apply a high-pass filter to make each node more distinguishable from its neighbors by encoding structure information into the node features.
Then, we learn a robust graph with an adaptive norm characterizing different levels of noise (a minimal high-pass filtering sketch follows this entry).
arXiv Detail & Related papers (2024-03-06T12:29:13Z)
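A hedged sketch of the high-pass filtering step described in this entry: a common choice is X - D^{-1}AX, which keeps each node's deviation from its neighborhood mean. Whether the paper uses exactly this random-walk normalization is an assumption here.

```python
# Minimal sketch of high-pass graph filtering: X_hp = X - D^{-1} A X,
# i.e., each node keeps the part of its features that deviates from the
# mean of its neighbors. The random-walk normalization is an assumed
# choice, not necessarily the paper's.
import torch

def high_pass_filter(x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
    # x: [N, d] node features; adj: [N, N] dense 0/1 adjacency, no self-loops.
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)  # guard isolated nodes
    neighbor_mean = (adj @ x) / deg                    # D^{-1} A X
    return x - neighbor_mean                           # high-frequency residual

# Two connected nodes with opposite features become more distinctive.
adj = torch.tensor([[0.0, 1.0], [1.0, 0.0]])
x = torch.tensor([[1.0], [-1.0]])
print(high_pass_filter(x, adj))  # tensor([[ 2.], [-2.]])
```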
- HomoGCL: Rethinking Homophily in Graph Contrastive Learning [64.85392028383164]
HomoGCL is a model-agnostic framework that expands the positive set with neighbor nodes, weighted by neighbor-specific significances (a weighted-positives sketch follows this entry).
We show that HomoGCL yields multiple state-of-the-art results across six public datasets.
arXiv Detail & Related papers (2023-06-16T04:06:52Z)
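A hedged reading of "expanding the positive set with neighbor-specific significances": in a two-view InfoNCE loss, neighbors are added as extra positives weighted by a significance score. HomoGCL derives these scores from soft clustering; the sketch below takes them as given, which is a simplification.

```python
# Hedged sketch of expanding InfoNCE positives with weighted neighbors.
# HomoGCL estimates neighbor significances via soft clustering; here the
# weights `w` are simply given, a simplification for illustration.
import torch
import torch.nn.functional as F

def weighted_neighbor_infonce(z1, z2, edge_index, w, tau=0.5):
    # z1, z2: [N, d] embeddings of the same nodes under two views.
    # edge_index: [2, E] edges; w: [E] neighbor significances in [0, 1].
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    log_prob = F.log_softmax(z1 @ z2.T / tau, dim=1)   # [N, N] cross-view
    anchor_term = -log_prob.diagonal().mean()          # usual self-positives
    src, dst = edge_index                              # extra neighbor positives
    neighbor_term = -(w * log_prob[src, dst]).sum() / w.sum().clamp(min=1e-8)
    return anchor_term + neighbor_term

# Toy usage with made-up significances.
z1, z2 = torch.randn(4, 8), torch.randn(4, 8)
edges = torch.tensor([[0, 1, 2], [1, 2, 3]])
print(weighted_neighbor_infonce(z1, z2, edges, torch.tensor([0.9, 0.5, 0.1])))
```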
- Graph Contrastive Learning under Heterophily via Graph Filters [51.46061703680498]
Graph contrastive learning (CL) methods learn node representations in a self-supervised manner by maximizing the similarity between the augmented node representations obtained via a GNN-based encoder.
In this work, we propose an effective graph CL method, namely HLCL, for learning graph representations under heterophily (a filtered-views sketch follows this entry).
Our extensive experiments show that HLCL outperforms state-of-the-art graph CL methods on benchmark datasets with heterophily, as well as large-scale real-world graphs, by up to 7%, and outperforms graph supervised learning methods on datasets with heterophily by up to 10%.
arXiv Detail & Related papers (2023-03-11T08:32:39Z)
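A hedged sketch of replacing augmented views with filter-based views, as the entry above suggests: a low-pass (neighborhood mean) view and a high-pass (neighborhood residual) view of the same features can serve as the two branches of a contrastive pair. HLCL's exact filters and pairing scheme may differ.

```python
# Hedged sketch of filter-based contrastive views: a low-pass view
# (neighborhood mean) and a high-pass view (residual) of the same graph
# replace augmentations. HLCL's actual filters and pairing may differ.
import torch

def filtered_views(x: torch.Tensor, adj: torch.Tensor):
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    low = (adj @ x) / deg   # smooth view, suits homophilic structure
    high = x - low          # sharp view, suits heterophilic structure
    return low, high

# The two views can feed any standard two-branch contrastive objective.
x = torch.randn(5, 8)
adj = (torch.rand(5, 5) > 0.5).float()
adj = torch.triu(adj, 1)
adj = adj + adj.T           # symmetric adjacency without self-loops
low, high = filtered_views(x, adj)
```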
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL), sketched after this entry.
Despite its simplicity, Local-GCL achieves competitive performance in self-supervised node representation learning tasks on graphs of various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
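As we read the Local-GCL abstract, its core idea is to treat each node's first-order neighbors as positives without any augmentation; the paper additionally approximates the negative term with kernelized random features for scalability, which the plain sketch below omits.

```python
# Hedged sketch of localized contrastive learning: first-order neighbors
# are the positives, all other nodes the negatives, no augmentation.
# Local-GCL also approximates the negative term with random features for
# scalability; that approximation is omitted here.
import torch
import torch.nn.functional as F

def local_contrastive_loss(z, edge_index, tau=0.5):
    z = F.normalize(z, dim=-1)
    src, dst = edge_index                    # each edge: anchor -> positive
    logits = z[src] @ z.T / tau              # [E, N]
    logits[torch.arange(src.numel()), src] = float('-inf')  # drop self-term
    return F.cross_entropy(logits, dst)

# Toy usage: a 6-node cycle.
z = torch.randn(6, 16)
edges = torch.tensor([[0, 1, 2, 3, 4, 5], [1, 2, 3, 4, 5, 0]])
print(local_contrastive_loss(z, edges))
```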
- Single-Pass Contrastive Learning Can Work for Both Homophilic and Heterophilic Graph [60.28340453547902]
Graph contrastive learning (GCL) techniques typically require two forward passes per instance to construct the contrastive loss, and existing approaches offer no strong performance guarantees.
We implement the Single-Pass Graph Contrastive Learning method (SP-GCL), which needs only one forward pass.
Empirically, the features learned by SP-GCL match or outperform existing strong baselines with significantly less computational overhead.
arXiv Detail & Related papers (2022-11-20T07:18:56Z)
- Graph Soft-Contrastive Learning via Neighborhood Ranking [19.241089079154044]
Graph Contrastive Learning (GCL) has emerged as a promising approach in the realm of graph self-supervised learning.
We propose a novel paradigm, Graph Soft-Contrastive Learning (GSCL), which facilitates GCL via neighborhood ranking and avoids the need to specify absolutely similar pairs (a ranking-loss sketch follows this entry).
arXiv Detail & Related papers (2022-09-28T09:52:15Z)
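One plausible instantiation of contrast via neighborhood ranking, sketched below: each node should be more similar to a sampled 1-hop neighbor than to a sampled 2-hop neighbor by a margin, instead of declaring any pair absolutely positive. The margin, sampling, and similarity choices are assumptions, not GSCL's exact formulation.

```python
# Hedged sketch of soft contrast via neighborhood ranking: an anchor
# should be closer to a sampled 1-hop neighbor than to a sampled 2-hop
# neighbor by a margin. Margin, sampling, and cosine similarity are
# assumptions, not necessarily GSCL's exact formulation.
import torch
import torch.nn.functional as F

def neighborhood_ranking_loss(z, anchors, hop1, hop2, margin=0.2):
    # anchors, hop1, hop2: [B] indices; hop1[i] is a 1-hop and hop2[i]
    # a 2-hop neighbor of anchors[i].
    z = F.normalize(z, dim=-1)
    sim1 = (z[anchors] * z[hop1]).sum(-1)    # cosine to 1-hop neighbors
    sim2 = (z[anchors] * z[hop2]).sum(-1)    # cosine to 2-hop neighbors
    return F.relu(margin - (sim1 - sim2)).mean()

z = torch.randn(7, 16)
print(neighborhood_ranking_loss(
    z, torch.tensor([0, 1]), torch.tensor([2, 3]), torch.tensor([4, 5])))
```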
- Demystifying Graph Convolution with a Simple Concatenation [6.542119695695405]
We quantify the information overlap between graph topology, node features, and labels.
We show that graph concatenation is a simple but more flexible alternative to graph convolution (the two operators are compared in the sketch after this entry).
arXiv Detail & Related papers (2022-07-18T16:39:33Z)
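The contrast between the two operators is easy to state in code: graph convolution replaces a node's features with a neighborhood aggregate, while concatenation keeps the ego features and the aggregate side by side so a downstream layer can weigh them. The identity stand-in for the normalized adjacency is for illustration only.

```python
# Sketch of the comparison: graph convolution replaces features with a
# neighborhood aggregate; concatenation keeps ego features and the
# aggregate side by side. The identity matrix stands in for a normalized
# adjacency purely for illustration.
import torch

def graph_convolution(x, adj_norm):
    return adj_norm @ x                          # [N, d]: features replaced

def graph_concatenation(x, adj_norm):
    return torch.cat([x, adj_norm @ x], dim=1)   # [N, 2d]: both retained

x = torch.randn(4, 3)
adj_norm = torch.eye(4)
print(graph_convolution(x, adj_norm).shape)      # torch.Size([4, 3])
print(graph_concatenation(x, adj_norm).shape)    # torch.Size([4, 6])
```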
- Unsupervised Graph Embedding via Adaptive Graph Learning [85.28555417981063]
Graph autoencoders (GAEs) are powerful tools for representation learning in graph embedding (a minimal GAE sketch follows this entry).
In this paper, two novel unsupervised graph embedding methods are proposed: unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE).
Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin in node clustering, node classification, and graph visualization tasks.
arXiv Detail & Related papers (2020-03-10T02:33:14Z)
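For context, a minimal graph autoencoder with an inner-product decoder, the backbone that adaptive-graph-learning variants such as BAGE and VBAGE build on, is sketched below; how those methods adapt the graph during training is not reproduced here.

```python
# Hedged sketch of a graph autoencoder with an inner-product decoder,
# the backbone that adaptive-graph-learning variants such as BAGE and
# VBAGE build on. The adaptive graph update itself is not reproduced.
import torch
import torch.nn.functional as F
from torch import nn

class TinyGAE(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj_norm):
        z = adj_norm @ torch.relu(self.lin(x))   # one propagation step
        return z @ z.T                           # inner-product decoder logits

# Toy usage: reconstruct the (here trivial) adjacency from embeddings.
x, adj = torch.randn(5, 8), torch.eye(5)
recon = torch.sigmoid(TinyGAE(8, 4)(x, adj))     # edge probabilities
loss = F.binary_cross_entropy(recon, adj)
```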