Spectral Augmentations for Graph Contrastive Learning
- URL: http://arxiv.org/abs/2302.02909v1
- Date: Mon, 6 Feb 2023 16:26:29 GMT
- Title: Spectral Augmentations for Graph Contrastive Learning
- Authors: Amur Ghose, Yingxue Zhang, Jianye Hao, Mark Coates
- Abstract summary: Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
- Score: 50.149996923976836
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive learning has emerged as a premier method for learning
representations with or without supervision. Recent studies have shown its
utility in graph representation learning for pre-training. Despite successes,
the understanding of how to design effective graph augmentations that can
capture structural properties common to many different types of downstream
graphs remains incomplete. We propose a set of well-motivated graph
transformation operations derived via graph spectral analysis to provide a bank
of candidates when constructing augmentations for a graph contrastive
objective, enabling contrastive learning to capture useful structural
representation from pre-training graph datasets. We first present a spectral
graph cropping augmentation that involves filtering nodes by applying
thresholds to the eigenvalues of the leading Laplacian eigenvectors. Our second
novel augmentation reorders the graph frequency components in a structural
Laplacian-derived position graph embedding. Further, we introduce a method that
leads to improved views of local subgraphs by performing alignment via global
random walk embeddings. Our experimental results indicate consistent
improvements in out-of-domain graph data transfer compared to state-of-the-art
graph contrastive learning methods, shedding light on how to design a graph
learner that is able to learn structural properties common to diverse graph
types.
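The abstract's first operation, spectral graph cropping, can be pictured with a short sketch. The following is a minimal illustration, not the authors' implementation: it assumes networkx and numpy, and the eigenvector index and quantile threshold are hypothetical parameters chosen for illustration, since the abstract does not spell out the exact filtering rule.

```python
# Illustrative sketch of spectral graph cropping (assumption: keep nodes by
# thresholding their entries in a leading Laplacian eigenvector, in the
# spirit of the abstract; not the paper's exact procedure).
import networkx as nx
import numpy as np

def spectral_crop(graph: nx.Graph, eig_index: int = 1, quantile: float = 0.5) -> nx.Graph:
    """Keep nodes whose entry in the chosen Laplacian eigenvector falls
    below a quantile threshold, and return the induced subgraph view."""
    nodes = list(graph.nodes())
    # Dense eigendecomposition is fine for small graphs; use sparse solvers at scale.
    lap = nx.laplacian_matrix(graph, nodelist=nodes).toarray().astype(float)
    _, eigvecs = np.linalg.eigh(lap)   # columns sorted by ascending eigenvalue
    vec = eigvecs[:, eig_index]        # eig_index=1 corresponds to the Fiedler vector
    kept = [n for n, v in zip(nodes, vec) if v <= np.quantile(vec, quantile)]
    return graph.subgraph(kept).copy()

if __name__ == "__main__":
    g = nx.karate_club_graph()
    cropped = spectral_crop(g, eig_index=1, quantile=0.5)
    print(cropped.number_of_nodes(), "nodes,", cropped.number_of_edges(), "edges")
```

A crop produced this way would serve as one candidate view in the augmentation bank for the contrastive objective; the paper's other two operations (reordering frequency components in a Laplacian-derived positional embedding, and aligning local subgraph views via global random walk embeddings) are not sketched here.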
Related papers
- Knowledge Probing for Graph Representation Learning [12.960185655357495]
We propose a novel graph probing framework (GraphProbe) to investigate and interpret whether the family of graph learning methods has encoded different levels of knowledge in graph representation learning.
Based on the intrinsic properties of graphs, we design three probes to systematically investigate the graph representation learning process from different perspectives.
We construct a thorough evaluation benchmark with nine representative graph learning methods from random walk based approaches, basic graph neural networks and self-supervised graph methods, and probe them on six benchmark datasets for node classification, link prediction and graph classification.
arXiv Detail & Related papers (2024-08-07T16:27:45Z)
- Multi-Scale Subgraph Contrastive Learning [9.972544118719572]
We propose a multi-scale subgraph contrastive learning architecture which is able to characterize the fine-grained semantic information.
Specifically, we generate global and local views at different scales based on subgraph sampling, and construct multiple contrastive relationships according to their semantic associations.
arXiv Detail & Related papers (2024-03-05T07:17:18Z)
- Through the Dual-Prism: A Spectral Perspective on Graph Data Augmentation for Graph Classification [71.36575018271405]
We introduce the Dual-Prism (DP) augmentation method, comprising DP-Noise and DP-Mask.
We find that keeping the low-frequency eigenvalues unchanged can preserve the critical properties at a large scale when generating augmented graphs.
arXiv Detail & Related papers (2024-01-18T12:58:53Z)
- Cross-View Graph Consistency Learning for Invariant Graph Representations [16.007232280413806]
We propose a cross-view graph consistency learning (CGCL) method that learns invariant graph representations for link prediction.
This paper empirically and experimentally demonstrates the effectiveness of the proposed CGCL method.
arXiv Detail & Related papers (2023-11-20T14:58:47Z)
- Contrastive Learning for Non-Local Graphs with Multi-Resolution Structural Views [1.4445779250002606]
We propose a novel multiview contrastive learning approach that integrates diffusion filters on graphs.
By incorporating multiple graph views as augmentations, our method captures the structural equivalence in heterophilic graphs.
arXiv Detail & Related papers (2023-08-19T17:42:02Z)
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
arXiv Detail & Related papers (2022-02-07T08:04:59Z)
- Dual Space Graph Contrastive Learning [82.81372024482202]
We propose a novel graph contrastive learning method, namely Dual Space Graph Contrastive (DSGC) Learning.
Since both spaces have their own advantages to represent graph data in the embedding spaces, we hope to utilize graph contrastive learning to bridge the spaces and leverage advantages from both sides.
arXiv Detail & Related papers (2022-01-19T04:10:29Z)
- GraphOpt: Learning Optimization Models of Graph Formation [72.75384705298303]
We propose an end-to-end framework that learns an implicit model of graph structure formation and discovers an underlying optimization mechanism.
The learned objective can serve as an explanation for the observed graph properties, thereby lending itself to transfer across different graphs within a domain.
GraphOpt poses link formation in graphs as a sequential decision-making process and solves it with a maximum entropy inverse reinforcement learning algorithm.
arXiv Detail & Related papers (2020-07-07T16:51:39Z)