Signed Directed Graph Contrastive Learning with Laplacian Augmentation
- URL: http://arxiv.org/abs/2301.05163v1
- Date: Thu, 12 Jan 2023 17:32:19 GMT
- Title: Signed Directed Graph Contrastive Learning with Laplacian Augmentation
- Authors: Taewook Ko, Yoonhyuk Choi, Chong-Kwon Kim
- Abstract summary: Graph contrastive learning has become a powerful technique for several graph mining tasks.
This paper proposes a novel signed-directed graph contrastive learning method, SDGCL.
It builds two structurally perturbed graph views and obtains node representations via magnetic Laplacian perturbation.
- Score: 1.3535770763481905
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph contrastive learning has become a powerful technique for several graph
mining tasks. It learns discriminative representations from different
perspectives of augmented graphs. Although signed-directed graphs are
ubiquitous in daily life, they are among the most complex and difficult graph
types to analyze. For this reason, signed-directed graph contrastive learning
has not been studied much yet, while many contrastive studies exist for
unsigned, undirected graphs. This paper therefore proposes a novel
signed-directed graph contrastive learning method, SDGCL. It builds two
structurally perturbed graph views and obtains node representations via
magnetic Laplacian perturbation. We use a node-level
contrastive loss to maximize the mutual information between the two graph
views. The model is jointly learned with contrastive and supervised objectives.
The graph encoder of SDGCL does not depend on social theories or predefined
assumptions. Therefore, it does not require finding triads or selecting
neighbors to aggregate; it leverages only edge signs and directions via the
magnetic Laplacian. To the best of our knowledge, this is the first work to
introduce magnetic Laplacian perturbation and signed spectral graph
contrastive learning.
The superiority of the proposed model is demonstrated through exhaustive
experiments on four real-world datasets. SDGCL shows better performance than
other state-of-the-art methods on four evaluation metrics.
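The two core ingredients named in the abstract, the magnetic Laplacian of a directed graph and a node-level contrastive loss between two views, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the charge parameter `q`, the temperature `tau`, and the use of an InfoNCE-style objective are choices made for the sketch.

```python
import numpy as np

def magnetic_laplacian(A, q=0.25):
    """Magnetic Laplacian of a directed graph.

    A[u, v] is the weight of edge u -> v (0 if absent).
    q is the "charge" controlling how strongly edge direction is
    encoded as a complex phase; q = 0 recovers the usual Laplacian.
    """
    A = np.asarray(A, dtype=float)
    A_sym = 0.5 * (A + A.T)                    # symmetrized weights
    theta = 2.0 * np.pi * q * (A - A.T)        # antisymmetric phase matrix
    H = A_sym * np.exp(1j * theta)             # Hermitian "magnetic" adjacency
    D = np.diag(A_sym.sum(axis=1))             # degree matrix
    return D - H                               # Hermitian by construction

def infonce_node_loss(Z1, Z2, tau=0.5):
    """Node-level InfoNCE loss between embeddings of two graph views.

    Z1, Z2: (n, d) embeddings of the same nodes from two views.
    Node i in view 1 is pulled toward node i in view 2 and pushed
    away from all other nodes of view 2.
    """
    Z1 = Z1 / np.linalg.norm(Z1, axis=1, keepdims=True)
    Z2 =2 * 0 + Z2 / np.linalg.norm(Z2, axis=1, keepdims=True)
    logits = Z1 @ Z2.T / tau                     # (n, n) scaled cosine similarities
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives sit on the diagonal
```

For nonnegative edge weights the resulting Laplacian is Hermitian and positive semidefinite, so its real eigenvalues and complex eigenvectors can feed a spectral graph encoder; perturbing `q` or the graph structure yields the different views that the contrastive loss compares.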
Related papers
- Robust Graph Structure Learning under Heterophily [12.557639223778722]
We propose a novel robust graph structure learning method to achieve a high-quality graph from heterophilic data for downstream tasks.
We first apply a high-pass filter to make each node more distinctive from its neighbors by encoding structure information into the node features.
Then, we learn a robust graph with an adaptive norm characterizing different levels of noise.
arXiv Detail & Related papers (2024-03-06T12:29:13Z)
- Simple and Asymmetric Graph Contrastive Learning without Augmentations [39.301072710063636]
Asymmetric Contrastive Learning for Graphs (GraphACL) is easy to implement and does not rely on graph augmentations and homophily assumptions.
Experimental results show that the simple GraphACL significantly outperforms state-of-the-art graph contrastive learning and self-supervised learning methods on homophilic and heterophilic graphs.
arXiv Detail & Related papers (2023-10-29T03:14:20Z)
- Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z)
- ARIEL: Adversarial Graph Contrastive Learning [51.14695794459399]
ARIEL consistently outperforms the current graph contrastive learning methods for both node-level and graph-level classification tasks.
ARIEL is more robust in the face of adversarial attacks.
arXiv Detail & Related papers (2022-08-15T01:24:42Z)
- Adversarial Graph Contrastive Learning with Information Regularization [51.14695794459399]
Contrastive learning is an effective method in graph representation learning.
Data augmentation on graphs is far less intuitive, and it is much harder to obtain high-quality contrastive samples.
We propose a simple but effective method, Adversarial Graph Contrastive Learning (ARIEL).
It consistently outperforms the current graph contrastive learning methods in the node classification task over various real-world datasets.
arXiv Detail & Related papers (2022-02-14T05:54:48Z)
- Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined as Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
arXiv Detail & Related papers (2022-02-07T08:04:59Z)
- Dual Space Graph Contrastive Learning [82.81372024482202]
We propose a novel graph contrastive learning method, namely Dual Space Graph Contrastive (DSGC) Learning.
Since both spaces have their own advantages to represent graph data in the embedding spaces, we hope to utilize graph contrastive learning to bridge the spaces and leverage advantages from both sides.
arXiv Detail & Related papers (2022-01-19T04:10:29Z)
- CGCL: Collaborative Graph Contrastive Learning without Handcrafted Graph Data Augmentations [12.820228374977441]
We propose a novel Collaborative Graph Contrastive Learning framework (CGCL).
This framework harnesses multiple graph encoders to observe the graph.
To ensure the collaboration among diverse graph encoders, we propose the concepts of asymmetric architecture and complementary encoders.
arXiv Detail & Related papers (2021-11-05T05:08:27Z)
- Multi-Level Graph Contrastive Learning [38.022118893733804]
We propose a Multi-Level Graph Contrastive Learning (MLGCL) framework for learning robust representation of graph data by contrasting space views of graphs.
The original graph is a first-order approximation structure and contains uncertainty or errors, while the $k$NN graph generated from encoded features preserves high-order proximity.
Extensive experiments indicate MLGCL achieves promising results compared with the existing state-of-the-art graph representation learning methods on seven datasets.
arXiv Detail & Related papers (2021-07-06T14:24:43Z)
- Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
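Several entries above contrast views produced by structural augmentations; GraphCL, for instance, includes random edge perturbation among its augmentations. A minimal edge-dropping sketch for an undirected graph (the drop probability `p` is an illustrative choice, not a value from any of the papers) might look like:

```python
import numpy as np

def drop_edges(A, p=0.2, rng=None):
    """Return a copy of symmetric adjacency matrix A with each
    undirected edge independently dropped with probability p."""
    rng = np.random.default_rng(rng)
    A = np.asarray(A, dtype=float)
    iu = np.triu_indices_from(A, k=1)         # upper triangle: one slot per edge
    mask = rng.random(len(iu[0])) >= p        # keep each edge with prob 1 - p
    A_new = np.zeros_like(A)
    keep = (iu[0][mask], iu[1][mask])
    A_new[keep] = A[keep]                     # copy surviving upper-triangle weights
    return A_new + A_new.T                    # mirror to restore symmetry
```

Calling `drop_edges` twice with different seeds yields two perturbed views of the same graph, which is the usual starting point for a contrastive objective over augmented views.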
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.