Cross-View Graph Consistency Learning for Invariant Graph
Representations
- URL: http://arxiv.org/abs/2311.11821v1
- Date: Mon, 20 Nov 2023 14:58:47 GMT
- Title: Cross-View Graph Consistency Learning for Invariant Graph
Representations
- Authors: Jie Chen and Zhiming Li and Hua Mao and Wai Lok Woo and Xi Peng
- Abstract summary: We propose a cross-view graph consistency learning (CGCL) method that learns invariant graph representations for link prediction.
The paper experimentally demonstrates the effectiveness of the proposed CGCL method.
- Score: 16.007232280413806
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph representation learning is fundamental for analyzing graph-structured
data. Exploring invariant graph representations remains a challenge for most
existing graph representation learning methods. In this paper, we propose a
cross-view graph consistency learning (CGCL) method that learns invariant graph
representations for link prediction. First, two complementary augmented views
are derived from an incomplete graph structure through a bidirectional graph
structure augmentation scheme. This augmentation scheme mitigates the potential
information loss that is commonly associated with various data augmentation
techniques involving raw graph data, such as edge perturbation, node removal,
and attribute masking. Second, we propose a CGCL model that can learn invariant
graph representations. A cross-view training scheme is proposed to train the
proposed CGCL model. This scheme attempts to maximize the consistency
information between one augmented view and the graph structure reconstructed
from the other augmented view. Furthermore, we offer a comprehensive
theoretical analysis of CGCL. Experiments demonstrate the effectiveness of the
proposed CGCL method, which achieves competitive results on benchmark graph
datasets in comparison with several state-of-the-art algorithms.
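The cross-view objective described in the abstract can be illustrated with a small sketch. The details below are assumptions, not the paper's actual architecture: the edge-splitting routine stands in for the bidirectional graph structure augmentation, a one-layer row-normalized aggregation stands in for the encoder, and a sigmoid inner-product decoder stands in for graph structure reconstruction. The key idea shown is that each view's reconstruction is scored against the *other* view.

```python
import numpy as np

def complementary_views(adj, rng, keep_prob=0.5):
    """Split the observed edges of a graph into two complementary
    augmented views (an illustrative stand-in for the paper's
    bidirectional graph structure augmentation)."""
    triu = np.triu(adj, k=1)
    edges = np.argwhere(triu > 0)
    mask = rng.random(len(edges)) < keep_prob
    v1, v2 = np.zeros_like(adj), np.zeros_like(adj)
    for (i, j), in_v1 in zip(edges, mask):
        target = v1 if in_v1 else v2
        target[i, j] = target[j, i] = 1.0
    return v1, v2

def encode(adj, feats):
    # One-layer linear aggregation: row-normalized (A + I) times features.
    deg = adj.sum(axis=1) + 1.0
    a_hat = (adj + np.eye(len(adj))) / deg[:, None]
    return a_hat @ feats

def cross_view_consistency(adj, feats, rng):
    """Reconstruct the graph from each view's embeddings and score
    consistency against the other view (the cross-view objective)."""
    v1, v2 = complementary_views(adj, rng)
    z1, z2 = encode(v1, feats), encode(v2, feats)
    # Sigmoid inner-product decoder for structure reconstruction.
    rec1 = 1.0 / (1.0 + np.exp(-(z1 @ z1.T)))
    rec2 = 1.0 / (1.0 + np.exp(-(z2 @ z2.T)))
    # Binary cross-entropy of view 2's edges under view 1's
    # reconstruction, and vice versa.
    eps = 1e-9
    loss = -np.mean(v2 * np.log(rec1 + eps) + (1 - v2) * np.log(1 - rec1 + eps))
    loss -= np.mean(v1 * np.log(rec2 + eps) + (1 - v1) * np.log(1 - rec2 + eps))
    return loss

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)
feats = rng.normal(size=(4, 3))
loss = cross_view_consistency(adj, feats, rng)
print(loss)
```

Minimizing such a cross-view loss pushes the encoder toward representations that remain stable under either augmentation, which is one plausible reading of the invariance objective the abstract describes.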
Related papers
- Dual-Optimized Adaptive Graph Reconstruction for Multi-View Graph Clustering [19.419832637206138]
We propose a novel multi-view graph clustering method based on dual-optimized adaptive graph reconstruction, named DOAGC.
It mainly aims to reconstruct a graph structure suited to traditional GNNs, addressing heterophilous graph issues while maintaining the advantages of traditional GNNs.
arXiv Detail & Related papers (2024-10-30T12:50:21Z) - Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z) - GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z) - Towards Graph Self-Supervised Learning with Contrastive Adjusted Zooming [48.99614465020678]
We introduce a novel self-supervised graph representation learning algorithm via Graph Contrastive Adjusted Zooming.
This mechanism enables G-Zoom to explore and extract self-supervision signals from a graph from multiple scales.
We have conducted extensive experiments on real-world datasets, and the results demonstrate that our proposed model outperforms state-of-the-art methods consistently.
arXiv Detail & Related papers (2021-11-20T22:45:53Z) - Edge but not Least: Cross-View Graph Pooling [76.71497833616024]
This paper presents a cross-view graph pooling (Co-Pooling) method to better exploit crucial graph structure information.
Through cross-view interaction, edge-view pooling and node-view pooling seamlessly reinforce each other to learn more informative graph-level representations.
arXiv Detail & Related papers (2021-09-24T08:01:23Z) - Multiple Graph Learning for Scalable Multi-view Clustering [26.846642220480863]
We propose an efficient multiple graph learning model via a small number of anchor points and tensor Schatten p-norm minimization.
Specifically, we construct a hidden, tractable large graph from an anchor graph for each view.
We develop an efficient algorithm, which scales linearly with the data size, to solve our proposed model.
arXiv Detail & Related papers (2021-06-29T13:10:56Z) - Diversified Multiscale Graph Learning with Graph Self-Correction [55.43696999424127]
We propose a diversified multiscale graph learning model equipped with two core ingredients.
A graph self-correction (GSC) mechanism to generate informative embedded graphs, and a diversity boosting regularizer (DBR) to achieve a comprehensive characterization of the input graph.
Experiments on popular graph classification benchmarks show that the proposed GSC mechanism leads to significant improvements over state-of-the-art graph pooling methods.
arXiv Detail & Related papers (2021-03-17T16:22:24Z) - Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.