Uncovering the Structural Fairness in Graph Contrastive Learning
- URL: http://arxiv.org/abs/2210.03011v1
- Date: Thu, 6 Oct 2022 15:58:25 GMT
- Title: Uncovering the Structural Fairness in Graph Contrastive Learning
- Authors: Ruijia Wang, Xiao Wang, Chuan Shi, Le Song
- Abstract summary: Graph contrastive learning (GCL) has emerged as a promising self-supervised approach for learning node representations.
We show that representations obtained by GCL methods are already fairer with respect to degree bias than those learned by GCN.
We devise a novel graph augmentation method, called GRAph contrastive learning for DEgree bias (GRADE), which applies different strategies to low- and high-degree nodes.
- Score: 87.65091052291544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies show that graph convolutional network (GCN) often performs
worse for low-degree nodes, exhibiting the so-called structural unfairness for
graphs with long-tailed degree distributions prevalent in the real world. Graph
contrastive learning (GCL), which marries the power of GCN and contrastive
learning, has emerged as a promising self-supervised approach for learning node
representations. How does GCL behave in terms of structural fairness?
Surprisingly, we find that representations obtained by GCL methods are already
fairer with respect to degree bias than those learned by GCN. We theoretically
show that this fairness stems from the intra-community concentration and
inter-community scatter properties of GCL, which yield a much clearer community
structure that drives low-degree nodes away from the community boundary. Based on our
theoretical analysis, we further devise a novel graph augmentation method,
called GRAph contrastive learning for DEgree bias (GRADE), which applies
different strategies to low- and high-degree nodes. Extensive experiments on
various benchmarks and evaluation protocols validate the effectiveness of the
proposed method.
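The abstract only names the design principle (different augmentation strategies for low- and high-degree nodes), not the algorithmic details. Purely as an illustrative sketch under that principle, and not the authors' actual GRADE procedure, the snippet below drops edges with a degree-dependent probability so that the sparse neighborhoods of low-degree nodes are largely preserved; `deg_thresh`, `p_high`, and `p_low` are invented knobs.

```python
import torch

def degree_dependent_edge_drop(edge_index, num_nodes,
                               deg_thresh=5, p_high=0.4, p_low=0.1):
    """Drop each edge with a probability that depends on its endpoints'
    degrees: edges touching only high-degree nodes are dropped more often,
    so the neighborhoods of low-degree nodes stay mostly intact.

    edge_index: LongTensor of shape [2, num_edges] (COO adjacency).
    Returns a new edge_index containing the surviving edges.
    """
    src, dst = edge_index
    deg = torch.bincount(torch.cat([src, dst]), minlength=num_nodes)

    # An edge is "protected" if either endpoint is a low-degree node.
    low_deg_edge = (deg[src] <= deg_thresh) | (deg[dst] <= deg_thresh)
    drop_prob = torch.where(
        low_deg_edge,
        torch.full_like(src, p_low, dtype=torch.float),
        torch.full_like(src, p_high, dtype=torch.float))
    keep = torch.rand(src.numel()) >= drop_prob
    return edge_index[:, keep]

# Example: a 4-node path graph 0-1-2-3; low drop rates keep it connected.
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
aug = degree_dependent_edge_drop(edge_index, num_nodes=4)
```

The same skeleton could host other per-degree strategies (e.g., neighborhood enlargement for low-degree nodes) by branching on the same degree test.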
Related papers
- Self-Supervised Conditional Distribution Learning on Graphs [15.730933577970687]
We present an end-to-end graph representation learning model to align the conditional distributions of weakly and strongly augmented features over the original features.
This alignment effectively reduces the risk of disrupting intrinsic semantic information through graph-structured data augmentation.
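A minimal sketch of one plausible form of this alignment, assuming a FixMatch-style consistency term in which the prediction on the strongly augmented view is pulled toward the (detached) prediction on the weakly augmented view; the paper's actual loss may differ.

```python
import torch
import torch.nn.functional as F

def conditional_alignment_loss(logits_weak, logits_strong):
    """KL(p_weak || p_strong): push the distribution predicted from the
    strongly augmented view toward the (detached) distribution predicted
    from the weakly augmented view of the same graph."""
    p_weak = F.softmax(logits_weak.detach(), dim=-1)   # reference, no grad
    log_p_strong = F.log_softmax(logits_strong, dim=-1)
    return F.kl_div(log_p_strong, p_weak, reduction="batchmean")
```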
arXiv Detail & Related papers (2024-11-20T07:26:36Z)
- Graph-level Protein Representation Learning by Structure Knowledge Refinement [50.775264276189695]
This paper focuses on learning representations at the whole-graph level in an unsupervised manner.
We propose a novel framework called Structure Knowledge Refinement (SKR) which uses data structure to determine the probability of whether a pair is positive or negative.
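As a toy rendering of the idea of probabilistic pair labels (not the actual SKR procedure), one could weight an InfoNCE-style loss by a structure-derived probability that each cross-view pair is positive; `pos_prob` here is assumed to be supplied by some structural similarity measure.

```python
import torch
import torch.nn.functional as F

def soft_pair_contrastive(z1, z2, pos_prob, tau=0.5):
    """InfoNCE-style loss where each cross-view pair (i, j) counts as a
    positive with probability pos_prob[i, j] instead of a hard 0/1 label.

    z1, z2: [n, d] embeddings of two views; pos_prob: [n, n] in [0, 1].
    """
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    sim = z1 @ z2.t() / tau                      # [n, n] similarities
    log_softmax = F.log_softmax(sim, dim=1)      # row-wise normalization
    weights = pos_prob / pos_prob.sum(dim=1, keepdim=True).clamp_min(1e-12)
    return -(weights * log_softmax).sum(dim=1).mean()
```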
arXiv Detail & Related papers (2024-01-05T09:05:33Z)
- Hierarchical Topology Isomorphism Expertise Embedded Graph Contrastive Learning [37.0788516033498]
We propose a novel graph contrastive learning method that embeds hierarchical topology isomorphism expertise.
We empirically demonstrate that the proposed method is universal to multiple state-of-the-art GCL models.
Our method beats the state-of-the-art method by 0.23% in the unsupervised representation learning setting.
arXiv Detail & Related papers (2023-12-21T14:07:46Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
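Local-GCL is reported to contrast each node against its first-order neighbors as positives; below is a simplified dense sketch under that assumption (the actual model uses kernelized approximations for scalability, which this O(n^2) toy version omits).

```python
import torch
import torch.nn.functional as F

def local_contrastive_loss(z, edge_index, tau=0.5):
    """Contrastive loss treating each node's first-order neighbors as
    positives and all other nodes as negatives.

    z: [n, d] node embeddings; edge_index: [2, m] COO adjacency.
    """
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / tau                  # [n, n] all-pairs similarities
    log_softmax = F.log_softmax(sim, dim=1)
    src, dst = edge_index
    # Negative log-likelihood of selecting each neighbor, averaged over edges.
    return -log_softmax[src, dst].mean()
```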
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling [83.77955213766896]
Graph convolutional networks (GCNs) have recently achieved great empirical success in learning graph-structured data.
To address their scalability issue, graph topology sampling has been proposed to reduce the memory and computational cost of training GCNs.
This paper provides the first theoretical justification of graph topology sampling in training (up to) three-layer GCNs.
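A toy example of topology sampling in the GraphSAGE spirit, not the specific scheme the paper analyzes: subsample each node's neighbor list to a fixed fanout so per-layer memory is bounded by the fanout rather than the full degree.

```python
import random

def sample_neighbors(adj_list, fanout=5, seed=None):
    """Keep at most `fanout` neighbors per node, uniformly at random.
    adj_list: dict mapping node id -> list of neighbor ids."""
    rng = random.Random(seed)
    return {
        u: (nbrs if len(nbrs) <= fanout else rng.sample(nbrs, fanout))
        for u, nbrs in adj_list.items()
    }

# Example: node 0 has 100 neighbors; only 5 are aggregated this step.
adj = {0: list(range(1, 101)), 1: [0, 2], 2: [1]}
sampled = sample_neighbors(adj, fanout=5, seed=0)
```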
arXiv Detail & Related papers (2022-07-07T21:25:55Z)
- ImGCL: Revisiting Graph Contrastive Learning on Imbalanced Node Classification [26.0350727426613]
Graph contrastive learning (GCL) has attracted a surge of attention due to its superior performance for learning node/graph representations without labels.
In practice, the underlying class distribution of unlabeled nodes for the given graph is usually imbalanced.
We propose a principled GCL framework on Imbalanced node classification (ImGCL), which automatically and adaptively balances the representations learned from GCL without labels.
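One simple ingredient such balancing could use, sketched purely for illustration (ImGCL's progressively balanced sampling is more elaborate), is resampling the same number of nodes from every pseudo-class obtained by clustering the learned embeddings.

```python
import torch

def balanced_resample(pseudo_labels, per_class):
    """Sample an equal number of nodes from each pseudo-class so a
    training batch is balanced even when the underlying distribution
    is long-tailed. pseudo_labels: [n] LongTensor of cluster ids."""
    picked = []
    for c in pseudo_labels.unique():
        idx = (pseudo_labels == c).nonzero(as_tuple=True)[0]
        choice = idx[torch.randint(len(idx), (per_class,))]  # with replacement
        picked.append(choice)
    return torch.cat(picked)
```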
arXiv Detail & Related papers (2022-05-23T14:23:36Z)
- RawlsGCN: Towards Rawlsian Difference Principle on Graph Convolutional Network [102.27090022283208]
Graph Convolutional Network (GCN) plays a pivotal role in many real-world applications.
GCN often exhibits performance disparity with respect to node degrees, resulting in worse predictive accuracy for low-degree nodes.
We formulate the problem of mitigating the degree-related performance disparity in GCN from the perspective of the Rawlsian difference principle.
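An illustrative proxy for the Rawlsian idea of prioritizing the worst-off group is to up-weight the training loss of low-degree nodes; note that RawlsGCN itself works by normalizing the gradient of the GCN weight matrix, so this sketch only conveys the principle, not the paper's method.

```python
import torch

def degree_weighted_loss(per_node_loss, degrees, eps=1.0):
    """Up-weight low-degree nodes so the optimizer prioritizes the group
    the model serves worst.

    per_node_loss: [n] unreduced losses; degrees: [n] node degrees.
    """
    weights = 1.0 / (degrees.float() + eps)      # low degree -> large weight
    weights = weights * (len(weights) / weights.sum())  # keep mean weight 1
    return (weights * per_node_loss).mean()
```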
arXiv Detail & Related papers (2022-02-28T05:07:57Z)
- Structural and Semantic Contrastive Learning for Self-supervised Node Representation Learning [32.126228702554144]
Graph Contrastive Learning (GCL) has drawn much research interest for learning generalizable, transferable, and robust node representations in a self-supervised fashion.
In this work, we go beyond the existing unsupervised GCL counterparts and address their limitations by proposing a simple yet effective framework, S$^3$-CL.
Our experiments demonstrate that the node representations learned by S$^3$-CL achieve superior performance on different downstream tasks compared to the state-of-the-art GCL methods.
arXiv Detail & Related papers (2022-02-17T07:20:09Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by the data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
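A stripped-down version of the anchor-graph idea, with hypothetical names and a plain binary cross-entropy agreement term standing in for the paper's contrastive objective between the anchor and learned views.

```python
import torch
import torch.nn.functional as F

def anchor_agreement_loss(learned_adj_logits, anchor_adj):
    """Maximize agreement between a learnable graph and a fixed anchor
    graph by treating each candidate edge as a binary prediction.

    learned_adj_logits: [n, n] unconstrained edge scores;
    anchor_adj: [n, n] 0/1 anchor adjacency (e.g., a kNN graph built
    from the raw node features).
    """
    return F.binary_cross_entropy_with_logits(
        learned_adj_logits, anchor_adj.float())
```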
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
- Knowledge Embedding Based Graph Convolutional Network [35.35776808660919]
This paper proposes a novel framework, namely the Knowledge Embedding based Graph Convolutional Network (KE-GCN).
KE-GCN combines the power of Graph Convolutional Network (GCN) in graph-based belief propagation and the strengths of advanced knowledge embedding methods.
Our theoretical analysis shows that KE-GCN offers an elegant unification of several well-known GCN methods as specific cases.
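A stylized layer conveying the combination of graph propagation with knowledge embeddings, using a TransE-flavored composition of node and relation vectors chosen for illustration; KE-GCN's actual update rule differs.

```python
import torch
import torch.nn as nn

class RelationalConv(nn.Module):
    """Toy layer mixing GCN-style propagation with relation embeddings:
    each message is the neighbor embedding translated by its relation
    vector, then mean-aggregated per target node."""
    def __init__(self, dim, num_relations):
        super().__init__()
        self.rel = nn.Embedding(num_relations, dim)
        self.lin = nn.Linear(dim, dim)

    def forward(self, x, edge_index, edge_type):
        src, dst = edge_index
        msg = x[src] + self.rel(edge_type)       # compose node + relation
        out = torch.zeros_like(x)
        out.index_add_(0, dst, msg)              # sum messages per target
        deg = torch.bincount(dst, minlength=x.size(0)).clamp_min(1)
        return torch.relu(self.lin(out / deg.unsqueeze(-1).float()))
```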
arXiv Detail & Related papers (2020-06-12T17:12:51Z)