Understanding Community Bias Amplification in Graph Representation
Learning
- URL: http://arxiv.org/abs/2312.04883v1
- Date: Fri, 8 Dec 2023 07:43:05 GMT
- Title: Understanding Community Bias Amplification in Graph Representation
Learning
- Authors: Shengzhong Zhang, Wenjie Yang, Yimin Zhang, Hongwei Zhang, Divin Yan,
Zengfeng Huang
- Abstract summary: We study a phenomenon of community bias amplification in graph representation learning.
We propose a novel graph contrastive learning model called Random Graph Coarsening Contrastive Learning.
- Score: 22.522798932536038
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we discover a phenomenon of community bias amplification in
graph representation learning, which refers to the exacerbation of performance
bias between different classes by graph representation learning. We conduct an
in-depth theoretical study of this phenomenon from a novel spectral
perspective. Our analysis suggests that structural bias between communities
results in varying local convergence speeds for node embeddings. This
phenomenon leads to bias amplification in the classification results of
downstream tasks. Based on these theoretical insights, we propose random graph
coarsening, which is proven effective in mitigating this issue.
Finally, we propose a novel graph contrastive learning model called Random
Graph Coarsening Contrastive Learning (RGCCL), which utilizes random coarsening
as data augmentation and mitigates community bias by contrasting the coarsened
graph with the original graph. Extensive experiments on various datasets
demonstrate the advantage of our method when dealing with community bias
amplification.
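The paper's actual coarsening procedure and contrastive objective are more involved; as a rough, hypothetical sketch of the core augmentation idea only, the snippet below randomly contracts edges to merge adjacent nodes into supernodes (all names here are illustrative, not taken from the paper):

```python
import random

def random_coarsen(num_nodes, edges, merge_prob=0.5, seed=0):
    """Randomly contract edges to merge adjacent nodes into supernodes.

    A minimal illustration of random graph coarsening as a data
    augmentation: each node starts in its own cluster, and for a random
    subset of edges the endpoints' clusters are merged. Returns a
    node -> supernode mapping and the coarsened edge list.
    """
    rng = random.Random(seed)
    parent = list(range(num_nodes))      # union-find parent array

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    shuffled = edges[:]
    rng.shuffle(shuffled)
    for u, v in shuffled:
        if rng.random() < merge_prob and find(u) != find(v):
            parent[find(u)] = find(v)    # contract edge (u, v)

    # Relabel supernodes 0..k-1 and rebuild edges between them.
    roots = sorted({find(i) for i in range(num_nodes)})
    relabel = {r: i for i, r in enumerate(roots)}
    mapping = [relabel[find(i)] for i in range(num_nodes)]
    coarse_edges = {(min(mapping[u], mapping[v]), max(mapping[u], mapping[v]))
                    for u, v in edges if mapping[u] != mapping[v]}
    return mapping, sorted(coarse_edges)

# Two triangles joined by a bridge: communities {0,1,2} and {3,4,5}.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
mapping, coarse_edges = random_coarsen(6, edges)
```

In a contrastive setup along the lines the abstract describes, the returned mapping would supply positive pairs: each node in the original graph is contrasted against the supernode it was merged into in the coarsened view.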
Related papers
- FairWire: Fair Graph Generation [18.6649050946022]
This work focuses on the analysis and mitigation of structural bias for both real and synthetic graphs.
To alleviate the identified bias factors, we design a novel fairness regularizer that offers versatile use.
We propose a fair graph generation framework, FairWire, by leveraging our fair regularizer design in a generative model.
arXiv Detail & Related papers (2024-02-06T20:43:00Z)
- Graph Out-of-Distribution Generalization with Controllable Data Augmentation [51.17476258673232]
Graph Neural Network (GNN) has demonstrated extraordinary performance in classifying graph properties.
Due to the selection bias of training and testing data, distribution deviation is widespread.
We propose OOD calibration to measure the distribution deviation of virtual samples.
arXiv Detail & Related papers (2023-08-16T13:10:27Z)
- Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z)
- Robust Causal Graph Representation Learning against Confounding Effects [21.380907101361643]
We propose Robust Causal Graph Representation Learning (RCGRL) to learn robust graph representations against confounding effects.
RCGRL introduces an active approach to generate instrumental variables under unconditional moment restrictions, which empowers the graph representation learning model to eliminate confounders.
arXiv Detail & Related papers (2022-08-18T01:31:25Z)
- Beyond spectral gap: The role of the topology in decentralized learning [58.48291921602417]
In data-parallel optimization of machine learning models, workers collaborate to improve their estimates of the model.
This paper aims to paint an accurate picture of sparsely-connected distributed optimization when workers share the same data distribution.
Our theory matches empirical observations in deep learning, and accurately describes the relative merits of different graph topologies.
arXiv Detail & Related papers (2022-06-07T08:19:06Z)
- Fair Node Representation Learning via Adaptive Data Augmentation [9.492903649862761]
This work theoretically explains the sources of bias in node representations obtained via Graph Neural Networks (GNNs).
Building upon the analysis, fairness-aware data augmentation frameworks are developed to reduce the intrinsic bias.
Our analysis and proposed schemes can be readily employed to enhance the fairness of various GNN-based learning mechanisms.
arXiv Detail & Related papers (2022-01-21T05:49:15Z)
- Graph-wise Common Latent Factor Extraction for Unsupervised Graph Representation Learning [40.70562886682939]
We propose a new principle for unsupervised graph representation learning: Graph-wise Common latent Factor EXtraction (GCFX).
GCFX explicitly extracts common latent factors from an input graph and achieves improved results on downstream tasks over the current state-of-the-art.
Through extensive experiments and analysis, we demonstrate that GCFX is beneficial for graph-level tasks to alleviate distractions caused by local variations of individual nodes or local neighbourhoods.
arXiv Detail & Related papers (2021-12-16T12:22:49Z)
- Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z)
- Multilayer Clustered Graph Learning [66.94201299553336]
We use contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method yields a representative graph with a clear cluster structure.
We also derive a clustering algorithm for solving clustering problems on multilayer graphs.
arXiv Detail & Related papers (2020-10-29T09:58:02Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute the performance deterioration of deeper GNNs to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
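DAGNN itself learns the hop-wise scores end-to-end; as a simplified, hypothetical sketch of the underlying decouple-then-adaptively-combine idea, the snippet below propagates features over several hops and mixes the hop representations with stand-in weights (random scores replace the learned ones):

```python
import numpy as np

def dagnn_propagate(adj, features, hops=4, seed=0):
    """Decoupled propagation with adaptive hop weighting (simplified).

    Propagates features with a symmetrically normalized adjacency,
    keeps each hop's representation, and combines them with
    softmax-normalized scores (random here, learned in the real model).
    """
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                       # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    p = d_inv_sqrt @ a_hat @ d_inv_sqrt           # D^-1/2 (A+I) D^-1/2

    h = features
    hop_reps = [h]                                # 0-hop representation
    for _ in range(hops):
        h = p @ h                                 # one more propagation step
        hop_reps.append(h)

    scores = rng.random(hops + 1)                 # stand-in for learned scores
    weights = np.exp(scores) / np.exp(scores).sum()
    return sum(w * rep for w, rep in zip(weights, hop_reps))

# Path graph on 4 nodes with 2-dimensional features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.eye(4)[:, :2]
out = dagnn_propagate(adj, x)
```

Because propagation is decoupled from feature transformation, the receptive field can grow with `hops` without stacking more nonlinear layers, which is how this family of models sidesteps over-smoothing.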
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.