Causality and Independence Enhancement for Biased Node Classification
- URL: http://arxiv.org/abs/2310.09586v2
- Date: Sun, 5 Nov 2023 01:19:21 GMT
- Title: Causality and Independence Enhancement for Biased Node Classification
- Authors: Guoxin Chen, Yongqing Wang, Fangda Guo, Qinglang Guo, Jiangli Shao,
Huawei Shen and Xueqi Cheng
- Abstract summary: We propose a novel Causality and Independence Enhancement (CIE) framework, applicable to various graph neural networks (GNNs).
Our approach estimates causal and spurious features at the node representation level and mitigates the influence of spurious correlations.
CIE not only significantly enhances the performance of GNNs but also outperforms state-of-the-art debiased node classification methods.
- Score: 56.38828085943763
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing methods that address out-of-distribution (OOD) generalization
for node classification on graphs primarily focus on a specific type of data
bias, such as label selection bias or structural bias. However, anticipating
the type of bias in advance is extremely challenging, and designing models
solely for one specific type may not necessarily improve overall generalization
performance. Moreover, limited research has focused on the impact of mixed
biases, which are more prevalent and demanding in real-world scenarios. To
address these limitations, we propose a novel Causality and Independence
Enhancement (CIE) framework, applicable to various graph neural networks
(GNNs). Our approach estimates causal and spurious features at the node
representation level and mitigates the influence of spurious correlations
through backdoor adjustment. Meanwhile, an independence constraint is
introduced to improve the discriminability and stability of causal and spurious
features in complex biased environments. Essentially, CIE eliminates different
types of data biases from a unified perspective, without the need to design
separate methods for each bias as before. To evaluate the performance under
specific types of data biases, mixed biases, and low-resource scenarios, we
conducted comprehensive experiments on five publicly available datasets.
Experimental results demonstrate that our approach CIE not only significantly
enhances the performance of GNNs but also outperforms state-of-the-art debiased
node classification methods.
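The two ingredients described in the abstract, an independence constraint between causal and spurious features and a backdoor-style adjustment, can be sketched roughly as follows. This is an illustrative simplification, not the paper's actual formulation: the cross-covariance penalty as an independence proxy, the batch-level bank of spurious features used to stratify the prediction, and all function names are assumptions.

```python
import numpy as np

def independence_penalty(h_causal, h_spurious):
    """Cross-covariance penalty: drives the causal and spurious feature
    blocks toward (linear) independence. A common simple proxy for an
    independence constraint; hypothetical stand-in for CIE's own loss."""
    hc = h_causal - h_causal.mean(axis=0)
    hs = h_spurious - h_spurious.mean(axis=0)
    cov = hc.T @ hs / (len(hc) - 1)   # cross-covariance matrix
    return np.sum(cov ** 2)           # squared Frobenius norm

def backdoor_logits(h_causal, h_spurious_bank, W):
    """Backdoor-style adjustment (sketch): average the classifier's
    prediction over spurious features drawn from the whole batch, so
    no node's label depends on its own spurious component."""
    n = len(h_causal)
    logits = np.zeros((n, W.shape[1]))
    for hs in h_spurious_bank:        # stratify over spurious values
        logits += np.concatenate(
            [h_causal, np.tile(hs, (n, 1))], axis=1) @ W
    return logits / len(h_spurious_bank)
```

In this sketch the penalty would be added to the classification loss, and the stratified logits replace a plain forward pass at training time; a constant spurious block yields a zero penalty, since it carries no information about the causal block.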
Related papers
- Debiasify: Self-Distillation for Unsupervised Bias Mitigation [19.813054813868476]
Simplicity bias poses a significant challenge in neural networks, often leading models to favor simpler solutions and inadvertently learn decision rules influenced by spurious correlations.
We introduce Debiasify, a novel self-distillation approach that requires no prior knowledge about the nature of biases.
Our method leverages a new distillation loss to transfer knowledge within the network, from deeper layers containing complex, highly-predictive features to shallower layers with simpler, attribute-conditioned features in an unsupervised manner.
arXiv Detail & Related papers (2024-11-01T16:25:05Z)
- Graph Out-of-Distribution Generalization via Causal Intervention [69.70137479660113]
We introduce a conceptually simple yet principled approach for training robust graph neural networks (GNNs) under node-level distribution shifts.
Our method resorts to a new learning objective derived from causal inference that coordinates an environment estimator and a mixture-of-expert GNN predictor.
Our model can effectively enhance generalization under various types of distribution shifts and yields up to 27.4% accuracy improvement over state-of-the-art methods on graph OOD generalization benchmarks.
arXiv Detail & Related papers (2024-02-18T07:49:22Z)
- Improving Bias Mitigation through Bias Experts in Natural Language Understanding [10.363406065066538]
We propose a new debiasing framework that introduces binary classifiers between the auxiliary model and the main model.
Our proposed strategy improves the bias identification ability of the auxiliary model.
arXiv Detail & Related papers (2023-12-06T16:15:00Z)
- Delving into Identify-Emphasize Paradigm for Combating Unknown Bias [52.76758938921129]
We propose an effective bias-conflicting scoring method (ECS) to boost the identification accuracy.
We also propose gradient alignment (GA) to balance the contributions of the mined bias-aligned and bias-conflicting samples.
Experiments are conducted on multiple datasets in various settings, demonstrating that the proposed solution can mitigate the impact of unknown biases.
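The gradient alignment (GA) idea mentioned above, balancing the contributions of bias-aligned and bias-conflicting samples, can be illustrated with a minimal sketch. This is a hypothetical simplification of the general idea, not the paper's formulation: it merely rescales each group's gradient so that neither dominates the update, and the function name and the geometric-mean target are assumptions.

```python
import numpy as np

def align_group_gradients(grad_aligned, grad_conflicting):
    """Sketch of gradient alignment: rescale the bias-aligned and
    bias-conflicting groups' gradients to a common magnitude (their
    geometric mean) so neither group dominates the parameter update."""
    na = np.linalg.norm(grad_aligned)
    nc = np.linalg.norm(grad_conflicting)
    if na == 0 or nc == 0:           # one group is empty or flat
        return grad_aligned + grad_conflicting
    target = np.sqrt(na * nc)        # common target magnitude
    return (grad_aligned * (target / na)
            + grad_conflicting * (target / nc))
```

A training loop would compute per-group gradients (e.g. via separate backward passes) and apply the aligned sum instead of the raw total gradient.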
arXiv Detail & Related papers (2023-02-22T14:50:24Z)
- Feature-Level Debiased Natural Language Understanding [86.8751772146264]
Existing natural language understanding (NLU) models often rely on dataset biases to achieve high performance on specific datasets.
We propose debiasing contrastive learning (DCT) to mitigate biased latent features while accounting for the dynamic nature of bias, which prior methods neglect.
DCT outperforms state-of-the-art baselines on out-of-distribution datasets while maintaining in-distribution performance.
arXiv Detail & Related papers (2022-12-11T06:16:14Z)
- Debiased Graph Neural Networks with Agnostic Label Selection Bias [59.61301255860836]
Most existing Graph Neural Networks (GNNs) are proposed without considering the selection bias in data.
We propose a novel Debiased Graph Neural Networks (DGNN) with a differentiated decorrelation regularizer.
Our proposed model outperforms state-of-the-art methods, and DGNN is a flexible framework for enhancing existing GNNs.
arXiv Detail & Related papers (2022-01-19T16:50:29Z)
- General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model like gradient descent in functional space.
GGD can learn a more robust base model under the settings of both task-specific biased models with prior knowledge and self-ensemble biased model without prior knowledge.
arXiv Detail & Related papers (2021-12-20T14:47:32Z)
- Learning Debiased Models with Dynamic Gradient Alignment and Bias-conflicting Sample Mining [39.00256193731365]
Deep neural networks notoriously suffer from dataset biases which are detrimental to model robustness, generalization and fairness.
We propose a two-stage debiasing scheme to combat intractable unknown biases.
arXiv Detail & Related papers (2021-11-25T14:50:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.