Delving into Identify-Emphasize Paradigm for Combating Unknown Bias
- URL: http://arxiv.org/abs/2302.11414v1
- Date: Wed, 22 Feb 2023 14:50:24 GMT
- Title: Delving into Identify-Emphasize Paradigm for Combating Unknown Bias
- Authors: Bowen Zhao, Chen Chen, Qian-Wei Wang, Anfeng He, Shu-Tao Xia
- Abstract summary: We propose an effective bias-conflicting scoring method (ECS) to boost the identification accuracy.
We also propose gradient alignment (GA) to balance the contributions of the mined bias-aligned and bias-conflicting samples.
Experiments are conducted on multiple datasets in various settings, demonstrating that the proposed solution can mitigate the impact of unknown biases.
- Score: 52.76758938921129
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dataset biases are notoriously detrimental to model robustness and
generalization. The identify-emphasize paradigm appears to be effective in
dealing with unknown biases. However, we discover that it is still plagued by
two challenges: A, the quality of the identified bias-conflicting samples is
far from satisfactory; B, the emphasizing strategies only produce suboptimal
performance. In this paper, for challenge A, we propose an effective
bias-conflicting scoring method (ECS) to boost the identification accuracy,
along with two practical strategies -- peer-picking and epoch-ensemble. For
challenge B, we point out that the gradient contribution statistics can be a
reliable indicator to inspect whether the optimization is dominated by
bias-aligned samples. Then, we propose gradient alignment (GA), which employs
gradient statistics to balance the contributions of the mined bias-aligned and
bias-conflicting samples dynamically throughout the learning process, forcing
models to leverage intrinsic features to make fair decisions. Furthermore, we
incorporate self-supervised (SS) pretext tasks into training, which enable
models to exploit richer features rather than simple shortcuts, resulting
in more robust models. Experiments are conducted on multiple datasets in
various settings, demonstrating that the proposed solution can mitigate the
impact of unknown biases and achieve state-of-the-art performance.
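The abstract describes ECS and GA only at a high level. As a rough illustration, a minimal PyTorch sketch of the two ideas might look as follows; the function names are ours, and using the detached per-sample loss as a proxy for per-sample gradient contribution is a simplifying assumption, not the authors' reference implementation.

```python
# Minimal PyTorch sketch of epoch-ensemble scoring and gradient-alignment (GA)
# reweighting. Names are illustrative; the per-sample loss stands in as a
# cheap proxy for per-sample gradient-contribution statistics.
import torch
import torch.nn.functional as F

def epoch_ensemble_score(prob_history):
    """prob_history: (num_epochs, num_samples) probabilities that a biased
    auxiliary model assigns to the ground-truth class. Averaging over epochs
    stabilizes the score; a high score flags a likely bias-conflicting sample."""
    return 1.0 - prob_history.mean(dim=0)

def ga_weights(is_conflicting, grad_contrib, eps=1e-8):
    """Rescale the mined bias-conflicting samples so their aggregate gradient
    contribution matches that of the bias-aligned samples at this step."""
    g_aligned = grad_contrib[~is_conflicting].sum()
    g_conflict = grad_contrib[is_conflicting].sum()
    weights = torch.ones_like(grad_contrib)
    weights[is_conflicting] = g_aligned / (g_conflict + eps)
    return weights

def debiased_step(model, optimizer, x, y, is_conflicting):
    """One training step; is_conflicting is a bool mask from the scoring stage."""
    per_sample_loss = F.cross_entropy(model(x), y, reduction="none")
    weights = ga_weights(is_conflicting, per_sample_loss.detach())
    loss = (weights * per_sample_loss).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The intended effect is the dynamic balance the abstract describes: the boost given to the conflicting group is recomputed at every step, so neither group dominates the optimization.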
Related papers
- A Simple Remedy for Dataset Bias via Self-Influence: A Mislabeled Sample Perspective [33.78421391776591]
In this paper, we propose a novel perspective on mislabeled sample detection (a sketch of the self-influence idea follows this entry).
We show that our new perspective can boost the precision of detection and rectify biased models effectively.
Our approach is complementary to existing methods, showing performance improvement even when applied to models that have already undergone recent debiasing techniques.
arXiv Detail & Related papers (2024-11-01T04:54:32Z)
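The entry above casts bias detection as mislabeled-sample detection via self-influence. A minimal sketch of a TracIn-style self-influence score, one standard way to instantiate the idea (the paper's exact estimator may differ, and restricting gradients to `last_layer` is an efficiency assumption of ours):

```python
# Hedged sketch: a sample's self-influence approximated by the squared gradient
# norm of its own loss w.r.t. the network's last layer; higher = more
# suspicious (potentially mislabeled or bias-conflicting).
import torch
import torch.nn.functional as F

def self_influence_scores(model, last_layer, xs, ys):
    scores = []
    for x, y in zip(xs, ys):
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, list(last_layer.parameters()))
        scores.append(sum(g.pow(2).sum() for g in grads).item())
    return torch.tensor(scores)
```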
- Uncertainty Aware Learning for Language Model Alignment [97.36361196793929]
We propose uncertainty-aware learning (UAL) to improve model alignment across different task scenarios.
We implement UAL in a simple fashion -- adaptively setting the label smoothing value during training according to the uncertainty of individual samples (see the sketch after this entry).
Experiments on widely used benchmarks demonstrate that our UAL significantly and consistently outperforms standard supervised fine-tuning.
arXiv Detail & Related papers (2024-06-07T11:37:45Z)
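The UAL entry above is concrete enough to sketch: per-sample label smoothing whose strength grows with estimated uncertainty. The linear uncertainty-to-smoothing mapping and the `max_smooth` cap below are illustrative assumptions, not the paper's exact schedule:

```python
# Sketch of uncertainty-adaptive label smoothing: more uncertain samples get
# softer targets. `uncertainty` is assumed to be a per-sample value in [0, 1].
import torch
import torch.nn.functional as F

def ual_loss(logits, targets, uncertainty, max_smooth=0.2):
    num_classes = logits.size(-1)
    smooth = (uncertainty.clamp(0, 1) * max_smooth).unsqueeze(-1)  # (B, 1)
    one_hot = F.one_hot(targets, num_classes).float()
    soft_targets = one_hot * (1.0 - smooth) + smooth / num_classes
    return -(soft_targets * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```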
- Improving Bias Mitigation through Bias Experts in Natural Language Understanding [10.363406065066538]
We propose a new debiasing framework that introduces binary classifiers between the auxiliary model and the main model.
Our proposed strategy improves the bias identification ability of the auxiliary model (see the sketch after this entry).
arXiv Detail & Related papers (2023-12-06T16:15:00Z)
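A hedged sketch of the bias-experts idea above: one-vs-rest binary classifiers on top of the auxiliary model's features, whose confidence down-weights presumably bias-aligned samples in the main model's loss. The wiring and the (1 - p_bias) weighting rule are our assumptions, not the paper's exact recipe:

```python
# Illustrative one-vs-rest "bias experts" sitting between an auxiliary model
# and the main model; samples the ground-truth expert finds easy (likely
# bias-aligned) are down-weighted in the main loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiasExperts(nn.Module):
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.experts = nn.Linear(feat_dim, num_classes)  # one binary head per class

    def forward(self, aux_features):
        return torch.sigmoid(self.experts(aux_features))  # (B, C) expert scores

def reweighted_main_loss(main_logits, targets, expert_scores):
    per_sample = F.cross_entropy(main_logits, targets, reduction="none")
    p_bias = expert_scores.gather(1, targets.unsqueeze(1)).squeeze(1)
    return ((1.0 - p_bias) * per_sample).mean()
```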
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
- Causality and Independence Enhancement for Biased Node Classification [56.38828085943763]
We propose a novel Causality and Independence Enhancement (CIE) framework, applicable to various graph neural networks (GNNs).
Our approach estimates causal and spurious features at the node representation level and mitigates the influence of spurious correlations.
CIE not only significantly enhances the performance of GNNs but also outperforms state-of-the-art debiased node classification methods.
arXiv Detail & Related papers (2023-10-14T13:56:24Z)
- Feature-Level Debiased Natural Language Understanding [86.8751772146264]
Existing natural language understanding (NLU) models often rely on dataset biases to achieve high performance on specific datasets.
We propose debiasing contrastive learning (DCT) to mitigate biased latent features while accounting for the dynamic nature of bias (see the sketch after this entry).
DCT outperforms state-of-the-art baselines on out-of-distribution datasets while maintaining in-distribution performance.
arXiv Detail & Related papers (2022-12-11T06:16:14Z)
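In the spirit of DCT, a bias-aware supervised contrastive term can pull same-class pairs together while letting pairs with similar bias scores contribute less; the |bias_i - bias_j| pair weighting below is an illustrative assumption rather than the paper's objective:

```python
# Hedged sketch: supervised contrastive loss whose positive pairs are weighted
# by how much their (externally estimated) bias scores differ, pushing the
# model to align same-class samples across different bias conditions.
import torch
import torch.nn.functional as F

def debiased_contrastive_loss(z, labels, bias_scores, tau=0.1):
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / tau                                    # (B, B)
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    pair_w = (bias_scores.unsqueeze(0) - bias_scores.unsqueeze(1)).abs()
    log_prob = sim - torch.logsumexp(sim.masked_fill(eye, float("-inf")),
                                     dim=1, keepdim=True)
    weighted = pair_w * pos_mask * log_prob
    denom = (pair_w * pos_mask).sum(dim=1).clamp_min(1e-8)
    return -(weighted.sum(dim=1) / denom).mean()
```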
- General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model, analogous to gradient descent in functional space.
GGD can learn a more robust base model both with task-specific biased models built from prior knowledge and with a self-ensemble biased model that requires no prior knowledge (see the sketch after this entry).
arXiv Detail & Related papers (2021-12-20T14:47:32Z)
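The greedy, functional-space view of GGD can be sketched as fitting the base model on whatever frozen, previously trained biased models fail to explain, by optimizing cross-entropy over the summed ensemble logits; the details below are assumptions, not the paper's reference code:

```python
# Minimal sketch: the base model is trained as a functional "residual" on top
# of frozen biased models, receiving gradient only for what they cannot predict.
import torch
import torch.nn.functional as F

def greedy_debias_step(base_model, biased_models, optimizer, x, y):
    with torch.no_grad():  # biased models were trained earlier and are frozen
        biased_logits = sum(m(x) for m in biased_models)
    ensemble_logits = biased_logits + base_model(x)
    loss = F.cross_entropy(ensemble_logits, y)  # gradient reaches base_model only
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```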
- Learning Debiased Models with Dynamic Gradient Alignment and Bias-conflicting Sample Mining [39.00256193731365]
Deep neural networks notoriously suffer from dataset biases, which are detrimental to model robustness, generalization, and fairness.
We propose a two-stage debiasing scheme to combat intractable unknown biases.
arXiv Detail & Related papers (2021-11-25T14:50:10Z)