Discover and Mitigate Unknown Biases with Debiasing Alternate Networks
- URL: http://arxiv.org/abs/2207.10077v2
- Date: Wed, 7 Sep 2022 21:09:11 GMT
- Title: Discover and Mitigate Unknown Biases with Debiasing Alternate Networks
- Authors: Zhiheng Li, Anthony Hoogs, Chenliang Xu
- Abstract summary: We propose Debiasing Alternate Networks (DebiAN), which comprises two networks -- a Discoverer and a Classifier.
Trained in an alternating manner, the discoverer identifies the classifier's biases, and the classifier unlearns the biases identified by the discoverer.
While previous works evaluate debiasing results in terms of a single bias, we create the Multi-Color MNIST dataset to better benchmark mitigation of multiple biases.
- Score: 42.89260385194433
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep image classifiers have been found to learn biases from datasets. To
mitigate these biases, most previous methods require labels of protected
attributes (e.g., age, skin tone) as full supervision, which has two
limitations: 1) they are inapplicable when such labels are unavailable; 2) they
cannot mitigate unknown biases -- biases that humans do not preconceive. To
resolve these problems, we propose Debiasing Alternate Networks
(DebiAN), which comprises two networks -- a Discoverer and a Classifier. By
training in an alternate manner, the discoverer tries to find multiple unknown
biases of the classifier without any annotations of biases, and the classifier
aims at unlearning the biases identified by the discoverer. While previous
works evaluate debiasing results in terms of a single bias, we create the
Multi-Color MNIST dataset to better benchmark bias mitigation in a multi-bias
setting, which not only reveals the problems in previous methods
but also demonstrates the advantage of DebiAN in identifying and mitigating
multiple biases simultaneously. We further conduct extensive experiments on
real-world datasets, showing that the discoverer in DebiAN can identify unknown
biases that may be hard for humans to find. Regarding debiasing, DebiAN
achieves strong bias mitigation performance.
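As a rough illustration of the alternate training described above, the sketch below pairs a task classifier with a discoverer that proposes a soft grouping of each mini-batch: the discoverer is updated to maximize a disparity in the classifier's per-group confidence, and the classifier is updated to fit the task while shrinking that disparity. The network definitions and the `disparity` objective are illustrative stand-ins, not the paper's actual architectures or losses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    """Small stand-in network; the paper's actual backbones differ."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))

    def forward(self, x):
        return self.net(x)

def disparity(cls_logits, group_logits, labels):
    """Illustrative objective: gap in mean correct-class confidence between
    the two soft groups proposed by the discoverer."""
    conf = F.softmax(cls_logits, dim=1).gather(1, labels[:, None]).squeeze(1)
    g = torch.sigmoid(group_logits).squeeze(1)            # soft group assignment in [0, 1]
    mean_a = (conf * g).sum() / (g.sum() + 1e-6)
    mean_b = (conf * (1 - g)).sum() / ((1 - g).sum() + 1e-6)
    return (mean_a - mean_b).abs()

classifier = MLP(in_dim=784, out_dim=10)   # task classifier
discoverer = MLP(in_dim=784, out_dim=1)    # predicts an unknown (pseudo) bias grouping
opt_c = torch.optim.Adam(classifier.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discoverer.parameters(), lr=1e-3)

def alternate_step(x, y, lam=1.0):
    # Discoverer step: find a grouping that maximizes the classifier's disparity.
    d_loss = -disparity(classifier(x).detach(), discoverer(x), y)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Classifier step: fit the task while unlearning the discovered grouping.
    c_loss = F.cross_entropy(classifier(x), y) + lam * disparity(
        classifier(x), discoverer(x).detach(), y)
    opt_c.zero_grad(); c_loss.backward(); opt_c.step()
```

In practice the two updates alternate over mini-batches, and the weight `lam` controls how aggressively the classifier unlearns the discovered grouping.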
Related papers
- Debiasify: Self-Distillation for Unsupervised Bias Mitigation [19.813054813868476]
Simplicity bias poses a significant challenge in neural networks, often leading models to favor simpler solutions and inadvertently learn decision rules influenced by spurious correlations.
We introduce Debiasify, a novel self-distillation approach that requires no prior knowledge about the nature of biases.
Our method leverages a new distillation loss to transfer knowledge within the network, from deeper layers containing complex, highly-predictive features to shallower layers with simpler, attribute-conditioned features in an unsupervised manner.
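A minimal sketch of the in-network distillation idea, assuming a backbone with a shallow auxiliary head and a deeper main head; the KL-based loss below is an illustrative choice and may differ from Debiasify's actual distillation loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadNet(nn.Module):
    """Backbone with a shallow auxiliary head and a deeper main head."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
        self.block2 = nn.Sequential(nn.Linear(256, 256), nn.ReLU())
        self.shallow_head = nn.Linear(256, num_classes)   # simpler, attribute-prone features
        self.deep_head = nn.Linear(256, num_classes)      # complex, highly predictive features

    def forward(self, x):
        h1 = self.block1(x)
        h2 = self.block2(h1)
        return self.shallow_head(h1), self.deep_head(h2)

def self_distillation_loss(shallow_logits, deep_logits, tau=2.0):
    """KL term pulling the shallow head toward the deep head's softened output;
    no labels are used, so the transfer is unsupervised."""
    teacher = F.softmax(deep_logits.detach() / tau, dim=1)
    student = F.log_softmax(shallow_logits / tau, dim=1)
    return F.kl_div(student, teacher, reduction="batchmean") * tau * tau
```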
arXiv Detail & Related papers (2024-11-01T16:25:05Z) - Is There a One-Model-Fits-All Approach to Information Extraction? Revisiting Task Definition Biases [62.806300074459116]
Definition bias is a negative phenomenon that can mislead models.
We identify two types of definition bias in IE: bias among information extraction datasets and bias between information extraction datasets and instruction tuning datasets.
We propose a multi-stage framework consisting of definition bias measurement, bias-aware fine-tuning, and task-specific bias mitigation.
arXiv Detail & Related papers (2024-03-25T03:19:20Z) - Medical Image Debiasing by Learning Adaptive Agreement from a Biased
Council [8.530912655468645]
Deep learning models are prone to learning shortcuts arising from dataset bias.
Despite its significance, there is a dearth of research in the medical image classification domain to address dataset bias.
This paper proposes learning Adaptive Agreement from a Biased Council (Ada-ABC), a debiasing framework that does not rely on explicit bias labels.
arXiv Detail & Related papers (2024-01-22T06:29:52Z) - Causality and Independence Enhancement for Biased Node Classification [56.38828085943763]
We propose a novel Causality and Independence Enhancement (CIE) framework, applicable to various graph neural networks (GNNs).
Our approach estimates causal and spurious features at the node representation level and mitigates the influence of spurious correlations.
CIE not only significantly enhances the performance of GNNs but also outperforms state-of-the-art debiased node classification methods.
arXiv Detail & Related papers (2023-10-14T13:56:24Z) - SMoA: Sparse Mixture of Adapters to Mitigate Multiple Dataset Biases [27.56143777363971]
We propose a new debiasing method Sparse Mixture-of-Adapters (SMoA), which can mitigate multiple dataset biases effectively and efficiently.
Experiments on Natural Language Inference and Paraphrase Identification tasks demonstrate that SMoA outperforms full-finetuning, adapter tuning baselines, and prior strong debiasing methods.
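A minimal sketch of a sparse mixture-of-adapters layer, assuming bottleneck adapters attached to a hidden state and a top-k router; the sizes, number of adapters, and routing rule are illustrative rather than SMoA's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    """Bottleneck adapter applied to a hidden state."""
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, h):
        return self.up(F.relu(self.down(h)))

class SparseMixtureOfAdapters(nn.Module):
    """Router selects the top-k adapters per example; outputs are added residually."""
    def __init__(self, dim=768, num_adapters=8, top_k=2):
        super().__init__()
        self.adapters = nn.ModuleList(Adapter(dim) for _ in range(num_adapters))
        self.router = nn.Linear(dim, num_adapters)
        self.top_k = top_k

    def forward(self, h):                                  # h: [batch, dim]
        scores = self.router(h)                            # [batch, num_adapters]
        topk_val, topk_idx = scores.topk(self.top_k, dim=1)
        gate = F.softmax(topk_val, dim=1)                  # weights over the selected adapters
        full_gate = torch.zeros_like(scores).scatter(1, topk_idx, gate)
        # Runs every adapter for readability; a real sparse implementation
        # would evaluate only the selected ones.
        outputs = torch.stack([adapter(h) for adapter in self.adapters], dim=1)
        return h + (full_gate.unsqueeze(-1) * outputs).sum(dim=1)
```

Keeping the backbone untouched and routing among small adapters is what makes this family of approaches parameter-efficient compared with full fine-tuning.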
arXiv Detail & Related papers (2023-02-28T08:47:20Z) - Whole Page Unbiased Learning to Rank [59.52040055543542]
Unbiased Learning to Rank (ULTR) algorithms learn an unbiased ranking model from biased click data.
We propose a Bias Agnostic whole-page unbiased Learning to rank algorithm, named BAL, to automatically find the user behavior model.
Experimental results on a real-world dataset verify the effectiveness of BAL.
arXiv Detail & Related papers (2022-10-19T16:53:08Z) - Unsupervised Learning of Unbiased Visual Representations [10.871587311621974]
Deep neural networks are known for their inability to learn robust representations when biases exist in the dataset.
We propose a fully unsupervised debiasing framework, consisting of three steps.
We employ state-of-the-art supervised debiasing techniques to obtain an unbiased model.
arXiv Detail & Related papers (2022-04-26T10:51:50Z) - Pseudo Bias-Balanced Learning for Debiased Chest X-ray Classification [57.53567756716656]
We study the problem of developing debiased chest X-ray diagnosis models without knowing exactly the bias labels.
We propose a novel algorithm, pseudo bias-balanced learning, which first captures and predicts per-sample bias labels.
Our proposed method achieved consistent improvements over other state-of-the-art approaches.
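As a loose illustration of training with pseudo bias labels, the sketch below reweights a cross-entropy loss so that every (class, pseudo-bias) group contributes equally; the pseudo labels are assumed to come from an auxiliary bias-capturing model, and the actual balancing scheme in the paper may differ.

```python
import torch
import torch.nn.functional as F

def bias_balanced_loss(logits, labels, pseudo_bias, num_classes, num_bias):
    """Cross-entropy reweighted so each (class, pseudo-bias) group counts equally.
    `pseudo_bias` is assumed to be an integer label predicted by an auxiliary
    bias-capturing model (a hypothetical helper, not taken from the paper)."""
    group = labels * num_bias + pseudo_bias                        # flatten (class, bias) pairs
    counts = torch.bincount(group, minlength=num_classes * num_bias).float()
    weights = 1.0 / counts.clamp_min(1.0)[group]                   # inverse group frequency
    weights = weights / weights.sum() * len(labels)                # keep the loss scale stable
    return (weights * F.cross_entropy(logits, labels, reduction="none")).mean()
```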
arXiv Detail & Related papers (2022-03-18T11:02:18Z) - Fighting Fire with Fire: Contrastive Debiasing without Bias-free Data
via Generative Bias-transformation [31.944147533327058]
We propose a novel method, Contrastive Debiasing via Generative Bias-transformation (CDvG), which works without explicit bias labels or bias-free samples.
Our method demonstrates superior performance compared to prior approaches, especially when bias-free samples are scarce or absent.
arXiv Detail & Related papers (2021-12-02T07:16:06Z) - Learning from Failure: Training Debiased Classifier from Biased
Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
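A condensed sketch of this failure-based scheme: an intentionally biased network is trained with a generalized cross-entropy (GCE) loss that amplifies easy, potentially spurious cues, and its per-sample losses are used to up-weight the examples it fails on when training the debiased network. Model definitions and hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def generalized_cross_entropy(logits, labels, q=0.7):
    """GCE emphasizes samples the model already classifies confidently,
    encouraging the biased network to latch onto easy cues."""
    p_true = F.softmax(logits, dim=1).gather(1, labels[:, None]).squeeze(1)
    return ((1.0 - p_true.clamp_min(1e-6) ** q) / q).mean()

def debiasing_step(biased_net, debiased_net, opt_b, opt_d, x, y):
    # Train the intentionally biased network on easy (potentially spurious) cues.
    loss_b = generalized_cross_entropy(biased_net(x), y)
    opt_b.zero_grad(); loss_b.backward(); opt_b.step()

    # Up-weight samples the biased network fails on, so bias-conflicting
    # examples dominate the debiased network's update.
    with torch.no_grad():
        ce_b = F.cross_entropy(biased_net(x), y, reduction="none")
    ce_d = F.cross_entropy(debiased_net(x), y, reduction="none")
    weights = ce_b / (ce_b + ce_d.detach() + 1e-6)
    loss_d = (weights * ce_d).mean()
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
```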
arXiv Detail & Related papers (2020-07-06T07:20:29Z)