Information-Theoretic Bias Reduction via Causal View of Spurious
Correlation
- URL: http://arxiv.org/abs/2201.03121v1
- Date: Mon, 10 Jan 2022 01:19:31 GMT
- Title: Information-Theoretic Bias Reduction via Causal View of Spurious
Correlation
- Authors: Seonguk Seo, Joon-Young Lee, Bohyung Han
- Abstract summary: We propose an information-theoretic bias measurement technique through a causal interpretation of spurious correlation.
We present a novel debiasing framework against algorithmic bias, which incorporates a bias regularization loss.
The proposed bias measurement and debiasing approaches are validated in diverse realistic scenarios.
- Score: 71.9123886505321
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose an information-theoretic bias measurement technique through a
causal interpretation of spurious correlation, which is effective in identifying
feature-level algorithmic bias by taking advantage of conditional mutual
information. Although several bias measurement methods have been proposed and
widely investigated to achieve algorithmic fairness in various tasks such as
face recognition, their accuracy- or logit-based metrics are prone to encouraging
trivial prediction score adjustment rather than fundamental bias reduction.
Hence, we design a novel debiasing framework against algorithmic bias, which
incorporates a bias regularization loss derived from the proposed
information-theoretic bias measurement approach. In addition, we present a
simple yet effective unsupervised debiasing technique based on stochastic label
noise, which does not require explicit supervision of bias information. The
proposed bias measurement and debiasing approaches are validated in diverse
realistic scenarios through extensive experiments on multiple standard
benchmarks.
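The abstract names two techniques without implementation detail: a conditional-mutual-information (CMI) bias regularizer and unsupervised debiasing via stochastic label noise. As a minimal sketch only, and not the authors' code, the PyTorch snippet below estimates I(z; b | y), the CMI between a feature vector z and a bias attribute b given the target label y, as a plug-in difference between two auxiliary bias classifiers; every name (CMIBiasRegularizer, feat_dim, and so on) is hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CMIBiasRegularizer(nn.Module):
    """Plug-in estimate of I(z; b | y) as the gap between two
    log-likelihood terms: log q(b | z, y) - log q(b | y).
    Hypothetical sketch; the paper's actual estimator may differ."""

    def __init__(self, feat_dim: int, num_classes: int, num_bias: int):
        super().__init__()
        self.num_classes = num_classes
        # q(b | z, y): bias head conditioned on features and label
        self.cond_head = nn.Linear(feat_dim + num_classes, num_bias)
        # q(b | y): label-only baseline head
        self.base_head = nn.Linear(num_classes, num_bias)

    def forward(self, z: torch.Tensor, y: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        y_onehot = F.one_hot(y, self.num_classes).float()
        logq_cond = F.log_softmax(self.cond_head(torch.cat([z, y_onehot], dim=1)), dim=1)
        logq_base = F.log_softmax(self.base_head(y_onehot), dim=1)
        # I(z; b | y) ~= E[ log q(b | z, y) - log q(b | y) ]
        gap = logq_cond.gather(1, b[:, None]) - logq_base.gather(1, b[:, None])
        return gap.mean()
```

During training, such a term would plausibly be added to the task objective, e.g. loss = task_loss + lam * reg(z, y, b), with the auxiliary heads themselves fit to estimate the two conditionals; lam is a hypothetical weight. The stochastic-label-noise technique is likewise only named in the abstract; one plausible reading, sketched under that assumption with a hypothetical flip_prob hyperparameter, resamples a fraction of training labels at each step:

```python
import torch

def stochastic_label_noise(labels: torch.Tensor, num_classes: int,
                           flip_prob: float = 0.1) -> torch.Tensor:
    """Randomly resample a fraction of labels each training step.
    A speculative reading of the abstract, not the authors' exact method."""
    flip = torch.rand(labels.shape, device=labels.device) < flip_prob
    random_labels = torch.randint(0, num_classes, labels.shape, device=labels.device)
    return torch.where(flip, random_labels, labels)
```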
Related papers
- Looking at Model Debiasing through the Lens of Anomaly Detection [11.113718994341733]
Deep neural networks are sensitive to bias in the data.
We propose a new bias identification method based on anomaly detection.
We reach state-of-the-art performance on synthetic and real benchmark datasets.
arXiv Detail & Related papers (2024-07-24T17:30:21Z)
- Causality and Independence Enhancement for Biased Node Classification [56.38828085943763]
We propose a novel Causality and Independence Enhancement (CIE) framework applicable to various graph neural networks (GNNs).
Our approach estimates causal and spurious features at the node representation level and mitigates the influence of spurious correlations.
CIE not only significantly enhances the performance of GNNs but also outperforms state-of-the-art debiased node classification methods.
arXiv Detail & Related papers (2023-10-14T13:56:24Z)
- Mining bias-target Alignment from Voronoi Cells [2.66418345185993]
We propose a bias-agnostic approach to mitigate the impact of bias in deep neural networks.
Unlike traditional debiasing approaches, we rely on a metric to quantify "bias alignment/misalignment" on target classes.
Our results indicate that the proposed method achieves comparable performance to state-of-the-art supervised approaches.
arXiv Detail & Related papers (2023-05-05T17:09:01Z)
- Delving into Identify-Emphasize Paradigm for Combating Unknown Bias [52.76758938921129]
We propose an effective bias-conflicting scoring method (ECS) to boost the identification accuracy.
We also propose gradient alignment (GA) to balance the contributions of the mined bias-aligned and bias-conflicting samples.
Experiments are conducted on multiple datasets in various settings, demonstrating that the proposed solution can mitigate the impact of unknown biases.
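The summary above only names gradient alignment (GA); as a rough, hypothetical sketch of the general idea, the snippet below rescales the losses of bias-aligned and bias-conflicting samples so both groups contribute comparably to the gradient. The is_conflicting mask and the equal-group weighting are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def group_balanced_loss(logits: torch.Tensor, labels: torch.Tensor,
                        is_conflicting: torch.Tensor) -> torch.Tensor:
    """Balance the contributions of bias-aligned vs. bias-conflicting samples.
    is_conflicting is a boolean mask; a hypothetical reading of GA."""
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    conflict = per_sample[is_conflicting]
    aligned = per_sample[~is_conflicting]
    # Average each group separately so the (usually small) bias-conflicting
    # group is not drowned out by the bias-aligned majority.
    terms = [g.mean() for g in (conflict, aligned) if g.numel() > 0]
    return sum(terms) / len(terms)
```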
arXiv Detail & Related papers (2023-02-22T14:50:24Z)
- Mind Your Bias: A Critical Review of Bias Detection Methods for Contextual Language Models [2.170169149901781]
We conduct a rigorous analysis and comparison of bias detection methods for contextual language models.
Our results show that minor design and implementation decisions (or errors) have a substantial and often significant impact on the derived bias scores.
arXiv Detail & Related papers (2022-11-15T19:27:54Z)
- A Sandbox Tool to Bias(Stress)-Test Fairness Algorithms [19.86635585740634]
We present the conceptual idea and a first implementation of a bias-injection sandbox tool to investigate fairness consequences of various biases.
Unlike existing toolkits, ours provides a controlled environment to counterfactually inject biases in the ML pipeline.
In particular, we can test whether a given remedy can alleviate the injected bias by comparing the predictions made after the intervention with the true labels from the unbiased regime, that is, before any bias injection.
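As a concrete illustration of the counterfactual injection idea, and not the toolkit's actual API, the sketch below withholds positive labels from one demographic group in an otherwise clean binary-labeled dataset; a candidate remedy can then be scored against the original, pre-injection labels. All names and parameters here are hypothetical.

```python
import numpy as np

def inject_label_bias(y: np.ndarray, group: np.ndarray,
                      target_group: int, flip_rate: float,
                      seed: int = 0) -> np.ndarray:
    """Counterfactually inject label bias against one group, assuming
    binary labels in {0, 1}. Hypothetical sketch of the sandbox idea."""
    rng = np.random.default_rng(seed)
    y_biased = y.copy()
    candidates = np.flatnonzero((group == target_group) & (y == 1))
    n_flip = int(flip_rate * candidates.size)
    flipped = rng.choice(candidates, size=n_flip, replace=False)
    y_biased[flipped] = 0  # positive labels withheld from the target group
    return y_biased
```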
arXiv Detail & Related papers (2022-04-21T16:12:19Z)
- Balancing out Bias: Achieving Fairness Through Training Reweighting [58.201275105195485]
Bias in natural language processing arises from models learning characteristics of the author such as gender and race.
Existing methods for mitigating and measuring bias do not directly account for correlations between author demographics and linguistic variables.
This paper introduces a very simple but highly effective method for countering bias using instance reweighting.
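The summary does not give the weighting scheme; one standard construction, sketched below as an assumption rather than the paper's exact weights, assigns each instance the weight p(y)p(g)/p(y,g), under which labels y and demographic attributes g become uncorrelated in the reweighted training distribution.

```python
import numpy as np

def independence_weights(y: np.ndarray, g: np.ndarray) -> np.ndarray:
    """Weight each instance by p(y) * p(g) / p(y, g), decorrelating
    labels and demographics. A common construction; the paper's exact
    scheme may differ."""
    n = len(y)
    w = np.empty(n, dtype=float)
    for i in range(n):
        p_y = np.mean(y == y[i])
        p_g = np.mean(g == g[i])
        p_yg = np.mean((y == y[i]) & (g == g[i]))  # > 0: instance i counts itself
        w[i] = p_y * p_g / p_yg
    return w
```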
arXiv Detail & Related papers (2021-09-16T23:40:28Z)
- Learning Bias-Invariant Representation by Cross-Sample Mutual Information Minimization [77.8735802150511]
We propose a cross-sample adversarial debiasing (CSAD) method to remove the bias information misused by the target task.
The correlation measurement plays a critical role in adversarial debiasing and is conducted by a cross-sample neural mutual information estimator.
We conduct thorough experiments on publicly available datasets to validate the advantages of the proposed method over state-of-the-art approaches.
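The cross-sample neural mutual information estimator is not specified in the summary; a common choice is a MINE-style critic implementing the Donsker-Varadhan lower bound on I(z; b), sketched below under that assumption.

```python
import math
import torch
import torch.nn as nn

class MINECritic(nn.Module):
    """Donsker-Varadhan lower bound on I(z; b), a standard neural MI
    estimator. Whether CSAD uses this exact bound is an assumption."""

    def __init__(self, z_dim: int, b_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + b_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        joint = self.net(torch.cat([z, b], dim=1)).squeeze(1)      # pairs from p(z, b)
        b_perm = b[torch.randperm(b.shape[0])]                     # break the pairing
        marg = self.net(torch.cat([z, b_perm], dim=1)).squeeze(1)  # pairs from p(z)p(b)
        # I(z; b) >= E[T(z, b)] - log E[exp(T(z, b'))]
        return joint.mean() - (torch.logsumexp(marg, dim=0) - math.log(marg.shape[0]))
```

In an adversarial debiasing loop, the critic would be trained to maximize this bound while the encoder is trained to minimize it.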
arXiv Detail & Related papers (2021-08-11T21:17:02Z)
- Towards causal benchmarking of bias in face analysis algorithms [54.19499274513654]
We develop an experimental method for measuring algorithmic bias of face analysis algorithms.
Our proposed method is based on generating "synthetic transects" of matched sample images.
We validate our method by comparing it to a study that employs the traditional observational method for analyzing bias in gender classification algorithms.
arXiv Detail & Related papers (2020-07-13T17:10:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.