Questioning causality on sex, gender and COVID-19, and identifying bias
in large-scale data-driven analyses: the Bias Priority Recommendations and
Bias Catalog for Pandemics
- URL: http://arxiv.org/abs/2104.14492v1
- Date: Thu, 29 Apr 2021 17:07:06 GMT
- Authors: Natalia Díaz-Rodríguez, Rūta Binkytė-Sadauskienė, Wafae Bakkali,
  Sannidhi Bookseller, Paola Tubaro, Andrius Bacevicius, Raja Chatila
- Abstract summary: We highlight the challenge of making causal claims based on available data, given the lack of statistical significance and potential existence of biases.
We have compiled an encyclopedia-like reference guide, the Bias Catalog for Pandemics, to provide definitions and emphasize realistic examples of bias in general.
The objective is to anticipate and avoid disparate impact and discrimination, by considering causality, explainability, bias and techniques to mitigate the latter.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The COVID-19 pandemic has spurred a large amount of observational studies
reporting linkages between the risk of developing severe COVID-19 or dying from
it, and sex and gender. By reviewing a large body of related literature and
conducting a fine-grained analysis based on sex-disaggregated data from 61
countries spanning 5 continents, we identify several confounding factors that
could explain the supposed male vulnerability to COVID-19. We thus
highlight the challenge of making causal claims based on available data, given
the lack of statistical significance and potential existence of biases.
Informed by our findings on potential variables acting as confounders, we
contribute a broad overview of the issues that bias, explainability and
fairness entail in data-driven analyses. We then outline a set of policy
consequences that, if based on such results, could lead to unintended
discrimination. To raise awareness of the many dimensions of such foreseen
impacts, we have compiled an encyclopedia-like reference guide, the Bias
Catalog for Pandemics (BCP), to provide definitions and emphasize realistic
examples of bias in general, and within the COVID-19 pandemic context. These
are categorized within a division of bias families and a 2-level priority
scale, together with preventive steps. In addition, we provide the Bias
Priority Recommendations on how best to use and apply this catalog, along with
guidelines for addressing real-world research questions. The objective is
to anticipate and avoid disparate impact and discrimination, by considering
causality, explainability, bias and techniques to mitigate the latter. With
these, we hope to 1) contribute to designing and conducting fair and equitable
data-driven studies and research; and 2) help interpret and draw meaningful and
actionable conclusions from them.
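To make the confounding concern concrete, here is a minimal sketch (synthetic counts invented for illustration, not the paper's 61-country data) of how a crude male/female case-fatality comparison can be driven entirely by a hypothetical confounder such as smoking prevalence:

```python
# Illustrative only: synthetic counts, not the paper's data.
# A crude sex comparison suggests male vulnerability, but stratifying on a
# hypothetical confounder (smoking) shows identical fatality within strata.
import pandas as pd

df = pd.DataFrame({
    "sex":    ["M",   "M",  "F",   "F"],
    "smoker": ["yes", "no", "yes", "no"],
    "deaths": [90,    10,   18,    22],
    "cases":  [900,   500,  180,   1100],
})

# Crude comparison: pool over the confounder.
crude = df.groupby("sex")[["deaths", "cases"]].sum()
crude["cfr"] = crude["deaths"] / crude["cases"]
print(crude)  # M: ~7.1% vs F: ~3.1% -- males look more vulnerable

# Stratified comparison: case fatality within each smoking stratum.
df["cfr"] = df["deaths"] / df["cases"]
print(df.pivot(index="sex", columns="smoker", values="cfr"))
# smokers: 10% for both sexes; non-smokers: 2% for both sexes
```

In the crude table males appear more than twice as vulnerable, yet within each smoking stratum the fatality rates are identical; the aggregate gap comes solely from the sexes' different smoking prevalence.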
Related papers
- The Root Shapes the Fruit: On the Persistence of Gender-Exclusive Harms in Aligned Language Models
We center transgender, nonbinary, and other gender-diverse identities to investigate how alignment procedures interact with pre-existing gender-diverse bias.
Our findings reveal that DPO-aligned models are particularly sensitive to supervised finetuning.
We conclude with recommendations tailored to DPO and broader alignment practices.
arXiv Detail & Related papers (2024-11-06T06:50:50Z)
- Practical Guide for Causal Pathways and Sub-group Disparity Analysis
We use causal disparity analysis to quantify and examine the causal interplay between sensitive attributes and outcomes.
Our two-step investigation focuses on datasets where race serves as the sensitive attribute.
We demonstrate that the sub-groups identified by our approach as most affected by disparities are the ones with the largest ML classification errors.
arXiv Detail & Related papers (2024-07-02T22:51:01Z)
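A highly simplified sketch of that sub-group idea, not the paper's actual pipeline (data, column names and model choice are invented for illustration):

```python
# Compare each group's outcome base rate with the classifier's error rate
# on that group, on synthetic data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4000
group = rng.integers(0, 2, n)            # sensitive attribute (0/1)
x = rng.normal(size=n) + 0.8 * group     # feature correlated with group
y = (x + rng.normal(size=n) > 0.8).astype(int)

clf = LogisticRegression().fit(x.reshape(-1, 1), y)
pred = clf.predict(x.reshape(-1, 1))

report = (pd.DataFrame({"group": group, "y": y, "err": pred != y})
            .groupby("group")
            .agg(base_rate=("y", "mean"), error_rate=("err", "mean")))
print(report)  # inspect which sub-group carries both disparity and error
```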
- Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z)
- Exploring the Jungle of Bias: Political Bias Attribution in Language Models via Dependency Analysis
Large Language Models (LLMs) have sparked intense debate regarding the prevalence of bias in these models and its mitigation.
We propose a prompt-based method for the extraction of confounding and mediating attributes which contribute to the decision process.
We find that the observed disparate treatment can at least in part be attributed to confounding and mediating attributes and model misalignment.
arXiv Detail & Related papers (2023-11-15T00:02:25Z)
- Causality and Independence Enhancement for Biased Node Classification
We propose a novel Causality and Independence Enhancement (CIE) framework, applicable to various graph neural networks (GNNs).
Our approach estimates causal and spurious features at the node representation level and mitigates the influence of spurious correlations.
CIE not only significantly enhances the performance of GNNs but also outperforms state-of-the-art debiased node classification methods.
arXiv Detail & Related papers (2023-10-14T13:56:24Z)
- Targeted Data Augmentation for bias mitigation
We introduce a novel and efficient approach for addressing biases, called Targeted Data Augmentation (TDA).
Rather than undertaking the laborious task of removing biases, our method inserts biases instead, resulting in improved performance.
To identify biases, we annotated two diverse datasets: a dataset of clinical skin lesions and a dataset of male and female faces.
arXiv Detail & Related papers (2023-08-22T12:25:49Z)
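A minimal sketch of the "insert the bias" idea above, assuming an image task with a known frame artifact (frame width and probability are arbitrary choices, not the paper's settings):

```python
# During training, randomly paint a known artifact (here a black frame)
# onto images of *all* classes, so the artifact stops being predictive.
import numpy as np

_rng = np.random.default_rng()

def targeted_augment(img: np.ndarray, p: float = 0.5, frame: int = 8) -> np.ndarray:
    """With probability p, add a black frame to an HxWxC image."""
    if _rng.random() >= p:
        return img
    out = img.copy()
    out[:frame, :, :] = 0    # top
    out[-frame:, :, :] = 0   # bottom
    out[:, :frame, :] = 0    # left
    out[:, -frame:, :] = 0   # right
    return out

batch = np.random.rand(4, 128, 128, 3)  # dummy images
augmented = np.stack([targeted_augment(im) for im in batch])
```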
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
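D-BIAS itself is an interactive visual tool, but the simulate-after-editing step can be illustrated with a toy linear structural causal model (variables and coefficients below are invented for illustration):

```python
# Data come from a linear SCM; deleting the direct sensitive-attribute ->
# outcome edge and replaying the same exogenous noise yields a "debiased"
# dataset.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
a = rng.integers(0, 2, n).astype(float)  # sensitive attribute (exogenous)
u_m = rng.normal(size=n)                 # exogenous noise, mediator
u_y = rng.normal(size=n)                 # exogenous noise, outcome

def simulate(direct_edge: float) -> np.ndarray:
    m = 0.5 * a + u_m                    # mediated path (kept by the user)
    return m + direct_edge * a + u_y     # outcome score

for name, y in [("biased", simulate(1.0)), ("edge deleted", simulate(0.0))]:
    gap = y[a == 1].mean() - y[a == 0].mean()
    print(f"{name}: mean outcome gap = {gap:.2f}")
```

Deleting the direct edge removes most of the gap; the remainder flows through the mediated path that was deliberately kept, mirroring how the tool lets the user decide which causal paths count as unfair.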
- Bounding Counterfactuals under Selection Bias
We propose a first algorithm to address both identifiable and unidentifiable queries.
We prove that, in spite of the missingness induced by the selection bias, the likelihood of the available data is unimodal.
arXiv Detail & Related papers (2022-07-26T10:33:10Z)
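The paper's algorithm is more involved; the classic worst-case (Manski-style) bound below merely illustrates why selection bias turns a point estimate into an interval (all counts are invented):

```python
# With a fraction of units unobserved, P(Y=1) can only be bounded: assign
# the missing units all-zero or all-one outcomes to get the two extremes.
n_obs, n_pos, n_missing = 800, 200, 200

p_obs = n_pos / n_obs                    # P(Y=1 | selected) = 0.25
p_sel = n_obs / (n_obs + n_missing)      # P(selected) = 0.8

lower = p_obs * p_sel                    # assume every missing unit has Y=0
upper = p_obs * p_sel + (1 - p_sel)      # assume every missing unit has Y=1
print(f"P(Y=1) lies in [{lower:.2f}, {upper:.2f}]")  # [0.20, 0.40]
```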
- Toward Understanding Bias Correlations for Mitigation in NLP
This work aims to provide a first systematic study toward understanding bias correlations in mitigation.
We examine bias mitigation in two common NLP tasks -- toxicity detection and word embeddings.
Our findings suggest that biases are correlated and present scenarios in which independent debiasing approaches may be insufficient.
arXiv Detail & Related papers (2022-05-24T22:48:47Z)
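A toy illustration of correlated biases, not the paper's experiments: projecting one bias direction out of word-embedding-like vectors (Bolukbasi-style hard debiasing) zeroes that metric, while a correlated second metric stays well above an unbiased baseline. All directions and embeddings are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 50, 2000

g = rng.normal(size=d)
g /= np.linalg.norm(g)                   # first bias direction
e = rng.normal(size=d)
e -= (e @ g) * g                         # component orthogonal to g
e /= np.linalg.norm(e)
b = 0.6 * g + 0.8 * e                    # second direction, cos(b, g) = 0.6

words = rng.normal(size=(n, d)) + 3.0 * rng.normal(size=(n, 1)) * b

def score(emb: np.ndarray, v: np.ndarray) -> float:
    """Mean absolute projection onto a unit direction: a crude bias metric."""
    return float(np.abs(emb @ v).mean())

clean = words - np.outer(words @ g, g)   # hard debiasing: project out g
baseline = rng.normal(size=(n, d))       # unbiased reference embeddings

for name, v in [("direction g", g), ("correlated direction b", b)]:
    print(f"{name}: before={score(words, v):.2f}  "
          f"after={score(clean, v):.2f}  baseline={score(baseline, v):.2f}")
```

Debiasing along g drives that metric to zero, yet the score along the correlated direction b remains roughly twice the unbiased baseline, echoing the finding that independent debiasing approaches may be insufficient.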