Causal effect of racial bias in data and machine learning algorithms on user persuasiveness & discriminatory decision making: An Empirical Study
- URL: http://arxiv.org/abs/2202.00471v2
- Date: Wed, 2 Feb 2022 03:20:41 GMT
- Title: Causal effect of racial bias in data and machine learning algorithms on user persuasiveness & discriminatory decision making: An Empirical Study
- Authors: Kinshuk Sengupta and Praveen Ranjan Srivastava
- Abstract summary: Language data and models demonstrate various types of bias, be it ethnic, religious, gender, or socioeconomic.
The motivation of the study is to investigate how AI systems absorb bias from data and produce unexplainable discriminatory outcomes.
The paper addresses the harm that inequitable system design does to customer trust.
- Score: 1.713291434132985
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Language data and models demonstrate various types of bias, be it ethnic,
religious, gender, or socioeconomic. When trained on racially biased datasets,
AI/NLP models suffer from poor explainability, influence user experience during
decision making, and thus further magnify societal biases, raising profound
ethical implications for society. The motivation of the study is to investigate
how AI systems absorb bias from data, produce unexplainable discriminatory
outcomes, and, through the racial bias features present in datasets, affect how
individuals interpret and articulate system outcomes. The experiment is designed
to study the counterfactual impact of racial bias features present in language
datasets and their associated effect on model outcomes. A mixed-methods design,
centered on a controlled lab experiment, is adopted to investigate how biased
model outcomes affect user experience and decision-making. The findings provide
foundational support for the claim that biased concepts present in a dataset
carry over into artificial intelligence models solving NLP tasks. Further, the
results establish a negative influence on users' persuasiveness that alters an
individual's decision-making when relying on the model outcome to act. The paper
addresses the harm that inequitable system design does to customer trust and
provides strong support for researchers, policymakers, and data scientists
building responsible AI frameworks within organizations.
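
The counterfactual design described above can be illustrated with a small probe: hold a text fixed, flip only a race-indicative token, and measure how far the prediction moves. The sketch below is a minimal illustration, not the authors' experimental code; the toy classifier, training sentences, and approve/reject task are invented assumptions.

```python
# Minimal counterfactual bias probe -- illustrative only, not the paper's code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data in which a race-indicative token is spuriously correlated
# with the label -- the kind of racial bias feature the study probes.
train_texts = [
    "the white applicant was punctual and well qualified",
    "the white applicant submitted a complete application",
    "the black applicant missed the stated deadline",
    "the black applicant submitted an incomplete application",
]
train_labels = [1, 1, 0, 0]  # 1 = approve, 0 = reject

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Counterfactual pair: identical content, only the race token is flipped.
factual = "the white applicant submitted a complete application"
counterfactual = "the black applicant submitted a complete application"
p_fact, p_counter = model.predict_proba([factual, counterfactual])[:, 1]

print(f"P(approve | factual):        {p_fact:.2f}")
print(f"P(approve | counterfactual): {p_counter:.2f}")
# A nonzero gap means the race token alone moves the decision.
print(f"counterfactual gap:          {p_fact - p_counter:+.2f}")
```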
Related papers
- GPT in Data Science: A Practical Exploration of Model Selection [0.7646713951724013]
This research is committed to advancing our comprehension of AI decision-making processes.
Our efforts are directed towards creating AI systems that are more transparent and comprehensible.
arXiv Detail & Related papers (2023-11-20T03:42:24Z)
- Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [63.67533153887132]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z)
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
- Sensitivity, Performance, Robustness: Deconstructing the Effect of Sociodemographic Prompting [64.80538055623842]
Sociodemographic prompting is a technique that steers the output of prompt-based models towards the answers that humans with specific sociodemographic profiles would give.
We show that sociodemographic information affects model predictions and can be beneficial for improving zero-shot learning in subjective NLP tasks (a toy sketch of the prompting pattern follows this entry).
arXiv Detail & Related papers (2023-09-13T15:42:06Z)
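
As a hedged illustration of the technique this entry describes, the snippet below only constructs the persona-prefixed prompt; the template wording, the persona, and the toxicity task are invented, not the paper's exact prompts.

```python
# Sociodemographic prompting sketch: prepend a persona so that a prompt-based
# model answers "as" that profile. Template wording is illustrative.
def sociodemographic_prompt(persona: str, instruction: str, text: str) -> str:
    return (
        f"Take the role of a {persona}.\n"
        f"{instruction}\n"
        f"Text: {text!r}\n"
        "Answer:"
    )

prompt = sociodemographic_prompt(
    persona="45-year-old woman with a working-class background",
    instruction="Is the following statement toxic? Answer yes or no.",
    text="People like you never get anything right.",
)
print(prompt)
# Send the same text under different personas to any prompt-based model and
# compare the answers to measure the effect of sociodemographic steering.
```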
- Ground(less) Truth: A Causal Framework for Proxy Labels in Human-Algorithm Decision-Making [29.071173441651734]
We identify five sources of target variable bias that can impact the validity of proxy labels in human-AI decision-making tasks.
We develop a causal framework to disentangle the relationships among these biases.
We conclude by discussing opportunities to better address target variable bias in future research (a toy simulation of one such bias follows this entry).
arXiv Detail & Related papers (2023-02-13T16:29:11Z)
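
One concrete instance of target variable bias is measurement error that differs by group, e.g. re-arrest used as a proxy for re-offense. The simulation below is a hypothetical illustration with invented rates, not the paper's causal framework.

```python
# Toy simulation of group-dependent measurement error in a proxy label.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)            # two demographic groups
true_outcome = rng.binomial(1, 0.3, n)   # the target we actually care about

# The event is recorded only when detected, and detection differs by group
# (invented rates), so the proxy understates group 0 far more than group 1.
detection_rate = np.where(group == 1, 0.9, 0.5)
proxy_label = true_outcome * rng.binomial(1, detection_rate)

for g in (0, 1):
    mask = group == g
    print(f"group {g}: true rate {true_outcome[mask].mean():.2f}, "
          f"proxy rate {proxy_label[mask].mean():.2f}")
# Both groups share the same true rate, yet the proxy makes group 1 look
# riskier; any model trained on proxy_label inherits this target variable bias.
```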
- Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? [90.35044668396591]
A recurring theme in machine learning is algorithmic monoculture: the same systems, or systems that share components, are deployed by multiple decision-makers.
We propose the component-sharing hypothesis: if decision-makers share components like training data or specific models, then they will produce more homogeneous outcomes.
We test this hypothesis on algorithmic fairness benchmarks, demonstrating that sharing training data reliably exacerbates homogenization.
We conclude with philosophical analyses of and societal challenges for outcome homogenization, with an eye towards implications for deployed machine learning systems (a minimal simulation of the hypothesis follows this entry).
arXiv Detail & Related papers (2022-11-25T09:33:11Z)
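
A minimal way to see the component-sharing hypothesis: compare how often two decision-makers reject the same applicants when they train on shared versus independent data. Everything below (the data model, sample sizes, the logistic learners) is an invented toy setup, not the paper's benchmark.

```python
# Toy test of the component-sharing hypothesis: decision-makers trained on
# the same data reject the same individuals. All numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
w_true = rng.normal(size=5)

def sample(n):
    """Draw applicant features and noisy ground-truth labels."""
    X = rng.normal(size=(n, 5))
    y = (X @ w_true + rng.normal(size=n) > 0).astype(int)
    return X, y

def joint_rejections(shared: bool) -> float:
    """Fraction of test applicants rejected by BOTH decision-makers."""
    if shared:
        X, y = sample(200)
        makers = [LogisticRegression().fit(X, y) for _ in range(2)]
    else:
        makers = [LogisticRegression().fit(*sample(200)) for _ in range(2)]
    X_test, _ = sample(20_000)
    r0, r1 = (m.predict(X_test) == 0 for m in makers)
    return float((r0 & r1).mean())

print(f"shared training data:      {joint_rejections(True):.3f} rejected by both")
print(f"independent training data: {joint_rejections(False):.3f} rejected by both")
# Sharing training data concentrates rejections on the same applicants --
# the outcome homogenization the paper studies.
```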
- Investigating Bias with a Synthetic Data Generator: Empirical Evidence and Philosophical Interpretation [66.64736150040093]
Machine learning applications are becoming increasingly pervasive in our society.
The risk is that they will systematically spread the bias embedded in data.
We propose to analyze biases by introducing a framework for generating synthetic data with specific types of bias and their combinations (a minimal sketch of such a generator follows this entry).
arXiv Detail & Related papers (2022-09-13T11:18:50Z)
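
In the spirit of this entry, a generator can inject a chosen bias into otherwise clean synthetic data. Below, a hypothetical "historical bias" knob flips favorable labels for one group at a controlled rate; the framework's actual bias taxonomy and interface are not reproduced here.

```python
# Hypothetical synthetic-data generator with one tunable bias knob -- a
# sketch of the idea, not the paper's framework.
import numpy as np

def generate(n: int, label_flip_bias: float, seed: int = 0):
    """Generate clean data, then inject historical bias: favorable labels
    (y=1) in the disadvantaged group (g=1) flip to 0 at the given rate."""
    rng = np.random.default_rng(seed)
    g = rng.integers(0, 2, n)            # protected attribute
    x = rng.normal(size=(n, 3))          # task features, independent of g
    y = (x.sum(axis=1) + rng.normal(size=n) > 0).astype(int)
    flip = (g == 1) & (y == 1) & (rng.random(n) < label_flip_bias)
    return x, g, np.where(flip, 0, y)

for bias in (0.0, 0.2, 0.4):
    x, g, y = generate(50_000, bias)
    r0, r1 = y[g == 0].mean(), y[g == 1].mean()
    print(f"bias={bias:.1f}: favorable-label rate g0={r0:.2f}, g1={r1:.2f}")
# Raising the knob widens the group gap; combining several such knobs lets
# one study bias types and their interactions, as the paper proposes.
```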
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, such as weakening or deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset (a toy version of edge deletion follows this entry).
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
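
To make the edge-editing idea concrete, here is a toy linear structural model where deleting the gender -> salary edge and regenerating the data removes the unfair dependence. The graph, weights, and variable names are invented for illustration; D-BIAS itself is an interactive visual tool.

```python
# Toy causal-edge deletion in a linear structural model, in the spirit of
# D-BIAS's simulate-a-debiased-dataset step. Graph and weights are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
gender = rng.integers(0, 2, n)     # exogenous protected attribute
experience = rng.normal(5, 2, n)   # exogenous skill variable

def simulate_salary(gender_to_salary: float) -> np.ndarray:
    """Structural equation: salary := 3*experience + w*gender + noise."""
    return 3 * experience + gender_to_salary * gender + rng.normal(0, 1, n)

biased = simulate_salary(gender_to_salary=4.0)    # unfair edge present
debiased = simulate_salary(gender_to_salary=0.0)  # user deletes the edge

for name, salary in (("biased", biased), ("debiased", debiased)):
    gap = salary[gender == 1].mean() - salary[gender == 0].mean()
    print(f"{name}: mean salary gap between genders = {gap:+.2f}")
# Deleting the edge regenerates the downstream variable from the modified
# structural equations, yielding the new (debiased) dataset.
```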
- Investigations of Performance and Bias in Human-AI Teamwork in Hiring [30.046502708053097]
In AI-assisted decision-making, effective human-AI teamwork does not depend on AI performance alone.
We investigate how both a model's predictive performance and bias may transfer to humans in a recommendation-aided decision task.
arXiv Detail & Related papers (2022-02-21T17:58:07Z)
- Fairness-aware Summarization for Justified Decision-Making [16.47665757950391]
We focus on the problem of (un)fairness in the justifications produced by text-based neural models.
We propose a fairness-aware summarization mechanism to detect and counteract the bias in such models.
arXiv Detail & Related papers (2021-07-13T17:04:10Z)
- Model Selection's Disparate Impact in Real-World Deep Learning Applications [3.924854655504237]
Algorithmic fairness has emphasized the role of biased data in automated decision outcomes.
We contend that one source of such bias, human preferences in model selection, remains under-explored in its role in disparate impact across demographic groups (a minimal illustration follows this entry).
arXiv Detail & Related papers (2021-04-01T16:37:01Z)
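
A minimal illustration of the entry above, with invented error rates: two candidate models tie on overall accuracy, so selection by accuracy alone is blind to their very different group-level impacts.

```python
# Invented illustration: accuracy-equal models with unequal group-wise error.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
y = rng.integers(0, 2, n)
group = rng.integers(0, 2, n)

# Model A errs uniformly at 10%; model B errs 5% on group 0, 15% on group 1.
flip_a = rng.random(n) < 0.10
flip_b = rng.random(n) < np.where(group == 0, 0.05, 0.15)
preds = {"A": np.where(flip_a, 1 - y, y), "B": np.where(flip_b, 1 - y, y)}

for name, pred in preds.items():
    acc = (pred == y).mean()
    by_group = [(pred[group == k] == y[group == k]).mean() for k in (0, 1)]
    print(f"model {name}: accuracy {acc:.3f}, "
          f"group gap {abs(by_group[0] - by_group[1]):.3f}")
# Both report ~0.90 accuracy, but B concentrates errors on one group -- a
# disparate impact that accuracy-only model selection never surfaces.
```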
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.