Bias: Friend or Foe? User Acceptance of Gender Stereotypes in Automated
Career Recommendations
- URL: http://arxiv.org/abs/2106.07112v1
- Date: Sun, 13 Jun 2021 23:27:45 GMT
- Title: Bias: Friend or Foe? User Acceptance of Gender Stereotypes in Automated
Career Recommendations
- Authors: Clarice Wang, Kathryn Wang, Andrew Bian, Rashidul Islam, Kamrun Naher
Keya, James Foulds, Shimei Pan
- Abstract summary: We show that a fair AI algorithm on its own may be insufficient to achieve its intended results in the real world.
Using career recommendation as a case study, we build a fair AI career recommender by employing gender debiasing machine learning techniques.
- Score: 8.44485053836748
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Currently, there is a surge of interest in fair Artificial Intelligence (AI)
and Machine Learning (ML) research which aims to mitigate discriminatory bias
in AI algorithms, e.g. along lines of gender, age, and race. While most
research in this domain focuses on developing fair AI algorithms, in this work,
we show that a fair AI algorithm on its own may be insufficient to achieve its
intended results in the real world. Using career recommendation as a case
study, we build a fair AI career recommender by employing gender debiasing
machine learning techniques. Our offline evaluation showed that the debiased
recommender makes fairer career recommendations without sacrificing its
accuracy. Nevertheless, an online user study of more than 200 college students
revealed that participants on average prefer the original biased system over
the debiased system. Specifically, we found that perceived gender disparity is
a determining factor for the acceptance of a recommendation. In other words,
our results demonstrate we cannot fully address the gender bias issue in AI
recommendations without addressing the gender bias in humans.
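As a rough illustration of the kind of disparity the offline evaluation measures, the sketch below computes a per-career demographic-parity gap between recommendation rates for women and men. The data layout and career names are hypothetical; this is not the authors' debiasing pipeline, only a minimal sketch of the fairness check the abstract implies.

```python
# Minimal sketch: per-career gender disparity in recommendations.
# The (recommended_career, user_gender) layout is hypothetical, not the
# authors' actual data format or debiasing pipeline.
from collections import Counter

def recommendation_rates(recs, genders):
    """Fraction of users in each gender group who were recommended each career."""
    totals = Counter(genders)
    counts = {}
    for career, gender in zip(recs, genders):
        counts.setdefault(career, Counter())[gender] += 1
    return {career: {g: c[g] / totals[g] for g in totals} for career, c in counts.items()}

def disparity(rates, group_a="F", group_b="M"):
    """Per-career demographic-parity gap: |P(rec | F) - P(rec | M)|."""
    return {career: abs(r.get(group_a, 0.0) - r.get(group_b, 0.0)) for career, r in rates.items()}

# Toy example: "nurse" is pushed to women twice as often as to men.
recs    = ["nurse", "engineer", "nurse", "engineer", "nurse", "engineer"]
genders = ["F",     "F",        "F",     "M",        "M",     "M"]
print(disparity(recommendation_rates(recs, genders)))  # gaps of about 0.33 for both careers
```

A debiasing method of the kind described above would aim to drive these gaps toward zero while keeping recommendation accuracy roughly unchanged.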
Related papers
- Revealing and Reducing Gender Biases in Vision and Language Assistants (VLAs) [82.57490175399693]
We study gender bias in 22 popular image-to-text vision-language assistants (VLAs).
Our results show that VLAs replicate human biases likely present in the data, such as real-world occupational imbalances.
To eliminate the gender bias in these models, we find that finetuning-based debiasing methods achieve the best tradeoff between debiasing and retaining performance on downstream tasks.
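A common ingredient of finetuning-based debiasing is counterfactual augmentation, where gendered terms in the finetuning text are swapped so the model sees both variants of each example. The sketch below shows only that generic idea; the word list is a crude assumption and this is not the procedure used in the paper summarized above.

```python
import re

# Illustrative, non-exhaustive list of gendered term pairs (an assumption,
# not the lexicon used in the paper summarized above).
GENDER_PAIRS = [("he", "she"), ("him", "her"), ("his", "her"),
                ("man", "woman"), ("men", "women"), ("male", "female")]
SWAP = {}
for a, b in GENDER_PAIRS:
    SWAP[a], SWAP[b] = b, a

def gender_swap(caption: str) -> str:
    """Return a counterfactual caption with gendered words swapped."""
    def repl(match):
        word = match.group(0)
        swapped = SWAP.get(word.lower(), word)
        return swapped.capitalize() if word[0].isupper() else swapped
    pattern = r"\b(" + "|".join(SWAP) + r")\b"
    return re.sub(pattern, repl, caption, flags=re.IGNORECASE)

# Finetuning data would then include both the original and the swapped caption.
print(gender_swap("A man repairing his car"))  # -> "A woman repairing her car"
```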
arXiv Detail & Related papers (2024-10-25T05:59:44Z)
- The Pursuit of Fairness in Artificial Intelligence Models: A Survey [2.124791625488617]
This survey offers a synopsis of the different ways researchers have promoted fairness in AI systems.
A thorough study is conducted of the approaches and techniques employed by researchers to mitigate bias in AI models.
We also delve into the impact of biased models on user experience and the ethical considerations to contemplate when developing and deploying such models.
arXiv Detail & Related papers (2024-03-26T02:33:36Z)
- "I'm Not Confident in Debiasing AI Systems Since I Know Too Little":
Teaching AI Creators About Gender Bias Through Hands-on Tutorials [11.823789408603908]
Gender bias is rampant in AI systems, causing bad user experience, injustices, and mental harm to women.
School curricula fail to educate AI creators on this topic.
arXiv Detail & Related papers (2023-09-15T03:09:36Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed
on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Whole Page Unbiased Learning to Rank [59.52040055543542]
Unbiased Learning to Rank (ULTR) algorithms are proposed to learn an unbiased ranking model from biased click data.
We propose a Bias Agnostic whole-page unbiased Learning to rank algorithm, named BAL, to automatically find the user behavior model.
Experimental results on a real-world dataset verify the effectiveness of BAL.
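For background, a standard building block in unbiased learning to rank is inverse propensity scoring, which reweights clicks by the estimated probability that the position was examined. The snippet sketches that generic idea with made-up examination probabilities; it is not the BAL algorithm, which learns the user behavior model automatically.

```python
import numpy as np

# Estimated probability that a user examines each rank position (values are
# assumptions for illustration; BAL instead learns the behavior model itself).
examination_prob = np.array([0.68, 0.61, 0.48, 0.34, 0.28])

def ips_weighted_relevance(clicks, propensities):
    """Debias click feedback: a click at a rarely examined position counts more."""
    clicks = np.asarray(clicks, dtype=float)
    return clicks / propensities

clicks = [1, 0, 0, 1, 0]          # observed clicks for one query's top-5 results
print(ips_weighted_relevance(clicks, examination_prob))  # click at rank 4 weighs ~2.94
```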
arXiv Detail & Related papers (2022-10-19T16:53:08Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling
Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
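A toy version of the "delete a biased causal edge, then simulate a debiased dataset" step might look like the following; the variables, coefficients, and linear model are invented for illustration and do not reflect D-BIAS's actual simulation method.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Toy linear causal model (all coefficients invented for illustration):
# gender -> salary_offer, experience -> salary_offer
gender = rng.integers(0, 2, n)            # 0 = male, 1 = female
experience = rng.normal(5, 2, n)
noise = rng.normal(0, 1, n)
salary = 50 + 4 * experience - 6 * gender + noise   # biased edge: -6 * gender

# "Delete" the biased causal edge gender -> salary and re-simulate the outcome
# from the remaining parents, keeping the same noise realization.
salary_debiased = 50 + 4 * experience + noise

print("mean gap before:", salary[gender == 0].mean() - salary[gender == 1].mean())
print("mean gap after: ", salary_debiased[gender == 0].mean() - salary_debiased[gender == 1].mean())
```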
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Fair Representation Learning for Heterogeneous Information Networks [35.80367469624887]
We propose a comprehensive set of de-biasing methods for fair HIN representation learning.
We study the behavior of these algorithms, especially their capability in balancing the trade-off between fairness and prediction accuracy.
We evaluate the performance of the proposed methods in an automated career counseling application.
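The fairness-accuracy trade-off can be seen even in a toy post-processing experiment: lowering the decision threshold for the disadvantaged group narrows the demographic-parity gap while nudging accuracy down. The synthetic setup below only illustrates that trade-off; it is not the paper's HIN de-biasing methods.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, n)                       # protected attribute: 0 / 1
# Different base rates per group, so enforcing parity must cost some accuracy.
label = rng.binomial(1, np.where(group == 0, 0.6, 0.4))
score = label + rng.normal(0, 0.6, n)               # a reasonably accurate model score

for shift in (0.0, 0.1, 0.2):                       # lower the threshold for group 1
    pred = (score > 0.5 - shift * group).astype(int)
    accuracy = (pred == label).mean()
    parity_gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
    print(f"shift={shift:.1f}  accuracy={accuracy:.3f}  parity_gap={parity_gap:.3f}")
# As shift grows, the parity gap shrinks while accuracy dips slightly.
```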
arXiv Detail & Related papers (2021-04-18T08:28:18Z)
- On the Basis of Sex: A Review of Gender Bias in Machine Learning
Applications [0.0]
We first introduce several examples of machine learning gender bias in practice.
We then detail the most widely used formalizations of fairness in order to address how to make machine learning models fairer.
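Two of the most widely used formalizations, demographic parity and equalized odds, fit in a few lines of code. The sketch below is generic (the array inputs are assumed) and is not taken from the review itself.

```python
import numpy as np

def demographic_parity_gap(pred, group):
    """|P(pred = 1 | group = 0) - P(pred = 1 | group = 1)|"""
    pred, group = np.asarray(pred), np.asarray(group)
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

def equalized_odds_gap(pred, label, group):
    """Largest gap in true-positive or false-positive rate across groups."""
    pred, label, group = map(np.asarray, (pred, label, group))
    gaps = []
    for y in (1, 0):                      # TPR gap (y = 1) and FPR gap (y = 0)
        rates = [pred[(group == a) & (label == y)].mean() for a in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

pred  = [1, 0, 1, 1, 0, 1, 0, 0]
label = [1, 0, 1, 0, 0, 1, 1, 0]
group = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(pred, group))        # 0.5
print(equalized_odds_gap(pred, label, group))     # 0.5
```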
arXiv Detail & Related papers (2021-04-06T14:11:16Z)
- Biased Programmers? Or Biased Data? A Field Experiment in
Operationalizing AI Ethics [6.946103498518291]
We evaluate 8.2 million algorithmic predictions of math performance from approximately 400 AI engineers.
We find that biased predictions are mostly caused by biased training data.
One-third of the benefit of better training data comes through a novel economic mechanism.
arXiv Detail & Related papers (2020-12-04T04:12:33Z)
- Gender Stereotype Reinforcement: Measuring the Gender Bias Conveyed by
Ranking Algorithms [68.85295025020942]
We propose the Gender Stereotype Reinforcement (GSR) measure, which quantifies the tendency of a search engine to support gender stereotypes.
GSR is the first specifically tailored measure for Information Retrieval, capable of quantifying representational harms.
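As a loose point of reference only (the GSR measure itself is defined in the paper and is specific to Information Retrieval), one simple way to quantify gender skew in a ranked list is a rank-discounted average of per-result genderedness scores; the coding scheme below is an assumption for this sketch, not the GSR definition.

```python
import numpy as np

def exposure_skew(gender_scores, max_rank=10):
    """Rank-discounted average of per-result 'genderedness' scores.

    gender_scores: list ordered by rank, each in [-1, 1]
      (negative = stereotypically female-coded result, positive = male-coded,
       0 = neutral). The coding scheme is an assumption for this sketch.
    """
    scores = np.asarray(gender_scores[:max_rank], dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, len(scores) + 2))   # DCG-style weights
    return float(np.sum(discounts * scores) / np.sum(discounts))

# A ranking for the query "nurse" that surfaces mostly female-coded results
# gets a strongly negative skew; a balanced ranking stays near zero.
print(exposure_skew([-1, -1, 0, -1, 1, 0]))
```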
arXiv Detail & Related papers (2020-09-02T20:45:04Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and accepts no responsibility for any consequences of its use.