Detection and Mitigation of Bias in Ted Talk Ratings
- URL: http://arxiv.org/abs/2003.00683v1
- Date: Mon, 2 Mar 2020 06:13:24 GMT
- Title: Detection and Mitigation of Bias in Ted Talk Ratings
- Authors: Rupam Acharyya, Shouman Das, Ankani Chattoraj, Oishani Sengupta, Md Iftekar Tanveer
- Abstract summary: Implicit bias is a form of behavioral conditioning that leads us to attribute predetermined characteristics to members of certain groups.
This paper quantifies implicit bias in viewer ratings of TEDTalks, a diverse social platform assessing social and professional performance.
- Score: 3.3598755777055374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unbiased data collection is essential to guaranteeing fairness in artificial
intelligence models. Implicit bias, a form of behavioral conditioning that
leads us to attribute predetermined characteristics to members of certain
groups, informs the data collection process. This paper quantifies implicit
bias in viewer ratings of TEDTalks, a diverse social platform assessing social
and professional performance, in order to present the correlations of different
kinds of bias across sensitive attributes. Although the viewer ratings of these
videos should purely reflect the speaker's competence and skill, our analysis
of the ratings demonstrates the presence of overwhelming and predominant
implicit bias with respect to race and gender. In our paper, we present
strategies to detect and mitigate bias that are critical to removing unfairness
in AI.
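The authors' detection and mitigation strategies are described in the paper itself; the sketch below only illustrates the kind of group-disparity check the abstract motivates. The data file, the column names (`rating`, `gender`), and the permutation test are assumptions made for illustration, not the authors' method.

```python
import numpy as np
import pandas as pd  # only needed for the commented usage below

def rating_gap(df, rating_col, group_col, a, b):
    """Mean rating difference between two speaker groups."""
    ga = df.loc[df[group_col] == a, rating_col]
    gb = df.loc[df[group_col] == b, rating_col]
    return ga.mean() - gb.mean()

def permutation_pvalue(df, rating_col, group_col, a, b, n_perm=10_000, seed=0):
    """Two-sided permutation test: how often does shuffling the group
    labels produce a gap at least as large as the observed one?"""
    rng = np.random.default_rng(seed)
    observed = rating_gap(df, rating_col, group_col, a, b)
    sub = df[df[group_col].isin([a, b])]
    ratings = sub[rating_col].to_numpy()
    labels = sub[group_col].to_numpy()
    gaps = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(labels)
        gaps[i] = ratings[perm == a].mean() - ratings[perm == b].mean()
    return float((np.abs(gaps) >= abs(observed)).mean()), observed

# Hypothetical usage (file and column names are assumptions):
# talks = pd.read_csv("ted_ratings.csv")
# p, gap = permutation_pvalue(talks, "rating", "gender", "female", "male")
```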
Related papers
- Intertwined Biases Across Social Media Spheres: Unpacking Correlations in Media Bias Dimensions [12.588239777597847]
Media bias significantly shapes public perception by reinforcing stereotypes and exacerbating societal divisions.
We introduce a novel dataset collected from YouTube and Reddit over the past five years.
Our dataset includes automated annotations for YouTube content across a broad spectrum of bias dimensions.
arXiv Detail & Related papers (2024-08-27T21:03:42Z)
- Mitigating Biases in Collective Decision-Making: Enhancing Performance in the Face of Fake News [4.413331329339185]
We study the influence these biases can have on the pervasive problem of fake news by evaluating human participants' capacity to identify false headlines.
By focusing on headlines involving sensitive characteristics, we gather a comprehensive dataset to explore how human responses are shaped by their biases.
We show that demographic factors, headline categories, and the manner in which information is presented significantly influence errors in human judgment.
arXiv Detail & Related papers (2024-03-11T12:08:08Z)
- Quantifying Bias in Text-to-Image Generative Models [49.60774626839712]
Bias in text-to-image (T2I) models can propagate unfair social representations and may be used to aggressively market ideas or push controversial agendas.
Existing T2I model bias evaluation methods only focus on social biases.
We propose an evaluation methodology to quantify general biases in T2I generative models, without any preconceived notions.
arXiv Detail & Related papers (2023-12-20T14:26:54Z)
- Gender Biases in Automatic Evaluation Metrics for Image Captioning [87.15170977240643]
We conduct a systematic study of gender biases in model-based evaluation metrics for image captioning tasks.
We demonstrate the negative consequences of using these biased metrics, including the inability to differentiate between biased and unbiased generations.
We present a simple and effective way to mitigate the metric bias without hurting the correlations with human judgments.
arXiv Detail & Related papers (2023-05-24T04:27:40Z)
- Unveiling the Hidden Agenda: Biases in News Reporting and Consumption [59.55900146668931]
We build a six-year dataset on the Italian vaccine debate and adopt a Bayesian latent space model to identify narrative and selection biases.
We find a nonlinear relationship between biases and engagement, with higher engagement for extreme positions.
Analysis of news consumption on Twitter reveals common audiences among news outlets with similar ideological positions.
arXiv Detail & Related papers (2023-01-14T18:58:42Z)
- No Word Embedding Model Is Perfect: Evaluating the Representation Accuracy for Social Bias in the Media [17.4812995898078]
We study what kind of embedding algorithm serves best to accurately measure types of social bias known to exist in US online news articles.
We collect 500k articles and review psychology literature with respect to expected social bias.
We compare how models trained with the algorithms on news articles represent the expected social bias.
arXiv Detail & Related papers (2022-11-07T15:45:52Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening or deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset (a toy illustration of edge weakening appears after this list).
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- The SAME score: Improved cosine based bias score for word embeddings [49.75878234192369]
We introduce SAME, a novel bias score for semantic bias in embeddings.
We show that SAME is capable of measuring semantic bias and identify potential causes of social bias in downstream tasks (a generic cosine-based bias score is sketched after this list).
arXiv Detail & Related papers (2022-03-28T09:28:13Z)
- Balancing out Bias: Achieving Fairness Through Training Reweighting [58.201275105195485]
Bias in natural language processing arises from models learning characteristics of the author such as gender and race.
Existing methods for mitigating and measuring bias do not directly account for correlations between author demographics and linguistic variables.
This paper introduces a very simple but highly effective method for countering bias using instance reweighting (a baseline reweighting scheme is sketched after this list).
arXiv Detail & Related papers (2021-09-16T23:40:28Z)
- Enabling News Consumers to View and Understand Biased News Coverage: A Study on the Perception and Visualization of Media Bias [7.092487352312782]
We create three manually annotated datasets and test varying visualization strategies.
Results show no strong effect on the treatment groups, which were made aware of the bias, compared to the control group.
Using a multilevel model, we find that perceived journalist bias is significantly related to perceived political extremeness and impartiality of the article.
arXiv Detail & Related papers (2021-05-20T10:16:54Z)
- Grading video interviews with fairness considerations [1.7403133838762446]
We present a methodology to automatically derive social skills of candidates based on their video responses to interview questions.
We develop two machine-learning models to predict social skills.
We analyze fairness by studying the errors of models by race and gender.
arXiv Detail & Related papers (2020-07-02T10:06:13Z)
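On the D-BIAS entry above: its dataset-simulation method is the paper's own contribution, so the following is only a toy linear structural model showing, in principle, what weakening a causal edge can mean. Every variable name and coefficient here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000

# Toy structural model with one biased edge: gender -> salary.
gender = rng.integers(0, 2, n)           # 0/1 sensitive attribute (hypothetical)
experience = rng.normal(10.0, 3.0, n)
noise = rng.normal(0.0, 1.0, n)
salary = 5.0 * gender + 2.0 * experience + noise

def regenerate_salary(edge_strength):
    """Re-simulate salary with the gender -> salary coefficient scaled
    by edge_strength (1.0 keeps the edge, 0.0 deletes it)."""
    return 5.0 * edge_strength * gender + 2.0 * experience + noise

def group_gap(s):
    return s[gender == 1].mean() - s[gender == 0].mean()

debiased = regenerate_salary(0.0)
print("gap before:", round(group_gap(salary), 2))    # roughly 5.0
print("gap after: ", round(group_gap(debiased), 2))  # roughly 0.0
```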
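On the SAME entry above: the exact SAME formula is defined in that paper, so the sketch below shows only the generic cosine-centroid style of embedding bias score that SAME improves on. The attribute word lists and the centroid-difference score are illustrative assumptions.

```python
import numpy as np

def centroid(emb, words):
    """Mean vector of an attribute word set."""
    return np.mean([emb[w] for w in words], axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def cosine_bias(emb, target, attr_a, attr_b):
    """Cosine similarity of a target word to the centroid of attribute
    set A minus its similarity to the centroid of set B; 0 means the
    target sits symmetrically between the two groups."""
    t = emb[target]
    return cosine(t, centroid(emb, attr_a)) - cosine(t, centroid(emb, attr_b))

# Hypothetical usage with any word -> numpy-vector mapping `vecs`:
# score = cosine_bias(vecs, "engineer", ["she", "woman", "her"], ["he", "man", "his"])
```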
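On the training-reweighting entry above: the summary does not give the paper's exact weighting scheme, so this sketch shows a common baseline, inverse joint-frequency weighting over (label, demographic) cells, as one plausible instance of instance reweighting.

```python
from collections import Counter

def joint_inverse_weights(labels, demographics):
    """Weight each training instance by the inverse frequency of its
    (label, demographic) cell so the model cannot profit from
    label-demographic correlations; weights sum to len(labels)."""
    counts = Counter(zip(labels, demographics))
    n, k = len(labels), len(counts)
    return [n / (k * counts[(y, d)]) for y, d in zip(labels, demographics)]

# Hypothetical usage (variable names are assumptions):
# w = joint_inverse_weights(y_train, author_gender)
# model.fit(X_train, y_train, sample_weight=w)
```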
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.