The Impact of Presentation Style on Human-In-The-Loop Detection of
Algorithmic Bias
- URL: http://arxiv.org/abs/2004.12388v3
- Date: Sat, 9 May 2020 22:30:29 GMT
- Title: The Impact of Presentation Style on Human-In-The-Loop Detection of
Algorithmic Bias
- Authors: Po-Ming Law, Sana Malik, Fan Du, Moumita Sinha
- Abstract summary: Semi-automated bias detection tools often present reports of automatically-detected biases using a recommendation list or visual cues.
We investigated how presentation style might affect user behaviors in reviewing bias reports.
We propose information load and comprehensiveness as two axes for characterizing bias detection tasks.
- Score: 18.05880738470364
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While decision makers have begun to employ machine learning, machine learning
models may make predictions that are biased against certain demographic groups.
Semi-automated bias detection tools often present reports of
automatically-detected biases using a recommendation list or visual cues.
However, there is a lack of guidance concerning which presentation style to use
in what scenarios. We conducted a small lab study with 16 participants to
investigate how presentation style might affect user behaviors in reviewing
bias reports. Participants used both a prototype with a recommendation list and
a prototype with visual cues for bias detection. We found that participants
often wanted to investigate the performance measures that were not
automatically detected as biases. Yet, when using the prototype with a
recommendation list, they tended to give less consideration to such measures.
Grounded in the findings, we propose information load and comprehensiveness as
two axes for characterizing bias detection tasks and illustrate how the two
axes could be adopted to reason about when to use a recommendation list or
visual cues.
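To make the two presentation styles concrete, here is a minimal sketch in Python (the performance measures, group names, and detection threshold are invented for illustration and are not from the study): biases are auto-detected as group-level accuracy gaps, then the same report is rendered either as a recommendation list (only flagged measures, ranked by gap) or with visual cues (all measures shown, flagged ones marked).

```python
# Minimal sketch of the two presentation styles for a bias report.
# All figures below are hypothetical, not from the paper.

overall_accuracy = 0.90
group_accuracy = {
    "age<40": 0.91, "age>=40": 0.84,
    "male": 0.92, "female": 0.86,
}
THRESHOLD = 0.05  # gap treated as an automatically-detected bias

gaps = {g: overall_accuracy - acc for g, acc in group_accuracy.items()}
flagged = {g: gap for g, gap in gaps.items() if gap > THRESHOLD}

# Style 1: recommendation list -- only auto-detected biases, ranked by gap.
print("Recommended measures to review:")
for group, gap in sorted(flagged.items(), key=lambda kv: -kv[1]):
    print(f"  {group}: accuracy gap {gap:+.2f}")

# Style 2: visual cues -- every measure stays visible; detected biases are
# marked, so undetected measures remain available for the user to inspect.
print("\nAll measures (* = detected bias):")
for group, acc in group_accuracy.items():
    marker = "*" if group in flagged else " "
    print(f" {marker} {group}: accuracy {acc:.2f}")
```

The trade-off the study observed maps directly onto this contrast: the recommendation list hides unflagged measures entirely, while the cue-based view keeps them visible for the user to investigate.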
Related papers
- Counterfactual Augmentation for Multimodal Learning Under Presentation Bias [48.372326930638025]
In machine learning systems, feedback loops between users and models can bias future user behavior, inducing a presentation bias in labels.
We propose counterfactual augmentation, a novel causal method for correcting presentation bias using generated counterfactual labels.
arXiv Detail & Related papers (2023-05-23T14:09:47Z)
- Whole Page Unbiased Learning to Rank [59.52040055543542]
Unbiased Learning to Rank (ULTR) algorithms are proposed to learn an unbiased ranking model from biased click data.
We propose a Bias Agnostic whole-page unbiased Learning to rank algorithm, named BAL, to automatically find the user behavior model.
Experimental results on a real-world dataset verify the effectiveness of BAL.
arXiv Detail & Related papers (2022-10-19T16:53:08Z)
- BiaScope: Visual Unfairness Diagnosis for Graph Embeddings [8.442750346008431]
We present BiaScope, an interactive visualization tool that supports end-to-end visual unfairness diagnosis for graph embeddings.
It allows the user to (i) visually compare two embeddings with respect to fairness, (ii) locate nodes or graph communities that are unfairly embedded, and (iii) understand the source of bias by interactively linking the relevant embedding subspace with the corresponding graph topology.
arXiv Detail & Related papers (2022-10-12T17:12:19Z)
- MRCLens: an MRC Dataset Bias Detection Toolkit [82.44296974850639]
We introduce MRCLens, a toolkit that detects whether biases exist before users train the full model.
To complement the toolkit, we also provide a categorization of common biases in MRC.
arXiv Detail & Related papers (2022-07-18T21:05:39Z)
- More Data Can Lead Us Astray: Active Data Acquisition in the Presence of Label Bias [7.506786114760462]
Proposed bias mitigation strategies typically overlook the bias present in the observed labels.
We first present an overview of different types of label bias in the context of supervised learning systems.
We then empirically show that, when label bias is overlooked, collecting more data can aggravate bias, and that imposing fairness constraints that rely on the observed labels during data collection may not address the problem (a minimal simulation of this label-bias mechanism is sketched below).
arXiv Detail & Related papers (2022-07-15T19:30:50Z)
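As a hedged illustration of the label-bias mechanism discussed above (the flip rate, group structure, and sample size below are invented, not taken from the paper), a small simulation shows why collecting more data drawn from the same biased labeling process cannot correct the distortion:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)    # protected attribute A
y_true = rng.integers(0, 2, n)   # true labels (never observed in practice)
rho = 0.3                        # hypothetical flip rate for illustration

# Label bias: true positives in group A=1 are recorded as negatives
# with probability rho, depressing that group's observed positive rate.
flip = (group == 1) & (y_true == 1) & (rng.random(n) < rho)
y_obs = np.where(flip, 0, y_true)

# Every additional sample drawn this way inherits the same distortion,
# so more data reproduces the bias rather than washing it out.
print("observed positive rate, A=1:", y_obs[group == 1].mean())
print("true positive rate,     A=1:", y_true[group == 1].mean())
```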
- Information-Theoretic Bias Reduction via Causal View of Spurious Correlation [71.9123886505321]
We propose an information-theoretic bias measurement technique through a causal interpretation of spurious correlation.
We present a novel debiasing framework against algorithmic bias, which incorporates a bias regularization loss.
The proposed bias measurement and debiasing approaches are validated in diverse realistic scenarios.
arXiv Detail & Related papers (2022-01-10T01:19:31Z)
- Learning Debiased Models with Dynamic Gradient Alignment and Bias-conflicting Sample Mining [39.00256193731365]
Deep neural networks notoriously suffer from dataset biases which are detrimental to model robustness, generalization and fairness.
We propose a two-stage debiasing scheme to combat intractable unknown biases.
arXiv Detail & Related papers (2021-11-25T14:50:10Z)
- Balancing out Bias: Achieving Fairness Through Training Reweighting [58.201275105195485]
Bias in natural language processing arises from models learning characteristics of the author such as gender and race.
Existing methods for mitigating and measuring bias do not directly account for correlations between author demographics and linguistic variables.
This paper introduces a very simple but highly effective method for countering bias using instance reweighting (a generic version is sketched below).
arXiv Detail & Related papers (2021-09-16T23:40:28Z)
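For readers unfamiliar with instance reweighting, a minimal sketch follows, assuming scikit-learn and a toy dataset; the inverse-frequency weights here simply equalize each group's contribution to the training loss, whereas the paper's actual scheme accounts for correlations between author demographics and linguistic variables:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))        # toy features
group = rng.integers(0, 2, 1000)      # hypothetical author demographic
y = (X[:, 0] + 0.5 * group + rng.normal(size=1000) > 0).astype(int)

# Inverse-frequency weights: each demographic group contributes
# equally to the loss regardless of how often it appears.
counts = np.bincount(group)
weights = (len(group) / (len(counts) * counts))[group]

clf = LogisticRegression().fit(X, y, sample_weight=weights)
```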
- Just Label What You Need: Fine-Grained Active Selection for Perception and Prediction through Partially Labeled Scenes [78.23907801786827]
We introduce generalizations that make our approach cost-aware and allow fine-grained selection of examples through partially labeled scenes.
Our experiments on a real-world, large-scale self-driving dataset suggest that fine-grained selection can improve performance across perception, prediction, and downstream planning tasks.
arXiv Detail & Related papers (2021-04-08T17:57:41Z)
- A survey of bias in Machine Learning through the prism of Statistical Parity for the Adult Data Set [5.277804553312449]
We show the importance of understanding how a bias can be introduced into automatic decisions.
We first present a mathematical framework for the fair learning problem, specifically in the binary classification setting.
We then propose to quantify the presence of bias using the standard Disparate Impact index on the real and well-known Adult income data set (a minimal computation of this index is sketched below).
arXiv Detail & Related papers (2020-03-31T14:48:36Z)
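The Disparate Impact index used in this survey has a standard definition, so it can be stated directly in code; in this minimal sketch the arrays are toy data, but the index and the conventional four-fifths (0.8) threshold are standard:

```python
import numpy as np

def disparate_impact(y_pred, protected):
    """DI = P(Y_hat=1 | protected group) / P(Y_hat=1 | reference group)."""
    rate_protected = y_pred[protected == 1].mean()
    rate_reference = y_pred[protected == 0].mean()
    return rate_protected / rate_reference

# Toy predictions and a binary protected attribute.
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0, 0, 1])
protected = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 1])

di = disparate_impact(y_pred, protected)
print(f"DI = {di:.2f}; flagged under the four-fifths rule: {di < 0.8}")
```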
- Designing Tools for Semi-Automated Detection of Machine Learning Biases: An Interview Study [18.05880738470364]
We report on an interview study with 11 machine learning practitioners for investigating the needs surrounding semi-automated bias detection tools.
Based on the findings, we highlight four design considerations to guide system designers who aim to create future tools for bias detection.
arXiv Detail & Related papers (2020-03-13T00:18:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.