Designing Tools for Semi-Automated Detection of Machine Learning Biases: An Interview Study
- URL: http://arxiv.org/abs/2003.07680v2
- Date: Wed, 18 Mar 2020 01:41:40 GMT
- Title: Designing Tools for Semi-Automated Detection of Machine Learning Biases: An Interview Study
- Authors: Po-Ming Law, Sana Malik, Fan Du, Moumita Sinha
- Abstract summary: We report on an interview study with 11 machine learning practitioners investigating the needs surrounding semi-automated bias detection tools.
Based on the findings, we highlight four design considerations to guide system designers who aim to create future tools for bias detection.
- Score: 18.05880738470364
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning models often make predictions that are biased against
certain subgroups of the input data. When undetected, machine learning biases
can have significant financial and ethical implications. Semi-automated tools
that involve humans in the loop could facilitate bias detection. Yet, little is
known about the considerations involved in their design. In this paper, we
report on an interview study with 11 machine learning practitioners
investigating the needs surrounding semi-automated bias detection tools. Based
on the findings, we highlight four design considerations to guide system
designers who aim to create future tools for bias detection.
Related papers
- A Multimodal Automated Interpretability Agent [63.8551718480664]
MAIA is a system that uses neural models to automate neural model understanding tasks.
We first characterize MAIA's ability to describe (neuron-level) features in learned representations of images.
We then show that MAIA can aid in two additional interpretability tasks: reducing sensitivity to spurious features, and automatically identifying inputs likely to be mis-classified.
arXiv Detail & Related papers (2024-04-22T17:55:11Z)
- Democratize with Care: The need for fairness specific features in user-interface based open source AutoML tools [0.0]
Automated Machine Learning (AutoML) streamlines the machine learning model development process.
This democratization allows more users (including non-experts) to access and utilize state-of-the-art machine-learning expertise.
However, AutoML tools may also propagate bias in the way these tools handle the data, model choices, and optimization approaches adopted.
arXiv Detail & Related papers (2023-12-16T19:54:00Z)
- A survey on bias in machine learning research [0.0]
Current research on bias in machine learning often focuses on fairness, while overlooking the roots or causes of bias.
This article aims to bridge that gap by providing a taxonomy of potential sources of bias and errors in data and models.
arXiv Detail & Related papers (2023-08-22T07:56:57Z)
- Tool Learning with Foundation Models [158.8640687353623]
With the advent of foundation models, AI systems have the potential to become as adept at tool use as humans.
Despite its immense potential, there is still a lack of a comprehensive understanding of key challenges, opportunities, and future endeavors in this field.
arXiv Detail & Related papers (2023-04-17T15:16:10Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Investigating Bias with a Synthetic Data Generator: Empirical Evidence and Philosophical Interpretation [66.64736150040093]
Machine learning applications are becoming increasingly pervasive in our society.
The risk is that they will systematically spread the bias embedded in the data.
We propose to analyze biases by introducing a framework for generating synthetic data with specific types of bias and their combinations.
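A minimal sketch of this idea (assumed details, not the paper's actual framework): generate synthetic data with an unbiased ground truth, then inject a label bias of controllable strength against one subgroup, so that different bias types and intensities can be combined and studied. The function and group names below are hypothetical:

```python
import random

def make_biased_dataset(n, flip_prob, seed=0):
    """Generate (group, feature, label) triples where a fraction
    `flip_prob` of positive labels in group "B" are flipped to 0,
    simulating a controllable label bias against that subgroup."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        group = rng.choice(["A", "B"])
        x = rng.random()
        label = 1 if x > 0.5 else 0  # unbiased ground-truth label
        if group == "B" and label == 1 and rng.random() < flip_prob:
            label = 0  # injected bias: positive outcome denied to group B
        data.append((group, x, label))
    return data

def positive_rate(data, group):
    """Fraction of positive labels observed within one group."""
    members = [y for g, _, y in data if g == group]
    return sum(members) / len(members)

data = make_biased_dataset(2000, flip_prob=0.5, seed=1)
# positive_rate(data, "B") is now systematically below positive_rate(data, "A")
```

Because `flip_prob` is an explicit parameter, the severity of the injected bias can be varied systematically, which is the property that makes such generators useful for controlled experiments.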
arXiv Detail & Related papers (2022-09-13T11:18:50Z)
- Using Personality Detection Tools for Software Engineering Research: How Far Can We Go? [12.56413718364189]
Self-assessment questionnaires are not a practical solution for collecting multiple observations on a large scale.
Off-the-shelf solutions trained on non-technical corpora might not be readily applicable to technical domains like Software Engineering.
arXiv Detail & Related papers (2021-10-11T07:02:34Z)
- The Impact of Presentation Style on Human-In-The-Loop Detection of Algorithmic Bias [18.05880738470364]
Semi-automated bias detection tools often present reports of automatically-detected biases using a recommendation list or visual cues.
We investigated how presentation style might affect user behaviors in reviewing bias reports.
We propose information load and comprehensiveness as two axes for characterizing bias detection tasks.
arXiv Detail & Related papers (2020-04-26T14:05:23Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
- A survey of bias in Machine Learning through the prism of Statistical Parity for the Adult Data Set [5.277804553312449]
We show the importance of understanding how a bias can be introduced into automatic decisions.
We first present a mathematical framework for the fair learning problem, specifically in the binary classification setting.
We then propose to quantify the presence of bias by using the standard Disparate Impact index on the real and well-known Adult income data set.
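As a hedged illustration (not taken from the paper itself), the Disparate Impact index mentioned above is commonly computed as the ratio of positive-outcome rates between a protected group and the rest; the toy labels below are stand-ins for the Adult data set's income attribute:

```python
def disparate_impact(outcomes, groups, protected):
    """Ratio of positive-outcome rates: protected group vs. the rest.

    A value near 1.0 indicates statistical parity; the common
    "80% rule" flags values below 0.8 as potentially discriminatory.
    """
    prot = [y for y, g in zip(outcomes, groups) if g == protected]
    rest = [y for y, g in zip(outcomes, groups) if g != protected]
    rate_prot = sum(prot) / len(prot)
    rate_rest = sum(rest) / len(rest)
    return rate_prot / rate_rest

# Toy example: 1 = earns > $50K (positive outcome), groups "F" / "M"
outcomes = [1, 0, 0, 0, 1, 1, 1, 0]
groups   = ["F", "F", "F", "F", "M", "M", "M", "M"]
di = disparate_impact(outcomes, groups, protected="F")
# rate_F = 1/4, rate_M = 3/4, so di = 1/3 -- well below the 0.8 threshold
```

The same ratio can be read directly off a model's predictions instead of the ground-truth labels, which is how such an index quantifies bias introduced by an automatic decision system.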
arXiv Detail & Related papers (2020-03-31T14:48:36Z)
- Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.