Exploring the Impact of Lay User Feedback for Improving AI Fairness
- URL: http://arxiv.org/abs/2312.08064v2
- Date: Mon, 18 Dec 2023 14:35:54 GMT
- Title: Exploring the Impact of Lay User Feedback for Improving AI Fairness
- Authors: Evdoxia Taka, Yuri Nakao, Ryosuke Sonoda, Takuya Yokota, Lin Luo,
Simone Stumpf
- Abstract summary: Engaging stakeholders, especially lay users, in fair AI development is promising yet overlooked.
Recent efforts explore enabling lay users to provide AI fairness-related feedback, but there is still a lack of understanding of how to integrate users' feedback into an AI model.
Our work contributes baseline results of integrating user fairness feedback in XGBoost, and a dataset and code framework to bootstrap research in engaging stakeholders in AI fairness.
- Score: 4.2646936031015645
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness in AI is a growing concern for high-stakes decision making. Engaging
stakeholders, especially lay users, in fair AI development is promising yet
overlooked. Recent efforts explore enabling lay users to provide AI
fairness-related feedback, but there is still a lack of understanding of how to
integrate users' feedback into an AI model and the impacts of doing so. To
bridge this gap, we collected feedback from 58 lay users on the fairness of an
XGBoost model trained on the Home Credit dataset, and conducted offline
experiments to investigate the effects of retraining models on accuracy, and
individual and group fairness. Our work contributes baseline results of
integrating user fairness feedback in XGBoost, and a dataset and code framework
to bootstrap research in engaging stakeholders in AI fairness. Our discussion
highlights the challenges of employing user feedback in AI fairness and points
the way to a future application area of interactive machine learning.
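As a concrete illustration of the kind of offline experiment the abstract describes, the minimal Python sketch below retrains an XGBoost classifier with per-instance sample weights derived from simulated user fairness feedback, then compares accuracy and a demographic parity gap before and after retraining. The toy data, the feedback encoding, the upweighting scheme, and the single fairness metric are illustrative assumptions, not the paper's actual Home Credit dataset or integration method.

```python
# Minimal sketch: retrain XGBoost with per-instance weights derived from
# (hypothetical) lay-user fairness feedback, then compare accuracy and a
# demographic parity gap. All data and the weighting scheme are simulated.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-in for the Home Credit data: features, label, protected attribute.
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 0).astype(int)
group = (rng.random(2000) > 0.5).astype(int)  # hypothetical protected group

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

# Hypothetical user feedback: 1 where users judged the model's decision unfair
# for that training instance, 0 otherwise (simulated at random here).
unfair_flag = (rng.random(len(y_tr)) < 0.1).astype(float)
sample_weight = 1.0 + 2.0 * unfair_flag  # upweight flagged instances (assumption)

def demographic_parity_gap(y_pred, g):
    """Absolute difference in positive-prediction rates between the two groups."""
    return abs(y_pred[g == 1].mean() - y_pred[g == 0].mean())

baseline = xgb.XGBClassifier(n_estimators=200, max_depth=4)
baseline.fit(X_tr, y_tr)

retrained = xgb.XGBClassifier(n_estimators=200, max_depth=4)
retrained.fit(X_tr, y_tr, sample_weight=sample_weight)

for name, model in [("baseline", baseline), ("retrained", retrained)]:
    pred = model.predict(X_te)
    acc = (pred == y_te).mean()
    gap = demographic_parity_gap(pred, g_te)
    print(f"{name}: accuracy={acc:.3f}, demographic parity gap={gap:.3f}")
```

Substituting the real Home Credit features, the collected user feedback, and the paper's own accuracy and individual/group fairness measures would turn this sketch into one retraining condition of the kind of offline experiment reported above.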
Related papers
- Should XAI Nudge Human Decisions with Explanation Biasing? [2.6396287656676725]
This paper reviews our previous trials of Nudge-XAI, an approach that introduces automatic biases into explanations from explainable AIs (XAIs).
Nudge-XAI uses a user model that predicts the influence of providing an explanation or emphasizing it and attempts to guide users toward AI-suggested decisions without coercion.
arXiv Detail & Related papers (2024-06-11T14:53:07Z)
- Improving Generalization of Alignment with Human Preferences through Group Invariant Learning [56.19242260613749]
Reinforcement Learning from Human Feedback (RLHF) enables the generation of responses more aligned with human preferences.
Previous work shows that Reinforcement Learning (RL) often exploits shortcuts to attain high rewards and overlooks challenging samples.
We propose a novel approach that can learn a consistent policy via RL across various data groups or domains.
arXiv Detail & Related papers (2023-10-18T13:54:15Z)
- UltraFeedback: Boosting Language Models with Scaled AI Feedback [99.4633351133207]
We present UltraFeedback, a large-scale, high-quality, and diversified AI feedback dataset.
Our work validates the effectiveness of scaled AI feedback data in constructing strong open-source chat language models.
arXiv Detail & Related papers (2023-10-02T17:40:01Z)
- Continually Improving Extractive QA via Human Feedback [59.49549491725224]
We study continually improving an extractive question answering (QA) system via human user feedback.
We conduct experiments involving thousands of user interactions under diverse setups to broaden the understanding of learning from feedback over time.
arXiv Detail & Related papers (2023-05-21T14:35:32Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Advancing Human-AI Complementarity: The Impact of User Expertise and Algorithmic Tuning on Joint Decision Making [10.890854857970488]
Many factors can impact the success of Human-AI teams, including a user's domain expertise, mental models of an AI system, trust in recommendations, and more.
Our study examined user performance in a non-trivial blood vessel labeling task where participants indicated whether a given blood vessel was flowing or stalled.
Our results show that while recommendations from an AI-Assistant can aid user decision making, factors such as users' baseline performance relative to the AI and complementary tuning of AI error types significantly impact overall team performance.
arXiv Detail & Related papers (2022-08-16T21:39:58Z)
- Toward Supporting Perceptual Complementarity in Human-AI Collaboration via Reflection on Unobservables [7.3043497134309145]
We conduct an online experiment to understand whether and how explicitly communicating potentially relevant unobservables influences how people integrate model outputs and unobservables when making predictions.
Our findings indicate that presenting prompts about unobservables can change how humans integrate model outputs and unobservables, but this does not necessarily lead to improved performance.
arXiv Detail & Related papers (2022-07-28T00:05:14Z)
- Let's Go to the Alien Zoo: Introducing an Experimental Framework to Study Usability of Counterfactual Explanations for Machine Learning [6.883906273999368]
Counterfactual explanations (CFEs) have gained traction as a psychologically grounded approach to generate post-hoc explanations.
We introduce the Alien Zoo, an engaging, web-based and game-inspired experimental framework.
As a proof of concept, we demonstrate the practical efficacy and feasibility of this approach in a user study.
arXiv Detail & Related papers (2022-05-06T17:57:05Z)
- Towards Involving End-users in Interactive Human-in-the-loop AI Fairness [1.889930012459365]
Ensuring fairness in artificial intelligence (AI) is important to counteract bias and discrimination in far-reaching applications.
Recent work has started to investigate how humans judge fairness and how to support machine learning (ML) experts in making their AI models fairer.
Our work explores designing interpretable and interactive human-in-the-loop interfaces that allow ordinary end-users to identify potential fairness issues.
arXiv Detail & Related papers (2022-04-22T02:24:11Z)
- Mind Your Outliers! Investigating the Negative Impact of Outliers on Active Learning for Visual Question Answering [71.15403434929915]
We show that across 5 models and 4 datasets on the task of visual question answering, a wide variety of active learning approaches fail to outperform random selection.
We identify the problem as collective outliers -- groups of examples that active learning methods prefer to acquire but models fail to learn.
We show that active learning sample efficiency increases significantly as the number of collective outliers in the active learning pool decreases.
arXiv Detail & Related papers (2021-07-06T00:52:11Z)
- Mining Implicit Entity Preference from User-Item Interaction Data for Knowledge Graph Completion via Adversarial Learning [82.46332224556257]
We propose a novel adversarial learning approach by leveraging user interaction data for the Knowledge Graph Completion task.
Our generator is isolated from user interaction data, and serves to improve the performance of the discriminator.
To discover the implicit entity preferences of users, we design an elaborate collaborative learning algorithm based on graph neural networks.
arXiv Detail & Related papers (2020-03-28T05:47:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.