Towards Responsible AI: A Design Space Exploration of Human-Centered
Artificial Intelligence User Interfaces to Investigate Fairness
- URL: http://arxiv.org/abs/2206.00474v1
- Date: Wed, 1 Jun 2022 13:08:37 GMT
- Title: Towards Responsible AI: A Design Space Exploration of Human-Centered
Artificial Intelligence User Interfaces to Investigate Fairness
- Authors: Yuri Nakao and Lorenzo Strappelli and Simone Stumpf and Aisha Naseer
and Daniele Regoli and Giulia Del Gamba
- Abstract summary: We provide a design space exploration that supports data scientists and domain experts to investigate AI fairness.
Using loan applications as an example, we held a series of workshops with loan officers and data scientists to elicit their requirements.
We instantiated these requirements into FairHIL, a UI to support human-in-the-loop fairness investigations, and describe how this UI could be generalized to other use cases.
- Score: 4.40401067183266
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With artificial intelligence (AI) for aiding or automating
decision-making advancing rapidly, a particular concern is its fairness. In order to create
reliable, safe and trustworthy systems through human-centred artificial
intelligence (HCAI) design, recent efforts have produced user interfaces (UIs)
for AI experts to investigate the fairness of AI models. In this work, we
provide a design space exploration that supports not only data scientists but
also domain experts to investigate AI fairness. Using loan applications as an
example, we held a series of workshops with loan officers and data scientists
to elicit their requirements. We instantiated these requirements into FairHIL,
a UI to support human-in-the-loop fairness investigations, and describe how
this UI could be generalized to other use cases. We evaluated FairHIL through a
think-aloud user study. Our work contributes better designs to investigate an
AI model's fairness, and moves closer towards responsible AI.
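For a concrete sense of the kind of fairness investigation such a UI supports, consider a group-fairness check on loan decisions. The sketch below is a minimal illustration, not the FairHIL implementation; the groups, sample data, and choice of metric are assumptions.

```python
# Minimal sketch of a group-fairness check of the kind a fairness
# investigation UI might surface; NOT the FairHIL implementation.
# Groups and outcomes are hypothetical.
from collections import defaultdict

def demographic_parity_gap(decisions):
    """decisions: iterable of (group, approved) pairs.
    Returns (largest approval-rate gap across groups, per-group rates)."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan-application outcomes: (protected group, approved?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
gap, rates = demographic_parity_gap(sample)
print(rates)  # approval rates per group (~0.67 vs ~0.33)
print(gap)    # a gap a loan officer might flag for further review
```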
Related papers
- The Ethics of Advanced AI Assistants [53.89899371095332]
This paper focuses on the opportunities and the ethical and societal risks posed by advanced AI assistants.
We define advanced AI assistants as artificial agents with natural language interfaces, whose function is to plan and execute sequences of actions on behalf of a user.
We consider the deployment of advanced assistants at a societal scale, focusing on cooperation, equity and access, misinformation, economic impact, the environment and how best to evaluate advanced AI assistants.
arXiv Detail & Related papers (2024-04-24T23:18:46Z) - BO-Muse: A human expert and AI teaming framework for accelerated
experimental design [58.61002520273518]
Our algorithm lets the human expert take the lead in the experimental process.
We show that our algorithm converges sub-linearly, at a rate faster than the AI or human alone.
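As a rough illustration of the alternating human/AI loop this abstract describes (this is not the BO-Muse algorithm; the objective function, expert guesses, and AI strategy are placeholders):

```python
# Toy sketch of a human-expert/AI teaming loop for experimental design;
# NOT the BO-Muse algorithm. All components below are hypothetical.
import random

def objective(x):  # stands in for an unknown experiment outcome
    return -(x - 0.7) ** 2

expert_guesses = iter([0.5, 0.65, 0.72])  # stand-in for human intuition

observed = []
for round_no in range(6):
    if round_no % 2 == 0:  # the human expert takes the lead on even rounds
        x = next(expert_guesses, random.random())
    else:                  # the AI refines around the best result so far
        best_x, _ = max(observed, key=lambda p: p[1])
        x = min(1.0, max(0.0, best_x + random.uniform(-0.1, 0.1)))
    observed.append((x, objective(x)))

print(max(observed, key=lambda p: p[1]))  # best design found by the team
```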
arXiv Detail & Related papers (2023-03-03T02:56:05Z)
- AI Usage Cards: Responsibly Reporting AI-generated Content [25.848910414962337]
Given that AI systems like ChatGPT can generate content that is indistinguishable from human-made work, the responsible use of this technology is a growing concern.
We propose a three-dimensional model consisting of transparency, integrity, and accountability to define the responsible use of AI.
We then introduce "AI Usage Cards", a standardized way to report the use of AI in scientific research.
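To make the reporting idea concrete, an AI Usage Card could be represented as a structured record along the three proposed dimensions. The field names below are assumptions for illustration, not the authors' schema.

```python
# Hypothetical sketch of an "AI Usage Card" as a structured record,
# organised around the paper's three dimensions; field names are
# assumptions, not the authors' schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class AIUsageCard:
    # transparency: what was used, and for what
    system: str
    tasks: list
    # integrity: how outputs were checked
    human_verification: str
    # accountability: who answers for the result
    responsible_party: str

card = AIUsageCard(
    system="ChatGPT",
    tasks=["draft related-work summary"],
    human_verification="all generated text reviewed and edited by authors",
    responsible_party="corresponding author",
)
print(json.dumps(asdict(card), indent=2))
```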
arXiv Detail & Related papers (2023-02-16T08:41:31Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- On the Influence of Explainable AI on Automation Bias [0.0]
We aim to shed light on the potential of explainable AI (XAI) to influence automation bias.
We conduct an online experiment on hotel review classifications and discuss first results.
arXiv Detail & Related papers (2022-04-19T12:54:23Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
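As a hypothetical illustration of quantifying user confidence against system performance, in the spirit of this abstract (not the AI2 framework; the data are invented):

```python
# Hypothetical sketch: compare users' stated confidence in AI
# recommendations against whether those recommendations were correct.
# NOT the AI2 framework; the ratings below are invented.
ratings = [  # (user confidence in the AI's recommendation, AI was correct?)
    (0.9, True), (0.8, True), (0.7, False), (0.4, False), (0.95, True),
]
mean_confidence = sum(c for c, _ in ratings) / len(ratings)
accuracy = sum(ok for _, ok in ratings) / len(ratings)
print(f"mean user confidence: {mean_confidence:.2f}")
print(f"AI accuracy:          {accuracy:.2f}")
print(f"calibration gap:      {mean_confidence - accuracy:+.2f}")  # >0 = over-trust
```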
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Questioning the AI: Informing Design Practices for Explainable AI User Experiences [33.81809180549226]
A surge of interest in explainable AI (XAI) has led to a vast collection of algorithmic work on the topic.
We seek to identify gaps between the current XAI algorithmic work and practices to create explainable AI products.
We develop an algorithm-informed XAI question bank in which user needs for explainability are represented.
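Conceptually, such a question bank maps user questions to candidate explanation techniques. The sketch below is a hypothetical illustration, not the authors' question bank.

```python
# Hypothetical sketch of an XAI question bank: user questions mapped to
# explanation techniques that could answer them. Entries are illustrative,
# not the authors' bank.
QUESTION_BANK = {
    "Why did the model make this prediction?": ["feature attribution (e.g. SHAP)"],
    "Why not a different outcome?": ["contrastive explanation"],
    "How would I change the outcome?": ["counterfactual explanation"],
    "How confident is the model?": ["calibrated probability / uncertainty estimate"],
    "What data was the model trained on?": ["datasheet / model card"],
}

def suggest_methods(question):
    return QUESTION_BANK.get(question, ["no mapped technique; gather user need"])

print(suggest_methods("How would I change the outcome?"))
```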
arXiv Detail & Related papers (2020-01-08T12:34:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.