Human-Centered Explainable AI (XAI): From Algorithms to User Experiences
- URL: http://arxiv.org/abs/2110.10790v1
- Date: Wed, 20 Oct 2021 21:33:46 GMT
- Title: Human-Centered Explainable AI (XAI): From Algorithms to User Experiences
- Authors: Q. Vera Liao, Kush R. Varshney
- Abstract summary: Explainable AI (XAI) has produced a vast collection of algorithms in recent years.
The field is starting to embrace interdisciplinary perspectives and human-centered approaches.
- Score: 29.10123472973571
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As a technical sub-field of artificial intelligence (AI), explainable AI
(XAI) has produced a vast collection of algorithms in recent years. However,
explainability is an inherently human-centric property and the field is
starting to embrace interdisciplinary perspectives and human-centered
approaches. As researchers and practitioners begin to leverage XAI algorithms
to build XAI applications, explainability has moved beyond a demand by data
scientists or researchers to comprehend the models they are developing, to
become an essential requirement for people to trust and adopt AI deployed in
numerous domains. Human-computer interaction (HCI) research and user experience
(UX) design in this area are therefore increasingly important. In this chapter,
we begin with a high-level overview of the technical landscape of XAI
algorithms, then selectively survey recent HCI work that takes human-centered
approaches to design, evaluate, and provide conceptual and methodological tools for
XAI. We ask the question "what are human-centered approaches doing for XAI" and
highlight three roles that they should play in shaping XAI technologies: to
drive technical choices by understanding users' explainability needs, to
uncover pitfalls of existing XAI methods through empirical studies and inform
new methods, and to provide conceptual frameworks for human-compatible XAI.
Related papers
- Evolutionary Computation and Explainable AI: A Roadmap to Understandable Intelligent Systems [37.02462866600066]
Evolutionary computation (EC) offers significant potential to contribute to explainable AI (XAI).
This paper provides an introduction to XAI and reviews current techniques for explaining machine learning models.
We then explore how EC can be leveraged in XAI and examine existing XAI approaches that incorporate EC techniques.
arXiv Detail & Related papers (2024-06-12T02:06:24Z) - How Human-Centered Explainable AI Interfaces Are Designed and Evaluated: A Systematic Survey [48.97104365617498]
The emerging area of Explainable Interfaces (EIs) focuses on the user interface and user experience design aspects of XAI.
This paper presents a systematic survey of 53 publications to identify current trends in human-XAI interaction and promising directions for EI design and development.
arXiv Detail & Related papers (2024-03-21T15:44:56Z) - OpenHEXAI: An Open-Source Framework for Human-Centered Evaluation of Explainable Machine Learning [43.87507227859493]
This paper presents OpenHEXAI, an open-source framework for human-centered evaluation of XAI methods.
OpenHEXAI is the first large-scale infrastructural effort to facilitate human-centered benchmarks of XAI methods.
arXiv Detail & Related papers (2024-02-20T22:17:59Z) - How much informative is your XAI? A decision-making assessment task to
objectively measure the goodness of explanations [53.01494092422942]
The number and complexity of personalised and user-centred approaches to XAI have rapidly grown in recent years.
This body of work shows that user-centred approaches to XAI positively affect the interaction between users and systems.
We propose an assessment task to objectively and quantitatively measure the goodness of XAI systems.
arXiv Detail & Related papers (2023-12-07T15:49:39Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual
Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - Towards Human-centered Explainable AI: A Survey of User Studies for Model Explanations [18.971689499890363]
We identify and analyze 97 core papers with human-based XAI evaluations over the past five years.
Our research shows that XAI is spreading more rapidly in certain application domains, such as recommender systems.
We propose practical guidelines on designing and conducting user studies for XAI researchers and practitioners.
arXiv Detail & Related papers (2022-10-20T20:53:00Z) - Connecting Algorithmic Research and Usage Contexts: A Perspective of
Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z) - On some Foundational Aspects of Human-Centered Artificial Intelligence [52.03866242565846]
There is no clear definition of what is meant by Human-Centered Artificial Intelligence (HCAI).
This paper introduces the term HCAI agent to refer to any physical or software computational agent equipped with AI components.
We see the notion of HCAI agent, together with its components and functions, as a way to bridge the technical and non-technical discussions on human-centered AI.
arXiv Detail & Related papers (2021-12-29T09:58:59Z) - A User-Centred Framework for Explainable Artificial Intelligence in
Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions intended for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z) - Opportunities and Challenges in Explainable Artificial Intelligence
(XAI): A Survey [2.7086321720578623]
The black-box nature of deep neural networks challenges their use in mission-critical applications.
XAI promotes a set of tools, techniques, and algorithms that can generate high-quality, interpretable, intuitive, human-understandable explanations of AI decisions.
arXiv Detail & Related papers (2020-06-16T02:58:10Z) - Human-centered Explainable AI: Towards a Reflective Sociotechnical
Approach [18.14698948294366]
We introduce Human-centered Explainable AI (HCXAI) as an approach that puts the human at the center of technology design.
It develops a holistic understanding of "who" the human is by considering the interplay of values, interpersonal dynamics, and the socially situated nature of AI systems.
arXiv Detail & Related papers (2020-02-04T02:30:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.