ConvXAI: Delivering Heterogeneous AI Explanations via Conversations to Support Human-AI Scientific Writing
- URL: http://arxiv.org/abs/2305.09770v6
- Date: Fri, 27 Oct 2023 16:08:32 GMT
- Title: ConvXAI: Delivering Heterogeneous AI Explanations via Conversations to Support Human-AI Scientific Writing
- Authors: Hua Shen, Chieh-Yang Huang, Tongshuang Wu, Ting-Hao 'Kenneth' Huang
- Abstract summary: This paper focuses on Conversational XAI for AI-assisted scientific writing tasks.
We identify four design rationales: "multifaceted", "controllability", "mix-initiative", and "context-aware drill-down".
We incorporate them into an interactive prototype, ConvXAI, which facilitates heterogeneous AI explanations for scientific writing through dialogue.
- Score: 45.187790784934734
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite a growing collection of XAI methods, users still struggle to
obtain the AI explanations they need. Previous research suggests chatbots as dynamic
solutions, but the effective design of conversational XAI agents for practical
human needs remains under-explored. This paper focuses on Conversational XAI
for AI-assisted scientific writing tasks. Drawing from human linguistic
theories and formative studies, we identify four design rationales:
"multifaceted", "controllability", "mix-initiative", "context-aware
drill-down". We incorporate them into an interactive prototype, ConvXAI, which
facilitates heterogeneous AI explanations for scientific writing through
dialogue. In two studies with 21 users, ConvXAI outperforms a GUI-based
baseline on both human-perceived understanding and writing improvement.
The paper further discusses the practical human usage patterns in interacting
with ConvXAI for scientific co-writing.
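
To make the interaction described above concrete, here is a minimal, hypothetical sketch of how a conversational XAI agent could route a writer's free-form questions to heterogeneous explanation modules. This is not the ConvXAI implementation: all names (EXPLANATION_MODULES, route_question, the keyword table) are illustrative assumptions standing in for the paper's actual dialogue manager and intent classifier.

# Illustrative sketch only; not the ConvXAI codebase.
from typing import Callable, Dict

# Each "facet" of explanation is a separate module (multifaceted design rationale).
EXPLANATION_MODULES: Dict[str, Callable[[str], str]] = {
    "data":       lambda sent: f"Statistics of the training corpus relevant to: '{sent}'",
    "model":      lambda sent: f"Description of the writing model that scored: '{sent}'",
    "prediction": lambda sent: f"Why the model assigned this quality label to: '{sent}'",
    "example":    lambda sent: f"Similar published sentences to compare with: '{sent}'",
}

# Simple keyword routing; a real agent would use an NLU/intent classifier.
KEYWORDS = {
    "data": ["dataset", "data", "corpus"],
    "model": ["model", "how do you work"],
    "prediction": ["why", "score", "label"],
    "example": ["example", "similar", "show me"],
}

def route_question(question: str, selected_sentence: str) -> str:
    """Pick an explanation facet for the user's question. Context-aware drill-down:
    the explanation is grounded in the sentence the writer currently has selected."""
    q = question.lower()
    for facet, words in KEYWORDS.items():
        if any(w in q for w in words):
            return EXPLANATION_MODULES[facet](selected_sentence)
    # Mix-initiative fallback: the agent proposes facets instead of failing silently.
    return "I can explain the data, the model, this prediction, or show examples. Which would help?"

if __name__ == "__main__":
    sentence = "We propose a novel framework for abstractive summarization."
    print(route_question("Why did it get a low score?", sentence))
    print(route_question("Can you show me similar sentences?", sentence))

The keyword table is only a stand-in for a learned intent classifier; the fallback reply illustrates the mix-initiative rationale, where the agent suggests the available explanation facets rather than returning nothing.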
Related papers
- Investigating the Role of Explainability and AI Literacy in User Compliance [2.8623940003518156]
We find that users' compliance increases with the introduction of XAI but is also affected by AI literacy.
We also find that the relationships between AI literacy, XAI, and users' compliance are mediated by the users' mental model of AI.
arXiv Detail & Related papers (2024-06-18T14:28:12Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - What Do End-Users Really Want? Investigation of Human-Centered XAI for
Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z) - "Help Me Help the AI": Understanding How Explainability Can Support
Human-AI Interaction [22.00514030715286]
We conducted a study of a real-world AI application via interviews with 20 end-users of Merlin, a bird-identification app.
We found that people express a need for practically useful information that can improve their collaboration with the AI system.
We also assessed end-users' perceptions of existing XAI approaches, finding that they prefer part-based explanations.
arXiv Detail & Related papers (2022-10-02T20:17:11Z) - On some Foundational Aspects of Human-Centered Artificial Intelligence [52.03866242565846]
There is no clear definition of what is meant by Human-Centered Artificial Intelligence.
This paper introduces the term HCAI agent to refer to any physical or software computational agent equipped with AI components.
We see the notion of HCAI agent, together with its components and functions, as a way to bridge the technical and non-technical discussions on human-centered AI.
arXiv Detail & Related papers (2021-12-29T09:58:59Z) - A User-Centred Framework for Explainable Artificial Intelligence in
Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions intended for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z) - Designer-User Communication for XAI: An epistemological approach to
discuss XAI design [4.169915659794568]
We take the Signifying Message as our conceptual tool to structure and discuss XAI scenarios.
We experiment with its use for the discussion of a healthcare AI-System.
arXiv Detail & Related papers (2021-05-17T13:18:57Z) - Joint Mind Modeling for Explanation Generation in Complex Human-Robot
Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communication.
Results show that the explanations generated by our approach significantly improve collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z) - Questioning the AI: Informing Design Practices for Explainable AI User
Experiences [33.81809180549226]
A surge of interest in explainable AI (XAI) has led to a vast collection of algorithmic work on the topic.
We seek to identify gaps between the current XAI algorithmic work and practices to create explainable AI products.
We develop an algorithm-informed XAI question bank in which user needs for explainability are represented.
arXiv Detail & Related papers (2020-01-08T12:34:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.