Explainable Active Learning (XAL): An Empirical Study of How Local
Explanations Impact Annotator Experience
- URL: http://arxiv.org/abs/2001.09219v4
- Date: Wed, 30 Sep 2020 12:43:28 GMT
- Title: Explainable Active Learning (XAL): An Empirical Study of How Local
Explanations Impact Annotator Experience
- Authors: Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Rachel Bellamy, Klaus
Mueller
- Abstract summary: We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows the benefits of AI explanations as an interface for machine teaching, namely supporting trust calibration and enabling rich forms of teaching feedback, and potential drawbacks, namely an anchoring effect toward the model's judgment and increased cognitive workload.
- Score: 76.9910678786031
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The wide adoption of Machine Learning technologies has created a rapidly
growing demand for people who can train ML models. Some advocated the term
"machine teacher" to refer to the role of people who inject domain knowledge
into ML models. One promising learning paradigm is Active Learning (AL), by
which the model intelligently selects instances to query the machine teacher
for labels. However, in current AL settings, the human-AI interface remains
minimal and opaque. We begin by considering AI explanations as a core element
of the human-AI interface for teaching machines. When a human student learns,
it is a common pattern for the student to present their own reasoning and
solicit feedback from the teacher. When an ML model learns and still makes
mistakes, the human teacher should be able to understand the reasoning
underlying those mistakes. When the model matures, the machine teacher should
be able to recognize the model's progress in order to trust and feel confident
about the teaching outcome. Toward this
vision, we propose a novel paradigm of explainable active learning (XAL), by
introducing techniques from the recently surging field of explainable AI (XAI)
into an AL setting. We conducted an empirical study comparing the model
learning outcomes, feedback content, and annotator experience under XAL with
those under traditional AL and coactive learning (providing the model's
prediction without the explanation). Our study shows the benefits of AI
explanations as an interface for machine teaching, namely supporting trust
calibration and enabling rich forms of teaching feedback, as well as potential
drawbacks, namely an anchoring effect whereby the teacher's feedback is biased
toward the model's judgment, and increased cognitive workload. Our study also
reveals important individual factors that mediate a machine teacher's
reception of AI explanations, including task knowledge, AI experience, and
need for cognition. Reflecting on the results, we suggest future directions
and design implications for XAL.
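The abstract describes the core interaction: an AL learner selects the instance it is least certain about, and XAL additionally shows a local explanation of the model's current prediction before asking the machine teacher for a label. As an illustration only, the Python sketch below implements one common instantiation of that loop, assuming uncertainty sampling as the query strategy and a linear model's coefficient-times-feature contributions as the local explanation; the data, feature names, and labeling rule are hypothetical, and this is not the experimental setup used in the paper.

```python
# Minimal XAL-style teaching loop (illustrative sketch, not the paper's setup):
# the model queries its most uncertain instance (active learning) and shows a
# simple local explanation of its prediction before asking for a label.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["age", "education_yrs", "hours_per_week"]  # hypothetical

# Hypothetical data: a small labeled seed set plus a larger unlabeled pool.
X_labeled = rng.normal(size=(20, 3))
y_labeled = (X_labeled[:, 0] + X_labeled[:, 2] > 0).astype(int)
X_pool = rng.normal(size=(200, 3))

for _ in range(5):  # a few teaching rounds
    model = LogisticRegression().fit(X_labeled, y_labeled)

    # Active learning: query the instance the model is least certain about.
    probs = model.predict_proba(X_pool)[:, 1]
    query_idx = int(np.argmin(np.abs(probs - 0.5)))
    x = X_pool[query_idx]

    # Local explanation for a linear model: per-feature contribution to the logit.
    contributions = model.coef_[0] * x
    print(f"Model predicts class {int(probs[query_idx] > 0.5)} "
          f"(p = {probs[query_idx]:.2f}); feature contributions:")
    for name, c in sorted(zip(feature_names, contributions),
                          key=lambda pair: -abs(pair[1])):
        print(f"  {name:>15s}: {c:+.2f}")

    # The machine teacher would inspect the explanation and give a label;
    # here a hypothetical ground-truth rule stands in for the annotator.
    label = int(x[0] + x[2] > 0)

    # Fold the new label into the training set and drop it from the pool.
    X_labeled = np.vstack([X_labeled, x])
    y_labeled = np.append(y_labeled, label)
    X_pool = np.delete(X_pool, query_idx, axis=0)
```

In the study's terms, the coactive-learning condition would correspond to showing only the prediction line above, and traditional AL to showing neither the prediction nor the contributions.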
Related papers
- Democratizing Reasoning Ability: Tailored Learning from Large Language
Model [97.4921006089966]
We propose a tailored learning approach to distill the reasoning ability of large language models (LLMs) into smaller LMs.
We exploit the potential of the LLM as a reasoning teacher by building an interactive multi-round learning paradigm.
To exploit the reasoning potential of the smaller LM, we propose self-reflection learning to motivate the student to learn from self-made mistakes.
arXiv Detail & Related papers (2023-10-20T07:50:10Z)
- Learning by Self-Explaining [23.420673675343266]
We introduce a novel workflow in the context of image classification, termed Learning by Self-Explaining (LSX).
LSX utilizes aspects of self-refining AI and human-guided explanatory machine learning.
Our results indicate improvements via Learning by Self-Explaining on several levels.
arXiv Detail & Related papers (2023-09-15T13:41:57Z)
- Responsibility: An Example-based Explainable AI approach via Training Process Inspection [1.4610038284393165]
We present a novel XAI approach that identifies the most responsible training example for a particular decision.
This example can then be shown as an explanation: "this is what I (the AI) learned that led me to do that".
Our results demonstrate that responsibility can help improve accuracy for both human end users and secondary ML models.
arXiv Detail & Related papers (2022-09-07T19:30:01Z)
- Learning to Scaffold: Optimizing Model Explanations for Teaching [74.25464914078826]
We train models on three natural language processing and computer vision tasks.
We find that students trained with explanations extracted with our framework are able to simulate the teacher significantly more effectively than students trained with explanations produced by previous methods.
arXiv Detail & Related papers (2022-04-22T16:43:39Z)
- Iterative Teacher-Aware Learning [136.05341445369265]
In human pedagogy, teachers and students can interact adaptively to maximize communication efficiency.
We propose a gradient-optimization-based, teacher-aware learner that can incorporate the teacher's cooperative intention into its likelihood function.
arXiv Detail & Related papers (2021-10-01T00:27:47Z)
- Teaching the Machine to Explain Itself using Domain Knowledge [4.462334751640166]
Non-technical humans-in-the-loop struggle to comprehend the rationale behind model predictions.
We present JOEL, a neural network-based framework to jointly learn a decision-making task and associated explanations.
We collect domain feedback from a pool of certified experts and use it to ameliorate the model (human teaching).
arXiv Detail & Related papers (2020-11-27T18:46:34Z)
- Explainability via Responsibility [0.9645196221785693]
We present an approach to explainable artificial intelligence in which certain training instances are offered to human users.
We evaluate this approach by approximating its ability to provide human users with explanations of an AI agent's actions.
arXiv Detail & Related papers (2020-10-04T20:41:03Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- Deceptive AI Explanations: Creation and Detection [3.197020142231916]
We investigate how AI models can be used to create and detect deceptive explanations.
As an empirical evaluation, we focus on text classification and alter the explanations generated by GradCAM.
We evaluate the effect of deceptive explanations on users in an experiment with 200 participants.
arXiv Detail & Related papers (2020-01-21T16:41:22Z)