"Why is 'Chicago' deceptive?" Towards Building Model-Driven Tutorials
for Humans
- URL: http://arxiv.org/abs/2001.05871v1
- Date: Tue, 14 Jan 2020 19:00:00 GMT
- Title: "Why is 'Chicago' deceptive?" Towards Building Model-Driven Tutorials
for Humans
- Authors: Vivian Lai, Han Liu, Chenhao Tan
- Abstract summary: We explore model-driven tutorials to help humans understand machine predictions.
We find that tutorials indeed improve human performance, with and without real-time assistance.
Our work suggests future directions for human-centered tutorials and explanations towards a synergy between humans and AI.
- Score: 19.32935518528528
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To support human decision making with machine learning models, we often need
to elucidate patterns embedded in the models that are unsalient, unknown, or
counterintuitive to humans. While existing approaches focus on explaining
machine predictions with real-time assistance, we explore model-driven
tutorials to help humans understand these patterns in a training phase. We
consider both tutorials with guidelines from scientific papers, analogous to
current practices of science communication, and automatically selected examples
from training data with explanations. We use deceptive review detection as a
testbed and conduct large-scale, randomized human-subject experiments to
examine the effectiveness of such tutorials. We find that tutorials indeed
improve human performance, with and without real-time assistance. In
particular, although deep learning provides better predictive performance than
simple models, tutorials and explanations from simple models are more
useful to humans. Our work suggests future directions for human-centered
tutorials and explanations towards a synergy between humans and AI.
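
As a concrete illustration of the simple-model setup the paper builds on, here is a minimal sketch (not the authors' code; the reviews, labels, and preprocessing below are toy placeholders) of a bag-of-words logistic regression for deceptive-review detection whose top-weighted words, such as 'chicago', could seed a model-driven tutorial:

```python
# Minimal sketch, not the authors' code: a bag-of-words logistic regression
# for deceptive-review detection. Reviews and labels are toy placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

reviews = [
    "my stay in chicago was an amazing luxury experience",
    "the room was clean but the elevator was slow",
    "my family loved this amazing chicago hotel",
    "carpet stains and a broken lamp in the corner",
]
labels = [1, 0, 1, 0]  # 1 = deceptive, 0 = genuine (hypothetical labels)

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(reviews)
clf = LogisticRegression().fit(X, labels)

# Words with the largest positive weights push predictions toward "deceptive";
# surfacing them (e.g. 'chicago') is one way to seed a tutorial for humans.
vocab = vectorizer.get_feature_names_out()
for word, weight in sorted(zip(vocab, clf.coef_[0]), key=lambda t: -t[1])[:5]:
    print(f"{word}: {weight:+.3f}")
```

In this spirit, the tutorial shows annotators the model's strongest cues together with example reviews, rather than a per-prediction explanation at decision time.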
Related papers
- Object-Oriented Transition Modeling with Inductive Logic Programming [4.560623715441945]
We develop a novel learning algorithm that is substantially more powerful than previous methods.
Our thorough experiments, including ablation tests and comparison with neural baselines, demonstrate a significant improvement over the state of the art.
arXiv Detail & Related papers (2026-02-07T16:11:53Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on the tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness and potential for misunderstanding in saliency-based explanations.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
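
For context on the entry above, here is a minimal sketch (an assumed setup, not the study's code) of a gradient-times-input saliency map on a toy PyTorch classifier, one common way to produce the attributions such studies evaluate:

```python
# Minimal sketch, assumed setup: gradient-times-input saliency on a toy model.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
x = torch.randn(1, 8, requires_grad=True)  # placeholder input

logits = model(x)
score = logits[0, logits.argmax()]  # logit of the predicted class
score.backward()

saliency = (x.grad * x).detach().squeeze()  # per-feature attribution
print(saliency)  # larger magnitude = more influence on the prediction
```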
- Understanding Your Agent: Leveraging Large Language Models for Behavior Explanation [7.647395374489533]
We propose an approach to generate natural language explanations for an agent's behavior based only on observations of states and actions.
We show that our approach generates explanations as helpful as those produced by a human domain expert.
arXiv Detail & Related papers (2023-11-29T20:16:23Z)
- Enhancing Robot Learning through Learned Human-Attention Feature Maps [6.724036710994883]
We posit that embedding auxiliary information about human focus points into robot learning would enhance the efficiency and robustness of the learning process.
In this paper, we propose a novel approach to model and emulate the human attention with an approximate prediction model.
We test our approach on two learning tasks - object detection and imitation learning.
arXiv Detail & Related papers (2023-08-29T14:23:44Z)
- Modeling Human Behavior Part I -- Learning and Belief Approaches [0.0]
Next-generation autonomous and adaptive systems will largely include AI agents and humans working together as teams.
We focus on techniques that learn a model or policy of behavior through exploration and feedback.
arXiv Detail & Related papers (2022-05-13T07:33:49Z)
- Learning to Scaffold: Optimizing Model Explanations for Teaching [74.25464914078826]
We train models on three natural language processing and computer vision tasks.
We find that students trained with explanations extracted with our framework are able to simulate the teacher significantly more effectively than ones produced with previous methods.
arXiv Detail & Related papers (2022-04-22T16:43:39Z)
- Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
arXiv Detail & Related papers (2021-12-17T18:29:56Z)
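
To make the setup of the entry above concrete, here is a minimal sketch (toy data and labels, not the study's interface) of the coefficient-guided editing participants performed: deleting the word that most supports the "fake" class lowers the model's confidence:

```python
# Minimal sketch, toy data rather than the study's interface: with access to a
# linear bag-of-words model's coefficients, deleting the strongest "fake" cue
# reduces the model's confidence, mirroring the participants' editing task.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

reviews = [
    "amazing luxury hotel in downtown chicago",
    "room was clean but the elevator was slow",
    "my family loved this amazing chicago experience",
    "carpet stains and a broken lamp in the corner",
]
labels = [1, 0, 1, 0]  # 1 = fake, 0 = genuine (toy labels)

vec = CountVectorizer()
X = vec.fit_transform(reviews)
clf = LogisticRegression().fit(X, labels)

review = "amazing chicago hotel"
p_before = clf.predict_proba(vec.transform([review]))[0, 1]

coefs = dict(zip(vec.get_feature_names_out(), clf.coef_[0]))
worst = max(review.split(), key=lambda w: coefs.get(w, 0.0))  # strongest "fake" cue
edited = " ".join(w for w in review.split() if w != worst)
p_after = clf.predict_proba(vec.transform([edited]))[0, 1]

print(f"removed '{worst}': P(fake) {p_before:.2f} -> {p_after:.2f}")
```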
- Widening the Pipeline in Human-Guided Reinforcement Learning with Explanation and Context-Aware Data Augmentation [20.837228359591663]
We present the first study of using human visual explanations in human-in-the-loop reinforcement learning.
We propose EXPAND to encourage the model to encode task-relevant features through context-aware data augmentation.
arXiv Detail & Related papers (2020-06-26T05:40:05Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows the benefits of AI explanations as interfaces for machine teaching, such as supporting trust calibration and enabling rich forms of teaching feedback, as well as potential drawbacks: an anchoring effect on the model's judgment and increased cognitive workload.
arXiv Detail & Related papers (2020-01-24T22:52:18Z)
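
A minimal sketch of the XAL loop's core idea (toy data; the local explanation here is a linear model's per-feature contributions, an illustrative choice rather than the paper's exact method): uncertainty sampling where each queried instance is accompanied by an explanation for the annotator:

```python
# Minimal sketch, toy data: pool-based uncertainty sampling where each queried
# instance is shown with a local explanation (per-feature contributions of a
# linear model; an illustrative stand-in for XAL's explanation techniques).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(200, 5))
y_pool = (X_pool[:, 0] + 0.5 * X_pool[:, 1] > 0).astype(int)  # hidden labels

# Seed set containing both classes so the initial fit is well-defined.
labeled = list(np.where(y_pool == 1)[0][:5]) + list(np.where(y_pool == 0)[0][:5])

for _ in range(5):
    clf = LogisticRegression().fit(X_pool[labeled], y_pool[labeled])
    proba = clf.predict_proba(X_pool)[:, 1]
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]
    query = min(unlabeled, key=lambda i: abs(proba[i] - 0.5))  # most uncertain

    contrib = clf.coef_[0] * X_pool[query]  # local explanation: coef_j * x_j
    top = np.argsort(-np.abs(contrib))[:2]
    print(f"query {query}: P(y=1)={proba[query]:.2f}, top features {top.tolist()}")
    labeled.append(query)  # simulate the annotator supplying the label
```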
- Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.