What Would You Ask the Machine Learning Model? Identification of User
Needs for Model Explanations Based on Human-Model Conversations
- URL: http://arxiv.org/abs/2002.05674v3
- Date: Fri, 31 Jul 2020 13:49:43 GMT
- Title: What Would You Ask the Machine Learning Model? Identification of User
Needs for Model Explanations Based on Human-Model Conversations
- Authors: Michał Kuźba, Przemysław Biecek
- Abstract summary: This study is the first to use a conversational system to collect the needs of human operators from the interactive and iterative dialogue explorations of a predictive model.
We developed dr_ant to talk about a machine learning model trained to predict survival odds on the Titanic.
Having collected a corpus of 1000+ dialogues, we analyse the most common types of questions that users would like to ask.
- Score: 5.802346990263708
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently we have seen a rising number of methods in the field of
eXplainable Artificial Intelligence. To our surprise, their development is
driven by model developers rather than by a study of the needs of human end
users. The analysis of needs, if done at all, takes the form of an A/B test
rather than a study of open questions. To answer the question "What would a
human operator like to ask the ML model?" we propose a conversational system
that explains the decisions of a predictive model. In this experiment, we
developed a chatbot called dr_ant to talk about a machine learning model
trained to predict survival odds on the Titanic. People can talk with dr_ant
about different aspects of the model to understand the rationale behind its
predictions. Having collected a corpus of 1000+ dialogues, we analyse the most
common types of questions that users would like to ask. To our knowledge, this
is the first study to use a conversational system to collect the needs of
human operators through interactive and iterative dialogue exploration of a
predictive model.
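As a concrete illustration of the setup described in the abstract, the sketch below trains a simple survival model on Titanic-style features and prints the kind of prediction and feature-importance answer a conversational explainer such as dr_ant could relay. This is a minimal sketch, not the authors' implementation: the toy rows, feature encoding, and the logistic-regression choice are assumptions made for illustration.

```python
# Minimal sketch (assumptions noted above): a survival model on Titanic-style
# features, plus a crude "which features matter?" answer that a chatbot like
# dr_ant could surface in dialogue.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Tiny illustrative sample in the shape of the classic Titanic data.
data = pd.DataFrame({
    "age":      [22, 38, 26, 35, 54, 2, 27, 14],
    "fare":     [7.25, 71.3, 7.9, 53.1, 51.9, 21.1, 11.1, 30.1],
    "pclass":   [3, 1, 3, 1, 1, 3, 3, 2],
    "sex":      [1, 0, 0, 0, 1, 1, 0, 0],   # 1 = male, 0 = female
    "survived": [0, 1, 1, 1, 0, 0, 1, 1],
})
X, y = data.drop(columns="survived"), data["survived"]

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# A passenger the user might ask about.
passenger = pd.DataFrame([{"age": 8, "fare": 30.0, "pclass": 2, "sex": 0}])
proba = model.predict_proba(passenger)[0, 1]
print(f"Predicted survival odds: {proba:.2f}")

# Standardized coefficients as a simple, model-specific importance ranking.
coefs = model.named_steps["logisticregression"].coef_[0]
for name, weight in sorted(zip(X.columns, coefs), key=lambda t: -abs(t[1])):
    print(f"{name:>7}: {weight:+.2f}")
```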
Related papers
- Human Learning by Model Feedback: The Dynamics of Iterative Prompting
with Midjourney [28.39697076030535]
This paper analyzes the dynamics of the user prompts along such iterations.
We show that prompts predictably converge toward specific traits along these iterations.
The possibility that users adapt to the model's preference raises concerns about reusing user data for further training.
arXiv Detail & Related papers (2023-11-20T19:28:52Z) - The System Model and the User Model: Exploring AI Dashboard Design [79.81291473899591]
We argue that sophisticated AI systems should have dashboards, just like all other complicated devices.
We conjecture that, for many systems, the two most important models will be of the user and of the system itself.
Finding ways to identify, interpret, and display these two models should be a core part of interface research for AI.
arXiv Detail & Related papers (2023-05-04T00:22:49Z) - Robots-Dont-Cry: Understanding Falsely Anthropomorphic Utterances in
Dialog Systems [64.10696852552103]
Highly anthropomorphic responses might make users uncomfortable or implicitly deceive them into thinking they are interacting with a human.
We collect human ratings on the feasibility of approximately 900 two-turn dialogs sampled from 9 diverse data sources.
arXiv Detail & Related papers (2022-10-22T12:10:44Z) - TalkToModel: Understanding Machine Learning Models With Open Ended
Dialogues [45.25552547278378]
TalkToModel is an open-ended dialogue system for understanding machine learning models.
It comprises three key components: 1) a natural language interface for engaging in dialogues, 2) a dialogue engine that interprets natural language, and 3) an execution component that runs the operations and ensures explanations are accurate.
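To make the three-part layout concrete, here is a minimal sketch of one dialogue turn flowing through a natural-language interface, a toy intent parser standing in for the dialogue engine, and an execution component that answers from a wrapped model. The intent labels, keyword rule, and canned answers are illustrative assumptions, not TalkToModel's actual API.

```python
# Toy three-component pipeline (assumptions noted above), not TalkToModel itself.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Turn:
    user_text: str
    reply: str

def parse_intent(text: str) -> str:
    """Dialogue engine (toy): map free text to a supported operation via keywords."""
    text = text.lower()
    if "why" in text or "explain" in text:
        return "explain"
    if "predict" in text or "odds" in text:
        return "predict"
    return "unknown"

def run_operation(intent: str, model_api: Dict[str, Callable[[], str]]) -> str:
    """Execution component: run the requested operation against the wrapped model."""
    return model_api.get(intent, lambda: "Sorry, I cannot answer that yet.")()

# Natural-language interface: one request/response turn.
model_api = {
    "predict": lambda: "Predicted survival probability: 0.81.",
    "explain": lambda: "The prediction is driven mostly by sex, passenger class, and age.",
}
question = "Why did the model predict that she survives?"
turn = Turn(question, run_operation(parse_intent(question), model_api))
print(turn.reply)
```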
arXiv Detail & Related papers (2022-07-08T23:42:56Z) - Ditch the Gold Standard: Re-evaluating Conversational Question Answering [9.194536300785481]
We conduct the first large-scale human evaluation of state-of-the-art CQA systems.
We find that the distribution of human-machine conversations differs drastically from that of human-human conversations.
We propose a question rewriting mechanism based on predicted history, which better correlates with human judgments.
arXiv Detail & Related papers (2021-12-16T11:57:56Z) - Model Elicitation through Direct Questioning [22.907680615911755]
We show how a robot can interact to localize the human model from a set of models.
We show how to generate questions to refine the robot's understanding of the teammate's model.
arXiv Detail & Related papers (2020-11-24T18:17:16Z) - The Adapter-Bot: All-In-One Controllable Conversational Model [66.48164003532484]
We propose a dialogue model that uses a fixed backbone such as DialoGPT and triggers on-demand dialogue skills via different adapters (a minimal sketch of this mechanism follows below).
Depending on the skill, the model is able to process multiple knowledge types, such as text, tables, and empathetic responses.
We evaluate our model using automatic evaluation by comparing it with existing state-of-the-art conversational models.
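As a minimal illustration of the adapter idea, the sketch below adds a small bottleneck module with a residual connection on top of a frozen backbone's hidden states and routes one turn through the adapter for a chosen skill. The hidden size, skill names, and routing dict are assumptions for illustration, not the Adapter-Bot code.

```python
# Illustrative adapter-over-frozen-backbone sketch (assumptions noted above).
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Small bottleneck module added on top of a frozen backbone's hidden states."""
    def __init__(self, hidden: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen backbone's representation intact.
        return h + self.up(torch.relu(self.down(h)))

# One adapter per dialogue skill; the backbone (e.g. DialoGPT) stays fixed.
adapters = nn.ModuleDict({"weather": Adapter(), "empathy": Adapter()})

hidden_states = torch.randn(1, 12, 768)  # stand-in for backbone hidden states
skill = "empathy"                        # selected on demand for this turn
adapted = adapters[skill](hidden_states)
print(adapted.shape)                     # torch.Size([1, 12, 768])
```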
arXiv Detail & Related papers (2020-08-28T10:59:31Z) - Investigation of Sentiment Controllable Chatbot [50.34061353512263]
In this paper, we investigate four models to scale or adjust the sentiment of the response.
The models are a persona-based model, reinforcement learning, a plug-and-play model, and CycleGAN.
We develop machine-evaluated metrics to estimate whether the responses are reasonable given the input.
arXiv Detail & Related papers (2020-07-11T16:04:30Z) - Knowledge Injection into Dialogue Generation via Language Models [85.65843021510521]
InjK is a two-stage approach to inject knowledge into a dialogue generation model.
First, we train a large-scale language model and query it as textual knowledge.
Second, we frame a dialogue generation model to sequentially generate textual knowledge and a corresponding response.
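A minimal sketch of this two-stage recipe: stage one queries a language model for a piece of textual knowledge about the context, and stage two builds a training target in which the dialogue model generates that knowledge before the response. The query_lm stub and the separator tokens are illustrative assumptions, not the paper's actual format.

```python
# Two-stage knowledge-injection sketch (assumptions noted above), not InjK itself.
def query_lm(context: str) -> str:
    """Stage 1 stub: in practice, a large pretrained language model completes a
    knowledge prompt built from the dialogue context."""
    return "The Eiffel Tower is about 330 metres tall."

def build_target(knowledge: str, response: str) -> str:
    """Stage 2: the dialogue model is trained to emit the knowledge first,
    then the response conditioned on it."""
    return f"<knowledge> {knowledge} <response> {response}"

context = "How tall is the Eiffel Tower?"
target = build_target(query_lm(context), "Roughly 330 metres, so it is quite a climb!")
print(target)
```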
arXiv Detail & Related papers (2020-04-30T07:31:24Z) - Recipes for building an open-domain chatbot [44.75975649076827]
Good conversation requires engaging talking points, listening to one's partner, and displaying knowledge, empathy and personality appropriately.
We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy.
We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models and code publicly available.
arXiv Detail & Related papers (2020-04-28T16:33:25Z) - Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.