Towards Personalized Explanation of Robot Path Planning via User
Feedback
- URL: http://arxiv.org/abs/2011.00524v2
- Date: Fri, 5 Mar 2021 19:12:40 GMT
- Title: Towards Personalized Explanation of Robot Path Planning via User
Feedback
- Authors: Kayla Boggess, Shenghui Chen, Lu Feng
- Abstract summary: We present a system for generating personalized explanations of robot path planning via user feedback.
The system is capable of detecting and resolving any preference conflict via user interaction.
- Score: 1.7231251035416644
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prior studies have found that explaining robot decisions and actions helps to
increase system transparency, improve user understanding, and enable effective
human-robot collaboration. In this paper, we present a system for generating
personalized explanations of robot path planning via user feedback. We consider
a robot navigating in an environment modeled as a Markov decision process
(MDP), and develop an algorithm to automatically generate a personalized
explanation of an optimal MDP policy, based on the user's preferences regarding
four elements (objective, locality, specificity, and corpus). In addition, we
design the system to interact with users by answering their follow-up
questions about the generated explanations. Users have the option to
update their preferences to view different explanations. The system is capable
of detecting and resolving any preference conflict via user interaction. The
results of an online user study show that the generated personalized
explanations improve user satisfaction, and the majority of users liked the
system's question-answering and conflict detection/resolution capabilities.
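
To make the abstract's pipeline concrete, the following minimal sketch shows how an optimal MDP policy could be computed and then phrased according to a user-preference profile over the four elements. This is an illustration under assumptions, not the authors' implementation: the `Preference` fields mirror the elements named in the abstract, while `value_iteration`, `explain`, and the way each field shapes the output are hypothetical.

```python
# Minimal sketch (not the authors' implementation): compute an optimal MDP
# policy by value iteration, then phrase an explanation of it according to
# a user-preference profile over the paper's four elements. All names and
# the preference handling are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Preference:
    objective: str    # which objective to explain, e.g. "time" or "safety"
    locality: str     # "local" (one state) vs. "global" (whole policy)
    specificity: str  # "summary" vs. "detailed"
    corpus: str       # vocabulary register, e.g. "plain" or "technical"


def value_iteration(states, actions, T, R, gamma=0.95, eps=1e-6):
    """T(s, a) -> [(s2, prob)]; R(s, a, s2) -> reward. Returns (policy, V)."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(sum(p * (R(s, a, s2) + gamma * V[s2])
                           for s2, p in T(s, a)) for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    policy = {s: max(actions,
                     key=lambda a: sum(p * (R(s, a, s2) + gamma * V[s2])
                                       for s2, p in T(s, a)))
              for s in states}
    return policy, V


def explain(policy, V, state, pref):
    """Phrase an explanation of the policy, shaped by the user's preferences."""
    action = policy[state]
    scope = (f"In state {state}, the robot chooses '{action}'"
             if pref.locality == "local"
             else f"Across the map, the policy favors actions like '{action}'")
    detail = (f" because it maximizes the expected {pref.objective} value "
              f"({V[state]:.2f})" if pref.specificity == "detailed" else "")
    return scope + detail + "."
```

For example, a user who sets `Preference(objective="time", locality="local", specificity="detailed", corpus="plain")` would receive a state-specific explanation citing the expected value behind the chosen action, while a "global"/"summary" profile would yield a one-line overview of the whole policy.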
Related papers
- Optimising Human-AI Collaboration by Learning Convincing Explanations [62.81395661556852]
We propose a method for a collaborative system that remains safe by having a human make decisions.
Ardent enables efficient and effective decision-making by adapting to individual preferences for explanations.
arXiv Detail & Related papers (2023-11-13T16:00:16Z)
- Continually Improving Extractive QA via Human Feedback [59.49549491725224]
We study continually improving an extractive question answering (QA) system via human user feedback.
We conduct experiments involving thousands of user interactions under diverse setups to broaden the understanding of learning from feedback over time.
arXiv Detail & Related papers (2023-05-21T14:35:32Z)
- Latent User Intent Modeling for Sequential Recommenders [92.66888409973495]
Sequential recommender models learn to predict the next items a user is likely to interact with based on their interaction history on the platform.
Most sequential recommenders, however, lack a higher-level understanding of user intents, which often drive user behaviors online.
Intent modeling is thus critical for understanding users and optimizing long-term user experience.
arXiv Detail & Related papers (2022-11-17T19:00:24Z)
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- Learning User-Interpretable Descriptions of Black-Box AI System Capabilities [9.608555640607731]
This paper presents an approach for learning user-interpretable symbolic descriptions of the limits and capabilities of a black-box AI system.
It uses a hierarchical active querying paradigm to generate questions and to learn a user-interpretable model of the AI system based on its responses.
arXiv Detail & Related papers (2021-07-28T23:33:31Z)
- Not all users are the same: Providing personalized explanations for sequential decision making problems [25.24098967133101]
This work proposes an end-to-end adaptive explanation generation system.
It begins by learning the different types of users that the agent could interact with.
It then identifies the user's type on the fly and adjusts its explanations accordingly (see the sketch following this list).
arXiv Detail & Related papers (2021-06-23T07:46:19Z)
- A Knowledge Driven Approach to Adaptive Assistance Using Preference Reasoning and Explanation [3.8673630752805432]
We propose that the robot use Analogical Theory of Mind to infer what the user is trying to do.
If the user is unsure or confused, the robot provides the user with an explanation.
arXiv Detail & Related papers (2020-12-05T00:18:43Z)
- Soliciting Human-in-the-Loop User Feedback for Interactive Machine Learning Reduces User Trust and Impressions of Model Accuracy [8.11839312231511]
Mixed-initiative systems allow users to interactively provide feedback to improve system performance.
Our research investigates how the act of providing feedback can affect user understanding of an intelligent system and its accuracy.
arXiv Detail & Related papers (2020-08-28T16:46:41Z)
- Optimizing Interactive Systems via Data-Driven Objectives [70.3578528542663]
We propose an approach that infers the objective directly from observed user interactions.
These inferences can be made regardless of prior knowledge and across different types of user behavior.
We introduce the Interactive System Optimizer (ISO), a novel algorithm that uses these inferred objectives for optimization.
arXiv Detail & Related papers (2020-06-19T20:49:14Z)
- Towards Transparent Robotic Planning via Contrastive Explanations [1.7231251035416644]
We formalize the notion of contrastive explanations for robotic planning policies based on Markov decision processes.
We present methods for the automated generation of contrastive explanations with three key factors: selectiveness, constrictiveness, and responsibility (see the contrastive sketch following this list).
arXiv Detail & Related papers (2020-03-16T19:44:31Z)
- A Neural Topical Expansion Framework for Unstructured Persona-oriented Dialogue Generation [52.743311026230714]
Persona Exploration and Exploitation (PEE) is able to extend the predefined user persona description with semantically correlated content.
PEE consists of two main modules: persona exploration and persona exploitation.
Our approach outperforms state-of-the-art baselines in terms of both automatic and human evaluations.
arXiv Detail & Related papers (2020-02-06T08:24:33Z)
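
Two of the entries above describe algorithmic ideas that a short sketch can clarify. First, for the adaptive explanation generation system in "Not all users are the same", one common way to identify a user's type on the fly is a Bayesian belief update over a fixed set of learned types. The sketch below shows that generic technique, not the paper's specific algorithm; the type names, feedback signals, and likelihood table are invented for illustration.

```python
# Hedged sketch of on-the-fly user-type identification via Bayes updates.
# Types, feedback signals, and likelihoods are illustrative assumptions.
USER_TYPES = ["novice", "expert"]

# P(feedback | type): e.g. novices more often ask for simpler wording.
LIKELIHOOD = {
    ("asks_simpler", "novice"): 0.7, ("asks_simpler", "expert"): 0.2,
    ("asks_detail", "novice"): 0.3, ("asks_detail", "expert"): 0.8,
}


def update_belief(belief, feedback):
    """One Bayes update of P(type) given an observed feedback signal."""
    post = {t: belief[t] * LIKELIHOOD[(feedback, t)] for t in USER_TYPES}
    z = sum(post.values())
    return {t: p / z for t, p in post.items()}


belief = {t: 1.0 / len(USER_TYPES) for t in USER_TYPES}
belief = update_belief(belief, "asks_detail")
likely_type = max(belief, key=belief.get)  # adapt explanation style to this
```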
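
Second, for the contrastive explanations of "Towards Transparent Robotic Planning", a common baseline answers "why action A rather than foil B?" by contrasting the two actions' expected returns. This is again a hedged illustration rather than the paper's method, assuming a Q-table keyed by (state, action) such as value iteration would produce.

```python
# Generic contrastive "why A rather than B?" sketch, not the paper's method.
# Q is assumed to map (state, action) -> expected return.
def contrastive_explanation(Q, state, chosen, foil):
    gain = Q[(state, chosen)] - Q[(state, foil)]
    if gain > 0:
        return (f"In state {state}, '{chosen}' is preferred over '{foil}': "
                f"its expected return is higher by {gain:.2f}.")
    return (f"In state {state}, '{foil}' does at least as well as '{chosen}'; "
            f"the contrast does not favor the policy's choice here.")
```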
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.