Not all users are the same: Providing personalized explanations for
sequential decision making problems
- URL: http://arxiv.org/abs/2106.12207v1
- Date: Wed, 23 Jun 2021 07:46:19 GMT
- Title: Not all users are the same: Providing personalized explanations for
sequential decision making problems
- Authors: Utkarsh Soni, Sarath Sreedharan, Subbarao Kambhampati
- Abstract summary: This work proposes an end-to-end adaptive explanation generation system.
It begins by learning the different types of users that the agent could interact with.
It then identifies the target user's type on the fly and adjusts its explanations accordingly.
- Score: 25.24098967133101
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is a growing interest in designing autonomous agents that can work
alongside humans. Such agents will undoubtedly be expected to explain their
behavior and decisions. While generating explanations is an actively researched
topic, most works tend to focus on methods that generate one-size-fits-all
explanations, in which the specifics of the user's model are completely
ignored. The handful of works that do tailor their explanations to the
user's background rely on having specific models of the users (either analytic
models or learned labeling models). The goal of this work is thus to propose an
end-to-end adaptive explanation generation system that begins by learning the
different types of users that the agent could interact with. Then during the
interaction with the target user, it is tasked with identifying the user's type
on the fly and adjusting its explanations accordingly. The former is achieved
by a data-driven clustering approach, while for the latter we compile our
explanation generation problem into a POMDP. We demonstrate the usefulness of
our system on two domains using state-of-the-art POMDP solvers. We also report
the results of a user study that investigates the benefits of providing
personalized explanations in a human-robot interaction setting.
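The two-stage pipeline in the abstract — offline clustering of user types, then online type identification — can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' implementation: user types are learned here by k-means over hypothetical interaction features, and the online stage is reduced to a Bayesian belief update over types with a one-step greedy choice of explanation, whereas the paper compiles the full problem into a POMDP and uses off-the-shelf solvers. All names (`ExplanationPolicy`, `response_model`, `helpfulness`) are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

# Stage 1 (offline): learn user types by clustering interaction data.
# Each row of `interaction_data` is a hypothetical feature vector summarizing
# how one user responded to a set of probe explanations.
def learn_user_types(interaction_data: np.ndarray, n_types: int = 3) -> KMeans:
    return KMeans(n_clusters=n_types, n_init=10).fit(interaction_data)

# Stage 2 (online): treat the unknown user type as the hidden state of a
# POMDP. Here the policy is a one-step greedy stand-in for a real solver.
class ExplanationPolicy:
    def __init__(self, response_model: np.ndarray, helpfulness: np.ndarray):
        # response_model[t, e, r]: assumed P(response r | user type t, explanation e)
        # helpfulness[t, e]: assumed utility of explanation style e for a type-t user
        self.response_model = response_model
        self.helpfulness = helpfulness
        n_types = response_model.shape[0]
        self.belief = np.full(n_types, 1.0 / n_types)  # uniform prior over types

    def choose_explanation(self) -> int:
        # Greedy: maximize expected helpfulness under the current belief.
        return int(np.argmax(self.belief @ self.helpfulness))

    def update_belief(self, explanation: int, response: int) -> None:
        # Bayes rule: b'(t) ∝ P(response | t, explanation) * b(t)
        self.belief *= self.response_model[:, explanation, response]
        self.belief /= self.belief.sum()
```

A full POMDP policy, as used in the paper, would also value information gathering — sometimes choosing an explanation mainly to disambiguate the user's type — which the greedy rule above ignores.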
Related papers
- Tell Me More! Towards Implicit User Intention Understanding of Language
Model Driven Agents [110.25679611755962]
Current language model-driven agents often lack mechanisms for effective user participation, which is crucial given the vagueness commonly found in user instructions.
We introduce Intention-in-Interaction (IN3), a novel benchmark designed to inspect users' implicit intentions through explicit queries.
We empirically train Mistral-Interact, a powerful model that proactively assesses task vagueness, inquires about user intentions, and refines them into actionable goals.
arXiv Detail & Related papers (2024-02-14T14:36:30Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness and potential for misunderstanding in saliency-based explanations.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Understanding Your Agent: Leveraging Large Language Models for Behavior Explanation [7.647395374489533]
We propose an approach to generate natural language explanations for an agent's behavior based only on observations of states and actions.
We show that our approach generates explanations as helpful as those produced by a human domain expert.
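A rough sketch of that observation-to-explanation recipe (an assumption-laden illustration, not the paper's pipeline): encode the logged (state, action) trajectory as text and ask a language model for an explanation. The trajectory encoding and the `complete` callable are assumptions standing in for any text-generation backend.

```python
from typing import Callable, Sequence, Tuple

# Hypothetical sketch: turn observed (state, action) pairs into a natural
# language prompt and delegate the explanation to a text-completion model.
def explain_behavior(
    trajectory: Sequence[Tuple[str, str]],
    complete: Callable[[str], str],  # e.g., a wrapper around an LLM API
) -> str:
    steps = "\n".join(
        f"step {i}: state={s!r}, action={a!r}"
        for i, (s, a) in enumerate(trajectory)
    )
    prompt = (
        "Below is a log of an agent's states and actions.\n"
        f"{steps}\n"
        "In plain language, explain what the agent appears to be doing and why."
    )
    return complete(prompt)
```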
arXiv Detail & Related papers (2023-11-29T20:16:23Z)
- What if you said that differently?: How Explanation Formats Affect Human Feedback Efficacy and User Perception [53.4840989321394]
We analyze the effect of rationales generated by QA models to support their answers.
We present users with incorrect answers and corresponding rationales in various formats.
We measure the effectiveness of this feedback in patching these rationales through in-context learning.
arXiv Detail & Related papers (2023-11-16T04:26:32Z)
- On Generative Agents in Recommendation [58.42840923200071]
Agent4Rec is a user simulator in recommendation based on Large Language Models.
Each agent interacts with personalized recommender models in a page-by-page manner.
arXiv Detail & Related papers (2023-10-16T06:41:16Z)
- AgentCF: Collaborative Learning with Autonomous Language Agents for Recommender Systems [112.76941157194544]
We propose AgentCF for simulating user-item interactions in recommender systems through agent-based collaborative filtering.
We creatively consider not only users but also items as agents, and develop a collaborative learning approach that optimizes both kinds of agents together.
Overall, the optimized agents exhibit diverse interaction behaviors within our framework, including user-item, user-user, item-item, and collective interactions.
arXiv Detail & Related papers (2023-10-13T16:37:14Z)
- Relation-aware Heterogeneous Graph for User Profiling [24.076585294260816]
We propose to leverage the relation-aware heterogeneous graph method for user profiling.
We adopt the query, key, and value mechanism in a transformer fashion for heterogeneous message passing.
We conduct experiments on two real-world e-commerce datasets and observe a significant performance boost from our approach.
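For illustration, a minimal sketch of transformer-style message passing of the kind the summary describes, under assumptions not taken from the paper (a single head, a single relation type, plain scaled dot-product attention):

```python
import numpy as np

# Hypothetical single-head QKV message passing for one node on a graph.
# W_q, W_k, W_v are learned projections; here they are just given matrices.
def attention_message_passing(
    node_feat: np.ndarray,        # (d,) feature of the target node
    neighbor_feats: np.ndarray,   # (n, d) features of its neighbors
    W_q: np.ndarray, W_k: np.ndarray, W_v: np.ndarray,  # (d, d) projections
) -> np.ndarray:
    q = node_feat @ W_q                   # query from the target node
    k = neighbor_feats @ W_k              # keys from the neighbors
    v = neighbor_feats @ W_v              # values from the neighbors
    scores = k @ q / np.sqrt(q.shape[0])  # scaled dot-product scores, (n,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax over neighbors
    return weights @ v                    # aggregated message, (d,)
```

A relation-aware variant would use separate key and value projections per edge type; this sketch collapses them into one.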
arXiv Detail & Related papers (2021-10-14T06:59:30Z)
- Towards Personalized Explanation of Robot Path Planning via User Feedback [1.7231251035416644]
We present a system for generating personalized explanations of robot path planning via user feedback.
The system is capable of detecting and resolving any preference conflict via user interaction.
arXiv Detail & Related papers (2020-11-01T15:10:43Z)
- DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models [36.50754934147469]
We exploit the potential of counterfactual explanations to understand and explore the behavior of machine learning models.
We design DECE, an interactive visualization system that helps understand and explore a model's decisions on individual instances and data subsets.
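As a generic illustration of the counterfactual idea (a sketch only; DECE itself is an interactive visualization system): search for a small single-feature change that flips a black-box model's prediction.

```python
import numpy as np

# Hypothetical greedy search for a counterfactual: try single-feature
# perturbations of increasing size until the classifier's label flips.
def find_counterfactual(predict, x: np.ndarray, step: float = 0.1,
                        max_rounds: int = 20):
    original = predict(x)
    for _ in range(max_rounds):
        for i in range(len(x)):
            for delta in (step, -step):
                candidate = x.astype(float).copy()
                candidate[i] += delta
                if predict(candidate) != original:
                    return candidate  # first flip found at this step size
        step *= 2.0  # no flip found; widen the perturbation radius
    return None  # give up: no single-feature counterfactual located
```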
arXiv Detail & Related papers (2020-08-19T09:44:47Z)
- A Neural Topical Expansion Framework for Unstructured Persona-oriented Dialogue Generation [52.743311026230714]
Persona Exploration and Exploitation (PEE) is able to extend the predefined user persona description with semantically correlated content.
PEE consists of two main modules: persona exploration and persona exploitation.
Our approach outperforms state-of-the-art baselines in terms of both automatic and human evaluations.
arXiv Detail & Related papers (2020-02-06T08:24:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.