Subgoal-Based Explanations for Unreliable Intelligent Decision Support Systems
- URL: http://arxiv.org/abs/2201.04204v1
- Date: Tue, 11 Jan 2022 21:13:22 GMT
- Title: Subgoal-Based Explanations for Unreliable Intelligent Decision Support Systems
- Authors: Devleena Das, Been Kim, Sonia Chernova
- Abstract summary: We introduce a novel explanation type, subgoal-based explanations, for planning-based IDS systems.
We demonstrate that subgoal-based explanations lead to improved user task performance, improve user ability to distinguish optimal and suboptimal IDS recommendations, are preferred by users, and enable more robust user performance in the case of IDS failure.
- Score: 22.20142645430695
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intelligent decision support (IDS) systems leverage artificial intelligence
techniques to generate recommendations that guide human users through the
decision making phases of a task. However, a key challenge is that IDS systems
are not perfect, and in complex real-world scenarios may produce incorrect
output or fail to work altogether. The field of explainable AI planning (XAIP)
has sought to develop techniques that make the decision making of sequential
decision making AI systems more explainable to end-users. Critically, prior
work in applying XAIP techniques to IDS systems has assumed that the plan being
proposed by the planner is always optimal, and therefore the action or plan
being recommended as decision support to the user is always correct. In this
work, we examine novice user interactions with a non-robust IDS system -- one
that occasionally recommends the wrong action, and one that may become
unavailable after users have become accustomed to its guidance. We introduce a
novel explanation type, subgoal-based explanations, for planning-based IDS
systems, that supplements traditional IDS output with information about the
subgoal toward which the recommended action would contribute. We demonstrate
that subgoal-based explanations lead to improved user task performance, improve
user ability to distinguish optimal and suboptimal IDS recommendations, are
preferred by users, and enable more robust user performance in the case of IDS
failure.
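The core idea, supplementing a recommended action with the subgoal it serves, can be sketched in a few lines. This is a minimal illustration of the explanation format only; the task, the subgoal labels, and the message wording are hypothetical assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """An IDS recommendation augmented with planning context."""
    action: str   # the next action the planner recommends
    subgoal: str  # the intermediate goal this action contributes toward

def explain(rec: Recommendation) -> str:
    # A traditional IDS states only the action; a subgoal-based
    # explanation also names the subgoal the action works toward,
    # giving the user context to judge whether the advice is sensible.
    return (f"Recommended action: {rec.action} "
            f"(works toward subgoal: {rec.subgoal})")

# Hypothetical cooking-style task step:
rec = Recommendation(action="chop onions", subgoal="prepare ingredients")
print(explain(rec))
```

Because the user sees which subgoal is being pursued, a recommendation that conflicts with the apparent subgoal is easier to flag as suspect, which is what the paper's results on distinguishing optimal from suboptimal recommendations rely on.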
Related papers
- Dynamic Information Sub-Selection for Decision Support [5.063114309794011]
We introduce Dynamic Information Sub-Selection (DISS), a novel framework of AI assistance designed to enhance the performance of black-box decision-makers.
We explore several applications of DISS, including biased decision-maker support, expert assignment optimization, large language model decision support, and interpretability.
arXiv Detail & Related papers (2024-10-30T20:00:54Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents [110.25679611755962]
Current language model-driven agents often lack mechanisms for effective user participation, which is crucial given the vagueness commonly found in user instructions.
We introduce Intention-in-Interaction (IN3), a novel benchmark designed to inspect users' implicit intentions through explicit queries.
We empirically train Mistral-Interact, a powerful model that proactively assesses task vagueness, inquires user intentions, and refines them into actionable goals.
arXiv Detail & Related papers (2024-02-14T14:36:30Z)
- Optimising Human-AI Collaboration by Learning Convincing Explanations [62.81395661556852]
We propose a method for a collaborative system that remains safe by having a human make the final decisions.
Ardent enables efficient and effective decision-making by adapting to individual preferences for explanations.
arXiv Detail & Related papers (2023-11-13T16:00:16Z)
- InstructTODS: Large Language Models for End-to-End Task-Oriented Dialogue Systems [60.53276524369498]
Large language models (LLMs) have been used for diverse tasks in natural language processing (NLP).
We present InstructTODS, a novel framework for zero-shot end-to-end task-oriented dialogue systems.
InstructTODS generates a proxy belief state that seamlessly translates user intentions into dynamic queries.
arXiv Detail & Related papers (2023-10-13T06:36:26Z)
- Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps propagate the experts' knowledge to the AI model.
arXiv Detail & Related papers (2021-02-23T08:07:22Z)
- RADAR-X: An Interactive Mixed Initiative Planning Interface Pairing Contrastive Explanations and Revised Plan Suggestions [30.98066157540983]
We present our decision support system RADAR-X that showcases the ability to engage the user in an interactive explanatory dialogue.
The system uses this dialogue to elicit the user's latent preferences and provides revised plan suggestions through three different interaction strategies.
arXiv Detail & Related papers (2020-11-19T04:18:38Z)
- Towards Personalized Explanation of Robot Path Planning via User Feedback [1.7231251035416644]
We present a system for generating personalized explanations of robot path planning via user feedback.
The system is capable of detecting and resolving any preference conflict via user interaction.
arXiv Detail & Related papers (2020-11-01T15:10:43Z)
- Optimizing Interactive Systems via Data-Driven Objectives [70.3578528542663]
We propose an approach that infers the objective directly from observed user interactions.
These inferences can be made regardless of prior knowledge and across different types of user behavior.
We introduce Interactive System (ISO), a novel algorithm that uses these inferred objectives for optimization.
arXiv Detail & Related papers (2020-06-19T20:49:14Z)
- Intelligent Decision Support System for Updating Control Plans [0.0]
This paper proposes an intelligent DSS for quality control planning.
The proposed RS makes it possible to continuously update the control plans so that they adapt to the actual process quality situation.
A numerical application in a real case study illustrates the feasibility and practicability of the proposed DSS.
arXiv Detail & Related papers (2020-06-15T06:16:51Z)
- Tradeoff-Focused Contrastive Explanation for MDP Planning [7.929642367937801]
In real-world applications of planning, planning agents' decisions can involve complex tradeoffs among competing objectives.
It can be difficult for end-users to understand why an agent decides on a particular planning solution on the basis of its objective values.
We propose an approach, based on contrastive explanation, that enables a multi-objective MDP planning agent to explain its decisions in a way that communicates its tradeoff rationale.
arXiv Detail & Related papers (2020-04-27T17:17:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.