ASQ-IT: Interactive Explanations for Reinforcement-Learning Agents
- URL: http://arxiv.org/abs/2301.09941v1
- Date: Tue, 24 Jan 2023 11:57:37 GMT
- Title: ASQ-IT: Interactive Explanations for Reinforcement-Learning Agents
- Authors: Yotam Amitai, Guy Avni and Ofra Amir
- Abstract summary: We present ASQ-IT -- an interactive tool that presents video clips of the agent acting in its environment based on queries given by the user that describe temporal properties of behaviors of interest.
Our approach is based on formal methods: queries in ASQ-IT's user interface map to a fragment of Linear Temporal Logic over finite traces (LTLf), which we developed, and our algorithm for query processing is based on automata theory.
- Score: 7.9603223299524535
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: As reinforcement learning methods increasingly amass accomplishments, the
need for comprehending their solutions becomes more crucial. Most explainable
reinforcement learning (XRL) methods generate a static explanation depicting
their developers' intuition of what should be explained and how. In contrast,
literature from the social sciences proposes that meaningful explanations are
structured as a dialog between the explainer and the explainee, suggesting a
more active role for the user and her communication with the agent. In this
paper, we present ASQ-IT -- an interactive tool that presents video clips of
the agent acting in its environment based on queries given by the user that
describe temporal properties of behaviors of interest. Our approach is based on
formal methods: queries in ASQ-IT's user interface map to a fragment of Linear
Temporal Logic over finite traces (LTLf), which we developed, and our algorithm
for query processing is based on automata theory. User studies show that
end-users can understand and formulate queries in ASQ-IT, and that using ASQ-IT
assists users in identifying faulty agent behaviors.
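To make the query-processing pipeline concrete, the sketch below illustrates the general idea under stated assumptions; it is not ASQ-IT's actual implementation. A user query such as "the agent is low on fuel and eventually refuels" can be read as an LTLf-style property over trace predicates (roughly, low_fuel AND F(refuel)), which is then checked against recorded traces to extract the segments that satisfy it; those segments become the video clips shown to the user. The predicate names, trace format, and find_clips helper are hypothetical.

```python
# Minimal sketch (not ASQ-IT's code) of automaton-style query processing:
# find trace segments satisfying  start_pred AND F(end_pred)  on finite traces.
# Predicate names and the trace encoding are hypothetical.

from typing import Callable, Dict, List, Tuple

State = Dict[str, bool]   # one trace step, e.g. {"low_fuel": True, "refuel": False}
Trace = List[State]

def find_clips(trace: Trace,
               start_pred: Callable[[State], bool],
               end_pred: Callable[[State], bool],
               max_len: int = 40) -> List[Tuple[int, int]]:
    """Return (start, end) indices of segments where start_pred holds at the
    first step and end_pred holds at some later step within max_len steps."""
    clips = []
    for i, step in enumerate(trace):
        if not start_pred(step):
            continue
        # advance a two-state automaton: wait until end_pred is satisfied
        for j in range(i + 1, min(i + max_len, len(trace))):
            if end_pred(trace[j]):
                clips.append((i, j))
                break
    return clips

# Hypothetical usage: clips where the agent starts low on fuel and later refuels.
trace = [
    {"low_fuel": False, "refuel": False},
    {"low_fuel": True,  "refuel": False},
    {"low_fuel": True,  "refuel": False},
    {"low_fuel": False, "refuel": True},
]
print(find_clips(trace,
                 start_pred=lambda s: s["low_fuel"],
                 end_pred=lambda s: s["refuel"]))   # -> [(1, 3), (2, 3)]
```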
Related papers
- An Ontology-Enabled Approach For User-Centered and Knowledge-Enabled Explanations of AI Systems [0.3480973072524161]
Recent research in explainability has focused on explaining the workings of AI models or model explainability.
This thesis seeks to bridge some gaps between model and user-centered explainability.
arXiv Detail & Related papers (2024-10-23T02:03:49Z)
- Constraining Participation: Affordances of Feedback Features in Interfaces to Large Language Models [49.74265453289855]
Large language models (LLMs) are now accessible to anyone with a computer, a web browser, and an internet connection via browser-based interfaces.
This paper examines the affordances of interactive feedback features in ChatGPT's interface, analysing how they shape user input and participation in iteration.
arXiv Detail & Related papers (2024-08-27T13:50:37Z)
- LLMCheckup: Conversational Examination of Large Language Models via Interpretability Tools and Self-Explanations [26.340786701393768]
Interpretability tools that offer explanations in the form of a dialogue have demonstrated their efficacy in enhancing users' understanding.
Current solutions for dialogue-based explanations, however, often require external tools and modules and are not easily transferable to tasks they were not designed for.
We present an easily accessible tool that allows users to chat with any state-of-the-art large language model (LLM) about its behavior.
arXiv Detail & Related papers (2024-01-23T09:11:07Z)
- Pangu-Agent: A Fine-Tunable Generalist Agent with Structured Reasoning [50.47568731994238]
A key method for creating Artificial Intelligence (AI) agents is Reinforcement Learning (RL).
This paper presents a general framework model for integrating and learning structured reasoning into AI agents' policies.
arXiv Detail & Related papers (2023-12-22T17:57:57Z)
- An In-Context Schema Understanding Method for Knowledge Base Question Answering [70.87993081445127]
Large Language Models (LLMs) have shown strong capabilities in language understanding and can be used to solve Knowledge Base Question Answering (KBQA).
Existing methods bypass the schema-understanding challenge by initially employing LLMs to generate drafts of logic forms without schema-specific details.
We propose a simple In-Context Schema Understanding (ICSU) method that enables LLMs to directly understand schemas by leveraging in-context learning.
arXiv Detail & Related papers (2023-10-22T04:19:17Z)
- FIND: A Function Description Benchmark for Evaluating Interpretability Methods [86.80718559904854]
This paper introduces FIND (Function INterpretation and Description), a benchmark suite for evaluating automated interpretability methods.
FIND contains functions that resemble components of trained neural networks, and accompanying descriptions of the kind we seek to generate.
We evaluate methods that use pretrained language models to produce descriptions of function behavior in natural language and code.
arXiv Detail & Related papers (2023-09-07T17:47:26Z)
- AVIS: Autonomous Visual Information Seeking with Large Language Model Agent [123.75169211547149]
We propose an autonomous information seeking visual question answering framework, AVIS.
Our method leverages a Large Language Model (LLM) to dynamically strategize the utilization of external tools.
AVIS achieves state-of-the-art results on knowledge-intensive visual question answering benchmarks such as Infoseek and OK-VQA.
arXiv Detail & Related papers (2023-06-13T20:50:22Z)
- Semantic Interactive Learning for Text Classification: A Constructive Approach for Contextual Interactions [0.0]
We propose a novel interaction framework called Semantic Interactive Learning for the text domain.
We frame the problem of incorporating constructive and contextual feedback into the learner as a task to find an architecture that enables more semantic alignment between humans and machines.
We introduce a technique called SemanticPush that is effective for translating conceptual corrections of humans to non-extrapolating training examples.
arXiv Detail & Related papers (2022-09-07T08:13:45Z)
- elBERto: Self-supervised Commonsense Learning for Question Answering [131.51059870970616]
We propose a Self-supervised Bidirectional Representation Learning of Commonsense framework, which is compatible with off-the-shelf QA model architectures.
The framework comprises five self-supervised tasks to force the model to fully exploit the additional training signals from contexts containing rich commonsense.
elBERto achieves substantial improvements on out-of-paragraph and no-effect questions where simple lexical similarity comparison does not help.
arXiv Detail & Related papers (2022-03-17T16:23:45Z)
- Generating User-Centred Explanations via Illocutionary Question Answering: From Philosophy to Interfaces [3.04585143845864]
We show a new approach for the generation of interactive explanations based on a sophisticated pipeline of AI algorithms.
Our contribution is an approach to frame illocution in a computer-friendly way, to achieve user-centrality with statistical question answering.
We tested our hypotheses with a user-study involving more than 60 participants, on two XAI-based systems.
arXiv Detail & Related papers (2021-10-02T09:06:36Z)
- From Philosophy to Interfaces: an Explanatory Method and a Tool Inspired by Achinstein's Theory of Explanation [3.04585143845864]
We propose a new method for explanations in Artificial Intelligence (AI).
We show a new approach for the generation of interactive explanations based on a pipeline of AI algorithms.
We tested our hypothesis on a well-known XAI-powered credit approval system by IBM.
arXiv Detail & Related papers (2021-09-09T11:10:03Z)