Deep Interpretable Models of Theory of Mind For Human-Agent Teaming
- URL: http://arxiv.org/abs/2104.02938v1
- Date: Wed, 7 Apr 2021 06:18:58 GMT
- Title: Deep Interpretable Models of Theory of Mind For Human-Agent Teaming
- Authors: Ini Oguntola, Dana Hughes, Katia Sycara
- Abstract summary: We develop an interpretable modular neural framework for modeling the intentions of other observed entities.
We demonstrate the efficacy of our approach with experiments on data from human participants on a search and rescue task in Minecraft.
- Score: 0.7734726150561086
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When developing AI systems that interact with humans, it is essential to
design both a system that can understand humans, and a system that humans can
understand. Most deep network based agent-modeling approaches are 1) not
interpretable and 2) only model external behavior, ignoring internal mental
states, which potentially limits their capability for assistance,
interventions, discovering false beliefs, etc. To this end, we develop an
interpretable modular neural framework for modeling the intentions of other
observed entities. We demonstrate the efficacy of our approach with experiments
on data from human participants on a search and rescue task in Minecraft, and
show that incorporating interpretability can significantly increase predictive
performance under the right conditions.
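The abstract describes the approach only at a high level ("an interpretable modular neural framework for modeling the intentions of other observed entities"). As a rough illustration of what such a design can look like, the minimal sketch below assumes a concept-bottleneck-style module: an encoder maps observation features to a small set of named, human-readable intention concepts, and the downstream action predictor sees only those concept activations. All class names, dimensions, and concept labels here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed, not the paper's code): a concept-bottleneck-style
# intention model. Interpretability comes from inspecting the named concept
# activations that sit between the encoder and the action predictor.
import torch
import torch.nn as nn


class IntentionBottleneckModel(nn.Module):
    def __init__(self, obs_dim: int, concept_names: list, n_actions: int):
        super().__init__()
        self.concept_names = concept_names
        # Encoder: raw observation features -> hidden representation.
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU())
        # Interpretable layer: one sigmoid unit per named intention concept.
        self.concept_head = nn.Linear(64, len(concept_names))
        # The action predictor only sees the concept activations.
        self.action_head = nn.Linear(len(concept_names), n_actions)

    def forward(self, obs: torch.Tensor):
        h = self.encoder(obs)
        concepts = torch.sigmoid(self.concept_head(h))  # values in [0, 1]
        action_logits = self.action_head(concepts)
        return concepts, action_logits


# Example usage with made-up concepts for a search-and-rescue setting.
model = IntentionBottleneckModel(
    obs_dim=32,
    concept_names=["triage_victim", "explore_rooms", "follow_teammate"],
    n_actions=5,
)
obs = torch.randn(1, 32)
concepts, action_logits = model(obs)
print(dict(zip(model.concept_names, concepts.squeeze(0).tolist())))
```

Under this reading, the reported finding that interpretability can improve predictive performance would correspond to the concept layer acting as a useful inductive bias when concept supervision is available.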
Related papers
- Approximating Human Models During Argumentation-based Dialogues [4.178382980763478]
A key challenge in Explainable AI Planning (XAIP) is model reconciliation.
We propose a novel framework that enables AI agents to learn and update a probabilistic human model.
arXiv Detail & Related papers (2024-05-28T23:22:18Z)
- Human Uncertainty in Concept-Based AI Systems [37.82747673914624]
We study human uncertainty in the context of concept-based AI systems.
We show that training with uncertain concept labels may help mitigate weaknesses in concept-based systems.
arXiv Detail & Related papers (2023-03-22T19:17:57Z)
- DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z)
- Machine Explanations and Human Understanding [31.047297225560566]
Explanations are hypothesized to improve human understanding of machine learning models.
However, empirical studies have found mixed and even negative results.
We show how human intuitions play a central role in enabling human understanding.
arXiv Detail & Related papers (2022-02-08T19:00:38Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- Human-Understandable Decision Making for Visual Recognition [30.30163407674527]
We propose a new framework to train a deep neural network by incorporating the prior of human perception into the model learning process.
The effectiveness of our proposed model is evaluated on two classical visual recognition tasks.
arXiv Detail & Related papers (2021-03-05T02:07:33Z)
- AGENT: A Benchmark for Core Psychological Reasoning [60.35621718321559]
Intuitive psychology is the ability to reason about hidden mental variables that drive observable actions.
Despite recent interest in machine agents that reason about other agents, it is not clear if such agents learn or hold the core psychology principles that drive human reasoning.
We present a benchmark consisting of procedurally generated 3D animations, AGENT, structured around four scenarios.
arXiv Detail & Related papers (2021-02-24T14:58:23Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article addresses aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
- Learning to Complement Humans [67.38348247794949]
A rising vision for AI in the open world centers on the development of systems that can complement humans for perceptual, diagnostic, and reasoning tasks.
We demonstrate how an end-to-end learning strategy can be harnessed to optimize the combined performance of human-machine teams.
arXiv Detail & Related papers (2020-05-01T20:00:23Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system, so that interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)