Imagination-Augmented Deep Learning for Goal Recognition
- URL: http://arxiv.org/abs/2003.09529v1
- Date: Fri, 20 Mar 2020 23:07:34 GMT
- Title: Imagination-Augmented Deep Learning for Goal Recognition
- Authors: Thibault Duhamel, Mariane Maynard and Froduald Kabanza
- Abstract summary: A prominent idea in current goal-recognition research is to infer the likelihood of an agent's goal from the estimations of the costs of plans to the different goals the agent might have.
This paper introduces a novel idea of using a symbolic planner to compute plan-cost insights, which augment a deep neural network with an imagination capability.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Being able to infer the goal of people we observe, interact with, or read
stories about is one of the hallmarks of human intelligence. A prominent idea
in current goal-recognition research is to infer the likelihood of an agent's
goal from the estimations of the costs of plans to the different goals the
agent might have. Different approaches implement this idea by relying only on
handcrafted symbolic representations. Their application to real-world settings
is, however, quite limited, mainly because extracting rules for the factors
that influence goal-oriented behaviors remains a complicated task. In this
paper, we introduce a novel idea of using a symbolic planner to compute
plan-cost insights, which augment a deep neural network with an imagination
capability, leading to improved goal recognition accuracy in real and synthetic
domains compared to a symbolic recognizer or a deep-learning goal recognizer
alone.
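The plan-cost idea the abstract describes can be sketched as a Boltzmann-style posterior over cost differences, in the spirit of probabilistic plan-recognition approaches. This is an illustrative sketch only, not the paper's actual imagination-augmented network; the function name, `beta` temperature, and the cost numbers below are hypothetical:

```python
import math

def goal_posterior(cost_with_obs, cost_without_obs, beta=1.0):
    """Infer a posterior over candidate goals from plan-cost estimates.

    For each goal g, compare the cost of the cheapest plan consistent
    with the observations against the cheapest plan that ignores them:
    a small difference means the observed behaviour looks rational for
    g, so g receives more probability mass.
    """
    # Cost difference per goal: delta(g) = c(g | obs) - c(g)
    deltas = {g: cost_with_obs[g] - cost_without_obs[g] for g in cost_with_obs}
    # Boltzmann weighting: likelihood decays exponentially with delta.
    weights = {g: math.exp(-beta * d) for g, d in deltas.items()}
    total = sum(weights.values())
    return {g: w / total for g, w in weights.items()}

# Hypothetical costs for two candidate goals: the observed actions are
# cheap to explain under goal A but add detour cost under goal B.
posterior = goal_posterior(
    cost_with_obs={"A": 5.0, "B": 9.0},
    cost_without_obs={"A": 5.0, "B": 6.0},
)
# Goal A ends up far more likely than goal B.
```

In the paper's framing, such planner-derived cost signals are the "imagination" features fed to the deep network, rather than the final recognizer on their own.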
Related papers
- Human Goal Recognition as Bayesian Inference: Investigating the Impact of Actions, Timing, and Goal Solvability [7.044125601403849]
We use a Bayesian framework to explore the role of actions, timing, and goal solvability in goal recognition.
Our work provides new insight into human goal recognition and takes a step towards more human-like AI models.
arXiv Detail & Related papers (2024-02-16T08:55:23Z)
- Goal Space Abstraction in Hierarchical Reinforcement Learning via Set-Based Reachability Analysis [0.5409704301731713]
We introduce a Feudal HRL algorithm that concurrently learns both the goal representation and a hierarchical policy.
We evaluate our approach on complex navigation tasks, showing the learned representation is interpretable, transferrable and results in data efficient learning.
arXiv Detail & Related papers (2023-09-14T12:39:26Z)
- Augmenting Autotelic Agents with Large Language Models [24.16977502082188]
We introduce a language model augmented autotelic agent (LMA3).
LMA3 supports the representation, generation and learning of diverse, abstract, human-relevant goals.
We show that LMA3 agents learn to master a large diversity of skills in a task-agnostic text-based environment.
arXiv Detail & Related papers (2023-05-21T15:42:41Z)
- Discrete Factorial Representations as an Abstraction for Goal Conditioned Reinforcement Learning [99.38163119531745]
We show that applying a discretizing bottleneck can improve performance in goal-conditioned RL setups.
We experimentally demonstrate improved expected return on out-of-distribution goals, while still allowing goals with expressive structure to be specified.
arXiv Detail & Related papers (2022-11-01T03:31:43Z)
- Bisimulation Makes Analogies in Goal-Conditioned Reinforcement Learning [71.52722621691365]
Building generalizable goal-conditioned agents from rich observations is key to enabling reinforcement learning (RL) to solve real-world problems.
We propose a new form of state abstraction called goal-conditioned bisimulation.
We learn this representation using a metric form of this abstraction, and show its ability to generalize to new goals in simulation manipulation tasks.
arXiv Detail & Related papers (2022-04-27T17:00:11Z)
- Understanding the origin of information-seeking exploration in probabilistic objectives for control [62.997667081978825]
An exploration-exploitation trade-off is central to the description of adaptive behaviour.
One approach to solving this trade-off has been to equip agents with, or propose that they possess, an intrinsic 'exploratory drive'.
We show that this combination of utility-maximizing and information-seeking behaviour arises from the minimization of an entirely different class of objectives.
arXiv Detail & Related papers (2021-03-11T18:42:39Z)
- AGENT: A Benchmark for Core Psychological Reasoning [60.35621718321559]
Intuitive psychology is the ability to reason about hidden mental variables that drive observable actions.
Despite recent interest in machine agents that reason about other agents, it is not clear if such agents learn or hold the core psychology principles that drive human reasoning.
We present a benchmark consisting of procedurally generated 3D animations, AGENT, structured around four scenarios.
arXiv Detail & Related papers (2021-02-24T14:58:23Z)
- Learning with AMIGo: Adversarially Motivated Intrinsic Goals [63.680207855344875]
AMIGo is a goal-generating teacher that proposes Adversarially Motivated Intrinsic Goals.
We show that our method generates a natural curriculum of self-proposed goals which ultimately allows the agent to solve challenging procedurally-generated tasks.
arXiv Detail & Related papers (2020-06-22T10:22:08Z)
- Automatic Curriculum Learning through Value Disagreement [95.19299356298876]
Continually solving new, unsolved tasks is the key to learning diverse behaviors.
In the multi-task domain, where an agent needs to reach multiple goals, the choice of training goals can largely affect sample efficiency.
We propose setting up an automatic curriculum for goals that the agent needs to solve.
We evaluate our method across 13 multi-goal robotic tasks and 5 navigation tasks, and demonstrate performance gains over current state-of-the-art methods.
arXiv Detail & Related papers (2020-06-17T03:58:25Z)
- Language as a Cognitive Tool to Imagine Goals in Curiosity-Driven Exploration [15.255795563999422]
Developmental machine learning studies how artificial agents can model the way children learn open-ended repertoires of skills.
We argue that the ability to imagine out-of-distribution goals is key to enable creative discoveries and open-ended learning.
We introduce the Playground environment and study how this form of goal imagination improves generalization and exploration over agents lacking this capacity.
arXiv Detail & Related papers (2020-02-21T12:59:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.