Causal Curiosity: RL Agents Discovering Self-supervised Experiments for Causal Representation Learning
- URL: http://arxiv.org/abs/2010.03110v4
- Date: Fri, 6 Aug 2021 21:53:05 GMT
- Title: Causal Curiosity: RL Agents Discovering Self-supervised Experiments for Causal Representation Learning
- Authors: Sumedh A. Sontakke, Arash Mehrjou, Laurent Itti, Bernhard Schölkopf
- Abstract summary: We introduce causal curiosity, a novel intrinsic reward.
We show that it allows our agents to learn optimal sequences of actions.
We also show that the knowledge of causal factor representations aids zero-shot learning for more complex tasks.
- Score: 24.163616087447874
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Animals exhibit an innate ability to learn regularities of the world through
interaction. By performing experiments in their environment, they are able to
discern the causal factors of variation and infer how they affect the world's
dynamics. Inspired by this, we attempt to equip reinforcement learning agents
with the ability to perform experiments that facilitate a categorization of the
rolled-out trajectories, and to subsequently infer the causal factors of the
environment in a hierarchical manner. We introduce causal curiosity, a
novel intrinsic reward, and show that it allows our agents to learn optimal
sequences of actions and discover causal factors in the dynamics of the
environment. The learned behavior allows the agents to infer a binary quantized
representation for the ground-truth causal factors in every environment.
Additionally, we find that these experimental behaviors are semantically
meaningful (e.g., our agents learn to lift blocks to categorize them by
weight), and are learnt in a self-supervised manner with approximately 2.5
times less data than conventional supervised planners. We show that these
behaviors can be re-purposed and fine-tuned (e.g., from lifting to pushing or
other downstream tasks). Finally, we show that the knowledge of causal factor
representations aids zero-shot learning for more complex tasks. Visit
https://sites.google.com/usc.edu/causal-curiosity/home for the project website.
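The core mechanism described above — rewarding action sequences whose rolled-out trajectories split cleanly into clusters, with the cluster assignment serving as a binary code for the hidden causal factor — can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the paper's implementation; the helper names `two_means` and `causal_curiosity_reward`, the 1-D outcome summary, and the separation-based score are all hypothetical stand-ins for the paper's clustering objective.

```python
import numpy as np

def two_means(x, iters=50):
    """Simple 1-D 2-means clustering: returns (centers, labels)."""
    c = np.array([x.min(), x.max()], dtype=float)
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = (np.abs(x - c[0]) > np.abs(x - c[1])).astype(int)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = x[labels == k].mean()
    return c, labels

def causal_curiosity_reward(trajectory_outcomes):
    """Intrinsic reward for one candidate action sequence: how cleanly
    the outcomes observed across environments split into two clusters.
    The cluster labels double as a binary code for the causal factor."""
    x = np.asarray(trajectory_outcomes, dtype=float)
    c, labels = two_means(x)
    within = sum(np.abs(x[labels == k] - c[k]).sum() for k in (0, 1))
    between = abs(c[0] - c[1])
    return between / (1.0 + within), labels

# Toy example: "lifting" heights across 6 environments whose hidden
# causal factor is block weight (light blocks rise, heavy ones barely move).
heights = [0.9, 1.0, 0.95, 0.1, 0.05, 0.12]
reward, code = causal_curiosity_reward(heights)
```

An action sequence that separates the environments well (e.g., lifting separates light from heavy blocks) earns a high reward, while an uninformative one (all outcomes similar) earns a low reward; `code` is the binary quantized representation the abstract refers to.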
Related papers
- Variable-Agnostic Causal Exploration for Reinforcement Learning [56.52768265734155]
We introduce a novel framework, Variable-Agnostic Causal Exploration for Reinforcement Learning (VACERL)
Our approach automatically identifies crucial observation-action steps associated with key variables using attention mechanisms.
It constructs the causal graph connecting these steps, which guides the agent towards observation-action pairs with greater causal influence on task completion.
arXiv Detail & Related papers (2024-07-17T09:45:27Z)
- Why Online Reinforcement Learning is Causal [31.59766909722592]
Reinforcement learning (RL) and causal modelling naturally complement each other.
This paper examines which reinforcement learning settings we can expect to benefit from causal modelling.
arXiv Detail & Related papers (2024-03-07T04:49:48Z)
- Generative agents in the streets: Exploring the use of Large Language Models (LLMs) in collecting urban perceptions [0.0]
This study explores current advancements in Generative agents powered by large language models (LLMs).
The experiment employs Generative agents to interact with the urban environments using street view images to plan their journey toward specific goals.
Since LLMs lack embodiment, access to the visual realm, and a sense of motion or direction, we designed movement and visual modules that help agents gain an overall understanding of their surroundings.
arXiv Detail & Related papers (2023-12-20T15:45:54Z)
- Sim-to-Real Causal Transfer: A Metric Learning Approach to Causally-Aware Interaction Representations [62.48505112245388]
We take an in-depth look at the causal awareness of modern representations of agent interactions.
We show that recent representations are already partially resilient to perturbations of non-causal agents.
We propose a metric learning approach that regularizes latent representations with causal annotations.
arXiv Detail & Related papers (2023-12-07T18:57:03Z)
- Observing Interventions: A logic for thinking about experiments [62.997667081978825]
This paper makes a first step towards a logic of learning from experiments.
Crucial for our approach is the idea that the notion of an intervention can be used as a formal expression of a (real or hypothetical) experiment.
For all the proposed logical systems, we provide a sound and complete axiomatization.
arXiv Detail & Related papers (2021-11-25T09:26:45Z)
- Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning [76.00395335702572]
A central goal for AI and causality is the joint discovery of abstract representations and causal structure.
Existing environments for studying causal induction are poorly suited for this objective because they have complicated task-specific causal graphs.
In this work, our goal is to facilitate research in learning representations of high-level variables as well as causal structures among them.
arXiv Detail & Related papers (2021-07-02T05:44:56Z)
- Noisy Agents: Self-supervised Exploration by Predicting Auditory Events [127.82594819117753]
We propose a novel type of intrinsic motivation for Reinforcement Learning (RL) that encourages the agent to understand the causal effect of its actions.
We train a neural network to predict the auditory events and use the prediction errors as intrinsic rewards to guide RL exploration.
Experimental results on Atari games show that our new intrinsic motivation significantly outperforms several state-of-the-art baselines.
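The prediction-error curiosity described in this entry can be sketched in a few lines. This is a hedged illustration, not the paper's model: the tiny linear predictor, the `CuriosityModule` class name, and the scalar "auditory-event score" are hypothetical stand-ins for the paper's neural event predictor.

```python
import numpy as np

class CuriosityModule:
    """Prediction-error curiosity: the intrinsic reward is the model's
    surprise, and the model is updated online so that familiar,
    predictable events gradually stop paying out."""

    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)  # linear predictor weights
        self.lr = lr

    def reward(self, state, event):
        pred = self.w @ state
        error = event - pred
        self.w += self.lr * error * state  # one SGD step on squared error
        return error ** 2                  # intrinsic reward = surprise

# A repeated, perfectly predictable event: the reward should shrink
# toward zero as the predictor learns it.
curiosity = CuriosityModule(dim=4)
state = np.array([1.0, 0.5, -0.5, 2.0])
rewards = [curiosity.reward(state, event=3.0) for _ in range(50)]
```

Novel sound-producing interactions yield large prediction errors (high reward), steering exploration toward them, while repeated events decay to near-zero reward.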
arXiv Detail & Related papers (2020-07-27T17:59:08Z)
- Show me the Way: Intrinsic Motivation from Demonstrations [44.87651595571687]
We show that complex exploration behaviors, reflecting different motivations, can be learnt and efficiently used by RL agents to solve tasks for which exhaustive exploration is prohibitive.
We propose to learn an exploration bonus from demonstrations that could transfer these motivations to an artificial agent with little assumptions about their rationale.
arXiv Detail & Related papers (2020-06-23T11:52:53Z)
- Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function.
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
arXiv Detail & Related papers (2020-06-10T20:12:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.