Learning telic-controllable state representations
- URL: http://arxiv.org/abs/2406.14476v2
- Date: Tue, 16 Jul 2024 23:20:17 GMT
- Title: Learning telic-controllable state representations
- Authors: Nadav Amir, Stas Tiomkin, Angela Langdon
- Abstract summary: We present a novel computational framework for state representation learning in bounded agents.
Our work advances a unified theoretical perspective on goal-directed state representation learning in natural and artificial agents.
- Score: 3.072340427031969
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computational descriptions of purposeful behavior comprise both descriptive and normative aspects. The former are used to ascertain current (or future) states of the world and the latter to evaluate the desirability, or lack thereof, of these states under some goal. In Reinforcement Learning, the normative aspect (reward and value functions) is assumed to depend on a predefined and fixed descriptive one (state representation). Alternatively, these two aspects may emerge interdependently: goals can be, and indeed often are, approximated by state-dependent reward functions, but they may also shape the acquired state representations themselves. Here, we present a novel computational framework for state representation learning in bounded agents, where descriptive and normative aspects are coupled through the notion of goal-directed, or telic, states. We introduce the concept of telic controllability to characterize the tradeoff between the granularity of a telic state representation and the policy complexity required to reach all telic states. We propose an algorithm for learning controllable state representations, illustrating it using a simple navigation task with shifting goals. Our framework highlights the crucial role of deliberate ignorance -- knowing which features of experience to ignore -- for learning state representations that balance goal flexibility and policy complexity. More broadly, our work advances a unified theoretical perspective on goal-directed state representation learning in natural and artificial agents.
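To make the tradeoff between representation granularity and controllability concrete, here is a minimal runnable sketch, not the paper's algorithm: in a toy gridworld with an uncontrollable "light" feature, candidate representations attend to different feature subsets, and a representation counts as controllable only if every telic (goal-satisfying) abstract state is reachable from every raw state. The grid dynamics, the telic set, and names like `is_controllable` are illustrative assumptions.

```python
"""Toy illustration of telic controllability (not the paper's algorithm).

Observations are (x, y, light) tuples on a 3x3 grid; the light toggles on every
step and cannot be controlled. A candidate representation is the subset of
feature indices the agent attends to.
"""
from itertools import combinations, product

FEATURES = ("x", "y", "light")  # hypothetical observation features
GRID = [(x, y, light) for x, y, light in product(range(3), range(3), (0, 1))]

def abstract(obs, kept):
    """Project a raw observation onto the attended feature subset."""
    return tuple(obs[i] for i in kept)

def neighbors(obs):
    """One-step dynamics: move on the grid; the light toggles uncontrollably."""
    x, y, light = obs
    steps = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    return [(nx, ny, 1 - light) for nx, ny in steps if 0 <= nx < 3 and 0 <= ny < 3]

def reachable(start, target, kept):
    """Search raw states until some observation abstracts to the target."""
    frontier, seen = [start], {start}
    while frontier:
        obs = frontier.pop()
        if abstract(obs, kept) == target:
            return True
        for nxt in neighbors(obs):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

def is_controllable(kept, telic):
    """Every telic abstract state must be reachable from every raw state."""
    return all(reachable(s, t, kept) for s in GRID for t in telic)

# Shifting goals: the two opposite corners, expressed in each representation.
for r in range(1, 4):
    for kept in combinations(range(3), r):
        telic = {abstract((0, 0, 0), kept), abstract((2, 2, 0), kept)}
        granularity = len({abstract(s, kept) for s in GRID})
        names = [FEATURES[i] for i in kept]
        print(f"keep {names}: granularity {granularity}, "
              f"controllable={is_controllable(kept, telic)}")
```

In this toy the finest representation, which tracks the uncontrollable light, fails the check because of a step-parity constraint, while every representation that ignores the light passes: a minimal instance of deliberate ignorance trading granularity for controllability.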
Related papers
- Learning with Language-Guided State Abstractions [58.199148890064826]
Generalizable policy learning in high-dimensional observation spaces is facilitated by well-designed state representations.
Our method, LGA, uses a combination of natural language supervision and background knowledge from language models to automatically build state representations tailored to unseen tasks.
Experiments on simulated robotic tasks show that LGA yields state abstractions similar to those designed by humans, but in a fraction of the time.
arXiv Detail & Related papers (2024-02-28T23:57:04Z)
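A rough sketch of the recipe the LGA summary describes, with a trivial keyword matcher standing in for the language model; the feature names, task string, and `relevant_features` stub are hypothetical, not the paper's interface.

```python
"""Rough sketch of language-guided state abstraction (not the LGA code).

A language model would normally judge which observation features matter for a
natural-language task description; a keyword matcher stands in for it here.
"""
FEATURE_NAMES = ["cup_position", "cup_color", "table_texture", "gripper_pose"]

def relevant_features(task: str) -> set:
    """Stub for the LM query: keep a feature if all its name parts appear."""
    words = set(task.lower().split())
    return {f for f in FEATURE_NAMES if set(f.split("_")) <= words}

def abstract_state(observation: dict, task: str) -> dict:
    """Mask out features the (stubbed) language model deems irrelevant."""
    keep = relevant_features(task)
    return {k: v for k, v in observation.items() if k in keep}

obs = {"cup_position": (0.3, 0.1), "cup_color": "red",
       "table_texture": "wood", "gripper_pose": (0.0, 0.2)}
print(abstract_state(obs, "move the gripper pose toward the cup position"))
# {'cup_position': (0.3, 0.1), 'gripper_pose': (0.0, 0.2)}
```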
- Learning Interpretable Policies in Hindsight-Observable POMDPs through Partially Supervised Reinforcement Learning [57.67629402360924]
We introduce the Partially Supervised Reinforcement Learning (PSRL) framework.
At the heart of PSRL is the fusion of both supervised and unsupervised learning.
We show that PSRL offers a potent balance, enhancing model interpretability while preserving, and often significantly outperforming, the performance benchmarks set by traditional methods.
arXiv Detail & Related papers (2024-02-14T16:23:23Z)
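The "fusion of supervised and unsupervised learning" in the PSRL summary can be pictured as a two-term objective. The linear sketch below is my own construction under that reading, not the PSRL implementation: hindsight-observable state labels supervise part of a latent code while reconstruction trains the rest.

```python
"""Linear toy of combining supervised and unsupervised signals (my own
construction, not the PSRL implementation)."""
import numpy as np

rng = np.random.default_rng(0)
obs = rng.normal(size=(64, 8))                        # observations
labels = obs[:, :2] + 0.1 * rng.normal(size=(64, 2))  # hindsight state labels

W_enc = rng.normal(scale=0.1, size=(8, 4))  # encoder: obs -> 4-dim latent
W_dec = rng.normal(scale=0.1, size=(4, 8))  # decoder: latent -> obs

for _ in range(200):
    z = obs @ W_enc
    recon = z @ W_dec
    sup_err = z[:, :2] - labels      # supervised term (hindsight labels)
    rec_err = recon - obs            # unsupervised term (reconstruction)
    g_z = np.zeros_like(z)           # hand-derived gradients (linear case)
    g_z[:, :2] += 2 * sup_err / sup_err.size
    g_z += 2 * (rec_err @ W_dec.T) / rec_err.size
    W_dec -= 0.1 * z.T @ (2 * rec_err / rec_err.size)
    W_enc -= 0.1 * obs.T @ g_z

loss = (sup_err ** 2).mean() + (rec_err ** 2).mean()
print(f"combined loss after training: {loss:.3f}")
```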
- Inverse Decision Modeling: Learning Interpretable Representations of Behavior [72.80902932543474]
We develop an expressive, unifying perspective on inverse decision modeling.
We use this to formalize the inverse problem (as a descriptive model).
We illustrate how this structure enables learning (interpretable) representations of (bounded) rationality.
arXiv Detail & Related papers (2023-10-28T05:05:01Z)
- State Representations as Incentives for Reinforcement Learning Agents: A Sim2Real Analysis on Robotic Grasping [3.4777703321218225]
This work examines the effect of various representations in incentivizing the agent to solve a specific robotic task.
A continuum of state representations is defined, starting from hand-crafted numerical states to encoded image-based representations.
The effects of each representation on the ability of the agent to solve the task in simulation and the transferability of the learned policy to the real robot are examined.
arXiv Detail & Related papers (2023-09-21T11:41:22Z)
- Neural Distillation as a State Representation Bottleneck in Reinforcement Learning [4.129225533930966]
We argue that distillation can be used to learn a state representation displaying favorable characteristics.
We evaluate these characteristics and verify the contribution of distillation to the state representation on a toy environment based on the standard inverted pendulum problem.
arXiv Detail & Related papers (2022-10-05T13:00:39Z)
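The distillation idea in the entry above can be pictured with a toy: a small student encoder regressed onto a frozen teacher's features over pendulum-style observations. The teacher, data, and dimensions below are my own assumptions, not the paper's setup.

```python
"""Toy of distillation as a representation bottleneck (my construction, not
the paper's code): a linear student matches a frozen random "teacher" feature
map over pendulum-style observations (cos/sin of angle, angular velocity)."""
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(-np.pi, np.pi, size=256)
omega = rng.uniform(-8.0, 8.0, size=256)
obs = np.stack([np.cos(theta), np.sin(theta), omega], axis=1)

T1 = rng.normal(size=(3, 16))      # frozen "teacher" network weights
T2 = rng.normal(size=(16, 4))
targets = np.tanh(obs @ T1) @ T2   # teacher features to distill

W = np.zeros((3, 4))               # linear student: obs -> 4-dim state
for _ in range(500):
    grad = obs.T @ (obs @ W - targets) / len(obs)  # MSE gradient
    W -= 0.05 * grad
print("distillation MSE:", round(float(np.mean((obs @ W - targets) ** 2)), 3))
```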
- Bisimulation Makes Analogies in Goal-Conditioned Reinforcement Learning [71.52722621691365]
Building generalizable goal-conditioned agents from rich observations is key to reinforcement learning (RL) solving real-world problems.
We propose a new form of state abstraction called goal-conditioned bisimulation.
We learn this representation using a metric form of this abstraction, and show its ability to generalize to new goals in simulation manipulation tasks.
arXiv Detail & Related papers (2022-04-27T17:00:11Z)
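The sketch below computes a simplified policy- and goal-conditioned bisimulation-style metric on a five-state chain; matched goal-reaching policies and a discrete goal set are my own simplifications rather than the paper's exact formulation, but they show how "analogous" state-goal pairs collapse together.

```python
"""Toy goal-conditioned bisimulation-style metric on a 5-state chain (my own
simplified variant). Two (state, goal) pairs are close when they earn similar
rewards and transition, under their goal-reaching policies, to close pairs."""
import numpy as np
from itertools import product

N_STATES, GOALS, GAMMA = 5, (0, 4), 0.9

def reward(s, g):
    return 1.0 if s == g else 0.0

def policy_step(s, g):
    """Deterministic goal-reaching policy: step toward g, stay once there."""
    return s if s == g else s + (1 if g > s else -1)

pairs = list(product(range(N_STATES), GOALS))
idx = {p: i for i, p in enumerate(pairs)}
d = np.zeros((len(pairs), len(pairs)))

for _ in range(100):  # fixed-point iteration; the operator is a contraction
    new = np.zeros_like(d)
    for (s, g), (t, h) in product(pairs, pairs):
        new[idx[s, g], idx[t, h]] = abs(reward(s, g) - reward(t, h)) + GAMMA * d[
            idx[policy_step(s, g), g], idx[policy_step(t, h), h]]
    d = new

# "Analogous" pairs collapse together: one step left of goal 0 behaves like
# one step right of goal 4, while the same state under different goals differs.
print(round(d[idx[(1, 0)], idx[(3, 4)]], 3))  # ~0.0  (analogy)
print(round(d[idx[(1, 0)], idx[(1, 4)]], 3))  # ~1.71 (same state, other goal)
```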
- On the Generalization of Representations in Reinforcement Learning [32.303656009679045]
We provide an informative bound on the generalization error arising from a specific state representation.
Our bound applies to any state representation and quantifies the natural tension between representations that generalize well and those that approximate well.
arXiv Detail & Related papers (2022-03-01T15:22:09Z)
- Interpretable Reinforcement Learning with Multilevel Subgoal Discovery [77.34726150561087]
We propose a novel Reinforcement Learning model for discrete environments.
In the model, an agent learns information about the environment in the form of probabilistic rules.
No reward function is required for learning; an agent only needs to be given a primary goal to achieve.
arXiv Detail & Related papers (2022-02-15T14:04:44Z)
- Value Function Spaces: Skill-Centric State Abstractions for Long-Horizon Reasoning [120.38381203153159]
Reinforcement learning can train policies that effectively perform complex tasks.
For long-horizon tasks, the performance of these methods degrades with horizon, often necessitating reasoning over and composing lower-level skills.
We propose Value Function Spaces: a simple approach that produces such a representation by using the value functions corresponding to each lower-level skill.
arXiv Detail & Related papers (2021-11-04T22:46:16Z)
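The mechanism described in this entry is concrete enough to sketch directly: represent a state by the vector of its skills' value estimates. The grid, the "reach corner" skills, and their closed-form values below are illustrative assumptions, not the paper's experimental setup.

```python
"""Sketch of a Value Function Space-style embedding (my own toy instance of
the idea above): a state is represented by the vector of value estimates of
each lower-level skill, here four "reach corner" skills on a grid."""
import numpy as np

SIZE, GAMMA = 5, 0.9
CORNERS = [(0, 0), (0, SIZE - 1), (SIZE - 1, 0), (SIZE - 1, SIZE - 1)]

def skill_value(state, corner):
    """Value of the 'reach this corner' skill: one discount factor per step
    of the shortest (Manhattan) path, so V = gamma ** distance."""
    return GAMMA ** (abs(state[0] - corner[0]) + abs(state[1] - corner[1]))

def vfs_embedding(state):
    """State representation: one coordinate per skill value."""
    return np.array([skill_value(state, c) for c in CORNERS])

print(vfs_embedding((0, 0)))  # near corner 0 -> first coordinate ~1
print(vfs_embedding((2, 2)))  # center -> all skills equally (less) valuable
```

A high-level policy for a long-horizon task can then operate in this low-dimensional, skill-aware space instead of the raw observation space.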
- Towards Learning Controllable Representations of Physical Systems [9.088303226909279]
Learned representations of dynamical systems reduce dimensionality, potentially supporting downstream reinforcement learning (RL).
We consider the relationship between the true state and the corresponding representations, proposing that ideally each representation corresponds to a unique state.
Metrics derived from this uniqueness criterion are shown to predict reinforcement learning performance in a simulated peg-in-hole task when comparing variants of autoencoder-based representations.
arXiv Detail & Related papers (2020-11-16T17:15:57Z)
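One way to picture such a uniqueness metric, under my own assumptions rather than the paper's definitions: count pairs of distinct true states whose learned representations nearly collide.

```python
"""Toy 'uniqueness' metric in the spirit of the entry above (my own
construction): the fraction of state pairs that are far apart in true state
space but nearly identical in representation space."""
import numpy as np

def ambiguity_rate(states, reps, state_tol=1e-3, rep_tol=1e-2):
    """Fraction of distinct-state pairs whose representations collide."""
    n, bad, total = len(states), 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(states[i] - states[j]) > state_tol:
                total += 1
                if np.linalg.norm(reps[i] - reps[j]) < rep_tol:
                    bad += 1
    return bad / total

rng = np.random.default_rng(2)
states = rng.uniform(size=(200, 2))
good_reps = states @ rng.normal(size=(2, 8))  # injective linear map
collapsed = np.round(states[:, :1], 1)        # discards one state dimension
print(ambiguity_rate(states, good_reps))      # near 0
print(ambiguity_rate(states, collapsed))      # substantially > 0
```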
- An Overview of Natural Language State Representation for Reinforcement Learning [17.285206913252786]
A suitable state representation is a fundamental part of the learning process in Reinforcement Learning.
This survey outlines the strategies used in the literature to build natural language state representations.
arXiv Detail & Related papers (2020-07-19T20:15:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.