Dual Goal Representations
- URL: http://arxiv.org/abs/2510.06714v1
- Date: Wed, 08 Oct 2025 07:07:39 GMT
- Title: Dual Goal Representations
- Authors: Seohong Park, Deepinder Mann, Sergey Levine
- Abstract summary: We introduce dual goal representations for goal-conditioned reinforcement learning (GCRL). A dual goal representation characterizes a state by "the set of temporal distances from all other states". We empirically show that dual goal representations consistently improve offline goal-reaching performance across 20 state- and pixel-based tasks.
- Score: 57.43956630070019
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we introduce dual goal representations for goal-conditioned reinforcement learning (GCRL). A dual goal representation characterizes a state by "the set of temporal distances from all other states"; in other words, it encodes a state through its relations to every other state, measured by temporal distance. This representation provides several appealing theoretical properties. First, it depends only on the intrinsic dynamics of the environment and is invariant to the original state representation. Second, it contains provably sufficient information to recover an optimal goal-reaching policy, while being able to filter out exogenous noise. Based on this concept, we develop a practical goal representation learning method that can be combined with any existing GCRL algorithm. Through diverse experiments on the OGBench task suite, we empirically show that dual goal representations consistently improve offline goal-reaching performance across 20 state- and pixel-based tasks.
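To make the definition concrete, below is a minimal sketch of a dual goal representation in a small deterministic, tabular MDP (a hypothetical 4-state chain). The exact-distance computation is purely illustrative; the paper learns a neural approximation of this object rather than enumerating all states.
```python
import numpy as np

# Hypothetical 4-state chain 0 <-> 1 <-> 2 <-> 3; each edge costs one
# environment step. This tabular setup is an illustrative assumption.
n = 4
d = np.full((n, n), np.inf)
np.fill_diagonal(d, 0.0)
for s, t in [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]:
    d[s, t] = 1.0

# Floyd-Warshall: d[s, g] becomes the temporal distance (minimal number
# of steps) from state s to state g.
for k in range(n):
    d = np.minimum(d, d[:, k:k + 1] + d[k:k + 1, :])

def dual_goal_representation(g: int) -> np.ndarray:
    """A goal g is represented by its temporal distances from ALL states."""
    return d[:, g]

print(dual_goal_representation(3))  # [3. 2. 1. 0.]
```
Note how this representation depends only on the dynamics (the edges), not on how states happen to be labeled or embedded, which is the invariance property claimed in the abstract.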
Related papers
- Offline Goal-conditioned Reinforcement Learning with Quasimetric Representations [72.24831946301613]
Approaches for goal-conditioned reinforcement learning (GCRL) often use learned state representations to extract goal-reaching policies. We propose an approach that unifies these two frameworks, using the structure of a quasimetric representation space (triangle inequality) with the right additional constraints to learn successor representations that enable optimal goal-reaching. Our approach is able to exploit a **quasimetric** distance parameterization to learn **optimal** goal-reaching distances, even with **suboptimal** data and in **stochastic** environments (a toy quasimetric sketch follows this entry).
arXiv Detail & Related papers (2025-09-24T18:45:32Z)
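As a hedged illustration of the quasimetric structure mentioned above, the sketch below implements one simple asymmetric distance that satisfies d(x, x) = 0 and the triangle inequality; this particular parameterization is an assumption for exposition (works in this line typically use richer families such as IQE or MRN).
```python
import numpy as np

# One simple quasimetric over latent vectors: d(x, x) = 0 and the
# triangle inequality hold, but d(x, y) != d(y, x) in general, which
# lets the distance model irreversible dynamics.
def quasimetric(x: np.ndarray, y: np.ndarray) -> float:
    # relu(y - x) is componentwise subadditive along any path
    # x -> z -> y, which is exactly what yields the triangle inequality.
    return float(np.maximum(y - x, 0.0).sum())

x, y = np.array([0.0, 2.0]), np.array([3.0, 1.0])
print(quasimetric(x, y), quasimetric(y, x))  # 3.0 vs. 1.0: asymmetric
```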
- GoViG: Goal-Conditioned Visual Navigation Instruction Generation [69.79110149746506]
We introduce Goal-Conditioned Visual Navigation Instruction Generation (GoViG), a new task that aims to autonomously generate precise and contextually coherent navigation instructions. GoViG exclusively leverages raw egocentric visual data, substantially improving its adaptability to unseen and unstructured environments.
arXiv Detail & Related papers (2025-08-13T07:05:17Z)
- A Unifying Framework for Action-Conditional Self-Predictive Reinforcement Learning [48.59516337905877]
Learning a good representation is a crucial challenge for reinforcement learning (RL) agents.
Recent work has developed theoretical insights into these self-predictive algorithms.
We take a step towards bridging the gap between theory and practice by analyzing an action-conditional self-predictive objective (a minimal loss sketch follows this entry).
arXiv Detail & Related papers (2024-06-04T07:22:12Z)
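To ground the term used above, here is a minimal sketch of an action-conditional self-predictive objective; the architecture, names, and dimensions are illustrative assumptions, not the paper's exact setup. The key ingredient is predicting the next latent from the current latent and the action against a stop-gradient target.
```python
import torch
import torch.nn as nn

obs_dim, act_dim, latent_dim = 8, 2, 16      # illustrative sizes
encoder = nn.Linear(obs_dim, latent_dim)     # maps observations to latents
predictor = nn.Linear(latent_dim + act_dim, latent_dim)  # latent dynamics

def self_predictive_loss(s, a, s_next):
    z = encoder(s)
    z_next_pred = predictor(torch.cat([z, a], dim=-1))
    with torch.no_grad():                    # stop-gradient target latent
        z_next_target = encoder(s_next)
    return ((z_next_pred - z_next_target) ** 2).mean()

s, a, s_next = torch.randn(32, obs_dim), torch.randn(32, act_dim), torch.randn(32, obs_dim)
self_predictive_loss(s, a, s_next).backward()
```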
- PcLast: Discovering Plannable Continuous Latent States [24.78767380808056]
We learn a representation that associates reachable states together for effective planning and goal-conditioned policy learning.
Our proposals are rigorously tested in various simulation testbeds.
arXiv Detail & Related papers (2023-11-06T21:16:37Z)
- Goal Space Abstraction in Hierarchical Reinforcement Learning via Set-Based Reachability Analysis [0.5409704301731713]
We introduce a Feudal HRL algorithm that concurrently learns both the goal representation and a hierarchical policy.
We evaluate our approach on complex navigation tasks, showing that the learned representation is interpretable, transferable, and results in data-efficient learning.
arXiv Detail & Related papers (2023-09-14T12:39:26Z)
- Neural Distillation as a State Representation Bottleneck in Reinforcement Learning [4.129225533930966]
We argue that distillation can be used to learn a state representation displaying favorable characteristics.
We first evaluate these criteria and verify the contribution of distillation to the state representation in a toy environment based on the standard inverted pendulum problem (a toy distillation sketch follows this entry).
arXiv Detail & Related papers (2022-10-05T13:00:39Z)
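As a hedged illustration of distillation as a representation bottleneck, the sketch below trains a narrow student to match a fixed teacher's outputs and treats the student's hidden layer as the state representation; all names and sizes are illustrative assumptions, not the paper's architecture.
```python
import torch
import torch.nn as nn

obs_dim, repr_dim, out_dim = 4, 8, 2   # e.g., inverted-pendulum-sized
teacher = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))
student_repr = nn.Sequential(nn.Linear(obs_dim, repr_dim), nn.Tanh())
student_head = nn.Linear(repr_dim, out_dim)

def distillation_loss(s):
    with torch.no_grad():
        target = teacher(s)              # teacher outputs are fixed targets
    z = student_repr(s)                  # z is the learned representation
    return ((student_head(z) - target) ** 2).mean()

distillation_loss(torch.randn(32, obs_dim)).backward()
```
The narrow hidden layer forces the student to keep only the state information the teacher's outputs depend on, which is the "bottleneck" idea in the title.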
- Goal-Conditioned Q-Learning as Knowledge Distillation [136.79415677706612]
We explore a connection between off-policy reinforcement learning in goal-conditioned settings and knowledge distillation.
We empirically show that this can improve the performance of goal-conditioned off-policy reinforcement learning when the space of goals is high-dimensional.
We also show that this technique can be adapted to allow for efficient learning in the case of multiple simultaneous sparse goals (a generic goal-conditioned TD-loss sketch follows this entry).
arXiv Detail & Related papers (2022-08-28T22:01:10Z)
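For background on the setting described above, here is a generic goal-conditioned Q-learning TD loss with a sparse goal-reaching reward; it sketches the base objective, not the paper's distillation mechanism, and all names and thresholds are illustrative assumptions.
```python
import torch
import torch.nn as nn

obs_dim, goal_dim, n_actions, gamma = 6, 6, 4, 0.99   # illustrative sizes
q_net = nn.Linear(obs_dim + goal_dim, n_actions)      # Q(s, g, .) head

def td_loss(s, a, s_next, g):
    reached = (s_next - g).abs().sum(-1) < 1e-3       # sparse success test
    r = reached.float()
    q = q_net(torch.cat([s, g], -1)).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = q_net(torch.cat([s_next, g], -1)).max(-1).values
        target = r + gamma * (1 - r) * q_next         # terminate on success
    return ((q - target) ** 2).mean()

s, g = torch.randn(32, obs_dim), torch.randn(32, goal_dim)
a, s_next = torch.randint(0, n_actions, (32,)), torch.randn(32, obs_dim)
td_loss(s, a, s_next, g).backward()
```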
- Bisimulation Makes Analogies in Goal-Conditioned Reinforcement Learning [71.52722621691365]
Building generalizable goal-conditioned agents from rich observations is key to making reinforcement learning (RL) solve real-world problems.
We propose a new form of state abstraction called goal-conditioned bisimulation.
We learn this representation using a metric form of this abstraction and show its ability to generalize to new goals in simulated manipulation tasks (a bisimulation-metric loss sketch follows below).
arXiv Detail & Related papers (2022-04-27T17:00:11Z)
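As a hedged sketch of a bisimulation-style metric loss in the spirit of the abstraction above (the paper's variant additionally conditions on goals; names and sizes here are illustrative assumptions): latent distances between state pairs are regressed onto reward differences plus a discounted, bootstrapped next-state distance.
```python
import torch
import torch.nn as nn

obs_dim, latent_dim, gamma = 8, 16, 0.99     # illustrative sizes
encoder = nn.Linear(obs_dim, latent_dim)

def bisim_loss(s_i, s_j, r_i, r_j, s_i_next, s_j_next):
    # Latent L1 distance between a pair of states ...
    dist = (encoder(s_i) - encoder(s_j)).abs().sum(-1)
    with torch.no_grad():                    # bootstrapped metric target
        next_dist = (encoder(s_i_next) - encoder(s_j_next)).abs().sum(-1)
        target = (r_i - r_j).abs() + gamma * next_dist
    # ... regressed onto reward difference + discounted next distance, so
    # behaviorally equivalent states collapse to nearby latents.
    return ((dist - target) ** 2).mean()

b = 32
bisim_loss(torch.randn(b, obs_dim), torch.randn(b, obs_dim),
           torch.randn(b), torch.randn(b),
           torch.randn(b, obs_dim), torch.randn(b, obs_dim)).backward()
```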
This list is automatically generated from the titles and abstracts of the papers on this site.