Weakly Supervised Disentangled Representation for Goal-conditioned
Reinforcement Learning
- URL: http://arxiv.org/abs/2202.13624v1
- Date: Mon, 28 Feb 2022 09:05:14 GMT
- Title: Weakly Supervised Disentangled Representation for Goal-conditioned
Reinforcement Learning
- Authors: Zhifeng Qian, Mingyu You, Hongjun Zhou, Bin He
- Abstract summary: We propose a skill learning framework, DR-GRL, that aims to improve sample efficiency and policy generalization.
In a weakly supervised manner, we propose a Spatial Transform AutoEncoder (STAE) to learn an interpretable and controllable representation.
We empirically demonstrate that DR-GRL significantly outperforms the previous methods in sample efficiency and policy generalization.
- Score: 15.698612710580447
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Goal-conditioned reinforcement learning is a crucial yet challenging
setting in which agents achieve multiple user-specified goals while learning a
set of skills in a dynamic environment. However, it typically requires millions
of environmental interactions, which is sample-inefficient. In this paper, we
propose a skill learning framework, DR-GRL, that aims to improve sample
efficiency and policy generalization by combining Disentangled Representation
learning and Goal-conditioned visual Reinforcement Learning. In a weakly
supervised manner, we propose a Spatial Transform AutoEncoder (STAE) to learn
an interpretable and controllable representation in which different parts
correspond to different object attributes (shape, color, position). Owing to
the high controllability of the representation, STAE can simply recombine and
recode the representations to generate unseen goals for agents to practice on.
The manifold structure of the learned representation remains consistent with
physical position, which benefits reward calculation. We empirically
demonstrate that DR-GRL significantly outperforms previous methods in sample
efficiency and policy generalization. In addition, DR-GRL extends easily to a
real robot.
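To make the mechanism concrete, here is a minimal sketch of attribute-wise latent recombination and a latent-distance reward, assuming a simple fully connected autoencoder; `AttributeAE`, the block names, and all sizes are illustrative assumptions, not the paper's STAE architecture.

```python
import torch
import torch.nn as nn

# Illustrative sketch of a disentangled goal autoencoder (not the paper's
# STAE). The latent vector is partitioned into named attribute blocks;
# recombining blocks from different observations yields codes for unseen goals.

BLOCKS = {"shape": 4, "color": 3, "position": 2}   # assumed block sizes
LATENT = sum(BLOCKS.values())

class AttributeAE(nn.Module):
    def __init__(self, obs_dim=64 * 64 * 3):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                 nn.Linear(256, LATENT))
        self.dec = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                                 nn.Linear(256, obs_dim))

    def encode(self, x):
        z = self.enc(x.flatten(1))
        out, i = {}, 0
        for name, size in BLOCKS.items():      # split latent into blocks
            out[name] = z[:, i:i + size]
            i += size
        return out

    def decode(self, blocks):
        return self.dec(torch.cat([blocks[n] for n in BLOCKS], dim=1))

def recombine(z_a, z_b, take_from_b=("position",)):
    """Build an unseen goal code: attributes from a, position from b."""
    return {n: (z_b[n] if n in take_from_b else z_a[n]) for n in BLOCKS}

def latent_reward(z_state, z_goal):
    """Reward from distance in the position block, whose manifold is assumed
    to stay consistent with physical position, as the abstract describes."""
    return -torch.norm(z_state["position"] - z_goal["position"], dim=1)
```

Under these assumptions, recombining the position block of one observation with the shape and color blocks of another gives a latent goal the agent has never observed, and `latent_reward` exploits the position-consistent manifold for dense reward computation.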
Related papers
- DEAR: Disentangled Environment and Agent Representations for Reinforcement Learning without Reconstruction [4.813546138483559]
Reinforcement Learning (RL) algorithms can learn robotic control tasks from visual observations, but they often require a large amount of data.
In this paper, we explore how the agent's knowledge of its shape can improve the sample efficiency of visual RL methods.
We propose a novel method, Disentangled Environment and Agent Representations (DEAR), that uses the segmentation mask of the agent as supervision.
arXiv Detail & Related papers (2024-06-30T09:15:21Z)
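A hedged sketch of the mask-as-supervision idea summarized above; the two-head encoder, layer sizes, and loss are assumptions rather than the DEAR architecture.

```python
import torch.nn as nn
import torch.nn.functional as F

# Hedged sketch of mask-supervised disentanglement in the spirit of DEAR
# (not the authors' code): the agent branch must predict the agent's
# segmentation mask, so its features are pushed toward agent-specific
# information, while the environment branch stays reconstruction-free.

class DearLikeEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 32, 3, padding=1)
        self.agent_head = nn.Conv2d(32, 1, 1)    # predicts agent mask logits
        self.env_head = nn.Conv2d(32, 16, 1)     # environment features

    def forward(self, obs):
        h = F.relu(self.backbone(obs))
        return self.agent_head(h), self.env_head(h)

def disentangle_loss(mask_logits, agent_mask):
    # Supervise the agent branch with the ground-truth segmentation mask.
    return F.binary_cross_entropy_with_logits(mask_logits, agent_mask)
```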
- Improving Reinforcement Learning Efficiency with Auxiliary Tasks in Non-Visual Environments: A Comparison [0.0]
This study compares common auxiliary tasks on top of what is, to the best of our knowledge, the only decoupled representation learning method for low-dimensional, non-visual observations.
Our findings show that representation learning with auxiliary tasks only provides performance gains in sufficiently complex environments.
arXiv Detail & Related papers (2023-10-06T13:22:26Z)
- Goal Space Abstraction in Hierarchical Reinforcement Learning via Reachability Analysis [0.0]
We propose a developmental mechanism for subgoal discovery via an emergent representation that abstracts (i.e., groups together) sets of environment states.
We create an HRL algorithm that gradually learns this representation along with the policies, and evaluate it on navigation tasks to show that the learned representation is interpretable and improves data efficiency.
arXiv Detail & Related papers (2023-09-12T06:53:11Z)
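One way to picture the state-grouping step described above is a union-find over states observed to reach one another within a few steps; this is an illustrative reading, not the paper's reachability analysis.

```python
# Hedged sketch of goal-space abstraction by grouping states: states that
# reach each other within k steps are merged into one abstract subgoal.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:             # path halving
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def abstract_goals(transitions, n_states):
    """transitions: iterable of (s, s') pairs reached within k steps."""
    uf = UnionFind(n_states)
    for s, s2 in transitions:
        uf.union(s, s2)
    return {s: uf.find(s) for s in range(n_states)}  # state -> abstract goal
```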
- Learning Common Rationale to Improve Self-Supervised Representation for Fine-Grained Visual Recognition Problems [61.11799513362704]
We propose learning an additional screening mechanism to identify discriminative clues commonly seen across instances and classes.
We show that a common rationale detector can be learned by simply exploiting the GradCAM induced from the SSL objective.
arXiv Detail & Related papers (2023-03-03T02:07:40Z)
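The GradCAM-from-SSL step above can be sketched directly: differentiate the self-supervised loss with respect to a convolutional feature map and weight channels by the mean gradient. The function name and normalization are assumptions, not the paper's detector.

```python
import torch
import torch.nn.functional as F

# Hedged sketch of deriving a GradCAM-style "rationale" map from an SSL loss.

def ssl_gradcam(feats, ssl_loss):
    """feats: conv feature map (B, C, H, W) with requires_grad;
    ssl_loss: scalar self-supervised loss built on feats."""
    grads, = torch.autograd.grad(ssl_loss, feats, retain_graph=True)
    weights = grads.mean(dim=(2, 3), keepdim=True)   # per-channel weight
    cam = F.relu((weights * feats).sum(dim=1))       # (B, H, W) saliency
    return cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)
```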
- USER: Unified Semantic Enhancement with Momentum Contrast for Image-Text Retrieval [115.28586222748478]
Image-Text Retrieval (ITR) aims at searching for the target instances that are semantically relevant to the given query from the other modality.
Existing approaches typically suffer from two major limitations.
arXiv Detail & Related papers (2023-01-17T12:42:58Z)
- Discovering Generalizable Spatial Goal Representations via Graph-based Active Reward Learning [17.58129740811116]
We propose a reward learning approach, Graph-based Equivalence Mappings (GEM).
GEM represents a spatial goal specification by a reward function conditioned on i) a graph indicating important spatial relationships between objects and ii) state equivalence mappings for each edge in the graph.
We show that GEM can drastically improve the generalizability of the learned goal representations over strong baselines.
arXiv Detail & Related papers (2022-11-24T18:59:06Z)
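A small illustrative sketch of a GEM-style reward conditioned on a relation graph plus per-edge equivalence mappings; all names and the satisfied-fraction reward are assumptions, not the authors' formulation.

```python
# Hedged sketch: a goal is a graph whose edges name spatial relations between
# objects; the reward checks each relation after applying that edge's
# equivalence mapping to the state.

def gem_reward(state, goal_edges, relations, mappings):
    """goal_edges: [(obj_a, obj_b, rel_name), ...]
    relations: rel_name -> predicate(pos_a, pos_b) -> bool
    mappings:  rel_name -> equivalence map applied to state before checking."""
    satisfied = 0
    for a, b, rel in goal_edges:
        mapped = mappings[rel](state)          # e.g. mirror/rotate positions
        if relations[rel](mapped[a], mapped[b]):
            satisfied += 1
    return satisfied / len(goal_edges)         # fraction of relations met

# Usage example: "cup left of plate" with an identity equivalence mapping.
relations = {"left_of": lambda pa, pb: pa[0] < pb[0]}
mappings = {"left_of": lambda s: s}
state = {"cup": (0.2, 0.5), "plate": (0.7, 0.5)}
print(gem_reward(state, [("cup", "plate", "left_of")], relations, mappings))
```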
- Discrete Factorial Representations as an Abstraction for Goal Conditioned Reinforcement Learning [99.38163119531745]
We show that applying a discretizing bottleneck can improve performance in goal-conditioned RL setups.
We experimentally demonstrate improved expected returns on out-of-distribution goals, while still allowing goals with expressive structure to be specified.
arXiv Detail & Related papers (2022-11-01T03:31:43Z)
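The discretizing bottleneck above is in the spirit of vector quantization; a hedged sketch with a straight-through estimator follows, with codebook size and dimensions as assumptions.

```python
import torch
import torch.nn as nn

# Hedged sketch of a discretizing (vector-quantization) bottleneck on goal
# representations; not the paper's exact architecture.

class VQGoalBottleneck(nn.Module):
    def __init__(self, n_codes=64, dim=16):
        super().__init__()
        self.codebook = nn.Embedding(n_codes, dim)

    def forward(self, z_goal):
        # Snap each continuous goal embedding to its nearest codebook entry.
        d = torch.cdist(z_goal, self.codebook.weight)   # (B, n_codes)
        idx = d.argmin(dim=1)
        z_q = self.codebook(idx)
        # Straight-through estimator: gradients flow back to the encoder.
        return z_goal + (z_q - z_goal).detach(), idx
```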
- Learning Multi-Objective Curricula for Deep Reinforcement Learning [55.27879754113767]
Various automatic curriculum learning (ACL) methods have been proposed to improve the sample efficiency and final performance of deep reinforcement learning (DRL).
In this paper, we propose a unified automatic curriculum learning framework to create multi-objective but coherent curricula.
In addition to existing hand-designed curricula paradigms, we further design a flexible memory mechanism to learn an abstract curriculum.
arXiv Detail & Related papers (2021-10-06T19:30:25Z)
- Variational Empowerment as Representation Learning for Goal-Based Reinforcement Learning [114.07623388322048]
We discuss how the standard goal-conditioned RL (GCRL) objective is encapsulated by variational empowerment.
Our work lays a novel foundation from which to evaluate, analyze, and develop representation learning techniques in goal-based RL.
arXiv Detail & Related papers (2021-06-02T18:12:26Z)
- Guided Variational Autoencoder for Disentanglement Learning [79.02010588207416]
We propose an algorithm, guided variational autoencoder (Guided-VAE), that is able to learn a controllable generative model by performing latent representation disentanglement learning.
We design an unsupervised strategy and a supervised strategy in Guided-VAE and observe enhanced modeling and controlling capability over the vanilla VAE.
arXiv Detail & Related papers (2020-04-02T20:49:15Z)
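A minimal sketch of the supervised guiding strategy mentioned above: one latent coordinate is regressed onto a known attribute so that the factor becomes explicitly controllable. Layer sizes and the loss weighting are assumptions, not Guided-VAE's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hedged sketch of a guided VAE: latent dim 0 is supervised by an attribute.

class GuidedVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=8):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)   # outputs mean and log-var
        self.dec = nn.Linear(z_dim, x_dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar, z

def guided_loss(model, x, attr):
    recon, mu, logvar, z = model(x)
    rec = F.mse_loss(recon, x)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    guide = F.mse_loss(z[:, 0], attr)   # pull latent dim 0 toward the label
    return rec + kld + guide
```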