Efficient RL via Disentangled Environment and Agent Representations
- URL: http://arxiv.org/abs/2309.02435v1
- Date: Tue, 5 Sep 2023 17:59:45 GMT
- Title: Efficient RL via Disentangled Environment and Agent Representations
- Authors: Kevin Gmelin, Shikhar Bahl, Russell Mendonca, Deepak Pathak
- Abstract summary: We propose an approach for learning such structured representations for RL algorithms, using visual knowledge of the agent, such as its shape or mask.
We show that our method, Structured Environment-Agent Representations, outperforms state-of-the-art model-free approaches over 18 different challenging visual simulation environments spanning 5 different robots.
- Score: 40.114817446130935
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Agents that are aware of the separation between themselves and their
environments can leverage this understanding to form effective representations
of visual input. We propose an approach for learning such structured
representations for RL algorithms, using visual knowledge of the agent, such as
its shape or mask, which is often inexpensive to obtain. This is incorporated
into the RL objective using a simple auxiliary loss. We show that our method,
Structured Environment-Agent Representations, outperforms state-of-the-art
model-free approaches over 18 different challenging visual simulation
environments spanning 5 different robots. Website at https://sear-rl.github.io/
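The abstract describes adding a simple auxiliary loss to the RL objective, supervised by the agent's segmentation mask. The sketch below is a toy illustration of that idea, not the authors' implementation: a shared encoder produces features, a small head predicts the agent mask, and the mask's binary cross-entropy is added to the RL loss with a weight `lam`. All names (`encode`, `predict_mask`, `lam`) and the linear encoder/decoder are illustrative assumptions.

```python
# Hedged sketch of a mask-supervised auxiliary loss (assumed form,
# not the paper's code): total = RL loss + lam * mask BCE.
import numpy as np

rng = np.random.default_rng(0)

def encode(obs, W):
    """Toy linear 'encoder': flatten the image and project to features."""
    return np.tanh(obs.reshape(-1) @ W)

def predict_mask(z, V):
    """Toy decoder head: features -> per-pixel mask probabilities."""
    return 1.0 / (1.0 + np.exp(-(z @ V)))

def mask_bce(pred, target, eps=1e-8):
    """Binary cross-entropy between predicted and true agent mask."""
    return -np.mean(target * np.log(pred + eps)
                    + (1.0 - target) * np.log(1.0 - pred + eps))

# Tiny fake batch: one 8x8 grayscale observation and its agent mask.
obs = rng.random((8, 8))
true_mask = (rng.random(64) > 0.5).astype(float)

W = rng.normal(scale=0.1, size=(64, 16))   # encoder weights
V = rng.normal(scale=0.1, size=(16, 64))   # mask-head weights

z = encode(obs, W)
aux_loss = mask_bce(predict_mask(z, V), true_mask)

rl_loss = 1.0      # stand-in for the usual actor/critic loss term
lam = 0.1          # auxiliary-loss weight (a hyperparameter)
total_loss = rl_loss + lam * aux_loss
```

In practice both losses would share the encoder and be minimized jointly by gradient descent, so the mask supervision shapes the features the policy consumes; the mask itself is cheap to obtain, e.g. from the robot model in simulation.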
Related papers
- DEAR: Disentangled Environment and Agent Representations for Reinforcement Learning without Reconstruction [4.813546138483559]
Reinforcement Learning (RL) algorithms can learn robotic control tasks from visual observations, but they often require a large amount of data.
In this paper, we explore how the agent's knowledge of its shape can improve the sample efficiency of visual RL methods.
We propose a novel method, Disentangled Environment and Agent Representations, that uses the segmentation mask of the agent as supervision.
arXiv Detail & Related papers (2024-06-30T09:15:21Z)
- Empowering Embodied Visual Tracking with Visual Foundation Models and Offline RL [19.757030674041037]
Embodied visual tracking is a vital and challenging skill for embodied agents.
Existing methods suffer from inefficient training and poor generalization.
We propose a novel framework that combines visual foundation models and offline reinforcement learning.
arXiv Detail & Related papers (2024-04-15T15:12:53Z)
- RePo: Resilient Model-Based Reinforcement Learning by Regularizing Posterior Predictability [25.943330238941602]
We propose a visual model-based RL method that learns a latent representation resilient to spurious variations.
Our training objective encourages the representation to be maximally predictive of dynamics and reward.
Our effort is a step towards making model-based RL a practical and useful tool for dynamic, diverse domains.
arXiv Detail & Related papers (2023-08-31T18:43:04Z)
- MOCA: Self-supervised Representation Learning by Predicting Masked Online Codebook Assignments [72.6405488990753]
Self-supervised learning can be used for mitigating the greedy needs of Vision Transformer networks.
We propose a single-stage and standalone method, MOCA, which unifies both desired properties.
We achieve new state-of-the-art results on low-shot settings and strong experimental results in various evaluation protocols.
arXiv Detail & Related papers (2023-07-18T15:46:20Z)
- Agent-Controller Representations: Principled Offline RL with Rich Exogenous Information [49.06422815335159]
Learning to control an agent from data collected offline is vital for real-world applications of reinforcement learning (RL).
This paper introduces offline RL benchmarks offering the ability to study this problem.
We find that contemporary representation learning techniques can fail on datasets where the noise is a complex and time dependent process.
arXiv Detail & Related papers (2022-10-31T22:12:48Z)
- Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels [112.63440666617494]
Reinforcement learning algorithms can succeed but require a large number of interactions between the agent and the environment.
To mitigate this, we propose a new method that uses unsupervised model-based RL to pre-train the agent.
We show robust performance on the Real-World RL benchmark, hinting at resiliency to environment perturbations during adaptation.
arXiv Detail & Related papers (2022-09-24T14:22:29Z)
- Semantic Tracklets: An Object-Centric Representation for Visual Multi-Agent Reinforcement Learning [126.57680291438128]
We study whether scalability can be achieved via a disentangled representation.
We evaluate semantic tracklets on the visual multi-agent particle environment (VMPE) and on the challenging visual multi-agent GFootball environment.
Notably, this method is the first to successfully learn a strategy for five players in the GFootball environment using only visual data.
arXiv Detail & Related papers (2021-08-06T22:19:09Z)
- Forgetful Experience Replay in Hierarchical Reinforcement Learning from Demonstrations [55.41644538483948]
In this paper, we propose a combination of approaches that allow the agent to use low-quality demonstrations in complex vision-based environments.
Our proposed goal-oriented structuring of replay buffer allows the agent to automatically highlight sub-goals for solving complex hierarchical tasks in demonstrations.
A solution based on our algorithm beats all entries in the well-known MineRL competition, enabling the agent to mine a diamond in the Minecraft environment.
arXiv Detail & Related papers (2020-06-17T15:38:40Z)
- Acceleration of Actor-Critic Deep Reinforcement Learning for Visual Grasping in Clutter by State Representation Learning Based on Disentanglement of a Raw Input Image [4.970364068620608]
Actor-critic deep reinforcement learning (RL) methods typically perform very poorly when grasping diverse objects.
We employ state representation learning (SRL), where we encode essential information first for subsequent use in RL.
We found that preprocessing based on the disentanglement of a raw input image is the key to effectively capturing a compact representation.
arXiv Detail & Related papers (2020-02-27T03:58:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.