A Comprehensive Survey of Data Augmentation in Visual Reinforcement Learning
- URL: http://arxiv.org/abs/2210.04561v1
- Date: Mon, 10 Oct 2022 11:01:57 GMT
- Authors: Guozheng Ma, Zhen Wang, Zhecheng Yuan, Xueqian Wang, Bo Yuan, Dacheng
Tao
- Abstract summary: Data augmentation (DA) has become a widely used technique in visual RL for acquiring sample-efficient and generalizable policies.
We present a principled taxonomy of the existing augmentation techniques used in visual RL and conduct an in-depth discussion on how to better leverage augmented data.
As the first comprehensive survey of DA in visual RL, this work is expected to offer valuable guidance to this emerging field.
- Score: 68.63738119131888
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual reinforcement learning (RL), which makes decisions directly from
high-dimensional visual inputs, has demonstrated significant potential in
various domains. However, deploying visual RL techniques in the real world
remains challenging due to their low sample efficiency and large generalization
gaps. To tackle these obstacles, data augmentation (DA) has become a widely
used technique in visual RL for acquiring sample-efficient and generalizable
policies by diversifying the training data. This survey aims to provide a
timely and essential review of DA techniques in visual RL in recognition of the
thriving development in this field. In particular, we propose a unified
framework for analyzing visual RL and understanding the role of DA in it. We
then present a principled taxonomy of the existing augmentation techniques used
in visual RL and conduct an in-depth discussion on how to better leverage
augmented data in different scenarios. Moreover, we report a systematic
empirical evaluation of DA-based techniques in visual RL and conclude by
highlighting the directions for future research. As the first comprehensive
survey of DA in visual RL, this work is expected to offer valuable guidance to
this emerging field.
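The abstract describes DA as diversifying training data to improve sample efficiency and generalization. As a minimal sketch of one widely used augmentation in visual RL, the snippet below implements random shift (pad-and-crop), assuming NumPy image observations of shape (H, W, C); the function names are illustrative and not taken from the survey:

```python
import numpy as np

def random_shift(obs: np.ndarray, pad: int = 4, rng=None) -> np.ndarray:
    """Pad an image observation by `pad` pixels on each side (edge
    replication), then crop back to the original size at a random offset."""
    if rng is None:
        rng = np.random.default_rng()
    h, w, _ = obs.shape
    padded = np.pad(obs, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    top = rng.integers(0, 2 * pad + 1)   # vertical offset in [0, 2*pad]
    left = rng.integers(0, 2 * pad + 1)  # horizontal offset in [0, 2*pad]
    return padded[top:top + h, left:left + w]

def augment_batch(batch: np.ndarray, pad: int = 4, seed: int = 0) -> np.ndarray:
    """Apply an independent random shift to each observation in a batch."""
    rng = np.random.default_rng(seed)
    return np.stack([random_shift(obs, pad, rng) for obs in batch])
```

Because each observation in the batch receives an independent shift, the agent sees slightly different views of the same underlying states, which is the data-diversification effect the survey attributes to DA.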
Related papers
- Generative AI for Deep Reinforcement Learning: Framework, Analysis, and Use Cases [60.30995339585003]
Deep reinforcement learning (DRL) has been widely applied across various fields and has achieved remarkable results.
DRL faces certain limitations, including low sample efficiency and poor generalization.
We present how to leverage generative AI (GAI) to address these issues and enhance the performance of DRL algorithms.
arXiv Detail & Related papers (2024-05-31T01:25:40Z)
- Revisiting Data Augmentation in Deep Reinforcement Learning [3.660182910533372]
Various data augmentation techniques have recently been proposed in image-based deep reinforcement learning (DRL).
We analyze existing methods to better understand them and to uncover how they are connected.
This analysis suggests recommendations on how to exploit data augmentation in a more principled way.
arXiv Detail & Related papers (2024-02-19T14:42:10Z)
- Learning Better with Less: Effective Augmentation for Sample-Efficient Visual Reinforcement Learning [57.83232242068982]
Data augmentation (DA) is a crucial technique for enhancing the sample efficiency of visual reinforcement learning (RL) algorithms.
It remains unclear which attributes of DA account for its effectiveness in achieving sample-efficient visual RL.
This work conducts comprehensive experiments to assess the impact of DA's attributes on its efficacy.
arXiv Detail & Related papers (2023-05-25T15:46:20Z)
- Unsupervised Representation Learning in Deep Reinforcement Learning: A Review [1.2016264781280588]
This review addresses the problem of learning abstract representations of measurement data in the context of Deep Reinforcement Learning (DRL).
It provides a comprehensive overview of unsupervised representation learning in DRL by describing the main deep learning tools used for learning representations of the world.
arXiv Detail & Related papers (2022-08-27T09:38:56Z)
- Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations [58.758928936316785]
Offline reinforcement learning from visual observations with continuous action spaces remains under-explored.
We show that modifications to two popular vision-based online reinforcement learning algorithms suffice to outperform existing offline RL methods.
arXiv Detail & Related papers (2022-06-09T22:08:47Z)
- Seeking Visual Discomfort: Curiosity-driven Representations for Reinforcement Learning [12.829056201510994]
We present an approach to improve sample diversity for state representation learning.
Our proposed approach boosts the visitation of problematic states, improves the learned state representation, and outperforms the baselines for all tested environments.
arXiv Detail & Related papers (2021-10-02T11:15:04Z)
- Making Curiosity Explicit in Vision-based RL [12.829056201510994]
Vision-based reinforcement learning (RL) is a promising technique to solve control tasks involving images as the main observation.
State-of-the-art RL algorithms still struggle in terms of sample efficiency.
We present an approach to improve sample diversity.
arXiv Detail & Related papers (2021-09-28T09:50:37Z)
- Offline Reinforcement Learning from Images with Latent Space Models [60.69745540036375]
Offline reinforcement learning (RL) refers to the problem of learning policies from a static dataset of environment interactions.
We build on recent advances in model-based algorithms for offline RL, and extend them to high-dimensional visual observation spaces.
Our approach is both tractable in practice and corresponds to maximizing a lower bound of the ELBO in the unknown POMDP.
arXiv Detail & Related papers (2020-12-21T18:28:17Z)
- Dynamics Generalization via Information Bottleneck in Deep Reinforcement Learning [90.93035276307239]
We propose an information theoretic regularization objective and an annealing-based optimization method to achieve better generalization ability in RL agents.
We demonstrate the extreme generalization benefits of our approach in different domains ranging from maze navigation to robotic tasks.
This work provides a principled way to improve generalization in RL by gradually removing information that is redundant for task-solving.
arXiv Detail & Related papers (2020-08-03T02:24:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.