D2RL: Deep Dense Architectures in Reinforcement Learning
- URL: http://arxiv.org/abs/2010.09163v2
- Date: Fri, 27 Nov 2020 20:22:38 GMT
- Title: D2RL: Deep Dense Architectures in Reinforcement Learning
- Authors: Samarth Sinha, Homanga Bharadhwaj, Aravind Srinivas, Animesh Garg
- Abstract summary: We take inspiration from successful architectural choices in computer vision and generative modelling.
We investigate the use of deeper networks and dense connections for reinforcement learning on a variety of simulated robotic learning benchmark environments.
- Score: 47.67475810050311
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While improvements in deep learning architectures have played a crucial role
in improving the state of supervised and unsupervised learning in computer
vision and natural language processing, neural network architecture choices for
reinforcement learning remain relatively under-explored. We take inspiration
from successful architectural choices in computer vision and generative
modelling, and investigate the use of deeper networks and dense connections for
reinforcement learning on a variety of simulated robotic learning benchmark
environments. Our findings reveal that current methods benefit significantly
from dense connections and deeper networks, across a suite of manipulation and
locomotion tasks, for both proprioceptive and image-based observations. We hope
that our results can serve as a strong baseline and further motivate future
research into neural network architectures for reinforcement learning. The
project website with code is at https://sites.google.com/view/d2rl/home.
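The dense-connection idea is compact enough to sketch. Below is a minimal PyTorch version of a densely connected policy network in the spirit of D2RL, where the raw state is concatenated to every hidden layer's input; the depth and width here are illustrative, not the paper's exact hyperparameters.

```python
import torch
import torch.nn as nn

class D2RLPolicy(nn.Module):
    """MLP with dense connections: the raw state is re-concatenated to
    the input of every hidden layer, so later layers keep direct access
    to the observation instead of only the processed features."""
    def __init__(self, state_dim: int, action_dim: int,
                 hidden: int = 256, depth: int = 4):
        super().__init__()
        self.first = nn.Linear(state_dim, hidden)
        self.hidden_layers = nn.ModuleList(
            nn.Linear(hidden + state_dim, hidden) for _ in range(depth - 1)
        )
        self.head = nn.Linear(hidden, action_dim)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.first(state))
        for layer in self.hidden_layers:
            h = torch.relu(layer(torch.cat([h, state], dim=-1)))
        return self.head(h)
```

The same trick applies to a Q-network, with the state-action pair concatenated at every layer instead of the state alone.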
Related papers
- DQNAS: Neural Architecture Search using Reinforcement Learning [6.33280703577189]
Convolutional Neural Networks have been used in a variety of image-related applications.
In this paper, we propose an automated Neural Architecture Search framework, guided by the principles of Reinforcement Learning.
arXiv Detail & Related papers (2023-01-17T04:01:47Z)
- The Neural Race Reduction: Dynamics of Abstraction in Gated Networks [12.130628846129973]
We introduce the Gated Deep Linear Network framework that schematizes how pathways of information flow impact learning dynamics.
We derive an exact reduction and, for certain cases, exact solutions to the dynamics of learning.
Our work gives rise to general hypotheses relating neural architecture to learning and provides a mathematical approach towards understanding the design of more complex architectures.
arXiv Detail & Related papers (2022-07-21T12:01:03Z)
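As a toy illustration of the gated pathways in the entry above (my construction, not the paper's framework code), the sketch below routes an input through two parallel deep linear pathways whose outputs are switched by multiplicative gates; which pathways are active then shapes the learning dynamics.

```python
import torch
import torch.nn as nn

class GatedDeepLinearNet(nn.Module):
    """Toy gated deep linear network: two parallel linear pathways whose
    contributions are multiplied by gates. With all gates on, this is an
    ordinary deep linear network; gating routes the information flow."""
    def __init__(self, d_in: int, d_hidden: int, d_out: int):
        super().__init__()
        self.path_a = nn.Sequential(nn.Linear(d_in, d_hidden, bias=False),
                                    nn.Linear(d_hidden, d_out, bias=False))
        self.path_b = nn.Sequential(nn.Linear(d_in, d_hidden, bias=False),
                                    nn.Linear(d_hidden, d_out, bias=False))

    def forward(self, x, gate_a: float = 1.0, gate_b: float = 1.0):
        # Gates are fixed per task/context here, not learned.
        return gate_a * self.path_a(x) + gate_b * self.path_b(x)
```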
- Neural Architecture Search for Dense Prediction Tasks in Computer Vision [74.9839082859151]
Deep learning has led to a rising demand for neural network architecture engineering.
Neural architecture search (NAS) aims at automatically designing neural network architectures in a data-driven manner rather than manually.
NAS has become applicable to a much wider range of problems in computer vision.
arXiv Detail & Related papers (2022-02-15T08:06:50Z)
- Improving Sample Efficiency of Value Based Models Using Attention and Vision Transformers [52.30336730712544]
We introduce a deep reinforcement learning architecture whose purpose is to increase sample efficiency without sacrificing performance.
We propose a visually attentive model that uses transformers to learn a self-attention mechanism on the feature maps of the state representation.
We demonstrate empirically that this architecture improves sample complexity for several Atari environments, while also achieving better performance in some of the games.
arXiv Detail & Related papers (2022-02-01T19:03:03Z)
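The mechanism in the entry above is easy to sketch: treat each spatial position of the convolutional feature map as a token and apply multi-head self-attention over the positions. The code below is a hedged illustration of that idea, not the authors' exact architecture; the frame-stack input and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class AttentiveEncoder(nn.Module):
    """Self-attention over the spatial positions of a CNN feature map,
    in the spirit of the visually attentive value-based model."""
    def __init__(self, channels: int = 64, heads: int = 4):
        super().__init__()
        # Assumes 4 stacked grayscale frames, Atari-style.
        self.conv = nn.Conv2d(4, channels, kernel_size=8, stride=4)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        fmap = torch.relu(self.conv(frames))      # (B, C, H, W)
        B, C, H, W = fmap.shape
        tokens = fmap.flatten(2).transpose(1, 2)  # (B, H*W, C): one token per position
        attended, _ = self.attn(tokens, tokens, tokens)
        return attended.mean(dim=1)               # pooled embedding for the Q-head
```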
- Improving the sample-efficiency of neural architecture search with reinforcement learning [0.0]
In this work, we contribute to the area of Automated Machine Learning (AutoML).
Our focus is on one of the most promising research directions: reinforcement learning.
The validation accuracies of the child networks serve as a reward signal for training the controller.
We propose replacing this training algorithm with a more modern and complex one, PPO, which has been shown to be faster and more stable in other environments.
arXiv Detail & Related papers (2021-10-13T14:30:09Z)
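The reward structure described above reduces to a simple loop: the controller samples an architecture, a child network is trained briefly, and its validation accuracy becomes the episodic reward. The schematic below uses hypothetical placeholders (`controller`, `build_and_train`, `evaluate`, the data sets, and `optimizer` are not a real library API) and shows the REINFORCE-style update that the paper proposes replacing with PPO.

```python
# Schematic controller-training loop; all helpers are placeholders.
baseline = 0.0
for step in range(1000):
    arch, log_probs = controller.sample()          # sequence of layer choices
    child = build_and_train(arch, train_data)      # short "proxy" training
    reward = evaluate(child, valid_data)           # validation accuracy in [0, 1]
    loss = -log_probs.sum() * (reward - baseline)  # REINFORCE; PPO would instead
                                                   # clip the policy ratio per step
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    baseline = 0.9 * baseline + 0.1 * reward       # moving-average baseline
```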
- Deep Reinforcement Learning in Computer Vision: A Comprehensive Survey [29.309914600633032]
Deep reinforcement learning augments the reinforcement learning framework and utilizes the powerful representation of deep neural networks.
Recent works have demonstrated the remarkable successes of deep reinforcement learning in various domains including finance, medicine, healthcare, video games, robotics, and computer vision.
arXiv Detail & Related papers (2021-08-25T23:01:48Z)
- Efficient Neural Architecture Search with Performance Prediction [0.0]
We use neural architecture search to find the best network architecture for the task at hand.
Existing NAS algorithms generally evaluate the fitness of a new architecture by fully training from scratch.
An end-to-end offline performance predictor is proposed to accelerate the evaluation of sampled architectures.
arXiv Detail & Related papers (2021-08-04T05:44:16Z)
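The predictor idea above reduces to a small regression model: encode each architecture as a fixed-length vector and predict its final validation accuracy, so most candidates are scored without any training. A minimal sketch, where the 16-dimensional encoding and the tiny MLP are assumptions rather than the paper's exact design:

```python
import torch
import torch.nn as nn

# Offline performance predictor: maps an architecture encoding to a
# predicted validation accuracy, replacing full training during search.
predictor = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),   # 16-dim architecture encoding (assumed)
    nn.Linear(64, 1), nn.Sigmoid()  # predicted accuracy in [0, 1]
)

def train_predictor(arch_encodings: torch.Tensor,
                    accuracies: torch.Tensor,
                    epochs: int = 200, lr: float = 1e-3) -> None:
    """Fit the predictor on (architecture, accuracy) pairs collected offline."""
    opt = torch.optim.Adam(predictor.parameters(), lr=lr)
    for _ in range(epochs):
        pred = predictor(arch_encodings).squeeze(-1)
        loss = nn.functional.mse_loss(pred, accuracies)
        opt.zero_grad(); loss.backward(); opt.step()
```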
- A Design Space Study for LISTA and Beyond [79.76740811464597]
In recent years, great success has been achieved in building problem-specific deep networks by unrolling iterative algorithms.
This paper revisits the role of unrolling as a design approach for deep networks: to what extent is the resulting special architecture superior, and can we find better ones?
Using LISTA for sparse recovery as a representative example, we conduct the first thorough design space study for the unrolled models.
arXiv Detail & Related papers (2021-04-08T23:01:52Z)
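LISTA itself is compact: ISTA's iteration x <- soft_threshold(We y + S x) is unrolled for a fixed number of steps and the transforms and thresholds are learned end-to-end. A minimal sketch for recovering sparse x from measurements y ~ A x (dimensions and step count are illustrative):

```python
import torch
import torch.nn as nn

class LISTA(nn.Module):
    """Unrolled ISTA: K learned iterations of x <- soft(We(y) + S(x))."""
    def __init__(self, m: int, n: int, num_steps: int = 16):
        super().__init__()
        self.We = nn.Linear(m, n, bias=False)   # learned input transform
        self.S = nn.Linear(n, n, bias=False)    # learned recurrent transform
        self.theta = nn.Parameter(torch.full((num_steps,), 0.1))  # thresholds

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        b = self.We(y)
        x = torch.zeros_like(b)
        for k in range(len(self.theta)):
            z = b + self.S(x)
            x = torch.sign(z) * torch.relu(z.abs() - self.theta[k])  # soft threshold
        return x
```

Trained with a reconstruction loss against ground-truth sparse codes, this fixed-depth network stands in for many ISTA iterations; the design-space study asks which of these learned components actually matter.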
- Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks [50.684661759340145]
Firefly neural architecture descent is a general framework for progressively and dynamically growing neural networks.
We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate but resource-efficient neural architectures.
In particular, it learns networks that are smaller in size but have higher average accuracy than those learned by the state-of-the-art methods.
arXiv Detail & Related papers (2021-02-17T04:47:18Z)
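Firefly descent steers growth with gradient information; as a hedged illustration of the basic "grow wider" primitive it builds on (my simplification, not the firefly update itself), the sketch below adds one hidden unit while exactly preserving the network's function:

```python
import torch
import torch.nn as nn

def widen(layer: nn.Linear, next_layer: nn.Linear, eps: float = 1e-3):
    """Add one hidden unit to `layer`, keeping the network's function:
    the new unit's outgoing weights start at zero. Firefly descent
    additionally *selects* such candidates by how much they can
    decrease the loss; that selection step is omitted here."""
    out_f = layer.out_features
    wider = nn.Linear(layer.in_features, out_f + 1)
    wider_next = nn.Linear(out_f + 1, next_layer.out_features)
    with torch.no_grad():
        wider.weight[:out_f] = layer.weight
        wider.bias[:out_f] = layer.bias
        wider.weight[out_f].normal_(0.0, eps)  # small random incoming weights
        wider.bias[out_f] = 0.0
        wider_next.weight[:, :out_f] = next_layer.weight
        wider_next.weight[:, out_f] = 0.0      # zero outgoing => identical outputs
        wider_next.bias.copy_(next_layer.bias)
    return wider, wider_next
```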
- NAS-DIP: Learning Deep Image Prior with Neural Architecture Search [65.79109790446257]
Recent work has shown that the structure of deep convolutional neural networks can be used as a structured image prior.
We propose to search for neural architectures that capture stronger image priors.
We search for an improved network by leveraging an existing neural architecture search algorithm.
arXiv Detail & Related papers (2020-08-26T17:59:36Z)
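The prior being searched over is easy to demonstrate: a randomly initialized CNN is fit to a single degraded image from a fixed noise input, and the architecture's inductive bias, plus early stopping, does the regularizing. A minimal denoising sketch with an illustrative toy network (NAS-DIP would search this structure rather than fixing it by hand):

```python
import torch
import torch.nn as nn

# Deep-image-prior loop: fit a random CNN to one noisy image; early
# iterations recover structure before the network overfits the noise.
net = nn.Sequential(
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)
z = torch.randn(1, 32, 64, 64)       # fixed random input code
noisy = torch.rand(1, 3, 64, 64)     # stand-in for a real noisy image
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for step in range(500):              # the iteration budget acts as early stopping
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(z), noisy)
    loss.backward()
    opt.step()
denoised = net(z).detach()
```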
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.