Zero-Shot Reinforcement Learning with Deep Attention Convolutional
Neural Networks
- URL: http://arxiv.org/abs/2001.00605v1
- Date: Thu, 2 Jan 2020 19:41:58 GMT
- Title: Zero-Shot Reinforcement Learning with Deep Attention Convolutional
Neural Networks
- Authors: Sahika Genc, Sunil Mallya, Sravan Bodapati, Tao Sun, Yunzhe Tao
- Abstract summary: We show that a deep attention convolutional neural network (DACNN) with a specific visual sensor configuration performs as well as a network trained on a dataset with high domain and parameter variation, at lower computational complexity.
Our new architecture adapts perception with respect to the control objective, resulting in zero-shot learning without pre-training a perception network.
- Score: 12.282277258055542
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Simulation-to-simulation and simulation-to-real world transfer of neural
network models has been a difficult problem. To close the reality gap, prior
methods for simulation-to-real world transfer focused on domain adaptation,
decoupling perception and dynamics and solving each problem separately, and
randomization of agent parameters and environment conditions to expose the
learning agent to a variety of conditions. While these methods provide
acceptable performance, the computational complexity required to capture a
large variation of parameters for comprehensive scenarios on a given task such
as autonomous driving or robotic manipulation is high. Our key contribution is
to theoretically prove and empirically demonstrate that a deep attention
convolutional neural network (DACNN) with specific visual sensor configuration
performs as well as training on a dataset with high domain and parameter
variation at lower computational complexity. Specifically, the attention
network weights are learned through policy optimization to focus on local
dependencies that lead to optimal actions, and do not require real-world tuning
for generalization. Our new architecture adapts perception with
respect to the control objective, resulting in zero-shot learning without
pre-training a perception network. To measure the impact of our new deep
network architecture on domain adaptation, we consider autonomous driving as a
use case. We perform an extensive set of experiments in
simulation-to-simulation and simulation-to-real scenarios to compare our
approach to several baselines, including the current state-of-the-art models.
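The paper does not include code, but the central mechanism (attention weights over convolutional features trained jointly with the policy, so that perception adapts to the control objective without a pre-trained perception network) can be sketched roughly as below. This is a minimal, hypothetical PyTorch sketch: the layer sizes, the 1x1-convolution attention, and the REINFORCE-style update are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a DACNN-style policy: a CNN backbone with a spatial
# attention layer whose weights are trained end-to-end by policy optimization,
# so perception adapts to the control objective (no pre-trained perception net).
# All names, shapes, and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionConvPolicy(nn.Module):
    def __init__(self, n_actions: int, in_channels: int = 3):
        super().__init__()
        # Convolutional backbone over the raw visual observation.
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        # 1x1 conv scores each spatial location; a softmax over locations yields
        # the attention map that picks out local dependencies.
        self.attn_score = nn.Conv2d(64, 1, kernel_size=1)
        self.policy_head = nn.Linear(64, n_actions)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(obs)                       # (B, 64, H, W)
        b, c, h, w = feats.shape
        attn = torch.softmax(
            self.attn_score(feats).view(b, -1), dim=-1   # (B, H*W)
        ).view(b, 1, h, w)
        pooled = (feats * attn).sum(dim=(2, 3))          # attention-weighted pooling
        return self.policy_head(pooled)                  # action logits


# Attention and policy weights share one policy-gradient update (REINFORCE-style);
# there is no separate perception pre-training stage.
policy = AttentionConvPolicy(n_actions=5)
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)
obs = torch.randn(8, 3, 84, 84)      # dummy batch of image observations
returns = torch.randn(8)             # placeholder per-trajectory returns
logits = policy(obs)
actions = torch.distributions.Categorical(logits=logits).sample()
log_probs = F.log_softmax(logits, dim=-1).gather(1, actions.unsqueeze(1)).squeeze(1)
loss = -(log_probs * returns).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```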
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- Self Expanding Convolutional Neural Networks [1.4330085996657045]
We present a novel method for dynamically expanding Convolutional Neural Networks (CNNs) during training.
We employ a strategy where a single model is dynamically expanded, facilitating the extraction of checkpoints at various complexity levels.
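A rough sketch of that general idea, a single CNN grown in place during training with a checkpoint saved at each complexity level, might look like the following. The expansion trigger, layer widths, and class names are placeholder assumptions, not the paper's method.

```python
# Rough sketch of dynamically growing a CNN during training and checkpointing at
# each complexity level. The expansion schedule and layer sizes are assumptions.
import torch
import torch.nn as nn


class GrowableCNN(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        ])
        self.head = nn.Linear(16, n_classes)

    def expand(self):
        """Append one conv block that preserves the channel width."""
        self.blocks.append(
            nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        )

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return self.head(x.mean(dim=(2, 3)))   # global average pool -> classifier


model = GrowableCNN()
for level in range(3):                          # three complexity levels, for illustration
    # ... train at the current capacity; when a chosen expansion criterion fires ...
    torch.save(model.state_dict(), f"checkpoint_level_{level}.pt")
    model.expand()                              # grow the single model in place
```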
arXiv Detail & Related papers (2024-01-11T06:22:40Z)
- Predictive Experience Replay for Continual Visual Control and Forecasting [62.06183102362871]
We present a new continual learning approach for visual dynamics modeling and explore its efficacy in visual control and forecasting.
We first propose the mixture world model that learns task-specific dynamics priors with a mixture of Gaussians, and then introduce a new training strategy to overcome catastrophic forgetting.
Our model remarkably outperforms the naive combinations of existing continual learning and visual RL algorithms on DeepMind Control and Meta-World benchmarks with continual visual control tasks.
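As a hedged illustration of one way to realize a mixture-of-Gaussians dynamics prior, the sketch below uses a mixture-density head over latent transitions. The latent sizes, component count, and module names are assumptions rather than the authors' model, and the continual-learning training strategy itself is omitted.

```python
# Illustrative sketch (not the authors' code): a dynamics head that predicts the
# next latent state as a mixture of Gaussians, one way to express task-specific
# dynamics priors. Dimensions and the number of components are assumptions.
import torch
import torch.nn as nn


class MixtureDynamics(nn.Module):
    def __init__(self, latent_dim: int = 32, action_dim: int = 4, n_components: int = 5):
        super().__init__()
        self.n_components, self.latent_dim = n_components, latent_dim
        self.net = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, n_components * (1 + 2 * latent_dim)),
        )

    def forward(self, z, a):
        out = self.net(torch.cat([z, a], dim=-1))
        logits, mu, log_std = torch.split(
            out,
            [self.n_components,
             self.n_components * self.latent_dim,
             self.n_components * self.latent_dim],
            dim=-1,
        )
        mu = mu.view(-1, self.n_components, self.latent_dim)
        std = log_std.view(-1, self.n_components, self.latent_dim).exp()
        mix = torch.distributions.Categorical(logits=logits)
        comp = torch.distributions.Independent(torch.distributions.Normal(mu, std), 1)
        return torch.distributions.MixtureSameFamily(mix, comp)


# Training would minimize the negative log-likelihood of observed transitions:
model = MixtureDynamics()
z, a, z_next = torch.randn(16, 32), torch.randn(16, 4), torch.randn(16, 32)
loss = -model(z, a).log_prob(z_next).mean()
```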
arXiv Detail & Related papers (2023-03-12T05:08:03Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- What Robot do I Need? Fast Co-Adaptation of Morphology and Control using Graph Neural Networks [7.261920381796185]
A major challenge for the application of co-adaptation methods to the real world is the simulation-to-reality gap.
This paper presents a new approach combining classic high-frequency deep neural networks with computationally expensive Graph Neural Networks for the data-efficient co-adaptation of agents.
arXiv Detail & Related papers (2021-11-03T17:41:38Z)
- Learning to Continuously Optimize Wireless Resource In Episodically Dynamic Environment [55.91291559442884]
This work develops a methodology that enables data-driven methods to continuously learn and optimize in a dynamic environment.
We propose to build the notion of continual learning into the modeling process of learning wireless systems.
Our design is based on a novel min-max formulation which ensures a certain "fairness" across different data samples.
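A generic sketch of such a min-max objective is shown below: the model is minimized against an adversarial weighting over samples so that no sample's loss is neglected. The softmax inner maximization, temperature, and toy regression model are illustrative assumptions, not the paper's formulation.

```python
# Hedged illustration of a min-max training objective: minimize against an
# adversarial weighting over samples (e.g., data retained from earlier episodes).
# This is a generic sketch, not the paper's method.
import torch
import torch.nn as nn


def minmax_step(model, optimizer, x, y, loss_fn, tau: float = 0.1):
    """One update of min_theta max_w sum_i w_i * loss_i, with w on the simplex.

    The inner max is approximated in closed form by a softmax with temperature
    `tau` (tau -> 0 recovers the hard worst-case sample)."""
    per_sample_loss = loss_fn(model(x), y)               # shape (batch,)
    with torch.no_grad():
        w = torch.softmax(per_sample_loss / tau, dim=0)  # adversarial sample weights
    loss = (w * per_sample_loss).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
mse = nn.MSELoss(reduction="none")
x, y = torch.randn(32, 8), torch.randn(32, 1)
minmax_step(model, optimizer, x, y, lambda p, t: mse(p, t).mean(dim=-1))
```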
arXiv Detail & Related papers (2020-11-16T08:24:34Z)
- Augmenting Differentiable Simulators with Neural Networks to Close the Sim2Real Gap [15.1962264049463]
We present a differentiable simulation architecture for articulated rigid-body dynamics that enables the augmentation of analytical models with neural networks at any point of the computation.
arXiv Detail & Related papers (2020-07-12T17:27:11Z)
- Deep learning of contagion dynamics on complex networks [0.0]
We propose a complementary approach based on deep learning to build effective models of contagion dynamics on networks.
By allowing simulations on arbitrary network structures, our approach makes it possible to explore the properties of the learned dynamics beyond the training data.
Our results demonstrate how deep learning offers a new and complementary perspective to build effective models of contagion dynamics on networks.
arXiv Detail & Related papers (2020-06-09T17:18:34Z)
- From Simulation to Real World Maneuver Execution using Deep Reinforcement Learning [69.23334811890919]
Deep Reinforcement Learning has proven able to solve many control tasks in different fields, but these systems do not always behave as expected when deployed in real-world scenarios.
This is mainly due to the lack of domain adaptation between simulated and real-world data together with the absence of distinction between train and test datasets.
We present a system based on multiple environments in which agents are trained simultaneously, evaluating the behavior of the model in different scenarios.
arXiv Detail & Related papers (2020-05-13T14:22:20Z)
- Sim-to-Real Transfer with Incremental Environment Complexity for Reinforcement Learning of Depth-Based Robot Navigation [1.290382979353427]
A Soft Actor-Critic (SAC) training strategy using incremental environment complexity is proposed to drastically reduce the need for additional training in the real world.
The application addressed is depth-based mapless navigation, where a mobile robot should reach a given waypoint in a cluttered environment with no prior mapping information.
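The sketch below illustrates the general pattern of incremental environment complexity with a single SAC agent. It uses stable-baselines3 and Pendulum's gravity parameter purely as a stand-in for increasing difficulty; the environments, timesteps, and staging are illustrative assumptions, not the paper's navigation setup.

```python
# Minimal sketch of incremental environment complexity with Soft Actor-Critic:
# one SAC agent keeps training as the simulated environment gets harder, instead
# of facing the hardest setting from the start. "Complexity" is stood in for by
# Pendulum's gravity parameter; the paper's task is depth-based mapless navigation
# in progressively more cluttered environments.
import gymnasium as gym
from stable_baselines3 import SAC

# Stages of increasing difficulty (illustrative stand-in for cluttered worlds).
stages = [gym.make("Pendulum-v1", g=2.0),
          gym.make("Pendulum-v1", g=6.0),
          gym.make("Pendulum-v1", g=10.0)]

agent = SAC("MlpPolicy", stages[0], verbose=0)
for env in stages:
    agent.set_env(env)                    # keep the same policy/critics, swap the world
    agent.learn(total_timesteps=20_000)   # continue training at the new difficulty

agent.save("sac_incremental_complexity")  # ready for real-world evaluation
```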
arXiv Detail & Related papers (2020-04-30T10:47:02Z)
- Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
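A generic sketch of this setup, with a random subset of agents taking local gradient steps that the server averages each round, is given below. The least-squares agent model, subset size, and constants are illustrative assumptions, not the paper's analysis.

```python
# Generic sketch of the setup described above: at each iteration a random subset
# of agents takes a local gradient step on its own data and the server averages
# the results. The agent model and all constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, lr = 20, 5, 0.05
w_global = np.zeros(dim)

# Each agent holds its own data distribution: features X_k and targets y_k.
agent_data = [(rng.normal(size=(50, dim)), rng.normal(size=50)) for _ in range(n_agents)]

for t in range(200):
    # Only a random subset of the available agents participates in this round.
    active = rng.choice(n_agents, size=5, replace=False)
    local_models = []
    for k in active:
        X, y = agent_data[k]
        grad = 2 * X.T @ (X @ w_global - y) / len(y)    # local least-squares gradient
        local_models.append(w_global - lr * grad)       # one local update step
    w_global = np.mean(local_models, axis=0)            # server aggregates by averaging
```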
arXiv Detail & Related papers (2020-02-20T15:00:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.