Optimizing the Neural Architecture of Reinforcement Learning Agents
- URL: http://arxiv.org/abs/2011.14632v3
- Date: Wed, 28 Apr 2021 10:21:47 GMT
- Title: Optimizing the Neural Architecture of Reinforcement Learning Agents
- Authors: N. Mazyavkina, S. Moustafa, I. Trofimov, E. Burnaev
- Abstract summary: We study recently proposed neural architecture search (NAS) methods for optimizing the architecture of RL agents.
We carry out experiments on the Atari benchmark and conclude that modern NAS methods find architectures of RL agents outperforming a manually selected one.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reinforcement learning (RL) has enjoyed significant progress in recent years.
One of the most important steps forward was the wide application of neural
networks. However, architectures of these neural networks are typically
constructed manually. In this work, we study recently proposed neural
architecture search (NAS) methods for optimizing the architecture of RL agents.
We carry out experiments on the Atari benchmark and conclude that modern NAS
methods find architectures of RL agents outperforming a manually selected one.
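The paper's core idea, searching over agent network architectures rather than fixing one by hand, can be sketched as a minimal random-search loop. The search space, the `evaluate` scoring stub, and all names below are illustrative stand-ins, not the paper's actual setup:

```python
import random

# Illustrative search space for an Atari-style agent: channel counts for
# the convolutional torso, kernel size, and the hidden-layer width.
SEARCH_SPACE = {
    "channels": [(16, 32), (32, 64), (32, 64, 64)],
    "kernel": [3, 5, 8],
    "hidden": [256, 512],
}

def sample_architecture(rng):
    """Draw one candidate architecture from the search space."""
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(arch):
    """Hypothetical stand-in for training an RL agent with this
    architecture and returning its mean episode reward."""
    return sum(arch["channels"]) + arch["hidden"] / 100 - arch["kernel"]

def random_search(trials, seed=0):
    """Keep the best-scoring architecture seen over a fixed budget."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

best, score = random_search(trials=20)
print(best, score)
```

Modern NAS methods replace the random sampler with learned controllers or gradient-based relaxations, but the evaluate-and-compare skeleton stays the same.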
Related papers
- Robust Evolutionary Multi-Objective Network Architecture Search for Reinforcement Learning (EMNAS-RL) [43.108040967674185]
This paper introduces Evolutionary Multi-Objective Network Architecture Search (EMNAS) for the first time to optimize neural network architectures in large-scale Reinforcement Learning for Autonomous Driving (AD). EMNAS uses genetic algorithms to automate network design, tailored to enhance rewards and reduce model size without compromising performance.
arXiv Detail & Related papers (2025-06-10T07:52:35Z) - NADER: Neural Architecture Design via Multi-Agent Collaboration [37.48197934228379]
We introduce NADER, a novel framework that formulates neural architecture design (NAD) as a multi-agent collaboration problem.
We propose the Reflector, which effectively learns from immediate feedback and long-term experiences.
Unlike previous LLM-based methods that use code to represent neural architectures, we utilize a graph-based representation.
arXiv Detail & Related papers (2024-12-26T13:07:03Z) - Proxyless Neural Architecture Adaptation for Supervised Learning and Self-Supervised Learning [3.766702945560518]
We propose proxyless neural architecture adaptation that is reproducible and efficient.
Our method can be applied to both supervised learning and self-supervised learning.
arXiv Detail & Related papers (2022-05-15T02:49:48Z) - Neural Architecture Search for Speech Emotion Recognition [72.1966266171951]
We propose to apply neural architecture search (NAS) techniques to automatically configure the SER models.
We show that NAS can improve SER performance (54.89% to 56.28%) while maintaining model parameter sizes.
arXiv Detail & Related papers (2022-03-31T10:16:10Z) - Neural Architecture Search for Spiking Neural Networks [10.303676184878896]
Spiking Neural Networks (SNNs) have gained huge attention as a potential energy-efficient alternative to conventional Artificial Neural Networks (ANNs)
Most prior SNN methods use ANN-like architectures, which could provide sub-optimal performance for temporal sequence processing of binary information in SNNs.
We introduce a novel Neural Architecture Search (NAS) approach for finding better SNN architectures.
arXiv Detail & Related papers (2022-01-23T16:34:27Z) - Conceptual Expansion Neural Architecture Search (CENAS) [1.3464152928754485]
We present an approach called Conceptual Expansion Neural Architecture Search (CENAS)
It combines a sample-efficient, computational creativity-inspired transfer learning approach with neural architecture search.
It finds models faster than naive architecture search via transferring existing weights to approximate the parameters of the new model.
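The weight-transfer idea above, reusing an existing model's parameters to warm-start a new architecture instead of training from scratch, can be sketched as copying the overlapping slice of each matching tensor. This is a minimal illustration of the general technique, not CENAS's actual transfer scheme; all names are hypothetical:

```python
import numpy as np

def transfer_weights(old, new):
    """Copy the overlapping slice of each old weight tensor into the
    corresponding (possibly larger) tensor of the new architecture.
    Positions with no counterpart keep their fresh initialization."""
    for name, w_old in old.items():
        if name not in new:
            continue  # layer removed in the new architecture
        w_new = new[name]
        slices = tuple(slice(0, min(a, b))
                       for a, b in zip(w_old.shape, w_new.shape))
        w_new[slices] = w_old[slices]
    return new

rng = np.random.default_rng(0)
# Old model: one conv layer; new model: widened conv1 plus a new conv2.
old = {"conv1": rng.standard_normal((16, 3, 3, 3))}
new = {"conv1": rng.standard_normal((32, 3, 3, 3)),
       "conv2": rng.standard_normal((64, 32, 3, 3))}
new = transfer_weights(old, new)
```

The transferred slice gives the new model a useful starting point, so evaluating a candidate needs far fewer training steps than random initialization.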
arXiv Detail & Related papers (2021-10-07T02:29:26Z) - Pretraining Neural Architecture Search Controllers with Locality-based Self-Supervised Learning [0.0]
We propose a pretraining scheme that can be applied to controller-based NAS.
Our method, locality-based self-supervised classification task, leverages the structural similarity of network architectures to obtain good architecture representations.
arXiv Detail & Related papers (2021-03-15T06:30:36Z) - NAS-Navigator: Visual Steering for Explainable One-Shot Deep Neural Network Synthesis [53.106414896248246]
We present a framework that allows analysts to effectively build the solution sub-graph space and guide the network search by injecting their domain knowledge.
Applying this technique in an iterative manner allows analysts to converge to the best performing neural network architecture for a given application.
arXiv Detail & Related papers (2020-09-28T01:48:45Z) - NAS-DIP: Learning Deep Image Prior with Neural Architecture Search [65.79109790446257]
Recent work has shown that the structure of deep convolutional neural networks can be used as a structured image prior.
We propose to search for neural architectures that capture stronger image priors.
We search for an improved network by leveraging an existing neural architecture search algorithm.
arXiv Detail & Related papers (2020-08-26T17:59:36Z) - Off-Policy Reinforcement Learning for Efficient and Effective GAN
Architecture Search [50.40004966087121]
We introduce a new reinforcement learning based neural architecture search (NAS) methodology for generative adversarial network (GAN) architecture search.
The key idea is to formulate the GAN architecture search problem as a Markov decision process (MDP) for smoother architecture sampling.
We exploit an off-policy GAN architecture search algorithm that makes efficient use of the samples generated by previous policies.
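The MDP framing can be sketched as follows: the state is the partial architecture built so far, each action appends one layer choice, and completed trajectories go into a replay buffer so that later policies can reuse samples off-policy instead of discarding them. The action set and reward stub below are illustrative, not the paper's actual algorithm:

```python
import random

ACTIONS = ["conv3x3", "conv5x5", "upsample", "skip"]  # per-step layer choices
DEPTH = 4  # episode length = number of architecture decisions

def stub_reward(arch):
    """Hypothetical stand-in for an expensive GAN evaluation
    (e.g. training the sampled generator and scoring its samples)."""
    return arch.count("skip") * 0.5 + arch.count("conv3x3") * 0.3

def rollout(policy, rng):
    """Build one architecture step by step; return its trajectory
    of (state, action) pairs and the terminal reward."""
    state, traj = [], []
    for _ in range(DEPTH):
        action = policy(tuple(state), rng)
        traj.append((tuple(state), action))
        state.append(action)
    return traj, stub_reward(state)

def random_policy(state, rng):
    return rng.choice(ACTIONS)

# Off-policy reuse: trajectories from earlier policies stay in the
# buffer and can train later policies, rather than being thrown away.
replay_buffer = []
rng = random.Random(0)
for _ in range(10):
    traj, reward = rollout(random_policy, rng)
    replay_buffer.extend((s, a, reward) for s, a in traj)

print(len(replay_buffer))  # 10 episodes x 4 decisions
```

Because every decision is a small MDP step rather than one monolithic architecture draw, the sampling is smoother and the buffer makes each expensive GAN evaluation count more than once.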
arXiv Detail & Related papers (2020-07-17T18:29:17Z) - A Semi-Supervised Assessor of Neural Architectures [157.76189339451565]
We employ an auto-encoder to discover meaningful representations of neural architectures.
A graph convolutional neural network is introduced to predict the performance of architectures.
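The predictor idea, treating an architecture as a graph of operations and regressing its performance, can be sketched as one graph-convolution layer followed by a graph-level readout. This is a minimal illustration with random weights, not the paper's trained assessor:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution step: average each node's neighborhood
    (including itself), then apply a linear map and ReLU."""
    a_hat = adj + np.eye(adj.shape[0])      # add self-loops
    deg = a_hat.sum(axis=1, keepdims=True)
    msg = (a_hat / deg) @ feats             # mean aggregation over neighbors
    return np.maximum(msg @ weight, 0.0)

def predict_score(adj, feats, w1, w_out):
    """Pool node embeddings into one vector and map it to a scalar score."""
    h = gcn_layer(adj, feats, w1)
    pooled = h.mean(axis=0)                 # graph-level readout
    return float(pooled @ w_out)

# Toy architecture DAG with 3 ops (e.g. conv -> pool -> conv),
# one-hot operation types as node features.
adj = np.array([[0, 1, 0],
                [0, 0, 1],
                [0, 0, 0]], dtype=float)
feats = np.eye(3)
rng = np.random.default_rng(0)
w1 = rng.standard_normal((3, 4))
w_out = rng.standard_normal(4)

print(predict_score(adj, feats, w1, w_out))
```

Training such a predictor on a few evaluated architectures lets the search rank many unseen candidates without training each one, which is the main cost saving.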
arXiv Detail & Related papers (2020-05-14T09:02:33Z) - Stage-Wise Neural Architecture Search [65.03109178056937]
Modern convolutional networks such as ResNet and NASNet have achieved state-of-the-art results in many computer vision applications.
These networks consist of stages, which are sets of layers that operate on representations in the same resolution.
It has been demonstrated that increasing the number of layers in each stage improves the prediction ability of the network.
However, the resulting architecture becomes computationally expensive in terms of floating point operations, memory requirements and inference time.
arXiv Detail & Related papers (2020-04-23T14:16:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.