Generative Adversarial Neuroevolution for Control Behaviour Imitation
- URL: http://arxiv.org/abs/2304.12432v1
- Date: Mon, 3 Apr 2023 16:33:22 GMT
- Title: Generative Adversarial Neuroevolution for Control Behaviour Imitation
- Authors: Maximilien Le Clei, Pierre Bellec
- Abstract summary: We propose to explore whether deep neuroevolution can be used for behaviour imitation on popular simulation environments.
We introduce a simple co-evolutionary adversarial generation framework, and evaluate its capabilities by evolving standard deep recurrent networks.
Across all tasks, we find the final elite actor agents capable of achieving scores as high as those obtained by the pre-trained agents.
- Score: 3.04585143845864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is a recent surge of interest in imitation learning, with large human
video-game and robotic manipulation datasets being used to train agents on very
complex tasks. While deep neuroevolution has recently been shown to match the
performance of gradient-based techniques on various reinforcement learning
problems, the application of deep neuroevolution techniques to imitation
learning remains relatively unexplored. In this work, we propose to explore
whether deep neuroevolution can be used for behaviour imitation on popular
simulation environments. We introduce a simple co-evolutionary adversarial
generation framework, and evaluate its capabilities by evolving standard deep
recurrent networks to imitate state-of-the-art pre-trained agents on 8 OpenAI
Gym state-based control tasks. Across all tasks, we find the final elite actor
agents capable of achieving scores as high as those obtained by the pre-trained
agents, all the while closely following their score trajectories. Our results
suggest that neuroevolution could be a valuable addition to deep learning
techniques to produce accurate emulation of behavioural agents. We believe that
the generality and simplicity of our approach opens avenues for imitating
increasingly complex behaviours in increasingly complex settings, e.g. human
behaviour in real-world settings. We provide our source code, model checkpoints
and results at github.com/MaximilienLC/gane.
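The co-evolutionary adversarial loop the abstract describes can be illustrated with a minimal toy sketch. Everything below is hypothetical and not the authors' code (see the linked repository for that): a scalar linear policy stands in for the deep recurrent actors, a logistic regression on state-action pairs stands in for the discriminator, and actor fitness is how strongly the discriminator believes the actor's rollouts came from the pre-trained "expert".

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(w, steps=20):
    """Roll out a linear policy a = w*s in a toy 1-D linear environment."""
    s, pairs = 1.0, []
    for _ in range(steps):
        a = w * s
        pairs.append((s, a))
        s = 0.9 * s + 0.1 * a               # simple linear dynamics
    return np.array(pairs)

expert_w = -0.5                             # stands in for a pre-trained agent
expert_data = rollout(expert_w)

def disc(theta, pairs):
    """Logistic discriminator: P(state-action pair came from the expert)."""
    z = pairs @ theta[:2] + theta[2]
    return 1.0 / (1.0 + np.exp(-z))

theta = np.zeros(3)                         # discriminator weights + bias
pop = rng.normal(0.0, 1.0, size=16)         # population of scalar policy weights

for gen in range(200):
    # 1) actor fitness: how often each actor fools the discriminator
    fits = np.array([disc(theta, rollout(w)).mean() for w in pop])
    # 2) one discriminator gradient step: expert pairs -> 1, actor pairs -> 0
    fake = rollout(pop[np.argmax(fits)])
    for pairs, label in ((expert_data, 1.0), (fake, 0.0)):
        p = disc(theta, pairs)
        g = np.concatenate([pairs.T @ (p - label), [np.sum(p - label)]])
        theta -= 0.05 * g / len(pairs)
    # 3) truncation selection and Gaussian mutation of the actor population
    elites = pop[np.argsort(fits)[-4:]]
    pop = np.concatenate([elites,
                          rng.choice(elites, 12) + rng.normal(0, 0.1, 12)])

best = pop[np.argmax([disc(theta, rollout(w)).mean() for w in pop])]
print(f"expert weight {expert_w:.2f}, best evolved weight {best:.2f}")
```

The two populations are adversaries in the GAN sense: the discriminator is trained to separate expert from actor trajectories, while selection pressure pushes actors toward behaviour the discriminator cannot distinguish from the expert's.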
Related papers
- Life, uh, Finds a Way: Systematic Neural Search [2.163881720692685]
We tackle the challenge of rapidly adapting an agent's behavior to solve continuous problems in novel settings.
Instead of focusing on deep reinforcement learning, we propose viewing behavior as the physical manifestation of a search procedure.
We describe an algorithm that implicitly enumerates behaviors by regulating the tight feedback loop between execution of behaviors and mutation of the graph.
arXiv Detail & Related papers (2024-10-02T09:06:54Z)
- DARLEI: Deep Accelerated Reinforcement Learning with Evolutionary Intelligence [77.78795329701367]
We present DARLEI, a framework that combines evolutionary algorithms with parallelized reinforcement learning.
We characterize DARLEI's performance under various conditions, revealing factors impacting diversity of evolved morphologies.
We hope to extend DARLEI in future work to include interactions between diverse morphologies in richer environments.
arXiv Detail & Related papers (2023-12-08T16:51:10Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Towards the Neuroevolution of Low-level Artificial General Intelligence [5.2611228017034435]
We argue that the search for Artificial General Intelligence (AGI) should start from a much lower level than human-level intelligence.
Our hypothesis is that learning occurs through sensory feedback when an agent acts in an environment.
We evaluate a method to evolve a biologically-inspired artificial neural network that learns from environment reactions.
arXiv Detail & Related papers (2022-07-27T15:30:50Z)
- Probe-Based Interventions for Modifying Agent Behavior [4.324022085722613]
We develop a method for updating representations in pre-trained neural nets according to externally-specified properties.
In experiments, we show how our method may be used to improve human-agent team performance for a variety of neural networks.
arXiv Detail & Related papers (2022-01-26T19:14:00Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Neuronal Learning Analysis using Cycle-Consistent Adversarial Networks [4.874780144224057]
We use CycleGAN, a variant of deep generative models, to learn the unknown mapping between pre- and post-learning neural activities.
We develop an end-to-end pipeline to preprocess, train and evaluate calcium fluorescence signals, and a procedure to interpret the resulting deep learning models.
arXiv Detail & Related papers (2021-11-25T13:24:19Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- Deep Imitation Learning for Bimanual Robotic Manipulation [70.56142804957187]
We present a deep imitation learning framework for robotic bimanual manipulation.
A core challenge is to generalize the manipulation skills to objects in different locations.
We propose to (i) decompose the multi-modal dynamics into elemental movement primitives, (ii) parameterize each primitive using a recurrent graph neural network to capture interactions, and (iii) integrate a high-level planner that composes primitives sequentially and a low-level controller to combine primitive dynamics and inverse kinematics control.
arXiv Detail & Related papers (2020-10-11T01:40:03Z)
- AutoML-Zero: Evolving Machine Learning Algorithms From Scratch [76.83052807776276]
We show that it is possible to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks.
We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space.
We believe these preliminary successes in discovering machine learning algorithms from scratch indicate a promising new direction in the field.
arXiv Detail & Related papers (2020-03-06T19:00:04Z)
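The AutoML-Zero entry above evolves complete algorithms from basic mathematical operations. As a hedged illustration of that idea only (a toy evolutionary program search, not the paper's actual system; all names and settings here are invented), a population of short register-machine programs built from add/sub/mul can be evolved to rediscover a simple target function:

```python
import random

random.seed(0)
OPS = {  # basic mathematical building blocks
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def run(program, x):
    """Execute a straight-line program over 4 registers; r0 holds input and output."""
    r = [x, 0.0, 0.0, 0.0]
    for op, dst, a, b in program:
        r[dst] = OPS[op](r[a], r[b])
    return r[0]

def random_instr():
    return (random.choice(list(OPS)), *[random.randrange(4) for _ in range(3)])

def fitness(program, xs):
    target = lambda x: x * x + x            # function we hope to rediscover
    return -sum((run(program, x) - target(x)) ** 2 for x in xs)

xs = [i / 4 for i in range(-8, 9)]
pop = [[random_instr() for _ in range(4)] for _ in range(64)]
for _ in range(300):                        # truncation selection + point mutation
    pop.sort(key=lambda p: fitness(p, xs), reverse=True)
    pop = pop[:16]
    for parent in list(pop):
        for _ in range(3):
            child = list(parent)
            child[random.randrange(len(child))] = random_instr()
            pop.append(child)

best = max(pop, key=lambda p: fitness(p, xs))
print("best program:", best, "error:", -fitness(best, xs))
```

A two-instruction solution exists in this search space (r1 = r0*r0, then r0 = r0 + r1), so the evolutionary loop has a concrete target to find, mirroring in miniature how AutoML-Zero searches program space with only primitive operations.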
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.