Co-design of Embodied Neural Intelligence via Constrained Evolution
- URL: http://arxiv.org/abs/2205.10688v1
- Date: Sat, 21 May 2022 22:44:12 GMT
- Title: Co-design of Embodied Neural Intelligence via Constrained Evolution
- Authors: Zhiquan Wang, Bedrich Benes, Ahmed H. Qureshi, Christos Mousas
- Abstract summary: We introduce a novel co-design method for autonomous moving agents' shape attributes and locomotion.
Our main inspiration comes from evolution, which has led to wide variability and adaptation in Nature.
Our results show that even with changes of only 10%, the overall performance of the evolved agents improves by 50%.
- Score: 8.350757829136315
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a novel co-design method for autonomous moving agents' shape
attributes and locomotion by combining deep reinforcement learning and
evolution with user control. Our main inspiration comes from evolution, which
has led to wide variability and adaptation in Nature and has the potential to
significantly improve design and behavior simultaneously. Our method takes an
input agent with optional simple constraints, such as leg parts that should not
evolve or allowed ranges of change. It uses physics-based simulation to
determine the agent's locomotion and finds a behavior policy for the input
design, which is later used as a baseline for comparison. The agent is then
randomly modified within the allowed ranges, creating a new generation of
several hundred agents.
The generation is trained by transferring the previous policy, which
significantly speeds up the training. The best-performing agents are selected,
and a new generation is formed using their crossover and mutations. The next
generations are then trained until satisfactory results are reached. We show a
wide variety of evolved agents, and our results show that even with changes of
only 10%, the overall performance of the evolved agents improves by 50%. If
more significant changes to the initial design are allowed, performance in our
experiments improves by up to 150%. In contrast to related work, our co-design
method runs on a single GPU and provides satisfactory results, training
thousands of agents within one hour. (A minimal toy sketch of the loop follows.)
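Below is a minimal toy sketch of the loop just described, not the authors' implementation: `fitness` and `train` are invented stand-ins for the paper's physics-based simulation and deep RL training, and the population size, mutation range, and all constants are illustrative assumptions.

```python
import random

POP_SIZE = 16
MUTATION_SCALE = 0.10           # allow changes of up to 10% per attribute

def fitness(shape, policy):
    # Placeholder for simulated locomotion: a policy scores best when it
    # is adapted to the body it controls.
    return -sum((p - s) ** 2 for p, s in zip(policy, shape))

def train(shape, policy, steps=200):
    # Placeholder for RL training. Warm-starting from an inherited policy
    # (policy transfer) needs far fewer steps than training from scratch.
    best = list(policy)
    for _ in range(steps):
        cand = [p + random.gauss(0, 0.02) for p in best]
        if fitness(shape, cand) > fitness(shape, best):
            best = cand
    return best

def mutate(shape, frozen):
    # User constraints: frozen attributes (e.g. leg parts) never change;
    # the rest move only within the allowed relative range.
    return [s if i in frozen
            else s * (1 + random.uniform(-MUTATION_SCALE, MUTATION_SCALE))
            for i, s in enumerate(shape)]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

# Baseline: a policy trained for the unmodified input design.
input_shape = [1.0, 0.5, 2.0, 0.8]
frozen = {1}                                    # e.g. attribute 1 must not evolve
baseline = train(input_shape, [0.0] * len(input_shape))

population = [(mutate(input_shape, frozen), list(baseline)) for _ in range(POP_SIZE)]
for gen in range(5):
    # Train every agent, transferring the policy it inherited.
    scored = sorted(((fitness(s, train(s, p)), s, p) for s, p in population),
                    key=lambda t: t[0], reverse=True)
    elite = scored[: POP_SIZE // 4]
    # Next generation: crossover and mutation of the best performers,
    # each child inheriting an elite parent's policy.
    population = []
    for _ in range(POP_SIZE):
        (_, sa, pa), (_, sb, _) = random.sample(elite, 2)
        population.append((mutate(crossover(sa, sb), frozen), list(pa)))
    print(f"generation {gen}: best fitness {scored[0][0]:.4f}")
```

Even in the toy, the key ingredients survive: children inherit an elite parent's policy (policy transfer), user-frozen attributes never mutate, and selection plus crossover and mutation form each new generation.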
Related papers
- AgentGym: Evolving Large Language Model-based Agents across Diverse Environments [116.97648507802926]
Large language models (LLMs) are considered a promising foundation to build such agents.
We take the first step towards building generally-capable LLM-based agents with self-evolution ability.
We propose AgentGym, a new framework featuring a variety of environments and tasks for broad, real-time, uni-format, and concurrent agent exploration.
arXiv Detail & Related papers (2024-06-06T15:15:41Z)
- Evolution and learning in differentiable robots [0.0]
We use differentiable simulations to rapidly and simultaneously optimize individual neural control of behavior across a large population of candidate body plans.
Non-differentiable changes to the mechanical structure of each robot in the population were applied by a genetic algorithm in an outer loop of search.
One of the highly differentiable morphologies discovered in simulation was realized as a physical robot and shown to retain its optimized behavior (a toy sketch of this two-loop scheme follows the entry).
arXiv Detail & Related papers (2024-05-23T15:45:43Z)
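A toy illustration of the two-loop scheme in "Evolution and learning in differentiable robots" above. The `speed` function and its hand-written gradient are invented analytic stand-ins for a differentiable simulator; only the structure (gradient-based inner loop, genetic outer loop over non-differentiable body edits) reflects the entry.

```python
import random

def speed(body, theta):
    # Invented differentiable proxy for locomotion speed: heavier limbs can
    # go faster, but only if the controller frequency theta matches a
    # body-dependent resonance.
    resonance = sum(body) / len(body)
    return sum(body) - (theta - resonance) ** 2

def d_speed_d_theta(body, theta):
    # Closed-form gradient standing in for autodiff through the simulator.
    return -2.0 * (theta - sum(body) / len(body))

def optimize_controller(body, theta=0.0, lr=0.1, steps=100):
    # Inner loop: plain gradient ascent through the "simulation".
    for _ in range(steps):
        theta += lr * d_speed_d_theta(body, theta)
    return theta

def mutate_body(body):
    # Outer loop: a discrete, non-differentiable edit to the body plan.
    child = list(body)
    child[random.randrange(len(child))] = random.choice([1, 2, 3])
    return child

population = [[random.choice([1, 2, 3]) for _ in range(4)] for _ in range(8)]
for gen in range(10):
    # Simultaneously optimize a controller for every candidate body...
    scored = sorted(((speed(b, optimize_controller(b)), b) for b in population),
                    key=lambda t: t[0], reverse=True)
    # ...then let the genetic algorithm act on the top half of the bodies.
    survivors = [b for _, b in scored[:4]]
    population = survivors + [mutate_body(random.choice(survivors)) for _ in range(4)]
print("best body plan:", scored[0][1])
```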
- DARLEI: Deep Accelerated Reinforcement Learning with Evolutionary Intelligence [77.78795329701367]
We present DARLEI, a framework that combines evolutionary algorithms with parallelized reinforcement learning.
We characterize DARLEI's performance under various conditions, revealing factors impacting diversity of evolved morphologies.
We hope to extend DARLEI in future work to include interactions between diverse morphologies in richer environments.
arXiv Detail & Related papers (2023-12-08T16:51:10Z)
- Emergent Agentic Transformer from Chain of Hindsight Experience [96.56164427726203]
We show, for the first time, that a simple transformer-based model performs competitively with both temporal-difference and imitation-learning-based approaches.
arXiv Detail & Related papers (2023-05-26T00:43:02Z)
- Learning to Generate Levels by Imitating Evolution [7.110423254122942]
We introduce a new type of iterative level generator using machine learning.
We train a model to imitate the evolutionary process and use the model to generate levels.
The trained model can sequentially modify noisy levels to create better ones without needing a fitness function (a minimal sketch follows the entry).
arXiv Detail & Related papers (2022-06-11T10:44:57Z)
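A minimal sketch of the level-generation idea above under invented assumptions: levels are tiny binary tile strings and the fitness (prefer alternating tiles) is made up, not the paper's domain. A hill-climber's accepted edits are recorded, a simple context-to-tile model imitates them, and generation then repairs noisy levels without ever calling the fitness function.

```python
import random
from collections import Counter, defaultdict

TILES = (0, 1)
SIZE = 8

def fitness(level):
    # Made-up objective: prefer alternating tiles.
    return sum(a != b for a, b in zip(level, level[1:]))

def context(level, i):
    left = level[i - 1] if i > 0 else None
    right = level[i + 1] if i < len(level) - 1 else None
    return (left, level[i], right)

# 1) Collect (context, accepted edit) pairs from an evolutionary hill-climb.
edits = defaultdict(Counter)
for _ in range(500):
    level = [random.choice(TILES) for _ in range(SIZE)]
    for _ in range(50):
        i = random.randrange(SIZE)
        new = random.choice(TILES)
        cand = level[:i] + [new] + level[i + 1:]
        if fitness(cand) > fitness(level):          # hill-climb acceptance
            edits[context(level, i)][new] += 1      # record what evolution did
            level = cand

# 2) "Model": majority vote over what evolution did in each local context.
model = {ctx: counts.most_common(1)[0][0] for ctx, counts in edits.items()}

# 3) Generate: sequentially repair a noisy level with no fitness calls at all.
level = [random.choice(TILES) for _ in range(SIZE)]
for _ in range(3):                                   # a few repair sweeps
    for i in range(SIZE):
        level[i] = model.get(context(level, i), level[i])
print("repaired level:", level, "fitness:", fitness(level))
```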
- The Introspective Agent: Interdependence of Strategy, Physiology, and Sensing for Embodied Agents [51.94554095091305]
We argue for an introspective agent, which considers its own abilities in the context of its environment.
Just as in nature, we hope to reframe strategy as one tool, among many, to succeed in an environment.
arXiv Detail & Related papers (2022-01-02T20:14:01Z)
- Epigenetic evolution of deep convolutional models [81.21462458089142]
We build upon a previously proposed neuroevolution framework to evolve deep convolutional models.
We propose a convolutional layer layout which allows kernels of different shapes and sizes to coexist within the same layer.
The proposed layout enables the size and shape of individual kernels within a convolutional layer to be evolved with a corresponding new mutation operator (a toy illustration follows the entry).
arXiv Detail & Related papers (2021-04-12T12:45:16Z)
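To make the heterogeneous-kernel layout above concrete, here is a toy reimplementation, not the paper's framework: one "layer" holds kernels of several shapes and sizes at once, and a mutation operator resizes an individual kernel. The pure-Python cross-correlation just keeps the sketch self-contained.

```python
import random

def make_kernel(h, w):
    return [[random.uniform(-1, 1) for _ in range(w)] for _ in range(h)]

def mutate_kernel_size(kernel):
    # Evolve a kernel's shape: grow by one zero-padded row and column,
    # or crop one of each, chosen at random.
    if random.random() < 0.5:
        kernel = [row + [0.0] for row in kernel]           # widen
        kernel.append([0.0] * len(kernel[0]))              # heighten
    elif len(kernel) > 1 and len(kernel[0]) > 1:
        kernel = [row[:-1] for row in kernel[:-1]]         # shrink
    return kernel

def correlate(image, kernel):
    # Valid 2D cross-correlation in plain Python.
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(kernel[i][j] * image[y + i][x + j]
                 for i in range(kh) for j in range(kw))
             for x in range(len(image[0]) - kw + 1)]
            for y in range(len(image) - kh + 1)]

# One "layer" holds kernels of several shapes and sizes at once.
layer = [make_kernel(3, 3), make_kernel(1, 5), make_kernel(2, 2)]
image = [[random.random() for _ in range(8)] for _ in range(8)]

feature_maps = [correlate(image, k) for k in layer]
print("map sizes:", [f"{len(m)}x{len(m[0])}" for m in feature_maps])

layer[0] = mutate_kernel_size(layer[0])   # evolve one kernel's shape
print("kernel shapes:", [f"{len(k)}x{len(k[0])}" for k in layer])
```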
- Task-Agnostic Morphology Evolution [94.97384298872286]
Current approaches that co-adapt morphology and behavior use a specific task's reward as a signal for morphology optimization.
This often requires expensive policy optimization and results in task-dependent morphologies that are not built to generalize.
We propose a new approach, Task-Agnostic Morphology Evolution (TAME), to alleviate both of these issues (an illustrative sketch follows the entry).
arXiv Detail & Related papers (2021-02-25T18:59:21Z)
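The summary above does not spell out TAME's actual objective, so the sketch below substitutes a generic task-agnostic score as a loudly labeled assumption: a morphology is rated by the entropy of the states it reaches under random actions, and evolution proceeds with no task reward anywhere in the loop.

```python
import math
import random
from collections import Counter

def rollout_states(morphology, steps=200):
    # Toy "dynamics": limb lengths scale how far random actions move the agent.
    x = 0.0
    states = []
    for _ in range(steps):
        x += random.choice([-1, 1]) * sum(morphology) / len(morphology)
        states.append(round(x))
    return states

def entropy_score(states):
    # Assumed task-agnostic objective: entropy of visited (discretized) states.
    counts = Counter(states)
    n = len(states)
    return -sum(c / n * math.log(c / n) for c in counts.values())

def mutate(morph):
    child = list(morph)
    i = random.randrange(len(child))
    child[i] = max(0.1, child[i] + random.uniform(-0.3, 0.3))
    return child

population = [[random.uniform(0.1, 2.0) for _ in range(3)] for _ in range(12)]
for gen in range(10):
    # Rank morphologies by the task-agnostic score; no task reward is used.
    scored = sorted(population, key=lambda m: entropy_score(rollout_states(m)),
                    reverse=True)
    survivors = scored[:4]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(8)]
print("best morphology:", [round(v, 2) for v in scored[0]])
```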
- Lineage Evolution Reinforcement Learning [15.469857142001482]
Lineage evolution reinforcement learning is a derivative algorithm that fits the general framework of population-based agent learning.
Our experiments show that evolution with lineage improves the performance of the original reinforcement learning algorithm in some Atari 2600 games.
arXiv Detail & Related papers (2020-09-26T11:58:16Z)
- Mimicking Evolution with Reinforcement Learning [10.35437633064506]
We argue that the path to developing artificial human-like intelligence passes through mimicking the evolutionary process in a nature-like simulation.
This work proposes Evolution via Evolutionary Reward (EvER), which lets learning single-handedly drive the search for policies with increasing evolutionary fitness (a toy illustration follows the entry).
arXiv Detail & Related papers (2020-03-31T18:16:53Z)
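As a hedged illustration of the EvER idea above (evolutionary fitness as the only learning signal), the toy below invents a forage-or-reproduce environment; a stochastic hill-climb stands in for the RL algorithm, and the learner's only reward is the number of offspring produced. Nothing here is from the paper itself.

```python
import random

def lifetime(policy_forage):
    # Toy life: each step, forage (gain energy) or try to reproduce (spend it).
    energy, offspring = 1.0, 0
    for _ in range(50):
        if random.random() < policy_forage:
            energy += 0.3
        elif energy >= 1.0:
            energy -= 1.0
            offspring += 1            # the only reward the learner ever sees
    return offspring

def evolutionary_return(p, episodes=200):
    # Expected reproductive success, i.e. evolutionary fitness as RL return.
    return sum(lifetime(p) for _ in range(episodes)) / episodes

# Simple stochastic hill-climb standing in for a policy-search algorithm.
p = 0.5
best = evolutionary_return(p)
for _ in range(30):
    cand = min(0.99, max(0.01, p + random.gauss(0, 0.05)))
    score = evolutionary_return(cand)
    if score > best:
        p, best = cand, score
print(f"learned forage probability {p:.2f}, mean offspring {best:.2f}")
```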
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.