Population-Based Evolution Optimizes a Meta-Learning Objective
- URL: http://arxiv.org/abs/2103.06435v1
- Date: Thu, 11 Mar 2021 03:45:43 GMT
- Title: Population-Based Evolution Optimizes a Meta-Learning Objective
- Authors: Kevin Frans, Olaf Witkowski
- Abstract summary: We propose that meta-learning and adaptive evolvability optimize for the same objective: high performance after a set of learning iterations.
We demonstrate this claim with a simple evolutionary algorithm, Population-Based Meta Learning.
- Score: 0.6091702876917279
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Meta-learning models, or models that learn to learn, have been a long-desired
target for their ability to quickly solve new tasks. Traditional meta-learning
methods can require expensive inner and outer loops, thus there is demand for
algorithms that discover strong learners without explicitly searching for them.
We draw parallels to the study of evolvable genomes in evolutionary systems --
genomes with a strong capacity to adapt -- and propose that meta-learning and
adaptive evolvability optimize for the same objective: high performance after a
set of learning iterations. We argue that population-based evolutionary systems
with non-static fitness landscapes naturally bias towards high-evolvability
genomes, and therefore optimize for populations with strong learning ability.
We demonstrate this claim with a simple evolutionary algorithm,
Population-Based Meta Learning (PBML), that consistently discovers genomes
which display higher rates of improvement over generations, and can rapidly
adapt to solve sparse fitness and robotic control tasks.
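For intuition, here is a minimal sketch of the kind of population-based loop with a non-static fitness landscape that the abstract argues implicitly selects for high-evolvability genomes. It is illustrative only and not the paper's PBML implementation: the real-valued genome encoding, Gaussian mutation, periodic task switching, and truncation selection are all assumptions made for this example.

```python
import numpy as np

# Illustrative sketch only: a population evolving on a fitness landscape
# that shifts every few generations. The encoding, mutation operator,
# task schedule, and selection rule are assumptions, not the paper's PBML.

rng = np.random.default_rng(0)

POP_SIZE = 64
GENOME_DIM = 16
GENERATIONS = 200
TASK_SWITCH_EVERY = 20  # the fitness landscape changes periodically

def make_task():
    """Sample a new target vector, making the fitness landscape non-static."""
    return rng.normal(size=GENOME_DIM)

def fitness(genome, target):
    """Higher is better: negative distance to the current task target."""
    return -np.linalg.norm(genome - target)

population = rng.normal(size=(POP_SIZE, GENOME_DIM))
target = make_task()

for gen in range(GENERATIONS):
    if gen % TASK_SWITCH_EVERY == 0:
        target = make_task()  # shift the landscape: yesterday's optimum goes stale

    scores = np.array([fitness(g, target) for g in population])

    # Truncation selection: keep the fitter half, refill with mutated copies.
    elite = population[np.argsort(scores)[POP_SIZE // 2:]]
    children = elite + 0.1 * rng.normal(size=elite.shape)
    population = np.concatenate([elite, children])

    # Lineages that survive many task switches are those whose mutated
    # offspring re-adapt quickly -- the implicit bias toward evolvability
    # that the abstract equates with a meta-learning objective.
```

Under a schedule like this, one way to see the claimed effect is to track average fitness in the generations immediately after each task switch and check whether later lineages recover faster than earlier ones.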
Related papers
- Meta-Learning an Evolvable Developmental Encoding [7.479827648985631]
Generative models have shown promise as learnable representations for black-box optimisation.
Here we present a system that can meta-learn such a representation by optimising for its ability to generate quality-diversity.
In more detail, we show our meta-learning approach can find a Neural Cellular Automaton in which cells can attend to different parts of a "DNA" string genome during development.
arXiv Detail & Related papers (2024-06-13T11:52:06Z)
- A Survey on Self-Evolution of Large Language Models [116.54238664264928]
Large language models (LLMs) have significantly advanced in various fields and intelligent agent applications.
To move beyond reliance on costly human or external-model supervision, self-evolution approaches that enable LLMs to autonomously acquire, refine, and learn from experiences generated by the model itself are rapidly growing.
arXiv Detail & Related papers (2024-04-22T17:43:23Z)
- Evolving Reservoirs for Meta Reinforcement Learning [1.6874375111244329]
We propose a computational model for studying a mechanism by which evolution can shape an agent's ability to learn within its lifetime.
At the evolutionary scale, we evolve reservoirs, a family of recurrent neural networks.
We employ these evolved reservoirs to facilitate the learning of a behavioral policy through Reinforcement Learning (RL).
Our results show that the evolution of reservoirs can improve the learning of diverse challenging tasks.
arXiv Detail & Related papers (2023-12-09T16:11:48Z) - DARLEI: Deep Accelerated Reinforcement Learning with Evolutionary
Intelligence [77.78795329701367]
We present DARLEI, a framework that combines evolutionary algorithms with parallelized reinforcement learning.
We characterize DARLEI's performance under various conditions, revealing factors impacting diversity of evolved morphologies.
We hope to extend DARLEI in future work to include interactions between diverse morphologies in richer environments.
arXiv Detail & Related papers (2023-12-08T16:51:10Z) - Evolutionary Dynamic Optimization and Machine Learning [0.0]
Evolutionary Computation (EC) has emerged as a powerful field of Artificial Intelligence, inspired by nature's mechanisms of gradual development.
To overcome the limitations of standalone evolutionary search, researchers have integrated learning algorithms with evolutionary techniques.
This integration harnesses the valuable data generated by EC algorithms during iterative searches, providing insights into the search space and population dynamics.
arXiv Detail & Related papers (2023-10-12T22:28:53Z) - Incorporating Neuro-Inspired Adaptability for Continual Learning in
Artificial Intelligence [59.11038175596807]
Continual learning aims to empower artificial intelligence with strong adaptability to the real world.
Existing advances mainly focus on preserving memory stability to overcome catastrophic forgetting.
We propose a generic approach that appropriately attenuates old memories in parameter distributions to improve learning plasticity.
arXiv Detail & Related papers (2023-08-29T02:43:58Z) - Phylogeny-informed fitness estimation [58.720142291102135]
We propose phylogeny-informed fitness estimation, which exploits a population's phylogeny to estimate fitness evaluations.
Our results indicate that phylogeny-informed fitness estimation can mitigate the drawbacks of down-sampled lexicase.
This work serves as an initial step toward improving evolutionary algorithms by exploiting runtime phylogenetic analysis.
arXiv Detail & Related papers (2023-06-06T19:05:01Z)
- Discovering Evolution Strategies via Meta-Black-Box Optimization [23.956974467496345]
We propose to discover effective update rules for evolution strategies via meta-learning.
Our approach employs a search strategy parametrized by a self-attention-based architecture.
We show that it is possible to self-referentially train an evolution strategy from scratch, with the learned update rule used to drive the outer meta-learning loop.
arXiv Detail & Related papers (2022-11-21T08:48:46Z)
- AdaLead: A simple and robust adaptive greedy search algorithm for sequence design [55.41644538483948]
We develop an easy-to-direct, scalable, and robust evolutionary greedy algorithm (AdaLead).
AdaLead is a remarkably strong benchmark that out-competes more complex state-of-the-art approaches in a variety of biologically motivated sequence design challenges.
arXiv Detail & Related papers (2020-10-05T16:40:38Z)
- Evolving Inborn Knowledge For Fast Adaptation in Dynamic POMDP Problems [5.23587935428994]
In this paper, we exploit the highly adaptive nature of neuromodulated neural networks to evolve a controller that uses the latent space of an autoencoder in a POMDP.
The integration of inborn knowledge and online plasticity enabled fast adaptation and better performance in comparison to some non-evolutionary meta-reinforcement learning algorithms.
arXiv Detail & Related papers (2020-04-27T14:55:08Z)
- Rapidly Adaptable Legged Robots via Evolutionary Meta-Learning [65.88200578485316]
We present a new meta-learning method that allows robots to quickly adapt to changes in dynamics.
Our method significantly improves adaptation to changes in dynamics in high noise settings.
We validate our approach on a quadruped robot that learns to walk while subject to changes in dynamics.
arXiv Detail & Related papers (2020-03-02T22:56:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.