Learning and evolution: factors influencing an effective combination
- URL: http://arxiv.org/abs/2306.11761v1
- Date: Tue, 20 Jun 2023 09:03:52 GMT
- Title: Learning and evolution: factors influencing an effective combination
- Authors: Paolo Pagliuca
- Abstract summary: The mutual relationship between evolution and learning is a controversial topic in the artificial intelligence and neuro-evolution communities.
The author investigates whether combining learning and evolution makes it possible to find better solutions than those discovered by evolution alone.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The mutual relationship between evolution and learning is a controversial topic in the artificial intelligence and neuro-evolution communities. After more than three decades, there is still no common agreement on the matter. In this paper, the author investigates whether combining learning and evolution makes it possible to find better solutions than those discovered by evolution alone. More specifically, the author presents a series of empirical studies that highlight specific conditions determining the success of such a combination, such as the introduction of noise during the learning and selection processes. Results are obtained in two qualitatively different domains, where agent/environment interactions are minimal or absent.
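To make the studied combination concrete, below is a minimal sketch of a Baldwinian scheme in which each genotype undergoes a lifetime learning phase before selection, with noise injected into both the learning signal and the selection step. The toy fitness landscape, hyper-parameters, and function names are illustrative assumptions and do not reproduce the experimental setup of the paper.

```python
# Minimal sketch of combining evolution with lifetime learning (Baldwinian)
# on a toy fitness landscape. Task, hyper-parameters, and names are
# illustrative assumptions, not the paper's actual experiments.
import numpy as np

rng = np.random.default_rng(0)
TARGET = rng.normal(size=20)          # toy optimum the population must approach

def fitness(params):
    """Higher is better: negative distance to the toy target."""
    return -np.linalg.norm(params - TARGET)

def learn(params, steps=20, lr=0.1, learn_noise=0.05):
    """Lifetime learning: noisy stochastic hill climbing on a copy of the genotype."""
    best = params.copy()
    best_fit = fitness(best)
    for _ in range(steps):
        candidate = best + rng.normal(scale=lr, size=best.shape)
        # Noise injected into the learning signal.
        noisy_fit = fitness(candidate) + rng.normal(scale=learn_noise)
        if noisy_fit > best_fit:
            best, best_fit = candidate, noisy_fit
    return best_fit                    # only the post-learning fitness is returned

def evolve(pop_size=50, generations=100, mut_sigma=0.1, selection_noise=0.05):
    pop = rng.normal(size=(pop_size, TARGET.size))
    for _ in range(generations):
        # Fitness measured after learning, with extra noise at the selection stage.
        scores = np.array([learn(ind) + rng.normal(scale=selection_noise)
                           for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # truncation selection
        offspring = parents + rng.normal(scale=mut_sigma, size=parents.shape)
        pop = np.vstack([parents, offspring])
    return max(fitness(ind) for ind in pop)

if __name__ == "__main__":
    print("best evolved fitness:", evolve())
```

Because the learned parameters are evaluated but never written back into the genotype, any benefit of learning reaches the next generation only through selection; a Lamarckian variant would instead copy the learned parameters back into the population.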
Related papers
- Evolving choice hysteresis in reinforcement learning: comparing the adaptive value of positivity bias and gradual perseveration [0.0]
We show that positivity bias is evolutionarily stable in many situations, while the emergence of gradual perseveration is less systematic and robust.
Our results illustrate that biases can be adaptive and selected by evolution, in an environment-specific manner.
arXiv Detail & Related papers (2024-10-25T09:47:31Z)
- Cognitive Evolutionary Learning to Select Feature Interactions for Recommender Systems [59.117526206317116]
We show that CELL can adaptively evolve into different models for different tasks and data.
Experiments on four real-world datasets demonstrate that CELL significantly outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2024-05-29T02:35:23Z)
- Case Study of Novelty, Complexity, and Adaptation in a Multicellular System [0.0]
We track the co-evolution of novelty, complexity, and adaptation in a case study from the DISHTINY simulation system.
We describe ten qualitatively distinct multicellular morphologies, several of which exhibit asymmetrical growth and distinct life stages.
Our case study suggests a loose -- sometimes divergent -- relationship can exist among novelty, complexity, and adaptation.
arXiv Detail & Related papers (2024-05-12T10:13:36Z)
- DARLEI: Deep Accelerated Reinforcement Learning with Evolutionary Intelligence [77.78795329701367]
We present DARLEI, a framework that combines evolutionary algorithms with parallelized reinforcement learning.
We characterize DARLEI's performance under various conditions, revealing factors that impact the diversity of evolved morphologies.
We hope to extend DARLEI in future work to include interactions between diverse morphologies in richer environments.
arXiv Detail & Related papers (2023-12-08T16:51:10Z)
- Role of Morphogenetic Competency on Evolution [0.0]
In Evolutionary Computation, the inverse relationship (the impact of intelligence on evolution) is approached from the perspective of organism-level behaviour.
We focus on the intelligence of a minimal model of a system navigating anatomical morphospace.
We evolve populations of artificial embryos using a standard genetic algorithm in silico.
arXiv Detail & Related papers (2023-10-13T11:58:18Z)
- The Evolution theory of Learning: From Natural Selection to Reinforcement Learning [0.0]
Reinforcement learning is a powerful tool used in artificial intelligence to develop intelligent agents that learn from their environment.
In recent years, researchers have explored the connections between these two seemingly distinct fields, and have found compelling evidence that they are more closely related than previously thought.
This paper examines these connections and their implications, highlighting the potential for reinforcement learning principles to enhance our understanding of evolution and the role of feedback in evolutionary systems.
arXiv Detail & Related papers (2023-06-16T16:44:14Z)
- Causal Deep Learning [77.49632479298745]
Causality has the potential to transform the way we solve real-world problems.
But causality often requires crucial assumptions which cannot be tested in practice.
We propose a new way of thinking about causality -- we call this causal deep learning.
arXiv Detail & Related papers (2023-03-03T19:19:18Z)
- Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and the associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
Theoretical analysis shows that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z)
- The Introspective Agent: Interdependence of Strategy, Physiology, and Sensing for Embodied Agents [51.94554095091305]
We argue for an introspective agent, which considers its own abilities in the context of its environment.
Just as in nature, we hope to reframe strategy as one tool, among many, to succeed in an environment.
arXiv Detail & Related papers (2022-01-02T20:14:01Z)
- Embodied Intelligence via Learning and Evolution [92.26791530545479]
We show that environmental complexity fosters the evolution of morphological intelligence.
We also show that evolution rapidly selects morphologies that learn faster.
Our experiments suggest a mechanistic basis for both the Baldwin effect and the emergence of morphological intelligence.
arXiv Detail & Related papers (2021-02-03T18:58:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.