Learning Evolution via Optimization Knowledge Adaptation
- URL: http://arxiv.org/abs/2501.02200v1
- Date: Sat, 04 Jan 2025 05:35:21 GMT
- Title: Learning Evolution via Optimization Knowledge Adaptation
- Authors: Chao Wang, Licheng Jiao, Jiaxuan Zhao, Lingling Li, Fang Liu, Shuyuan Yang
- Abstract summary: Evolutionary algorithms (EAs) maintain populations through evolutionary operators to discover solutions for complex tasks.
We introduce an Optimization Knowledge Adaptation Evolutionary Model (OKAEM) that uses accumulated knowledge to enhance its optimization capabilities.
OKAEM exploits prior knowledge for significant performance gains across various knowledge transfer settings.
It is capable of emulating principles of natural selection and genetic recombination.
- Score: 50.280704114978384
- Abstract: Evolutionary algorithms (EAs) maintain populations through evolutionary operators to discover diverse solutions for complex tasks while gathering valuable knowledge, such as historical population data and fitness evaluations. However, traditional EAs face challenges in dynamically adapting to expanding knowledge bases, hindering the efficient exploitation of accumulated information and limiting adaptability to new situations. To address these issues, we introduce an Optimization Knowledge Adaptation Evolutionary Model (OKAEM), which features dynamic parameter adjustment using accumulated knowledge to enhance its optimization capabilities. OKAEM employs attention mechanisms to model the interactions among individuals, fitness landscapes, and genetic components separately, thereby parameterizing the evolutionary operators of selection, crossover, and mutation. These powerful learnable operators enable OKAEM to benefit from pre-learned extensive prior knowledge and self-tune with real-time evolutionary insights. Experimental results demonstrate that OKAEM: 1) exploits prior knowledge for significant performance gains across various knowledge transfer settings; 2) achieves competitive performance through self-tuning alone, even without prior knowledge; 3) outperforms state-of-the-art black-box baselines in a vision-language model tuning case; 4) can improve its optimization capabilities with growing knowledge; 5) is capable of emulating principles of natural selection and genetic recombination.
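The mechanism described above, parameterizing selection, crossover, and mutation as learnable operators, can be sketched in a few lines. In this toy version the selection weights come from a softmax over fitness instead of trained query/key attention, and the sphere objective, population size, and mutation settings are illustrative assumptions rather than details from the paper.

```python
import math
import random

random.seed(0)

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sphere(x):
    # Toy minimization objective.
    return sum(v * v for v in x)

DIM, POP, GENS = 5, 20, 100

def select_parents(pop, fits):
    # Softmax over negated fitness plays the role of a learned
    # attention score: fitter individuals get higher selection weight.
    probs = softmax([-f for f in fits])
    return random.choices(pop, weights=probs, k=2)

def crossover(a, b):
    # Uniform crossover; a learned operator would produce
    # per-gene mixing weights instead of a fair coin.
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def mutate(x, rate=0.1, sigma=0.3):
    return [v + random.gauss(0.0, sigma) if random.random() < rate else v
            for v in x]

pop = [[random.uniform(-5.0, 5.0) for _ in range(DIM)] for _ in range(POP)]
for _ in range(GENS):
    fits = [sphere(ind) for ind in pop]
    pop = [mutate(crossover(*select_parents(pop, fits))) for _ in range(POP)]

best = min(sphere(ind) for ind in pop)
```

In OKAEM itself, the selection, crossover, and mutation scores would instead be produced by attention modules whose weights are pre-trained on prior optimization runs and self-tuned with real-time evolutionary insights.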
Related papers
- ELENA: Epigenetic Learning through Evolved Neural Adaptation [0.0]
We present ELENA, a new evolutionary framework that incorporates epigenetic mechanisms to enhance the adaptability of the core evolutionary approach.
Three epigenetic tags assist with guiding solution space search, facilitating a more intelligent hypothesis landscape exploration.
Experiments indicate that ELENA achieves competitive results, often surpassing state-of-the-art methods on network optimization tasks.
arXiv Detail & Related papers (2025-01-10T06:04:32Z)
- EXAdam: The Power of Adaptive Cross-Moments [0.0]
This paper introduces EXAdam, a novel optimization algorithm that builds upon the widely-used Adam algorithm.
EXAdam incorporates three key enhancements: (1) new debiasing terms for improved moment estimation, (2) a gradient-based acceleration mechanism, and (3) a dynamic step size formula.
Empirical evaluations demonstrate EXAdam's superiority over Adam, achieving 48.07% faster convergence and yielding improvements of 4.6%, 4.13%, and 2.39% in training, validation, and testing accuracies.
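The abstract names EXAdam's three enhancements but not their formulas, so the sketch below is a plain, standard Adam step with comments marking where each enhancement would plug in; nothing here reproduces EXAdam's actual equations, and the 1/sqrt(t) learning-rate decay in the demo loop is our own addition for stability.

```python
import math

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One standard Adam update on parameter lists."""
    m = [b1 * mi + (1 - b1) * g for mi, g in zip(m, grad)]
    v = [b2 * vi + (1 - b2) * g * g for vi, g in zip(v, grad)]
    # (1) EXAdam would replace these classic bias corrections
    #     with its new debiasing terms.
    m_hat = [mi / (1 - b1 ** t) for mi in m]
    v_hat = [vi / (1 - b2 ** t) for vi in v]
    # (2) A gradient-based acceleration mechanism would adjust
    #     the update direction here.
    # (3) The constant lr would become a dynamic step-size formula.
    theta = [p - lr * mh / (math.sqrt(vh) + eps)
             for p, mh, vh in zip(theta, m_hat, v_hat)]
    return theta, m, v

# Demo: minimize f(x) = (x - 3)^2 starting from x = 0,
# with a 1/sqrt(t) decay so the demo settles near the minimum.
theta, m, v = [0.0], [0.0], [0.0]
for t in range(1, 5001):
    grad = [2.0 * (theta[0] - 3.0)]
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05 / math.sqrt(t))
```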
arXiv Detail & Related papers (2024-12-29T00:11:54Z)
- KBAlign: Efficient Self Adaptation on Specific Knowledge Bases [73.34893326181046]
Large language models (LLMs) usually rely on retrieval-augmented generation to exploit knowledge materials on the fly.
We propose KBAlign, an approach designed for efficient adaptation to downstream tasks involving knowledge bases.
Our method utilizes iterative training with self-annotated data such as Q&A pairs and revision suggestions, enabling the model to grasp the knowledge content efficiently.
arXiv Detail & Related papers (2024-11-22T08:21:03Z)
- Auto-selected Knowledge Adapters for Lifelong Person Re-identification [54.42307214981537]
Lifelong Person Re-Identification requires systems to continually learn from non-overlapping datasets across different times and locations.
Existing approaches, either rehearsal-free or rehearsal-based, still suffer from the problem of catastrophic forgetting.
We introduce a novel framework AdalReID, that adopts knowledge adapters and a parameter-free auto-selection mechanism for lifelong learning.
arXiv Detail & Related papers (2024-05-29T11:42:02Z)
- Cognitive Evolutionary Learning to Select Feature Interactions for Recommender Systems [59.117526206317116]
We show that CELL can adaptively evolve into different models for different tasks and data.
Experiments on four real-world datasets demonstrate that CELL significantly outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2024-05-29T02:35:23Z)
- Evolutionary Reinforcement Learning: A Systematic Review and Future Directions [18.631418642768132]
Evolutionary Reinforcement Learning (EvoRL) addresses the limitations of reinforcement learning and evolutionary algorithms (EAs) in complex problem-solving.
EvoRL integrates EAs and reinforcement learning, presenting a promising avenue for training intelligent agents.
This systematic review provides insights into the current state of EvoRL and offers a guide for advancing its capabilities in the ever-evolving landscape of artificial intelligence.
arXiv Detail & Related papers (2024-02-20T02:07:57Z)
- Discovering Attention-Based Genetic Algorithms via Meta-Black-Box Optimization [13.131971623143622]
We discover entirely new genetic algorithms in a data-driven fashion.
We parametrize selection and mutation rate adaptation as cross- and self-attention modules.
The learned algorithm can be applied to previously unseen optimization problems, search dimensions, and evaluation budgets.
arXiv Detail & Related papers (2023-04-08T12:14:15Z)
- Learning to Optimize for Reinforcement Learning [58.01132862590378]
Reinforcement learning (RL) is essentially different from supervised learning, and in practice, learned optimizers do not work well even in simple RL tasks.
The agent-gradient distribution is not independent and identically distributed (non-i.i.d.), leading to inefficient meta-training.
We show that, although only trained in toy tasks, our learned optimizer can generalize to unseen complex tasks in Brax.
arXiv Detail & Related papers (2023-02-03T00:11:02Z)
- Population-Based Evolution Optimizes a Meta-Learning Objective [0.6091702876917279]
We propose that meta-learning and adaptive evolvability optimize for high performance after a set of learning iterations.
We demonstrate this claim with a simple evolutionary algorithm, Population-Based Meta Learning.
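A minimal sketch of that claim: score each individual by its loss after a short inner learning loop, so evolution selects initial parameters that train well rather than ones that are already good. The quadratic objective, SGD inner loop, and truncation selection are illustrative assumptions, not the paper's Population-Based Meta Learning setup.

```python
import random

random.seed(0)

def loss(x):
    return (x - 3.0) ** 2

def grad(x):
    return 2.0 * (x - 3.0)

def fitness_after_learning(x0, steps=5, lr=0.1):
    # The meta-learning objective: performance *after* a few
    # learning iterations, not the raw quality of the initial value.
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return loss(x)

POP, GENS = 16, 40
pop = [random.uniform(-10.0, 10.0) for _ in range(POP)]
for _ in range(GENS):
    ranked = sorted(pop, key=fitness_after_learning)
    parents = ranked[: POP // 4]          # truncation selection
    pop = [p + random.gauss(0.0, 0.5)     # 4 mutated offspring per parent
           for p in parents for _ in range(4)]

best_init = min(pop, key=fitness_after_learning)
```

In this convex demo the best initialization coincides with the minimizer; the distinction matters on problems where the point with the lowest immediate loss is not the one that learns fastest.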
arXiv Detail & Related papers (2021-03-11T03:45:43Z)
- Emergent Hand Morphology and Control from Optimizing Robust Grasps of Diverse Objects [63.89096733478149]
We introduce a data-driven approach where effective hand designs naturally emerge for the purpose of grasping diverse objects.
We develop a novel Bayesian Optimization algorithm that efficiently co-designs the morphology and grasping skills jointly.
We demonstrate the effectiveness of our approach in discovering robust and cost-efficient hand morphologies for grasping novel objects.
arXiv Detail & Related papers (2020-12-22T17:52:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.