ELENA: Epigenetic Learning through Evolved Neural Adaptation
- URL: http://arxiv.org/abs/2501.05735v1
- Date: Fri, 10 Jan 2025 06:04:32 GMT
- Title: ELENA: Epigenetic Learning through Evolved Neural Adaptation
- Authors: Boris Kriuk, Keti Sulamanidze, Fedor Kriuk
- Abstract summary: We present ELENA, a new evolutionary framework that incorporates epigenetic mechanisms to enhance the adaptability of the core evolutionary approach.
Three epigenetic tags help guide the solution-space search, facilitating a more informed exploration of the hypothesis landscape.
Experiments indicate that ELENA achieves competitive results, often surpassing state-of-the-art methods on network optimization tasks.
- Score: 0.0
- Abstract: Despite the success of metaheuristic algorithms in solving complex network optimization problems, they often struggle with adaptation, especially in dynamic or high-dimensional search spaces. Traditional approaches can become stuck in local optima, leading to inefficient exploration and suboptimal solutions. Owing to this lack of adaptation, most widely accepted advanced algorithms perform well on either highly complex or smaller search spaces, but not both. To address these limitations, we present ELENA (Epigenetic Learning through Evolved Neural Adaptation), a new evolutionary framework that incorporates epigenetic mechanisms to enhance the adaptability of the core evolutionary approach. ELENA leverages a compressed representation of learning parameters that is improved dynamically through epigenetic tags serving as adaptive memory. Three epigenetic tags (mutation resistance, crossover affinity, and stability score) help guide the solution-space search, facilitating a more informed exploration of the hypothesis landscape. To assess the framework's performance, we conduct experiments on three critical network optimization problems: the Traveling Salesman Problem (TSP), the Vehicle Routing Problem (VRP), and the Maximum Clique Problem (MCP). Experiments indicate that ELENA achieves competitive results, often surpassing state-of-the-art methods on network optimization tasks.
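The abstract gives no implementation details, so the following is only a rough illustrative sketch of the stated idea: per-individual epigenetic tags (mutation resistance, crossover affinity, stability score) acting as adaptive memory that modulates mutation and parent selection inside a plain evolutionary loop on a TSP-style permutation encoding. All update rules, constants, and names below are assumptions, not the authors' implementation.

```python
# Illustrative sketch only: per-individual "epigenetic tags" steering a toy
# evolutionary search on a TSP-style permutation encoding. Tag names follow
# the abstract; every update rule and constant below is an assumption.
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def make_individual(n):
    tour = random.sample(range(n), n)
    # epigenetic tags: adaptive memory attached to each individual
    return {"tour": tour, "mut_resist": 0.5, "cross_aff": 0.5, "stability": 0.0}

def mutate(ind):
    # mutation resistance lowers the chance that this individual is perturbed
    if random.random() > ind["mut_resist"]:
        i, j = random.sample(range(len(ind["tour"])), 2)
        ind["tour"][i], ind["tour"][j] = ind["tour"][j], ind["tour"][i]

def crossover(p1, p2):
    # order crossover (OX); offspring inherits averaged tags from its parents
    n = len(p1["tour"])
    a, b = sorted(random.sample(range(n), 2))
    seg = p1["tour"][a:b]
    rest = [c for c in p2["tour"] if c not in seg]
    child = make_individual(n)
    child["tour"] = rest[:a] + seg + rest[a:]
    for k in ("mut_resist", "cross_aff", "stability"):
        child[k] = 0.5 * (p1[k] + p2[k])
    return child

def evolve(dist, pop_size=30, generations=200):
    n = len(dist)
    pop = [make_individual(n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: tour_length(ind["tour"], dist))
        elite = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            p1 = random.choice(elite)
            # crossover affinity biases who is picked as the second parent
            p2 = max(random.sample(elite, 3), key=lambda ind: ind["cross_aff"])
            child = crossover(p1, p2)
            mutate(child)
            # tag feedback: reinforce tags of children that beat their first parent
            if tour_length(child["tour"], dist) < tour_length(p1["tour"], dist):
                child["stability"] = min(1.0, child["stability"] + 0.1)
                child["mut_resist"] = min(0.9, child["mut_resist"] + 0.05)
            else:
                child["mut_resist"] = max(0.1, child["mut_resist"] - 0.05)
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda ind: tour_length(ind["tour"], dist))
```

With any symmetric distance matrix `dist`, `evolve(dist)` returns the best individual found; only the encoding and the fitness function would change for a VRP or MCP variant.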
Related papers
- Learning Evolution via Optimization Knowledge Adaptation [50.280704114978384]
Evolutionary algorithms (EAs) maintain populations through evolutionary operators to discover solutions for complex tasks.
We introduce an Optimization Knowledge Adaptation Evolutionary Model (OKAEM) to enhance these optimization capabilities.
OKAEM exploits prior knowledge for significant performance gains across various knowledge transfer settings.
It is capable of emulating principles of natural selection and genetic recombination.
arXiv Detail & Related papers (2025-01-04T05:35:21Z) - Enhancing CNN Classification with Lamarckian Memetic Algorithms and Local Search [0.0]
We propose a novel approach integrating a two-stage training technique with population-based optimization algorithms incorporating local search capabilities.
Our experiments demonstrate that the proposed method outperforms state-of-the-art gradient-based techniques.
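The defining Lamarckian step is that local-search improvements are written back into the genome before selection. The toy sketch below shows that pattern on a simple continuous benchmark; it is my own illustration, not the paper's two-stage CNN training pipeline.

```python
# Toy illustration of a Lamarckian memetic algorithm: offspring are refined by
# a local search and the improved solution replaces the genome (Lamarckian
# write-back). Generic sketch only, not the paper's CNN training method.
import random

def sphere(x):                       # simple benchmark fitness (minimize)
    return sum(v * v for v in x)

def local_search(x, fitness, steps=20, step=0.1):
    best = x[:]
    for _ in range(steps):
        cand = [v + random.uniform(-step, step) for v in best]
        if fitness(cand) < fitness(best):
            best = cand              # hill climbing
    return best

def memetic(fitness, dim=5, pop_size=20, generations=50):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)
            child = [(x + y) / 2 + random.gauss(0, 0.3) for x, y in zip(a, b)]
            # Lamarckian step: the locally improved vector becomes the genome
            offspring.append(local_search(child, fitness))
        pop = sorted(pop + offspring, key=fitness)[:pop_size]
    return pop[0]

print(sphere(memetic(sphere)))
```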
arXiv Detail & Related papers (2024-10-26T17:31:15Z) - An Efficient Learning-based Solver Comparable to Metaheuristics for the Capacitated Arc Routing Problem [67.92544792239086]
We introduce an NN-based solver to significantly narrow the gap with advanced metaheuristics.
First, we propose a direction-aware facilitating attention model (DaAM) to incorporate directionality into the embedding process.
Second, we design a supervised reinforcement learning scheme that involves supervised pre-training to establish a robust initial policy.
arXiv Detail & Related papers (2024-03-11T02:17:42Z) - Incorporating Neuro-Inspired Adaptability for Continual Learning in Artificial Intelligence [59.11038175596807]
Continual learning aims to empower artificial intelligence with strong adaptability to the real world.
Existing advances mainly focus on preserving memory stability to overcome catastrophic forgetting.
We propose a generic approach that appropriately attenuates old memories in parameter distributions to improve learning plasticity.
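The summary gives only the high-level idea; as a rough, generic analogue (an assumption on my part, not the paper's actual mechanism), one can picture attenuating old memories as decaying the per-task penalty in an EWC-style regularizer, so that older tasks constrain new learning less:

```python
# Rough, generic analogue (not the paper's method): attenuating the influence
# of old memories by decaying the per-task penalty in an EWC-style regularizer.
import numpy as np

def attenuated_penalty(theta, old_tasks, strength=1.0, decay=0.8):
    """old_tasks: list of (theta_star, fisher) pairs, ordered oldest to newest."""
    penalty = 0.0
    for age, (theta_star, fisher) in enumerate(reversed(old_tasks)):
        # older tasks (larger age) are attenuated more, freeing plasticity
        penalty += (decay ** age) * np.sum(fisher * (theta - theta_star) ** 2)
    return strength * penalty

# Example: the total loss for the current task would be task_loss + penalty.
theta = np.zeros(4)
old = [(np.ones(4), np.full(4, 0.5)), (np.full(4, 0.2), np.full(4, 2.0))]
print(attenuated_penalty(theta, old))
```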
arXiv Detail & Related papers (2023-08-29T02:43:58Z) - Multiobjective Evolutionary Pruning of Deep Neural Networks with Transfer Learning for improving their Performance and Robustness [15.29595828816055]
This work proposes MO-EvoPruneDeepTL, a multi-objective evolutionary pruning algorithm.
We use Transfer Learning to adapt the last layers of Deep Neural Networks, by replacing them with sparse layers evolved by a genetic algorithm.
Experiments show that our proposal achieves promising results on all the objectives, and direct relations among them are presented.
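A minimal sketch of the stated setup: a genetic algorithm evolves a binary mask over the units of the transferred final layer, scored on two objectives (a performance proxy and sparsity) and filtered by Pareto dominance. The `evaluate_accuracy` stand-in and all constants are placeholders; the real method trains and evaluates the sparse layer on the target dataset.

```python
# Minimal sketch (not the authors' code) of evolving a binary pruning mask for
# a transferred final layer, with two objectives: a task-performance proxy and
# sparsity. "evaluate_accuracy" is a placeholder for real training/evaluation.
import random

N_UNITS = 64

def evaluate_accuracy(mask):
    # Placeholder with diminishing returns in the number of active units.
    return 1.0 - 0.9 * (0.95 ** sum(mask))

def objectives(mask):
    acc = evaluate_accuracy(mask)
    sparsity = 1.0 - sum(mask) / len(mask)
    return acc, sparsity                       # maximize both

def dominates(a, b):
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(pop):
    scored = [(m, objectives(m)) for m in pop]
    return [m for m, s in scored
            if not any(dominates(s2, s) for _, s2 in scored)]

def evolve(pop_size=30, generations=60, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(N_UNITS)] for _ in range(pop_size)]
    for _ in range(generations):
        children = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)
            cut = random.randrange(1, N_UNITS)           # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        # keep the non-dominated set, topped up with random survivors
        pool = pop + children
        pop = (pareto_front(pool) + random.sample(pool, pop_size))[:pop_size]
    return pareto_front(pop)

for mask in evolve()[:5]:
    print(objectives(mask))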
arXiv Detail & Related papers (2023-02-20T19:33:38Z) - Simulation-guided Beam Search for Neural Combinatorial Optimization [13.072343634530883]
We propose simulation-guided beam search (SGBS) for neural combinatorial optimization problems.
We hybridize SGBS with efficient active search (EAS), where SGBS enhances the quality of solutions backpropagated in EAS.
We evaluate our methods on well-known CO benchmarks and show that SGBS significantly improves the quality of the solutions found under reasonable assumptions.
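The guiding idea of SGBS is that each beam expansion is scored by simulating (rolling out) the partial solution to completion. The sketch below applies that pattern to a toy TSP with a nearest-neighbour rollout standing in for the learned policy the paper uses; it illustrates the search pattern only, not the paper's model.

```python
# Sketch of simulation-guided beam search on a toy TSP: each partial tour in
# the beam is expanded, and every child is ranked by a greedy rollout to a
# complete tour. A nearest-neighbour heuristic stands in for a learned policy.
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def rollout(tour, cities):
    # greedy completion: repeatedly append the nearest unvisited city
    tour = list(tour)
    remaining = set(range(len(cities))) - set(tour)
    while remaining:
        nxt = min(remaining, key=lambda c: dist(cities[tour[-1]], cities[c]))
        tour.append(nxt)
        remaining.remove(nxt)
    return sum(dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def sgbs(cities, beam_width=4):
    beam = [(0,)]                              # partial tours, all start at city 0
    for _ in range(len(cities) - 1):
        children = []
        for tour in beam:
            for c in range(len(cities)):
                if c not in tour:
                    children.append(tour + (c,))
        # simulation-guided pruning: rank children by their rollout cost
        children.sort(key=lambda t: rollout(t, cities))
        beam = children[:beam_width]
    return beam[0], rollout(beam[0], cities)

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(10)]
print(sgbs(cities))
```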
arXiv Detail & Related papers (2022-07-13T13:34:35Z) - SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients [99.13839450032408]
It is desirable to design a universal framework for adaptive algorithms to solve general problems.
In particular, our novel framework provides convergence analysis support for adaptive methods under the nonconvex setting.
arXiv Detail & Related papers (2021-06-15T15:16:28Z) - Meta-Learning with Neural Tangent Kernels [58.06951624702086]
We propose the first meta-learning paradigm in the Reproducing Kernel Hilbert Space (RKHS) induced by the meta-model's Neural Tangent Kernel (NTK).
Within this paradigm, we introduce two meta-learning algorithms, which no longer need a sub-optimal iterative inner-loop adaptation as in the MAML framework.
We achieve this goal by 1) replacing the adaptation with a fast-adaptive regularizer in the RKHS; and 2) solving the adaptation analytically based on the NTK theory.
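The analytic adaptation can be pictured as kernel regression in the NTK-induced RKHS. The form below is a generic sketch in my own notation (with a ridge term $\lambda$), not necessarily the paper's exact update:

```latex
% Generic kernel-regression form of analytic adaptation in the NTK-induced RKHS.
% Notation is mine: support set (X, y), meta-model f_\theta, NTK matrix
% K_{ij} = k_\theta(x_i, x_j), ridge regulariser \lambda.
\[
  f^{\text{adapted}}(x)
  = f_\theta(x) + k_\theta(x, X)\,\bigl(K + \lambda I\bigr)^{-1}\bigl(y - f_\theta(X)\bigr)
\]
% i.e. the iterative inner-loop adaptation of MAML is replaced by a closed-form
% solve in the RKHS, so no gradient-based fine-tuning is needed at meta-test time.
```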
arXiv Detail & Related papers (2021-02-07T20:53:23Z) - Evolutionary Gait Transfer of Multi-Legged Robots in Complex Terrains [14.787379075870383]
This paper proposes a transfer learning-based evolutionary framework for gait optimization, named Tr-GO.
The idea is to initialize a high-quality population by using the technique of transfer learning, so that any kind of population-based optimization algorithm can be seamlessly integrated into this framework.
The experimental results show the effectiveness of the proposed framework for the gait optimization problem based on three multi-objective evolutionary algorithms.
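Stripped of the gait encoding, robot simulation, and the multi-objective EAs, the transfer step reduces to seeding the initial population from source-task solutions; the sketch below shows that pattern only, with hypothetical parameter values.

```python
# Minimal sketch of the transfer-initialized population idea (gait encoding,
# robot simulation, and the multi-objective EAs are omitted): solutions from a
# source task seed the population that a target-task optimizer starts from.
import random

def init_population(source_solutions, pop_size, dim, sigma=0.1):
    """Seed part of the population from source-task solutions (plus noise),
    fill the rest randomly; any population-based optimizer can start from it."""
    pop = []
    for s in source_solutions[:pop_size // 2]:
        pop.append([v + random.gauss(0, sigma) for v in s])   # transferred seeds
    while len(pop) < pop_size:
        pop.append([random.uniform(-1, 1) for _ in range(dim)])
    return pop

# Hypothetical usage: gait parameters tuned on flat ground warm-start the
# search on rough terrain before being handed to the optimizer of choice.
source = [[0.2, -0.4, 0.7, 0.1]]
population = init_population(source, pop_size=10, dim=4)
print(len(population), population[0])
```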
arXiv Detail & Related papers (2020-12-24T16:41:36Z) - EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm for Constrained Global Optimization [68.8204255655161]
EOS is a global optimization algorithm for constrained and unconstrained problems of real-valued variables.
It implements a number of improvements to the well-known Differential Evolution (DE) algorithm.
Results prove that EOS is capable of achieving increased performance compared to state-of-the-art single-population self-adaptive DE algorithms.
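For reference, a compact self-adaptive Differential Evolution loop (jDE-style per-individual F and CR) is sketched below; it illustrates the single-population DE baseline that EOS improves on, not EOS's parallel multi-population machinery or its constraint handling.

```python
# Compact self-adaptive Differential Evolution (jDE-style per-individual F and
# CR) as a single-population reference point; EOS's parallel, multi-population
# improvements and constraint handling are not reproduced here.
import random

def de(fitness, dim, bounds=(-5.0, 5.0), pop_size=30, generations=200):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    F = [0.5] * pop_size           # per-individual scale factors
    CR = [0.9] * pop_size          # per-individual crossover rates
    fit = [fitness(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # jDE-style self-adaptation: occasionally resample F and CR
            Fi = random.uniform(0.1, 1.0) if random.random() < 0.1 else F[i]
            CRi = random.random() if random.random() < 0.1 else CR[i]
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            jrand = random.randrange(dim)
            trial = [
                min(hi, max(lo, pop[a][j] + Fi * (pop[b][j] - pop[c][j])))
                if (random.random() < CRi or j == jrand) else pop[i][j]
                for j in range(dim)
            ]
            f_trial = fitness(trial)
            if f_trial <= fit[i]:              # greedy selection keeps improvements
                pop[i], fit[i], F[i], CR[i] = trial, f_trial, Fi, CRi
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

print(de(lambda x: sum(v * v for v in x), dim=10))
```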
arXiv Detail & Related papers (2020-07-09T10:19:22Z) - Evolving Inborn Knowledge For Fast Adaptation in Dynamic POMDP Problems [5.23587935428994]
In this paper, we exploit the highly adaptive nature of neuromodulated neural networks to evolve a controller that uses the latent space of an autoencoder in a POMDP.
The integration of inborn knowledge and online plasticity enabled fast adaptation and better performance in comparison to some non-evolutionary meta-reinforcement learning algorithms.
arXiv Detail & Related papers (2020-04-27T14:55:08Z)