Evolutionary Dynamic Optimization and Machine Learning
- URL: http://arxiv.org/abs/2310.08748v3
- Date: Wed, 14 Feb 2024 12:28:18 GMT
- Title: Evolutionary Dynamic Optimization and Machine Learning
- Authors: Abdennour Boulesnane
- Abstract summary: Evolutionary Computation (EC) has emerged as a powerful field of Artificial Intelligence, inspired by nature's mechanisms of gradual development. However, EC approaches often face challenges such as stagnation, diversity loss, and premature convergence.
To overcome these limitations, researchers have integrated learning algorithms with evolutionary techniques.
This integration harnesses the valuable data generated by EC algorithms during iterative searches, providing insights into the search space and population dynamics.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Evolutionary Computation (EC) has emerged as a powerful field of Artificial
Intelligence, inspired by nature's mechanisms of gradual development. However,
EC approaches often face challenges such as stagnation, diversity loss,
computational complexity, population initialization, and premature convergence.
To overcome these limitations, researchers have integrated learning algorithms
with evolutionary techniques. This integration harnesses the valuable data
generated by EC algorithms during iterative searches, providing insights into
the search space and population dynamics. Similarly, the relationship between
evolutionary algorithms and Machine Learning (ML) is reciprocal, as EC methods
offer exceptional opportunities for optimizing complex ML tasks characterized
by noisy, inaccurate, and dynamic objective functions. These hybrid techniques,
known as Evolutionary Machine Learning (EML), have been applied at various
stages of the ML process. EC techniques play a vital role in tasks such as data
balancing, feature selection, and model training optimization. Moreover, ML
tasks often require dynamic optimization, for which Evolutionary Dynamic
Optimization (EDO) is valuable. This paper presents the first comprehensive
exploration of reciprocal integration between EDO and ML. The study aims to
stimulate interest in the evolutionary learning community and inspire
innovative contributions in this domain.
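To make the dynamic-optimization setting concrete, the following is a minimal, illustrative sketch rather than a method from the paper: a (1+1) evolution strategy tracks a one-dimensional optimum that jumps periodically, and a simple hypermutation heuristic fires when a sudden fitness drop signals an environment change. The function names, the piecewise-constant target, and all parameter values are assumptions chosen for exposition.

```python
import random

def moving_optimum(g, shift_every=200):
    # Piecewise-constant 1-D optimum that jumps every `shift_every` generations.
    return [1.0, -2.0, 3.0][g // shift_every]

def edo_one_plus_one(generations=600, shift_every=200, seed=0):
    rng = random.Random(seed)
    x = rng.uniform(-5.0, 5.0)            # single-individual "population"
    prev_fit = None
    boost = 0                             # remaining hypermutation generations
    for g in range(generations):
        target = moving_optimum(g, shift_every)
        fit = -(x - target) ** 2          # fitness: higher is better
        # Change detection: a sudden fitness drop signals a shifted optimum.
        if prev_fit is not None and fit < prev_fit - 1.0:
            boost = 20                    # hypermutate briefly to regain diversity
        sigma = 2.0 if boost > 0 else 0.1
        boost = max(0, boost - 1)
        child = x + rng.gauss(0.0, sigma) # Gaussian mutation
        if -(child - target) ** 2 >= fit: # greedy (1+1) replacement
            x = child
        prev_fit = -(x - target) ** 2
    return x
```

Without the hypermutation branch, the small mutation step that is ideal for converging within one environment becomes a liability the moment the optimum jumps, which is exactly the diversity-loss failure mode the abstract describes.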
Related papers
- Evolutionary Large Language Model for Automated Feature Transformation [25.956740176321897]
We propose an evolutionary Large Language Model (LLM) framework for automated feature transformation.
This framework consists of two parts: 1) constructing a multi-population database through an RL data collector, and 2) leveraging the LLM's ability in sequence understanding.
We empirically demonstrate the effectiveness and generality of our proposed method.
arXiv Detail & Related papers (2024-05-25T12:27:21Z)
- EmoDM: A Diffusion Model for Evolutionary Multi-objective Optimization [22.374325061635112]
This work proposes for the first time a diffusion model that can learn to perform evolutionary multi-objective search, called EmoDM.
EmoDM can generate a set of non-dominated solutions for a new MOP by means of its reverse diffusion without further evolutionary search.
Experimental results demonstrate the competitiveness of EmoDM in terms of both the search performance and computational efficiency.
arXiv Detail & Related papers (2024-01-29T07:41:44Z)
- When large language models meet evolutionary algorithms [48.213640761641926]
Pre-trained large language models (LLMs) have powerful capabilities for generating creative natural text.
Evolutionary algorithms (EAs) can discover diverse solutions to complex real-world problems.
Motivated by the collective, directional character shared by text generation and evolution, this paper illustrates the parallels between LLMs and EAs.
arXiv Detail & Related papers (2024-01-19T05:58:30Z)
- Machine Learning Insides OptVerse AI Solver: Design Principles and Applications [74.67495900436728]
We present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI solver.
We showcase our methods for generating complex SAT and MILP instances, utilizing generative models that mirror the multifaceted structures of real-world problems.
We detail the incorporation of state-of-the-art parameter tuning algorithms which markedly elevate solver performance.
arXiv Detail & Related papers (2024-01-11T15:02:15Z)
- Reinforcement Learning-assisted Evolutionary Algorithm: A Survey and Research Opportunities [63.258517066104446]
Reinforcement learning (RL), integrated as a component of evolutionary algorithms (EAs), has demonstrated superior performance in recent years.
We discuss the RL-EA integration method, the RL-assisted strategy adopted by RL-EA, and its applications according to the existing literature.
In the applications of RL-EA section, we also demonstrate the excellent performance of RL-EA on several benchmarks and a range of public datasets.
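As a toy illustration of the RL-EA pattern, and not any specific method from the survey, the sketch below uses an epsilon-greedy bandit as a minimal stand-in for the RL component: it learns online which of two hypothetical mutation operators pays off for a (1+1)-EA on the OneMax benchmark. The operator names, reward design, and parameters are all assumptions for exposition.

```python
import random

def flip_one(genome, rng):
    # Mutation operator A: flip a single random bit.
    child = genome[:]
    child[rng.randrange(len(child))] ^= 1
    return child

def flip_three(genome, rng):
    # Mutation operator B: flip three distinct random bits.
    child = genome[:]
    for i in rng.sample(range(len(child)), 3):
        child[i] ^= 1
    return child

def rl_assisted_ea(bits=30, generations=500, eps=0.2, seed=1):
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(bits)]
    fit = sum(parent)                    # OneMax: maximize the number of ones
    ops = [flip_one, flip_three]
    value = [0.0, 0.0]                   # running mean reward per operator
    count = [0, 0]
    for _ in range(generations):
        # Epsilon-greedy operator selection: the minimal "RL" component.
        a = rng.randrange(2) if rng.random() < eps else value.index(max(value))
        child = ops[a](parent, rng)
        child_fit = sum(child)
        reward = max(0, child_fit - fit) # reward = fitness improvement
        count[a] += 1
        value[a] += (reward - value[a]) / count[a]
        if child_fit >= fit:             # elitist (1+1) replacement
            parent, fit = child, child_fit
    return fit
```

The design choice worth noticing is that the bandit's reward is tied directly to fitness gain, so the preferred operator can shift over a run: multi-bit flips help early when many bits are wrong, while single-bit flips dominate near the optimum.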
arXiv Detail & Related papers (2023-08-25T15:06:05Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- A Survey on Learnable Evolutionary Algorithms for Scalable Multiobjective Optimization [0.0]
Multiobjective evolutionary algorithms (MOEAs) have been adopted to solve various multiobjective optimization problems (MOPs).
However, these progressively improved MOEAs are not necessarily equipped with sophisticated, scalable, and learnable problem-solving strategies.
Different scenarios demand divergent thinking to design new, powerful MOEAs that solve them effectively.
Research into learnable MOEAs, which arm themselves with machine learning techniques to scale up to large MOPs, has received extensive attention in the field of evolutionary computation.
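To ground the multiobjective vocabulary, here is a small illustrative helper (written for exposition, not taken from the surveyed work): extracting the non-dominated (Pareto) set under minimization, the core comparison that MOEAs perform repeatedly during selection.

```python
def dominates(q, p):
    # q dominates p (minimization): no worse in every objective and
    # strictly better in at least one.
    return (all(qi <= pi for qi, pi in zip(q, p))
            and any(qi < pi for qi, pi in zip(q, p)))

def pareto_front(points):
    # Return the points not dominated by any other point, in input order.
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For example, with two objectives to minimize, `pareto_front([(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)])` keeps only the trade-off points `(1, 5)`, `(2, 2)`, and `(5, 1)`; this naive scan is quadratic in the population size, which is one reason scalable MOEAs invest in smarter sorting.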
arXiv Detail & Related papers (2022-06-23T08:16:01Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Meta-Learning with Neural Tangent Kernels [58.06951624702086]
We propose the first meta-learning paradigm in the Reproducing Kernel Hilbert Space (RKHS) induced by the meta-model's Neural Tangent Kernel (NTK).
Within this paradigm, we introduce two meta-learning algorithms, which no longer need a sub-optimal iterative inner-loop adaptation as in the MAML framework.
We achieve this goal by 1) replacing the adaptation with a fast-adaptive regularizer in the RKHS; and 2) solving the adaptation analytically based on the NTK theory.
arXiv Detail & Related papers (2021-02-07T20:53:23Z)
- Evolving Inborn Knowledge For Fast Adaptation in Dynamic POMDP Problems [5.23587935428994]
In this paper, we exploit the highly adaptive nature of neuromodulated neural networks to evolve a controller that uses the latent space of an autoencoder in a POMDP.
The integration of inborn knowledge and online plasticity enabled fast adaptation and better performance in comparison to some non-evolutionary meta-reinforcement learning algorithms.
arXiv Detail & Related papers (2020-04-27T14:55:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.