A Survey on Learnable Evolutionary Algorithms for Scalable
Multiobjective Optimization
- URL: http://arxiv.org/abs/2206.11526v1
- Date: Thu, 23 Jun 2022 08:16:01 GMT
- Title: A Survey on Learnable Evolutionary Algorithms for Scalable
Multiobjective Optimization
- Authors: Songbai Liu
- Abstract summary: Multiobjective evolutionary algorithms (MOEAs) have been adopted to solve various multiobjective optimization problems (MOPs).
However, these progressively improved MOEAs are not necessarily equipped with scalable, learnable problem-solving strategies.
Each scaling-up scenario demands divergent thinking to design new, powerful MOEAs that solve it effectively.
Research into learnable MOEAs that arm themselves with machine learning techniques for scaling-up MOPs has received extensive attention in the field of evolutionary computation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent decades have witnessed remarkable advances in multiobjective
evolutionary algorithms (MOEAs), which have been adopted to solve various
multiobjective optimization problems (MOPs). However, these progressively
improved MOEAs are not necessarily equipped with scalable, learnable
problem-solving strategies able to cope with the new and grand challenges posed
by scaling-up MOPs of continuously increasing complexity or scale, mainly
involving expensive function evaluations, many objectives, large-scale search
spaces, time-varying environments, and multitasking. Each of these scenarios
demands divergent thinking to design new, powerful MOEAs that solve it
effectively. In this context, research into learnable MOEAs that arm themselves
with machine learning techniques for scaling-up MOPs has received extensive
attention in the field of evolutionary computation. In this paper, we begin
with a taxonomy of scalable MOPs and learnable MOEAs, followed by an analysis
of the challenges that scaling-up MOPs pose to traditional MOEAs. We then
systematically review recent advances of learnable MOEAs in solving various
scaling-up MOPs, focusing primarily on three attractive and promising
directions (i.e., learnable evolutionary discriminators for environmental
selection, learnable evolutionary generators for reproduction, and learnable
evolutionary transfer for sharing or reusing optimization experience between
different problem domains). The insights into learnable MOEAs offered
throughout this paper serve readers as a reference to the general track of
efforts in this field.
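The first of the three directions can be made concrete with a small sketch. The Python snippet below is illustrative only (the ZDT1-like toy problem, the 1-NN surrogate, and all names are assumptions, not code from the survey): it shows the "learnable evolutionary discriminator" idea, in which a cheap learned model pre-screens offspring so that only promising candidates receive an expensive function evaluation.

```python
import random

def evaluate(x):
    # Expensive bi-objective toy problem (ZDT1-like stand-in).
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    return (x[0], g * (1.0 - (x[0] / g) ** 0.5))

def dominates(a, b):
    # Pareto dominance: a is no worse in all objectives and better in one.
    return all(u <= v for u, v in zip(a, b)) and any(u < v for u, v in zip(a, b))

def surrogate_predict(x, archive):
    # 1-NN surrogate: reuse objectives of the closest archived (x, f) pair.
    # A learnable discriminator would replace this with a trained model.
    nearest = min(archive, key=lambda xf: sum((a - b) ** 2 for a, b in zip(x, xf[0])))
    return nearest[1]

random.seed(0)
dim, pop_size = 5, 20
pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
archive = [(x, evaluate(x)) for x in pop]  # truly evaluated solutions so far

# Mutate parents, then pre-screen offspring with the surrogate: only
# candidates whose predicted objectives are nondominated in the archive
# would be passed on to a real (expensive) evaluation.
offspring = [[min(max(xi + random.gauss(0, 0.1), 0.0), 1.0) for xi in x] for x in pop]
promising = [x for x in offspring
             if not any(dominates(f, surrogate_predict(x, archive)) for _, f in archive)]
print(f"would evaluate {len(promising)} of {len(offspring)} offspring for real")
```

The design point is that the discriminator only has to rank candidates well enough to filter them; exact objective values are still obtained for the survivors.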
Related papers
- On-the-fly Modulation for Balanced Multimodal Learning [53.616094855778954]
Multimodal learning is expected to boost model performance by integrating information from different modalities.
The widely-used joint training strategy leads to imbalanced and under-optimized uni-modal representations.
We propose On-the-fly Prediction Modulation (OPM) and On-the-fly Gradient Modulation (OGM) strategies to modulate the optimization of each modality.
arXiv Detail & Related papers (2024-10-15T13:15:50Z) - Coding for Intelligence from the Perspective of Category [66.14012258680992]
Coding targets the compression and reconstruction of data, and is increasingly connected with intelligence.
Recent trends demonstrate the potential homogeneity of these two fields.
We propose a novel problem of Coding for Intelligence from the category theory view.
arXiv Detail & Related papers (2024-07-01T07:05:44Z) - A Survey of Decomposition-Based Evolutionary Multi-Objective Optimization: Part I-Past and Future [5.074835777266041]
Decomposition was not properly studied in the context of evolutionary multi-objective optimization until the development of MOEA/D.
MOEA/D, the representative decomposition-based EMO algorithm, is used to review the up-to-date development in this area.
In the first part, we present a comprehensive survey of the development of MOEA/D from its origin to the current state-of-the-art approaches.
In the final part, we shed some light on emerging directions for future developments.
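Decomposition itself is easy to illustrate. The sketch below is an assumption-laden illustration rather than code from that survey: it applies the Tchebycheff scalarization commonly used by MOEA/D, where each weight vector defines one single-objective subproblem and each subproblem selects the candidate objective vector minimizing it.

```python
def tchebycheff(f, w, z_star):
    # Tchebycheff subproblem: g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|.
    return max(wi * abs(fi - zi) for wi, fi, zi in zip(w, f, z_star))

# Evenly spread weight vectors for a bi-objective problem.
n_sub = 5
weights = [(i / (n_sub - 1), 1 - i / (n_sub - 1)) for i in range(n_sub)]
z_star = (0.0, 0.0)  # ideal point, assumed known here for illustration

# Hypothetical objective vectors of three candidate solutions.
candidates = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]
for w in weights:
    best = min(candidates, key=lambda f: tchebycheff(f, w, z_star))
    print(w, "->", best)
```

Extreme weight vectors favor the candidates that excel on one objective, while balanced weights pick the compromise solution, which is exactly how decomposition spreads subproblems along the Pareto front.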
arXiv Detail & Related papers (2024-04-22T20:34:46Z) - A Survey on Self-Evolution of Large Language Models [116.54238664264928]
Large language models (LLMs) have significantly advanced in various fields and intelligent agent applications.
Self-evolution approaches that enable LLMs to autonomously acquire, refine, and learn from experiences generated by the model itself are rapidly growing.
arXiv Detail & Related papers (2024-04-22T17:43:23Z) - On the Challenges and Opportunities in Generative AI [135.2754367149689]
We argue that current large-scale generative AI models do not sufficiently address several fundamental issues that hinder their widespread adoption across domains.
In this work, we aim to identify key unresolved challenges in modern generative AI paradigms that should be tackled to further enhance their capabilities, versatility, and reliability.
arXiv Detail & Related papers (2024-02-28T15:19:33Z) - When large language models meet evolutionary algorithms [48.213640761641926]
Pre-trained large language models (LLMs) have powerful capabilities for generating creative natural text.
Evolutionary algorithms (EAs) can discover diverse solutions to complex real-world problems.
Motivated by the common collective and directional nature of text generation and evolution, this paper illustrates the parallels between LLMs and EAs.
arXiv Detail & Related papers (2024-01-19T05:58:30Z) - Pre-Evolved Model for Complex Multi-objective Optimization Problems [3.784829029016233]
Multi-objective optimization problems (MOPs) necessitate the simultaneous optimization of multiple objectives.
This paper proposes the concept of pre-evolving for MOEAs to generate high-quality populations for diverse complex MOPs.
arXiv Detail & Related papers (2023-12-11T05:16:58Z) - MinT: Boosting Generalization in Mathematical Reasoning via Multi-View
Fine-Tuning [53.90744622542961]
Reasoning in mathematical domains remains a significant challenge for small language models (LMs).
We introduce a new method that exploits existing mathematical problem datasets with diverse annotation styles.
Experimental results show that our strategy enables a LLaMA-7B model to outperform prior approaches.
arXiv Detail & Related papers (2023-07-16T05:41:53Z) - Enhanced Sampling with Machine Learning: A Review [0.0]
Molecular dynamics (MD) enables the study of physical systems with excellent resolution but suffers from severe time-scale limitations.
To address this, enhanced sampling methods have been developed to improve the exploration of configurational space.
In recent years, integration of machine learning (ML) techniques in different domains has shown promise.
This review explores the merging of ML and enhanced MD by presenting different shared viewpoints.
arXiv Detail & Related papers (2023-06-15T13:13:56Z) - Decomposition Multi-Objective Evolutionary Optimization: From
State-of-the-Art to Future Opportunities [5.760976250387322]
We present a survey of the development of MOEA/D from its origin to the current state-of-the-art approaches.
Selected major developments of MOEA/D are reviewed according to its core design components.
We shed some light on emerging directions for future developments.
arXiv Detail & Related papers (2021-08-21T22:21:44Z) - Hybrid Adaptive Evolutionary Algorithm for Multi-objective Optimization [0.0]
This paper proposes a new multi-objective algorithm, MoHAEA, as an extension of the Hybrid Adaptive Evolutionary Algorithm (HAEA).
MoHAEA is compared with four state-of-the-art MOEAs, namely MOEA/D, pa$\lambda$-MOEA/D, MOEA/D-AWA, and NSGA-II.
arXiv Detail & Related papers (2020-04-29T02:16:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.