Evolutionary Generative Optimization: Towards Fully Data-Driven Evolutionary Optimization via Generative Learning
- URL: http://arxiv.org/abs/2508.00380v1
- Date: Fri, 01 Aug 2025 07:17:57 GMT
- Title: Evolutionary Generative Optimization: Towards Fully Data-Driven Evolutionary Optimization via Generative Learning
- Authors: Kebin Sun, Tao Jiang, Ran Cheng, Yaochu Jin, Kay Chen Tan
- Abstract summary: We propose a fully data-driven framework empowered by generative learning. EvoGO streamlines the evolutionary optimization process into three stages: data preparation, model training, and population generation. Experiments on numerical benchmarks, classical control problems, and high-dimensional robotic tasks demonstrate that EvoGO consistently converges within merely 10 generations.
- Score: 41.44929681213813
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in data-driven evolutionary algorithms (EAs) have demonstrated the potential of leveraging data to improve optimization accuracy and adaptability. Nevertheless, most existing approaches remain dependent on handcrafted heuristics, which limits their generality and automation. To address this challenge, we propose Evolutionary Generative Optimization (EvoGO), a fully data-driven framework empowered by generative learning. EvoGO streamlines the evolutionary optimization process into three stages: data preparation, model training, and population generation. The data preparation stage constructs a pairwise dataset to enrich training diversity without incurring additional evaluation costs. During model training, a tailored generative model learns to transform inferior solutions into superior ones. In the population generation stage, EvoGO replaces traditional reproduction operators with a scalable and parallelizable generative mechanism. Extensive experiments on numerical benchmarks, classical control problems, and high-dimensional robotic tasks demonstrate that EvoGO consistently converges within merely 10 generations and significantly outperforms a wide spectrum of optimization approaches, including traditional EAs, Bayesian optimization, and reinforcement learning based methods. Source code will be made publicly available.
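The abstract describes a three-stage loop: build a pairwise dataset of (inferior, superior) solutions from the evaluated population at no extra evaluation cost, train a model that maps inferior solutions toward superior ones, and generate the next population with that model instead of crossover/mutation. Since the paper's source code is not yet released, the sketch below is only an illustrative approximation of that loop: the names (`pairwise_dataset`, `evogo_like`) are invented here, and a least-squares affine map plus Gaussian noise stands in for the paper's tailored generative model.

```python
import numpy as np

def sphere(x):
    """Toy minimization objective used only for this sketch."""
    return float(np.sum(x ** 2))

def pairwise_dataset(pop, fits):
    """Pair every inferior solution with every superior one (minimization).

    Reuses existing evaluations, so no additional fitness calls are needed.
    """
    X, Y = [], []
    for i in range(len(pop)):
        for j in range(len(pop)):
            if fits[i] > fits[j]:      # pop[i] is inferior, pop[j] superior
                X.append(pop[i])
                Y.append(pop[j])
    return np.asarray(X), np.asarray(Y)

def fit_affine_map(X, Y):
    """Least-squares affine map inferior -> superior.

    A stand-in for the paper's generative model, not its actual architecture.
    """
    Xb = np.hstack([X, np.ones((len(X), 1))])     # append bias column
    W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)
    return W

def evogo_like(dim=5, pop_size=20, generations=10, sigma=0.1, seed=0):
    """Run the three-stage loop; return (initial best, final best) fitness."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
    init_best = min(sphere(x) for x in pop)
    for _ in range(generations):
        # Stage 1: data preparation from already-evaluated solutions.
        fits = np.array([sphere(x) for x in pop])
        X, Y = pairwise_dataset(pop, fits)
        # Stage 2: model training (inferior -> superior mapping).
        W = fit_affine_map(X, Y)
        # Stage 3: population generation replaces reproduction operators;
        # Gaussian noise keeps the generated population diverse.
        Xb = np.hstack([pop, np.ones((pop_size, 1))])
        pop = Xb @ W + sigma * rng.normal(size=(pop_size, dim))
    final_best = min(sphere(x) for x in pop)
    return init_best, final_best
```

Even this crude linear stand-in contracts the population toward the better-performing region within a handful of generations on a smooth test function, which conveys the intent of the framework; the paper's reported results of course rely on its actual generative model.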
Related papers
- EvolvingGrasp: Evolutionary Grasp Generation via Efficient Preference Alignment [42.41408547627677]
EvolvingGrasp is an evolutionary grasp generation method that continuously enhances grasping performance through preference alignment. Our results validate that EvolvingGrasp enables evolutionary grasp generation, ensuring robust, physically feasible, and preference-aligned grasping in both simulation and real scenarios.
arXiv Detail & Related papers (2025-03-18T15:01:47Z) - Learning Evolution via Optimization Knowledge Adaptation [50.280704114978384]
Evolutionary algorithms (EAs) maintain populations through evolutionary operators to discover solutions for complex tasks. We introduce an Optimization Knowledge Adaptation Evolutionary Model (OKAEM) to enhance their optimization capabilities. OKAEM exploits prior knowledge for significant performance gains across various knowledge transfer settings. It is capable of emulating principles of natural selection and genetic recombination.
arXiv Detail & Related papers (2025-01-04T05:35:21Z) - Heuristically Adaptive Diffusion-Model Evolutionary Strategy [1.8299322342860518]
Diffusion Models represent a significant advancement in generative modeling.
Our research reveals a fundamental connection between diffusion models and evolutionary algorithms.
Our framework marks a major algorithmic transition, offering increased flexibility, precision, and control in evolutionary optimization processes.
arXiv Detail & Related papers (2024-11-20T16:06:28Z) - Adaptive Data Optimization: Dynamic Sample Selection with Scaling Laws [59.03420759554073]
We introduce Adaptive Data Optimization (ADO), an algorithm that optimizes data distributions online, concurrently with model training.
ADO does not require external knowledge, proxy models, or modifications to the model update.
ADO uses per-domain scaling laws to estimate the learning potential of each domain during training and adjusts the data mixture accordingly.
arXiv Detail & Related papers (2024-10-15T17:47:44Z) - Automatic Instruction Evolving for Large Language Models [93.52437926313621]
Auto Evol-Instruct is an end-to-end framework that evolves instruction datasets using large language models without any human effort.
Our experiments demonstrate that the best method optimized by Auto Evol-Instruct outperforms human-designed methods on various benchmarks.
arXiv Detail & Related papers (2024-06-02T15:09:00Z) - Evolutionary Large Language Model for Automated Feature Transformation [44.64296052383581]
We propose an evolutionary Large Language Model (LLM) framework for automated feature transformation. The framework consists of two parts: 1) constructing a multi-population database through an RL data collector, and 2) leveraging the sequence-understanding ability of LLMs. We empirically demonstrate the effectiveness and generality of our proposed method.
arXiv Detail & Related papers (2024-05-25T12:27:21Z) - Evolution Transformer: In-Context Evolutionary Optimization [6.873777465945062]
We introduce Evolution Transformer, a causal Transformer architecture, which can flexibly characterize a family of Evolution Strategies.
We train the model weights using Evolutionary Algorithm Distillation, a technique for supervised optimization of sequence models.
We analyze the resulting properties of the Evolution Transformer and propose a technique to fully self-referentially train the Evolution Transformer.
arXiv Detail & Related papers (2024-03-05T14:04:13Z) - Functional Graphical Models: Structure Enables Offline Data-Driven Optimization [111.28605744661638]
We show how structure can enable sample-efficient data-driven optimization.
We also present a data-driven optimization algorithm that infers the FGM structure itself.
arXiv Detail & Related papers (2024-01-08T22:33:14Z) - A Data-Driven Evolutionary Transfer Optimization for Expensive Problems in Dynamic Environments [9.098403098464704]
Data-driven, a.k.a. surrogate-assisted, evolutionary optimization has been recognized as an effective approach for tackling expensive black-box optimization problems.
This paper proposes a simple but effective transfer learning framework to empower data-driven evolutionary optimization to solve dynamic optimization problems.
Experiments on synthetic benchmark test problems and a real-world case study demonstrate the effectiveness of our proposed algorithm.
arXiv Detail & Related papers (2022-11-05T11:19:50Z) - Generative Data Augmentation for Commonsense Reasoning [75.26876609249197]
G-DAUGC is a novel generative data augmentation method that aims to achieve more accurate and robust learning in the low-resource setting.
G-DAUGC consistently outperforms existing data augmentation methods based on back-translation.
Our analysis demonstrates that G-DAUGC produces a diverse set of fluent training examples, and that its selection and training approaches are important for performance.
arXiv Detail & Related papers (2020-04-24T06:12:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.