DECN: Evolution Inspired Deep Convolution Network for Black-box Optimization
- URL: http://arxiv.org/abs/2304.09599v4
- Date: Mon, 23 Dec 2024 09:33:13 GMT
- Title: DECN: Evolution Inspired Deep Convolution Network for Black-box Optimization
- Authors: Kai Wu, Xiaobin Li, Penghui Liu, Jing Liu
- Abstract summary: This paper introduces the concept of Automated EA: Automated EA exploits structure in the problem of interest to automatically generate update rules. We design a deep evolutionary convolution network (DECN) to realize the move from hand-designed EAs to automated EAs without manual intervention.
- Score: 9.878660285945728
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Evolutionary algorithms (EAs) have emerged as a powerful framework for optimization, especially black-box optimization. Existing evolutionary algorithms struggle to comprehend and effectively utilize task-specific information when adjusting their optimization strategies, leading to subpar performance on target tasks. Moreover, optimization strategies devised by experts tend to be highly biased. These challenges significantly impede progress in the field of evolutionary computation. This paper therefore first introduces the concept of Automated EA: an Automated EA exploits structure in the problem of interest to automatically generate update rules (optimization strategies) for generating and selecting potential solutions, so that it can move a random population near the optimal solution. However, current EAs cannot achieve this goal due to their poor representation of the optimization strategy and the weak interaction between the optimization strategy and the target task. We design a deep evolutionary convolution network (DECN) to realize the move from hand-designed EAs to automated EAs without manual intervention. DECN adapts readily to the target task and can obtain better solutions at lower computational cost. DECN is also able to effectively utilize low-fidelity information about the target task to form an efficient optimization strategy. Experiments on nine synthetic and two real-world cases show the advantages of the learned optimization strategies over state-of-the-art human-designed and meta-learning EA baselines. In addition, because its operations are tensorized, DECN is friendly to GPU acceleration and runs 102 times faster than the corresponding EA.
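As an illustration of the tensorization the abstract credits for DECN's GPU speed-up, here is a minimal numpy sketch in which one generation of an EA is a handful of vectorized array operations rather than a Python loop over individuals. The selection, crossover, and mutation rules below are generic stand-ins, not the learned convolutional operators of DECN.

```python
# Minimal sketch of the tensorized-EA idea: the whole population is one
# array, so a generation is a few vectorized ops. DECN's learned
# convolutional update rules are NOT reproduced here.
import numpy as np

def sphere(x):                          # toy black-box objective
    return (x ** 2).sum(axis=1)

rng = np.random.default_rng(0)
pop = rng.normal(size=(128, 10))        # population: (individuals, dims)

for _ in range(200):
    fit = sphere(pop)                               # evaluate all at once
    parents = pop[np.argsort(fit)[:64]]             # truncation selection
    pairs = rng.permutation(64).reshape(32, 2)
    alpha = rng.random((32, 1))
    kids = (alpha * parents[pairs[:, 0]]            # arithmetic crossover
            + (1 - alpha) * parents[pairs[:, 1]])
    kids += 0.05 * rng.normal(size=kids.shape)      # Gaussian mutation
    pop = np.vstack([parents, kids, rng.normal(size=(32, 10))])

print("best fitness:", sphere(pop).min())
```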
Related papers
- Advancing CMA-ES with Learning-Based Cooperative Coevolution for Scalable Optimization [12.899626317088885]
This paper introduces LCC, a pioneering learning-based cooperative coevolution framework.
LCC dynamically schedules decomposition strategies during optimization processes.
LCC offers advantages over state-of-the-art baselines in both optimization effectiveness and resource consumption.
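For context, here is a minimal sketch of the cooperative-coevolution loop that LCC schedules: the decision variables are split into blocks, and each block is optimized in turn while the rest stay fixed. LCC's contribution, learning which decomposition strategy to apply and when, is replaced here by one static random grouping, an assumption for illustration only.

```python
# Minimal cooperative-coevolution sketch: optimize one block of variables
# at a time against a fixed context vector. The learned decomposition
# scheduling of LCC is replaced by a static random grouping.
import numpy as np

def f(x):                                   # toy separable objective
    return (x ** 2).sum()

rng = np.random.default_rng(1)
d, block = 20, 5
x = rng.normal(size=d)                      # context vector (best-so-far)
groups = rng.permutation(d).reshape(-1, block)

for _ in range(50):
    for g in groups:                        # optimize one subcomponent
        trial = x.copy()
        trial[g] += 0.1 * rng.normal(size=block)
        if f(trial) < f(x):                 # greedy (1+1)-style update
            x = trial

print("f(x) =", f(x))
```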
arXiv Detail & Related papers (2025-04-24T14:09:22Z)
- MetaML-Pro: Cross-Stage Design Flow Automation for Efficient Deep Learning Acceleration [8.43012094714496]
This paper presents a unified framework for codifying and automating optimization strategies to deploy deep neural networks (DNNs) on resource-constrained hardware.
Our novel approach addresses two key issues: cross-stage co-optimization and optimization search.
Experimental results demonstrate up to a 92% DSP and 89% LUT usage reduction for select networks.
arXiv Detail & Related papers (2025-02-09T11:02:06Z)
- Reinforcement Learning Based Automated Design of Differential Evolution Algorithm for Black-box Optimization [14.116216795259554]
The differential evolution (DE) algorithm is recognized as one of the most effective evolutionary algorithms.
We introduce a novel framework that employs reinforcement learning (RL) to automatically design DE for black-box optimization.
RL acts as an advanced meta-optimizer, generating a customized DE configuration.
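A minimal sketch of the loop such a framework automates: a meta-level policy chooses the DE hyperparameters F and CR each generation, after which a textbook DE/rand/1/bin step is applied. The trained RL policy is replaced below by a random stub; nothing here reproduces the paper's actual design.

```python
# Sketch: a meta-level policy picks DE hyperparameters each generation,
# then a standard DE/rand/1/bin step is applied. The RL meta-optimizer is
# replaced by a random stub (an assumption for illustration).
import numpy as np

def rastrigin(x):
    return 10 * x.shape[-1] + (x**2 - 10 * np.cos(2 * np.pi * x)).sum(axis=-1)

rng = np.random.default_rng(2)
pop = rng.uniform(-5, 5, size=(50, 10))
fit = rastrigin(pop)

def policy_stub(state):                 # stand-in for the RL meta-optimizer
    return rng.uniform(0.4, 0.9), rng.uniform(0.1, 0.9)   # F, CR

for _ in range(300):
    F, CR = policy_stub(fit)
    # pick three distinct donors per target (the usual guarantee that they
    # also differ from the target index is omitted for brevity)
    idx = np.array([rng.choice(50, 3, replace=False) for _ in range(50)])
    mutant = pop[idx[:, 0]] + F * (pop[idx[:, 1]] - pop[idx[:, 2]])
    cross = rng.random(pop.shape) < CR                    # binomial crossover
    trial = np.where(cross, mutant, pop)
    tfit = rastrigin(trial)
    better = tfit < fit                                   # greedy selection
    pop[better], fit[better] = trial[better], tfit[better]

print("best:", fit.min())
```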
arXiv Detail & Related papers (2025-01-22T13:41:47Z)
- A Survey on Inference Optimization Techniques for Mixture of Experts Models [50.40325411764262]
Large-scale Mixture of Experts (MoE) models offer enhanced model capacity and computational efficiency through conditional computation.
However, deploying and running inference on these models presents significant challenges in computational resources, latency, and energy efficiency.
This survey analyzes optimization techniques for MoE models across the entire system stack.
arXiv Detail & Related papers (2024-12-18T14:11:15Z)
- Deep Insights into Automated Optimization with Large Language Models and Evolutionary Algorithms [3.833708891059351]
Large Language Models (LLMs) and Evolutionary Algorithms (EAs) offer a promising new approach to overcoming limitations and making optimization more automated.
LLMs act as dynamic agents that can generate, refine, and interpret optimization strategies.
EAs efficiently explore complex solution spaces through evolutionary operators.
arXiv Detail & Related papers (2024-10-28T09:04:49Z)
- Enhanced Optimization Strategies to Design an Underactuated Hand Exoskeleton [0.7639610349097473]
This study presents the design process for an underactuated hand exoskeleton (U-HEx).
The optimization relies on a Genetic Algorithm, the Big Bang-Big Crunch Algorithm, and their multi-objective variants.
arXiv Detail & Related papers (2024-08-14T09:00:49Z)
- Iterative or Innovative? A Problem-Oriented Perspective for Code Optimization [81.88668100203913]
Large language models (LLMs) have demonstrated strong capabilities in solving a wide range of programming tasks.
In this paper, we explore code optimization with a focus on performance enhancement, specifically aiming to optimize code for minimal execution time.
arXiv Detail & Related papers (2024-06-17T16:10:10Z)
- MetaML: Automating Customizable Cross-Stage Design-Flow for Deep Learning Acceleration [5.2487252195308844]
This paper introduces a novel optimization framework for deep neural network (DNN) hardware accelerators.
We introduce novel optimization and transformation tasks for building design-flow architectures.
Our results demonstrate considerable reductions of up to 92% in DSP usage and 89% in LUT usage for two networks.
arXiv Detail & Related papers (2023-06-14T21:06:07Z)
- Evolutionary Solution Adaption for Multi-Objective Metal Cutting Process Optimization [59.45414406974091]
We introduce a framework for system flexibility that allows us to study the ability of an algorithm to transfer solutions from previous optimization tasks.
We study the flexibility of NSGA-II, which we extend with two variants: 1) varying goals, which optimizes solutions for two tasks simultaneously to obtain in-between source solutions expected to be more adaptable, and 2) active-inactive genotype, which accommodates different possibilities that can be activated or deactivated.
Results show that adaptation with standard NSGA-II greatly reduces the number of evaluations required to reach a target goal, while the proposed variants further reduce adaptation costs.
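A minimal sketch of the active-inactive genotype idea: each individual carries more genes than one task expresses, plus a binary mask that can switch genes on or off across tasks. The NSGA-II machinery and the paper's exact encoding are omitted; the neutral default value below is an assumption.

```python
# Sketch of an "active-inactive genotype": inactive genes ride along
# neutrally and can be re-activated when the task changes. NSGA-II itself
# and the paper's exact encoding are omitted.
import numpy as np

rng = np.random.default_rng(3)
values = rng.normal(size=12)            # full genotype
active = rng.random(12) < 0.5           # activation mask

def express(values, active, default=0.0):
    # Phenotype: inactive genes fall back to a neutral default.
    return np.where(active, values, default)

def mutate(values, active, p=0.1):
    values = values + 0.05 * rng.normal(size=values.size)
    flip = rng.random(active.size) < p  # mutation can toggle activation
    return values, active ^ flip

values, active = mutate(values, active)
print(express(values, active))
```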
arXiv Detail & Related papers (2023-05-31T12:07:50Z)
- B2Opt: Learning to Optimize Black-box Optimization with Little Budget [15.95406229086798]
This paper designs a powerful optimization framework that automatically learns optimization strategies from the target task or a cheap surrogate task without human intervention.
A deep neural network framework called B2Opt provides a stronger representation of optimization strategies based on survival of the fittest.
Compared to the state-of-the-art BBO baselines, B2Opt can achieve multiple orders of magnitude performance improvement with less function evaluation cost.
arXiv Detail & Related papers (2023-04-24T01:48:01Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
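A minimal PyTorch sketch of plain algorithm unrolling, the baseline this paper improves on: an inner projected-gradient solver is written as ordinary tensor operations, so autograd differentiates through its iterations back to the problem parameters. The paper's folded, analytically generated backward pass is not reproduced here.

```python
# Sketch of algorithm unrolling: autograd backpropagates through the
# iterations of an inner solver (projected gradient descent on a simple
# quadratic) to the problem parameter c.
import torch

def inner_solve(Q, c, steps=50, lr=0.1):
    x = torch.zeros(Q.shape[0])
    for _ in range(steps):                   # unrolled iterations
        grad = Q @ x + c
        x = (x - lr * grad).clamp(min=0.0)   # projection onto x >= 0
    return x

Q = torch.eye(3)
c = torch.tensor([-1.0, 2.0, -0.5], requires_grad=True)
x_star = inner_solve(Q, c)
loss = (x_star - torch.tensor([1.0, 0.0, 1.0])).pow(2).sum()
loss.backward()                              # gradients flow through solver
print(x_star, c.grad)
```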
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various zeroth-order (ZO) optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
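A minimal sketch of ZO-signGD, assuming only value queries to the objective: the gradient is estimated from random finite differences and only its sign drives the update. The Guacamol molecular objectives are replaced here by a toy quadratic.

```python
# Sketch of ZO sign-based gradient descent: estimate the gradient of a
# black-box objective from random finite differences, then step along the
# SIGN of the estimate. Toy quadratic stands in for molecular objectives.
import numpy as np

def f(x):                                   # black-box: value queries only
    return ((x - 1.0) ** 2).sum()

rng = np.random.default_rng(4)
x = rng.normal(size=20)
mu, lr, q = 1e-3, 0.05, 10                  # smoothing, step size, queries

for _ in range(200):
    g = np.zeros_like(x)
    for _ in range(q):                      # average q random directions
        u = rng.normal(size=x.size)
        g += (f(x + mu * u) - f(x)) / mu * u
    x -= lr * np.sign(g / q)                # sign of the ZO estimate

print("f(x) =", f(x))
```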
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Evolving Pareto-Optimal Actor-Critic Algorithms for Generalizability and Stability [67.8426046908398]
Generalizability and stability are two key objectives for operating reinforcement learning (RL) agents in the real world.
This paper presents MetaPG, an evolutionary method for automated design of actor-critic loss functions.
arXiv Detail & Related papers (2022-04-08T20:46:16Z)
- Mind Your Solver! On Adversarial Attack and Defense for Combinatorial Optimization [111.78035414744045]
We take the initiative in developing mechanisms for adversarial attack and defense for optimization solvers.
We present a simple yet effective defense strategy to modify the graph structure to increase the robustness of solvers.
arXiv Detail & Related papers (2021-12-28T15:10:15Z)
- Transferable Graph Optimizers for ML Compilers [18.353830282858834]
We propose an end-to-end, transferable deep reinforcement learning method for computational graph optimization (GO).
GO generates decisions on the entire graph rather than on each individual node autoregressively, drastically speeding up the search compared to prior methods.
GO achieves 21% improvement over human experts and 18% improvement over the prior state of the art with 15x faster convergence.
arXiv Detail & Related papers (2020-10-21T20:28:33Z)
- EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm for Constrained Global Optimization [68.8204255655161]
EOS is a global optimization algorithm for constrained and unconstrained problems of real-valued variables.
It implements a number of improvements to the well-known Differential Evolution (DE) algorithm.
Results show that EOS is capable of achieving increased performance compared to state-of-the-art single-population self-adaptive DE algorithms.
arXiv Detail & Related papers (2020-07-09T10:19:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.