Paradiseo: From a Modular Framework for Evolutionary Computation to the
Automated Design of Metaheuristics ---22 Years of Paradiseo---
- URL: http://arxiv.org/abs/2105.00420v1
- Date: Sun, 2 May 2021 08:45:33 GMT
- Title: Paradiseo: From a Modular Framework for Evolutionary Computation to the
Automated Design of Metaheuristics ---22 Years of Paradiseo---
- Authors: Johann Dreo (Systems Biology Group, Department of Computational
Biology, USR 3756, Institut Pasteur and CNRS, Paris, France), Arnaud
Liefooghe (Univ. Lille, CNRS, Inria, Centrale Lille, UMR 9189 CRIStAL, Lille,
France), Sébastien Verel (Univ. Littoral Côte d'Opale, Calais, France),
Marc Schoenauer (TAU, Inria, CNRS and UPSaclay, LISN, Saclay, France), Juan
J. Merelo (University of Granada, Granada, Spain), Alexandre Quemy (Poznan
University of Technology, Poznan, Poland), Benjamin Bouvier, Jan Gmys (Inria,
Lille, France)
- Abstract summary: This article summarizes the features of the ParadisEO framework, a comprehensive C++ free software which targets the development of modular metaheuristics.
- Score: 33.056531655247625
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The success of metaheuristic optimization methods has led to the development
of a large variety of algorithm paradigms. However, no algorithm clearly
dominates all its competitors on all problems. Instead, the underlying variety
of landscapes of optimization problems calls for a variety of algorithms to
solve them efficiently. It is thus of prior importance to have access to mature
and flexible software frameworks which allow for an efficient exploration of
the algorithm design space. Such frameworks should be flexible enough to
accommodate any kind of metaheuristics, and open enough to connect with
higher-level optimization, monitoring and evaluation software. This article
summarizes the features of the ParadisEO framework, a comprehensive C++ free
software which targets the development of modular metaheuristics. ParadisEO
provides a highly modular architecture, a large set of components, speed of
execution and automated algorithm design features, which are key to modern
approaches to metaheuristics development.
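The abstract's central claim is that a modular architecture lets you explore the algorithm design space by recombining components. ParadisEO itself is C++; the Python sketch below only illustrates that design idea, and none of the names in it come from the ParadisEO API.

```python
import random

# A minimal sketch of "modular metaheuristics": selection, variation and
# replacement are interchangeable components, and an algorithm is just a
# particular composition of them. Illustrative only, not the ParadisEO API.

def onemax(bits):
    """Toy fitness: number of ones in a bit string."""
    return sum(bits)

def tournament_selection(pop, fitness, rng, k=2):
    """Pick the best of k randomly sampled individuals."""
    return max(rng.sample(pop, k), key=fitness)

def bitflip_mutation(ind, rng, rate=0.1):
    """Flip each bit independently with probability `rate`."""
    return [b ^ (rng.random() < rate) for b in ind]

def evolve(fitness, select, mutate, n=20, length=32, generations=50, seed=0):
    """Generic evolutionary loop assembled from pluggable components."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(n)]
    for _ in range(generations):
        offspring = [mutate(select(pop, fitness, rng), rng) for _ in range(n)]
        # Elitist replacement: keep the n best of parents + offspring.
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:n]
    return max(pop, key=fitness)

best = evolve(onemax, tournament_selection, bitflip_mutation)
print(onemax(best))
```

Swapping `tournament_selection` or `bitflip_mutation` for other callables changes the algorithm without touching the loop, which is the kind of component-level recombination an automated design tool can then search over.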
Related papers
- From Understanding to Excelling: Template-Free Algorithm Design through Structural-Functional Co-Evolution [39.42526347710991]
Large language models (LLMs) have greatly accelerated the automation of algorithm generation and optimization.
We introduce an end-to-end algorithm generation and optimization framework based on LLMs.
Our approach utilizes the deep semantic understanding of LLMs to convert natural language requirements or human-authored papers into code solutions.
arXiv Detail & Related papers (2025-03-13T08:26:18Z)
- Automatic Operator-level Parallelism Planning for Distributed Deep Learning -- A Mixed-Integer Programming Approach [6.449961842220686]
We propose a bi-level solution framework balancing optimality with computational efficiency.
Our framework achieves comparable or superior performance, reducing computational bubbles by half under the same memory constraints.
Such capabilities position our solution as both a valuable research tool for exploring optimal parallelization strategies and a practical industrial solution for large-scale AI deployment.
arXiv Detail & Related papers (2025-03-12T13:00:29Z)
- METAFOR: A Hybrid Metaheuristics Software Framework for Single-Objective Continuous Optimization Problems [0.1053373860696675]
We propose a modular metaheuristic software framework, called METAFOR, that can be coupled with an automatic algorithm configuration tool to automatically design hybrid metaheuristics.
We use the configuration tool irace to automatically generate 17 different metaheuristic implementations and evaluate their performance on a diverse set of continuous optimization problems.
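The METAFOR entry couples a modular framework with the configurator irace, which implements iterated racing. The sketch below shows only the simpler core idea such a tool automates: sample algorithm configurations, evaluate each on a benchmark, keep the best. All names here are illustrative and not taken from METAFOR or irace.

```python
import random

def hill_climb(f, dim, step, restarts, rng, budget=200):
    """A tiny configurable metaheuristic: restarted hill climbing.
    `step` and `restarts` are the parameters being configured."""
    best = float("inf")
    per_restart = budget // restarts
    for _ in range(restarts):
        x = [rng.uniform(-5, 5) for _ in range(dim)]
        fx = f(x)
        for _ in range(per_restart):
            y = [xi + rng.gauss(0, step) for xi in x]
            fy = f(y)
            if fy < fx:
                x, fx = y, fy
        best = min(best, fx)
    return best

def configure(f, dim, n_samples=30, seed=1):
    """Random-search configuration: sample (step, restarts) pairs and
    return the pair with the best mean performance over 3 runs."""
    rng = random.Random(seed)

    def score(cfg):
        step, restarts = cfg
        return sum(hill_climb(f, dim, step, restarts, rng) for _ in range(3)) / 3

    candidates = [(rng.uniform(0.01, 2.0), rng.choice([1, 2, 5, 10]))
                  for _ in range(n_samples)]
    return min(candidates, key=score)

sphere = lambda x: sum(xi * xi for xi in x)
step, restarts = configure(sphere, dim=3)
print(step, restarts)
```

irace replaces this brute-force sampling with statistical racing, discarding poor configurations early instead of giving every candidate the full evaluation budget.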
arXiv Detail & Related papers (2025-02-16T18:24:44Z)
- A Survey on Inference Optimization Techniques for Mixture of Experts Models [50.40325411764262]
Large-scale Mixture of Experts (MoE) models offer enhanced model capacity and computational efficiency through conditional computation.
However, deploying and running inference on these models presents significant challenges in computational resources, latency, and energy efficiency.
This survey analyzes optimization techniques for MoE models across the entire system stack.
arXiv Detail & Related papers (2024-12-18T14:11:15Z)
- STAR: Synthesis of Tailored Architectures [61.080157488857516]
We propose a new approach for the synthesis of tailored architectures (STAR).
Our approach combines a novel search space based on the theory of linear input-varying systems, supporting a hierarchical numerical encoding into architecture genomes. STAR genomes are automatically refined and recombined with gradient-free, evolutionary algorithms to optimize for multiple model quality and efficiency metrics.
Using STAR, we optimize large populations of new architectures, leveraging diverse computational units and interconnection patterns, improving over highly-optimized Transformers and striped hybrid models on the frontier of quality, parameter size, and inference cache for autoregressive language modeling.
arXiv Detail & Related papers (2024-11-26T18:42:42Z) - Iterative or Innovative? A Problem-Oriented Perspective for Code Optimization [81.88668100203913]
Large language models (LLMs) have demonstrated strong capabilities in solving a wide range of programming tasks.
In this paper, we explore code optimization with a focus on performance enhancement, specifically aiming to optimize code for minimal execution time.
arXiv Detail & Related papers (2024-06-17T16:10:10Z) - Reinforced In-Context Black-Box Optimization [64.25546325063272]
RIBBO is a method to reinforce-learn a BBO algorithm from offline data in an end-to-end fashion.
RIBBO employs expressive sequence models to learn the optimization histories produced by multiple behavior algorithms and tasks.
Central to our method is to augment the optimization histories with regret-to-go tokens, which are designed to represent the performance of an algorithm based on cumulative regret over the future part of the histories.
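The "regret-to-go" annotation described above can be made concrete: each step of an optimization history is labelled with the regret accumulated over the remaining part of the trajectory. The sketch below shows one plausible such labelling; the exact definition in the RIBBO paper (e.g. any normalization) may differ.

```python
def regret_to_go(values, optimum):
    """For a minimization history `values`, return, for each step t,
    the sum of per-step regrets f(x_i) - optimum over all i >= t."""
    regrets = [v - optimum for v in values]
    out, acc = [], 0.0
    # Suffix sums: walk the history backwards, accumulating regret.
    for r in reversed(regrets):
        acc += r
        out.append(acc)
    return out[::-1]

history = [5.0, 3.0, 2.0, 2.0]   # objective values along one run
print(regret_to_go(history, optimum=2.0))  # [4.0, 1.0, 0.0, 0.0]
```

A token of 0.0 marks the point from which the run never strays above the optimum again, which is what lets a sequence model condition on desired future performance.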
arXiv Detail & Related papers (2024-02-27T11:32:14Z)
- Machine Learning Augmented Branch and Bound for Mixed Integer Linear Programming [11.293025183996832]
Mixed Integer Linear Programming (MILP) offers a powerful modeling language for a wide range of applications.
In recent years, there has been an explosive development in the use of machine learning algorithms for enhancing all main tasks involved in the branch-and-bound algorithm.
In particular, we give detailed attention to machine learning algorithms that automatically optimize some metric of branch-and-bound efficiency.
arXiv Detail & Related papers (2024-02-08T09:19:26Z)
- Towards a Systems Theory of Algorithms [9.4471844989393]
We argue in favor of viewing algorithms as open dynamical systems interacting with other algorithms, physical systems, humans, or databases.
Remarkably, the manifold tools developed under the umbrella of systems theory are well suited for addressing a range of challenges in the algorithmic domain.
arXiv Detail & Related papers (2024-01-25T09:20:21Z)
- Introducing Interactions in Multi-Objective Optimization of Software Architectures [2.920908475492581]
This study investigates the impact of designer interactions on software architecture optimization.
By directing the search towards regions of interest, the interaction uncovers architectures that remain unexplored in the fully automated process.
arXiv Detail & Related papers (2023-08-29T07:49:46Z)
- MOF: A Modular Framework for Rapid Application of Optimization Methodologies to General Engineering Design Problems [0.0]
The Modular Optimization Framework (MOF) was developed to facilitate the development and application of optimization algorithms.
MOF is written in Python 3, and it uses object-oriented programming to create a modular design that allows users to easily incorporate new optimization algorithms.
arXiv Detail & Related papers (2022-04-01T00:02:30Z)
- Design-Bench: Benchmarks for Data-Driven Offline Model-Based Optimization [82.02008764719896]
Black-box model-based optimization problems are ubiquitous in a wide range of domains, such as the design of proteins, DNA sequences, aircraft, and robots.
We present Design-Bench, a benchmark for offline MBO with a unified evaluation protocol and reference implementations of recent methods.
Our benchmark includes a suite of diverse and realistic tasks derived from real-world optimization problems in biology, materials science, and robotics.
arXiv Detail & Related papers (2022-02-17T05:33:27Z)
- Optimization-Inspired Learning with Architecture Augmentations and Control Mechanisms for Low-Level Vision [74.9260745577362]
This paper proposes a unified optimization-inspired learning framework to aggregate Generative, Discriminative, and Corrective (GDC) principles.
We construct three propagative modules to effectively solve the optimization models with flexible combinations.
Experiments across varied low-level vision tasks validate the efficacy and adaptability of GDC.
arXiv Detail & Related papers (2020-12-10T03:24:53Z)
- Generalized and Scalable Optimal Sparse Decision Trees [56.35541305670828]
We present techniques that produce optimal decision trees over a variety of objectives.
We also introduce a scalable algorithm that produces provably optimal results in the presence of continuous variables.
arXiv Detail & Related papers (2020-06-15T19:00:11Z)
- Automatic Generation of Algorithms for Black-Box Robust Optimisation Problems [0.0]
We develop algorithms capable of tackling robust black-box optimisation problems, where the number of model runs is limited.
We employ an automatic generation of algorithms approach: Grammar-Guided Genetic Programming.
Our algorithmic building blocks combine elements of existing techniques and new features, resulting in the investigation of a novel solution space.
arXiv Detail & Related papers (2020-04-15T18:51:33Z)
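The last entry above assembles algorithms with Grammar-Guided Genetic Programming. The sketch below shows only the "grammar" half of that idea: deriving a random algorithm description from a small context-free grammar. The production rules are invented for illustration and are not taken from the paper.

```python
import random

# A toy grammar whose derivations describe simple local-search algorithms.
# Non-terminals are written <like-this>; everything else is a terminal
# building block. All rule names here are made up for illustration.
GRAMMAR = {
    "<algo>":   [["<init>", "<loop>"]],
    "<init>":   [["random-init"], ["latin-hypercube-init"]],
    "<loop>":   [["<move>", "<accept>"]],
    "<move>":   [["gaussian-perturbation"], ["uniform-resample"]],
    "<accept>": [["greedy-accept"], ["simulated-annealing-accept"]],
}

def derive(symbol, rng):
    """Expand a grammar symbol into a flat list of terminal components."""
    if symbol not in GRAMMAR:          # terminal building block
        return [symbol]
    production = rng.choice(GRAMMAR[symbol])
    out = []
    for s in production:
        out.extend(derive(s, rng))
    return out

print(derive("<algo>", random.Random(42)))
```

Genetic programming then evolves such derivations, with crossover and mutation constrained by the grammar so every offspring is still a well-formed algorithm.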
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.