Bayesian optimization of variable-size design space problems
- URL: http://arxiv.org/abs/2003.03300v1
- Date: Fri, 6 Mar 2020 16:30:44 GMT
- Title: Bayesian optimization of variable-size design space problems
- Authors: Julien Pelamatti, Loic Brevault, Mathieu Balesdent, El-Ghazali Talbi,
Yannick Guerin
- Abstract summary: Two alternative Bayesian Optimization-based approaches are proposed to solve this type of optimization problem.
The first approach is a budget allocation strategy that focuses the computational budget on the most promising design sub-spaces.
The second approach is instead based on the definition of a kernel function that computes the covariance between samples characterized by partially different sets of variables.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Within the framework of complex system design, it is often necessary to solve
mixed variable optimization problems, in which the objective and constraint
functions can depend simultaneously on continuous and discrete variables.
Additionally, complex system design problems occasionally present a
variable-size design space. This results in an optimization problem for which
the search space varies dynamically (with respect to both number and type of
variables) during the optimization process as a function of the values of
specific discrete decision variables. Similarly, the number and type of
constraints can vary as well. In this paper, two alternative Bayesian
Optimization-based approaches are proposed to solve this type of optimization
problem. The first one is a budget allocation strategy that focuses the
computational budget on the most promising design sub-spaces. The second
approach is instead based on the definition of a kernel function that computes
the covariance between samples characterized by partially different sets of
variables. The results obtained on analytical and engineering-related test
cases show faster and more consistent convergence of both proposed methods
compared to the standard approaches.
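To make the second approach concrete, below is a minimal sketch of a covariance function that returns a value for two samples defined on partially different sets of variables. It is not the kernel proposed in the paper: the dictionary encoding, the RBF form over the shared variables, and the decay factor penalizing non-shared variables are all illustrative assumptions.

```python
# Minimal sketch (not the paper's exact kernel): covariance between two samples
# that share only part of their variables. Assumption: each sample is a dict
# mapping variable names to values; covariance is an RBF kernel over the shared
# continuous variables, damped by a factor that decays with the number of
# non-shared variables.
import numpy as np

def variable_size_kernel(x1, x2, length_scale=1.0, mismatch_decay=0.5):
    """Covariance between samples defined on partially different variable sets."""
    shared = set(x1) & set(x2)
    non_shared = (set(x1) | set(x2)) - shared
    if not shared:
        return 0.0  # no common dimensions: assume zero covariance
    d1 = np.array([x1[v] for v in sorted(shared)])
    d2 = np.array([x2[v] for v in sorted(shared)])
    rbf = np.exp(-0.5 * np.sum((d1 - d2) ** 2) / length_scale ** 2)
    return rbf * mismatch_decay ** len(non_shared)

# Example: two designs activating different discrete options, hence
# different continuous variable sets.
x_a = {"thrust": 1.2, "nozzle_len": 0.4, "tank_radius": 0.8}
x_b = {"thrust": 1.1, "tank_radius": 0.7, "n_boosters": 2.0}
print(variable_size_kernel(x_a, x_b))
```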
Related papers
- Simultaneous and Meshfree Topology Optimization with Physics-informed Gaussian Processes [0.0]
Topology optimization (TO) provides a principled mathematical approach for optimizing the performance of a structure by designing its material spatial distribution in a pre-defined domain and subject to a set of constraints.
We develop a new class of TO methods based on the framework of Gaussian processes (GPs) whose mean functions are parameterized via deep neural networks.
To test our method against conventional TO approaches implemented in commercial software, we evaluate it on four problems involving the minimization of dissipated power in Stokes flow.
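The core modeling idea, a Gaussian process whose prior mean is parameterized by a neural network, can be sketched as follows. This is a toy illustration, not the paper's meshfree TO formulation: the MLP weights are random rather than trained, and the data are placeholders.

```python
# Toy sketch of a GP whose prior mean is a small neural network (illustrative
# only; in the actual method the NN weights are learned jointly with the GP).
import numpy as np

rng = np.random.default_rng(0)

def mlp_mean(X, W1, b1, W2, b2):
    """Tiny MLP used as the GP prior mean m(x)."""
    return np.tanh(X @ W1 + b1) @ W2 + b2

def rbf(A, B, ls=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

# Random (untrained) MLP parameters standing in for learned ones.
W1, b1 = rng.normal(size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

X_train = rng.uniform(size=(20, 2))           # design / collocation points
y_train = np.sin(3 * X_train[:, 0])           # placeholder response
X_test = rng.uniform(size=(5, 2))

m_train = mlp_mean(X_train, W1, b1, W2, b2).ravel()
m_test = mlp_mean(X_test, W1, b1, W2, b2).ravel()
K = rbf(X_train, X_train) + 1e-6 * np.eye(len(X_train))
K_s = rbf(X_test, X_train)

# Standard GP posterior mean, with the NN mean subtracted out and added back.
post_mean = m_test + K_s @ np.linalg.solve(K, y_train - m_train)
print(post_mean)
```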
arXiv Detail & Related papers (2024-08-07T01:01:35Z)
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
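The equivalence stated above can be checked numerically on a small quadratic problem: unrolling gradient descent and differentiating each step reproduces the Jacobian obtained by solving a linear system with the Hessian. The sketch below is a generic illustration of that claim, not the Folded Optimization system itself.

```python
# Hedged numerical sketch: for the quadratic problem
#   x*(theta) = argmin_x 0.5 * x^T A x - theta^T x,
# every unrolled gradient-descent step is linear in theta, so the Jacobian
# dx_K/dtheta of the unrolled solver converges to A^{-1}, i.e. exactly the
# solution of the linear system A J = I that an implicit backward pass solves.
import numpy as np

rng = np.random.default_rng(1)
n, K = 4, 500
G = rng.normal(size=(n, n))
A = G @ G.T + n * np.eye(n)                 # symmetric positive definite
alpha = 1.0 / np.linalg.eigvalsh(A).max()   # safe step size
theta = rng.normal(size=n)

x = np.zeros(n)
J = np.zeros((n, n))                        # running Jacobian dx_k/dtheta
for _ in range(K):
    x = x - alpha * (A @ x - theta)                         # forward unroll
    J = (np.eye(n) - alpha * A) @ J + alpha * np.eye(n)     # its derivative

J_implicit = np.linalg.solve(A, np.eye(n))  # backward pass as a linear solve
print(np.max(np.abs(J - J_implicit)))       # ~0 once the unroll has converged
```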
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Comparison of Single- and Multi-Objective Optimization Quality for Evolutionary Equation Discovery [77.34726150561087]
Evolutionary differential equation discovery has proved to be a tool for obtaining equations with fewer a priori assumptions.
The proposed comparison approach is demonstrated on classical model examples: the Burgers equation, the wave equation, and the Korteweg-de Vries equation.
arXiv Detail & Related papers (2023-06-29T15:37:19Z)
- Numerical Methods for Convex Multistage Stochastic Optimization [86.45244607927732]
We focus on Stochastic Programming (SP), Stochastic Optimal Control (SOC), and Markov Decision Processes (MDP).
Recent progress in solving convex multistage stochastic problems is based on cutting-plane approximations of the cost-to-go functions of the dynamic programming equations.
Cutting-plane type methods can handle multistage problems with a large number of stages, but a relatively small number of state (decision) variables.
arXiv Detail & Related papers (2023-03-28T01:30:40Z)
- Scalable Bayesian optimization with high-dimensional outputs using randomized prior networks [3.0468934705223774]
We propose a deep learning framework for BO and sequential decision making based on bootstrapped ensembles of neural architectures with randomized priors.
We show that the proposed framework can approximate functional relationships between design variables and quantities of interest, even in cases where the latter take values in high-dimensional vector spaces or even infinite-dimensional function spaces.
We test the proposed framework against state-of-the-art methods for BO and demonstrate superior performance across several challenging tasks with high-dimensional outputs.
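A minimal sketch of the randomized-prior construction underlying such ensembles is given below; the architecture, the beta scaling, and the omission of the bootstrap training step are illustrative assumptions rather than the paper's implementation.

```python
# Minimal sketch of a "randomized prior" ensemble member for a BO surrogate:
# prediction = trainable net + beta * fixed, untrained prior net. Training on
# bootstrapped data is omitted to keep the sketch short.
import numpy as np

rng = np.random.default_rng(2)

def make_net(d_in, d_hidden=32):
    """Return parameters of a small random MLP."""
    return (rng.normal(size=(d_in, d_hidden)) / np.sqrt(d_in),
            rng.normal(size=(d_hidden, 1)) / np.sqrt(d_hidden))

def forward(params, X):
    W1, W2 = params
    return np.tanh(X @ W1) @ W2

class RandomizedPriorMember:
    """One ensemble member: trainable network plus frozen prior network."""
    def __init__(self, d_in, beta=1.0):
        self.trainable = make_net(d_in)   # would be fit to a bootstrap sample
        self.prior = make_net(d_in)       # frozen; injects prior uncertainty
        self.beta = beta

    def __call__(self, X):
        return forward(self.trainable, X) + self.beta * forward(self.prior, X)

# Ensemble spread acts as the epistemic uncertainty driving the acquisition.
ensemble = [RandomizedPriorMember(d_in=3) for _ in range(8)]
X_cand = rng.uniform(size=(10, 3))
preds = np.stack([m(X_cand) for m in ensemble])   # (8, 10, 1)
mean, std = preds.mean(0).ravel(), preds.std(0).ravel()
print(mean[:3], std[:3])
```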
arXiv Detail & Related papers (2023-02-14T18:55:21Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- Automated Circuit Sizing with Multi-objective Optimization based on Differential Evolution and Bayesian Inference [1.1579778934294358]
We introduce a design optimization method based on Generalized Differential Evolution 3 (GDE3) and Gaussian Processes (GPs).
The proposed method is able to perform sizing for complex circuits with a large number of design variables and many conflicting objectives to be optimized.
We evaluate the introduced method on two voltage regulators showing different levels of complexity.
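A heavily simplified, surrogate-assisted differential-evolution loop in the same spirit is sketched below; the single-objective placeholder "simulator", the inverse-distance surrogate standing in for a GP, and the omission of GDE3's multi-objective selection are all assumptions for illustration only.

```python
# Simplified sketch of surrogate-assisted differential evolution for sizing:
# DE mutation/crossover proposes candidate sizings, a cheap surrogate
# pre-screens them, and only the most promising candidate is sent to the
# expensive "simulator".
import numpy as np

rng = np.random.default_rng(4)
DIM, POP, F, CR = 6, 12, 0.7, 0.9

def simulate(x):
    """Placeholder for an expensive circuit simulation (single objective)."""
    return np.sum((x - 0.3) ** 2)

def surrogate_predict(x, X_seen, y_seen):
    """Crude inverse-distance-weighted prediction from evaluated designs."""
    d = np.linalg.norm(X_seen - x, axis=1) + 1e-9
    w = 1.0 / d
    return float(w @ y_seen / w.sum())

pop = rng.uniform(size=(POP, DIM))
fit = np.array([simulate(x) for x in pop])

for _ in range(20):
    # DE/rand/1 mutation + binomial crossover for every population member
    trials = []
    for i in range(POP):
        a, b, c = pop[rng.choice(POP, 3, replace=False)]
        mutant = np.clip(a + F * (b - c), 0.0, 1.0)
        mask = rng.random(DIM) < CR
        trials.append(np.where(mask, mutant, pop[i]))
    trials = np.array(trials)
    # Surrogate pre-screening: simulate only the most promising trial
    scores = [surrogate_predict(t, pop, fit) for t in trials]
    i_best = int(np.argmin(scores))
    y_new = simulate(trials[i_best])
    if y_new < fit[i_best]:
        pop[i_best], fit[i_best] = trials[i_best], y_new

print(fit.min())
```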
arXiv Detail & Related papers (2022-06-06T06:48:45Z)
- Variable Functioning and Its Application to Large Scale Steel Frame Design Optimization [15.86197261674868]
A concept-based approach called variable functioning ($Fx$) is introduced to reduce the number of optimization variables and narrow down the search space.
Using a problem structure analysis technique and engineering expert knowledge, the $Fx$ method is applied to enhance the steel frame design optimization process.
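The following toy sketch illustrates one plausible reading of the variable functioning idea from this summary: per-storey section areas are tied to a low-dimensional parametric trend, so the optimizer searches over a handful of parameters instead of one variable per storey. The linear trend, the bounds, and the placeholder objective are assumptions, not the paper's $Fx$ definition.

```python
# Illustrative sketch: instead of optimizing one cross-section area per storey,
# the areas are tied to a small parametric function of the storey index,
# shrinking the search space from 40 variables to 2.
import numpy as np

N_STOREYS = 40

def fx_areas(theta, storeys=np.arange(N_STOREYS)):
    """Map a few parameters to per-storey section areas via a simple trend."""
    a0, slope = theta
    return np.maximum(a0 + slope * (N_STOREYS - 1 - storeys), 1.0)

def frame_weight(areas):
    """Placeholder objective standing in for a structural analysis."""
    return areas.sum()

# The optimizer now searches over 2 parameters instead of 40 areas.
best = min(
    ((frame_weight(fx_areas(t)), t)
     for t in [(5.0, 0.1), (3.0, 0.2), (8.0, 0.05)]),
    key=lambda p: p[0],
)
print(best)
```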
arXiv Detail & Related papers (2022-05-15T12:43:25Z)
- Nonequilibrium Monte Carlo for unfreezing variables in hard combinatorial optimization [1.1783108699768]
We introduce a quantum-inspired family of nonlocal Nonequilibrium Monte Carlo (NMC) algorithms by developing an adaptive gradient-free strategy.
We observe significant speedup and robustness over both specialized solvers and generic solvers.
arXiv Detail & Related papers (2021-11-26T17:45:32Z)
- Distributed Averaging Methods for Randomized Second Order Optimization [54.51566432934556]
We consider distributed optimization problems where forming the Hessian is computationally challenging and communication is a bottleneck.
We develop unbiased parameter averaging methods for randomized second order optimization that employ sampling and sketching of the Hessian.
We also extend the framework of second order averaging methods to introduce an unbiased distributed optimization framework for heterogeneous computing systems.
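A rough sketch of the general setup, local Newton steps computed from sketched Hessians and then averaged, is given below for a regularized least-squares problem; the paper's unbiasedness corrections and heterogeneous-system extension are not reproduced.

```python
# Rough sketch of distributed randomized second-order optimization for
# regularized least squares: each worker sketches the data, forms an
# approximate Hessian, and the resulting Newton directions are averaged.
import numpy as np

rng = np.random.default_rng(3)
n, d, workers, m = 2000, 50, 8, 200
A = rng.normal(size=(n, d))
b = rng.normal(size=n)
lam = 1.0
x = np.zeros(d)

grad = A.T @ (A @ x - b) + lam * x
steps = []
for _ in range(workers):
    S = rng.normal(size=(m, n)) / np.sqrt(m)     # Gaussian sketch
    SA = S @ A
    H_hat = SA.T @ SA + lam * np.eye(d)          # sketched Hessian
    steps.append(np.linalg.solve(H_hat, grad))   # local approximate Newton step
x_new = x - np.mean(steps, axis=0)               # averaged update

exact = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)
print(np.linalg.norm(x_new - exact) / np.linalg.norm(exact))
```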
arXiv Detail & Related papers (2020-02-16T09:01:18Z)