DEFT-FUNNEL: an open-source global optimization solver for constrained
grey-box and black-box problems
- URL: http://arxiv.org/abs/1912.12637v1
- Date: Sun, 29 Dec 2019 11:43:53 GMT
- Authors: Phillipe R. Sampaio
- Abstract summary: DEFT-FUNNEL is an open-source global optimization algorithm for general constrained grey-box and black-box problems.
The code as well as the test sets used for experiments are available at the GitHub repository.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The fast-growing need for grey-box and black-box optimization methods for
constrained global optimization problems in fields such as medicine, chemistry,
engineering and artificial intelligence has contributed to the design of new,
efficient algorithms for finding the best possible solution. In this work, we
present DEFT-FUNNEL, an open-source global optimization algorithm for general
constrained grey-box and black-box problems that belongs to the class of
trust-region sequential quadratic optimization algorithms. It extends the
previous works by Sampaio and Toint (2015, 2016) to a global optimization
solver that is able to exploit information from closed-form functions.
Polynomial interpolation models are used as surrogates for the black-box
functions and a clustering-based multistart strategy is applied for searching
for the global minima. Numerical experiments show that DEFT-FUNNEL compares
favorably with other state-of-the-art methods on two sets of benchmark
problems: one set containing problems where every function is a black box and
another set with problems where some of the functions and their derivatives are
known to the solver. The code as well as the test sets used for experiments are
available at the GitHub repository http://github.com/phrsampaio/deft-funnel.
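The clustering-based multistart strategy described in the abstract can be illustrated with a minimal sketch. This is not the DEFT-FUNNEL implementation: the local solver (SciPy's L-BFGS-B rather than a trust-region SQP method), the sample counts, and the test function are all illustrative assumptions.

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.optimize import minimize

def multistart_minimize(f, bounds, n_samples=200, n_clusters=5, seed=0):
    """Illustrative clustering-based multistart: sample the box uniformly,
    cluster the samples, then run a local solver from the cheapest point
    of each cluster and keep the overall best result."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pts = rng.uniform(lo, hi, size=(n_samples, len(lo)))
    _, labels = kmeans2(pts, n_clusters, seed=seed, minit="points")
    best = None
    for k in range(n_clusters):
        cluster = pts[labels == k]
        if len(cluster) == 0:
            continue  # kmeans2 may leave a cluster empty
        x0 = min(cluster, key=f)  # cheapest sampled point in the cluster
        res = minimize(f, x0, bounds=list(zip(lo, hi)))
        if best is None or res.fun < best.fun:
            best = res
    return best

# Example: a multimodal test function (Rastrigin) on [-5, 5]^2
rastrigin = lambda x: 10 * len(x) + sum(xi**2 - 10 * np.cos(2 * np.pi * xi) for xi in x)
result = multistart_minimize(rastrigin, [(-5, 5), (-5, 5)])
```

Each local run lands in the basin nearest its cluster's best sample, so distinct clusters tend to discover distinct local minima, which is the rationale behind clustering-based multistart.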
Related papers
- Bayesian Optimization of Expensive Nested Grey-Box Functions [11.523746174066702]
We consider the problem of optimizing a grey-box objective function composed of both black-box and white-box functions.
A general formulation for such grey-box problems is given, which covers the existing grey-box optimization formulations as special cases.
We then design an optimism-driven algorithm to solve it.
arXiv Detail & Related papers (2023-06-08T12:18:18Z)
- Let the Flows Tell: Solving Graph Combinatorial Optimization Problems with GFlowNets [86.43523688236077]
Combinatorial optimization (CO) problems are often NP-hard and out of reach for exact algorithms.
GFlowNets have emerged as powerful machinery for efficiently sampling from composite unnormalized densities sequentially.
In this paper, we design Markov decision processes (MDPs) for different problems and propose to train conditional GFlowNets to sample from the solution space.
arXiv Detail & Related papers (2023-05-26T15:13:09Z)
- Sarah Frank-Wolfe: Methods for Constrained Optimization with Best Rates and Practical Features [65.64276393443346]
The Frank-Wolfe (FW) method is a popular approach for solving optimization problems with structured constraints.
We present two new variants of the algorithm for finite-sum minimization.
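For context on the Frank-Wolfe entry above, here is a minimal sketch of the classical FW step over the probability simplex (an assumed textbook setting, not the paper's new variants): the linear minimization oracle over the simplex simply picks the vertex with the smallest gradient coordinate.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iters=100):
    """Classical Frank-Wolfe over the probability simplex.
    The linear minimization oracle min_{s in simplex} <grad, s>
    is solved by picking the vertex e_i with smallest gradient entry."""
    x = x0.copy()
    for t in range(n_iters):
        g = grad(x)
        i = np.argmin(g)            # LMO: best simplex vertex
        s = np.zeros_like(x)
        s[i] = 1.0
        gamma = 2.0 / (t + 2.0)     # standard open-loop step size
        x = (1 - gamma) * x + gamma * s
    return x

# Example: minimize f(x) = ||x - c||^2 over the simplex, c feasible
c = np.array([0.2, 0.5, 0.3])
x = frank_wolfe_simplex(lambda x: 2 * (x - c), np.array([1.0, 0.0, 0.0]))
```

Because every iterate is a convex combination of simplex vertices, feasibility is maintained for free, which is the main appeal of FW for structured constraints.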
arXiv Detail & Related papers (2023-04-23T20:05:09Z)
- Enhanced Opposition Differential Evolution Algorithm for Multimodal Optimization [0.2538209532048866]
Many real-world problems are multimodal in nature, consisting of multiple optimal values.
Classical gradient-based methods fail for optimization problems in which the objective functions are either discontinuous or non-differentiable.
We have proposed an algorithm known as Enhanced Opposition Differential Evolution (EODE) algorithm to solve the MMOPs.
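The differential evolution machinery underlying the entry above can be sketched as follows. This is the classic DE/rand/1/bin generation step, not the EODE variant itself; population size, F, and CR values are illustrative.

```python
import numpy as np

def de_step(pop, f, F=0.8, CR=0.9, rng=None):
    """One generation of classic DE/rand/1/bin: mutation from three
    distinct random individuals, binomial crossover, greedy selection."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n, d = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        idx = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
        a, b, c = pop[idx]
        mutant = a + F * (b - c)
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True       # guarantee at least one mutant gene
        trial = np.where(cross, mutant, pop[i])
        if f(trial) <= f(pop[i]):           # greedy one-to-one selection
            new_pop[i] = trial
    return new_pop

# Example: a few generations on the sphere function
sphere = lambda x: float(np.sum(x ** 2))
rng = np.random.default_rng(0)
pop = rng.uniform(-5.0, 5.0, size=(20, 3))
for _ in range(50):
    pop = de_step(pop, sphere, rng=rng)
best = min(sphere(x) for x in pop)
```

Because selection is greedy per individual, the population's best value never worsens between generations, which makes DE robust on discontinuous or non-differentiable objectives where gradient methods fail.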
arXiv Detail & Related papers (2022-08-23T16:18:27Z)
- A Constrained Optimization Approach to Bilevel Optimization with Multiple Inner Minima [49.320758794766185]
We propose a new approach, which converts the bilevel problem to an equivalent constrained optimization problem that can then be solved with a primal-dual algorithm.
Such an approach enjoys a few advantages: (a) it addresses the multiple-inner-minima challenge; (b) it is fully first-order efficient, requiring no Jacobian computations.
arXiv Detail & Related papers (2022-03-01T18:20:01Z)
- Continuation Newton methods with deflation techniques for global optimization problems [3.705839280172101]
A global minimum point of an optimization problem is of interest in engineering.
In this article, we consider a new memetic algorithm for this nonlinear large-scale problem.
According to our numerical experiments, the new algorithm works well for unconstrained problems.
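The deflation idea referenced in the entry above can be illustrated in its generic root-finding form (a classical sketch, not the paper's continuation Newton scheme): once a solution is found, the residual is multiplied by a factor that blows up near that solution, so a solver restarted from the same point is pushed toward a different one.

```python
import numpy as np
from scipy.optimize import fsolve

def deflate(g, roots, p=2, shift=1.0):
    """Wrap residual g with the classical deflation operator
    M(x) = prod_r (1 / ||x - r||^p + shift), so previously found
    roots repel the solver toward undiscovered ones (generic sketch)."""
    def deflated(x):
        m = 1.0
        for r in roots:
            m *= 1.0 / np.linalg.norm(x - r) ** p + shift
        return m * g(x)
    return deflated

# Example: find both roots of g(x) = x^2 - 4 from the same start point
g = lambda x: x ** 2 - 4.0
r1 = fsolve(g, np.array([1.0]))[0]                       # finds the root at 2.0
r2 = fsolve(deflate(g, [np.array([r1])]), np.array([1.0]))[0]
```

The deflated residual still vanishes at the remaining root but diverges near the known one, which is what forces the second solve to a new solution despite the identical starting point.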
arXiv Detail & Related papers (2021-07-29T09:53:49Z)
- Learning Primal Heuristics for Mixed Integer Programs [5.766851255770718]
We investigate whether effective primal heuristics can be automatically learned via machine learning.
We propose a new method to represent an optimization problem as a graph, and train a Graph Convolutional Network on solved problem instances with known optimal solutions.
The prediction of variable solutions is then leveraged by a novel configuration of the B&B method, Probabilistic Branching with guided Depth-first Search.
arXiv Detail & Related papers (2021-07-02T06:46:23Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information [78.78486761923855]
In many real world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z)
- Incorporating Expert Prior in Bayesian Optimisation via Space Warping [54.412024556499254]
In large search spaces, the algorithm passes through several low-function-value regions before reaching the optimum of the function.
One approach to shorten this cold-start phase is to use prior knowledge that can accelerate the optimisation.
In this paper, we represent the prior knowledge about the function optimum through a prior distribution.
The prior distribution is then used to warp the search space so that it expands around the high-probability region of the function optimum and shrinks around the low-probability region.
arXiv Detail & Related papers (2020-03-27T06:18:49Z)
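One simple way to realize the space-warping idea from the last entry is to push uniform samples through the inverse CDF of the prior, so that candidate points concentrate where the optimum is believed to lie. This is an illustrative one-dimensional construction with a truncated Gaussian prior, not the paper's method.

```python
import numpy as np
from scipy.stats import norm

def warp_samples(u, prior_mu, prior_sigma, lo, hi):
    """Map uniform samples u in [0, 1] through the inverse CDF of a
    Gaussian prior truncated to [lo, hi], so samples concentrate near
    the believed location of the optimum (prior-based space warping)."""
    a = norm.cdf(lo, prior_mu, prior_sigma)
    b = norm.cdf(hi, prior_mu, prior_sigma)
    return norm.ppf(a + u * (b - a), prior_mu, prior_sigma)

# A uniform grid on [0, 1] warped toward a prior centered at 3.0 on [0, 10]
u = np.linspace(0.01, 0.99, 99)
x = warp_samples(u, prior_mu=3.0, prior_sigma=1.0, lo=0.0, hi=10.0)
```

Regions where the prior density is high receive many warped samples while low-probability regions receive few, which is exactly the expand/shrink behavior the abstract describes.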
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.