Global optimization using random embeddings
- URL: http://arxiv.org/abs/2107.12102v1
- Date: Mon, 26 Jul 2021 10:45:49 GMT
- Title: Global optimization using random embeddings
- Authors: Coralia Cartis, Estelle Massart, Adilet Otemissov
- Abstract summary: We propose a random-subspace algorithmic framework for global optimization of Lipschitz-continuous objectives.
X-REGO randomly projects, in a sequential or simultaneous manner, the high-dimensional original problem into low-dimensional subproblems.
We show numerically that this variant efficiently finds both the effective dimension and an approximate global minimizer of the original problem.
- Score: 1.2891210250935143
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a random-subspace algorithmic framework for global optimization of
Lipschitz-continuous objectives, and analyse its convergence using novel tools
from conic integral geometry. X-REGO randomly projects, in a sequential or
simultaneous manner, the high-dimensional original problem into low-dimensional
subproblems that can then be solved with any global, or even local,
optimization solver. We estimate the probability that the randomly-embedded
subproblem shares (approximately) the same global optimum as the original
problem. This success probability is then used to show convergence of X-REGO to
an approximate global solution of the original problem, under weak assumptions
on the problem (having a strictly feasible global solution) and on the solver
(guaranteed to find an approximate global solution of the reduced problem with
sufficiently high probability). In the particular case of unconstrained
objectives with low effective dimension, that only vary over a low-dimensional
subspace, we propose an X-REGO variant that explores random subspaces of
increasing dimension until finding the effective dimension of the problem,
leading to X-REGO globally converging after a finite number of embeddings,
proportional to the effective dimension. We show numerically that this variant
efficiently finds both the effective dimension and an approximate global
minimizer of the original problem.
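For intuition, the snippet below gives a minimal Python sketch of the sequential random-embedding idea from the abstract: each round draws a random Gaussian matrix $A$, solves the reduced problem $\min_y f(x_k + Ay)$ with an off-the-shelf solver, and keeps the best point found. Function names, the Nelder-Mead solver, and the toy objective are our own illustrative assumptions; this is not the authors' X-REGO implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical sketch of sequential random-embedding minimization:
# each round restricts f to a random low-dimensional affine subspace
# through the current best point and solves the subproblem with an
# off-the-shelf solver (here Nelder-Mead; any solver would do).
def random_embedding_minimize(f, dim, low_dim, n_embeddings=20, seed=0):
    rng = np.random.default_rng(seed)
    x_best = np.zeros(dim)          # current best point in the full space
    f_best = f(x_best)
    for _ in range(n_embeddings):
        A = rng.standard_normal((dim, low_dim))   # random Gaussian embedding
        reduced = lambda y: f(x_best + A @ y)     # low-dimensional subproblem
        res = minimize(reduced, np.zeros(low_dim), method="Nelder-Mead")
        if res.fun < f_best:                      # keep only improvements
            f_best, x_best = res.fun, x_best + A @ res.x
    return x_best, f_best

# Toy objective with effective dimension 2: it varies only along
# coordinates 3 and 57 of a 100-dimensional space.
Q = np.zeros((2, 100))
Q[0, 3] = Q[1, 57] = 1.0
f = lambda x: float((Q @ x - 1.0) @ (Q @ x - 1.0))
x_hat, f_hat = random_embedding_minimize(f, dim=100, low_dim=3)
print(f"best value found: {f_hat:.3e}")
```

The adaptive variant described in the abstract would wrap such a routine in an outer loop over increasing values of low_dim, stopping once additional dimensions yield no further improvement, which is how the effective dimension is discovered.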
Related papers
- Distributionally Robust Optimization via Iterative Algorithms in Continuous Probability Spaces [6.992239210938067]
We consider a minimax problem motivated by distributionally robust optimization (DRO) when the worst-case distribution is continuous.
Recent research has explored learning the worst-case distribution using neural network-based generative models.
This paper addresses this theoretical challenge by presenting an iterative algorithm for solving such minimax problems.
arXiv Detail & Related papers (2024-12-29T19:31:23Z)
- ProGO: Probabilistic Global Optimizer [9.772380490791635]
In this paper we develop an algorithm that converges to the global optimum under mild conditions.
We show that the proposed algorithm outperforms many existing state-of-the-art methods by an order of magnitude.
arXiv Detail & Related papers (2023-10-04T22:23:40Z)
- Multi-Phase Relaxation Labeling for Square Jigsaw Puzzle Solving [73.58829980121767]
We present a novel method for solving square jigsaw puzzles based on global optimization.
The method is fully automatic, assumes no prior information, and can handle puzzles with known or unknown piece orientation.
arXiv Detail & Related papers (2023-03-26T18:53:51Z)
- Fully Stochastic Trust-Region Sequential Quadratic Programming for Equality-Constrained Optimization Problems [62.83783246648714]
We propose a trust-region stochastic sequential quadratic programming algorithm (TR-StoSQP) to solve nonlinear optimization problems with stochastic objectives and deterministic equality constraints.
The algorithm adaptively selects the trust-region radius and, compared to the existing line-search StoSQP schemes, allows us to utilize indefinite Hessian matrices.
arXiv Detail & Related papers (2022-11-29T05:52:17Z)
- A Dimensionality Reduction Method for Finding Least Favorable Priors with a Focus on Bregman Divergence [108.28566246421742]
This paper develops a dimensionality reduction method that allows us to move the optimization to a finite-dimensional setting with an explicit bound on the dimension.
In order to make progress on the problem, we restrict ourselves to Bayesian risks induced by a relatively large class of loss functions, namely Bregman divergences.
arXiv Detail & Related papers (2022-02-23T16:22:28Z)
- A framework for bilevel optimization that enables stochastic and global variance reduction algorithms [21.67411847762289]
Bilevel optimization is a problem of minimizing a value function which involves the arg-minimum of another function.
We introduce a novel framework, in which the solution of the inner problem, the solution of the linear system, and the main variable evolve at the same time.
We demonstrate that SABA, an adaptation of the celebrated SAGA algorithm in our framework, has an $O(\frac{1}{T})$ convergence rate.
arXiv Detail & Related papers (2022-01-31T18:17:25Z)
- Distributed stochastic proximal algorithm with random reshuffling for non-smooth finite-sum optimization [28.862321453597918]
Non-smooth finite-sum minimization is a fundamental problem in machine learning.
This paper develops a distributed proximal-gradient algorithm with random reshuffling to solve the problem.
arXiv Detail & Related papers (2021-11-06T07:29:55Z)
- Runtime Analysis of Single- and Multi-Objective Evolutionary Algorithms for Chance Constrained Optimization Problems with Normally Distributed Random Variables [11.310502327308575]
We study the scenario of components that are independent and normally distributed.
We introduce a multi-objective formulation of the problem which trades off the expected cost and its variance.
We prove that this approach can also be used to compute a set of optimal solutions for the chance constrained minimum spanning tree problem.
arXiv Detail & Related papers (2021-09-13T09:24:23Z)
- Efficient Methods for Structured Nonconvex-Nonconcave Min-Max Optimization [98.0595480384208]
We propose a generalization of the extragradient algorithm which converges to a stationary point.
The algorithm applies not only to Euclidean spaces, but also to general $p$-normed finite-dimensional vector spaces.
arXiv Detail & Related papers (2020-10-31T21:35:42Z)
- Stochastic Saddle-Point Optimization for Wasserstein Barycenters [69.68068088508505]
We consider the population Wasserstein barycenter problem for random probability measures supported on a finite set of points and generated by an online stream of data.
We employ the structure of the problem and obtain a convex-concave saddle-point reformulation of this problem.
In the setting when the distribution of random probability measures is discrete, we propose an optimization algorithm and estimate its complexity.
arXiv Detail & Related papers (2020-06-11T19:40:38Z)
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT); a minimal sketch-and-solve example follows below.
arXiv Detail & Related papers (2020-06-10T15:00:09Z)
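As a companion to the sketching entry above, here is a minimal, hypothetical sketch-and-solve example for L2-regularized least squares using a plain Gaussian embedding (the SRHT case is omitted). All names are our own, and this is a generic textbook construction, not the paper's effective-dimension-adaptive method.

```python
import numpy as np

# Hypothetical sketch-and-solve for L2-regularized least squares: compress
# the n rows of (A, b) with a Gaussian embedding S, then solve the reduced
# regularized normal equations. Illustrative only; not the paper's
# effective-dimension-adaptive method.
def sketched_ridge(A, b, lam, sketch_size, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    S = rng.standard_normal((sketch_size, n)) / np.sqrt(sketch_size)
    SA, Sb = S @ A, S @ b                         # sketched data
    return np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)

rng = np.random.default_rng(1)
A = rng.standard_normal((5000, 50))
x_true = rng.standard_normal(50)
b = A @ x_true + 0.01 * rng.standard_normal(5000)
x_hat = sketched_ridge(A, b, lam=1e-2, sketch_size=400)
print(np.linalg.norm(x_hat - x_true))             # small: sketch preserves geometry
```

Solving the 400-row sketched system stands in for the full 5000-row problem; the sketch size trades accuracy against cost.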
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.