META-SMGO-$\Delta$: similarity as a prior in black-box optimization
- URL: http://arxiv.org/abs/2305.00438v1
- Date: Sun, 30 Apr 2023 09:41:04 GMT
- Title: META-SMGO-$\Delta$: similarity as a prior in black-box optimization
- Authors: Riccardo Busetto, Valentina Breschi, Simone Formentin
- Abstract summary: We propose to incorporate the META-learning rationale into SMGO-$\Delta$, a global optimization approach recently proposed in the literature.
We show the practical benefits of our META-extension of the baseline algorithm, while providing theoretical bounds on its performance.
- Score: 1.282675419968047
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When solving global optimization problems in practice, one often ends up
repeatedly solving problems that are similar to each other. By providing a
rigorous definition of similarity, in this work we propose to incorporate the
META-learning rationale into SMGO-$\Delta$, a global optimization approach
recently proposed in the literature, to exploit priors obtained from similar
past experience to efficiently solve new (similar) problems. Through a
benchmark numerical example we show the practical benefits of our
META-extension of the baseline algorithm, while providing theoretical bounds on
its performance.
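Although the paper's own construction (the META prior built from similar past problems and the Lipschitz-constant estimation of SMGO-$\Delta$) is not reproduced here, the set-membership idea underlying SMGO-type methods can be illustrated generically: assuming only a Lipschitz bound gamma on the unknown cost, past samples induce guaranteed bounds on the function, and the next query is placed where the lower bound is smallest. A minimal 1-D sketch, in which the function names, candidate-grid strategy, and toy objective are all illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def lipschitz_lower_bound(x_query, X, f_vals, gamma):
    # Set-membership bound: f(x) >= max_i ( f(x_i) - gamma * |x - x_i| )
    # holds for any gamma-Lipschitz function consistent with the samples.
    d = np.abs(x_query[:, None] - X[None, :])
    return np.max(f_vals[None, :] - gamma * d, axis=1)

def next_sample(X, f_vals, gamma, candidates):
    # Query where the guaranteed lower bound is smallest (most promising point).
    lb = lipschitz_lower_bound(candidates, X, f_vals, gamma)
    return candidates[np.argmin(lb)]

# Toy run: f(x) = (x - 0.3)^2 on [0, 1]; its slope is at most 1.4, so gamma = 2 is a valid bound.
f = lambda x: (x - 0.3) ** 2
X = np.array([0.0, 1.0])
fv = f(X)
cands = np.linspace(0.0, 1.0, 101)
for _ in range(10):
    x_new = next_sample(X, fv, gamma=2.0, candidates=cands)
    X = np.append(X, x_new)
    fv = np.append(fv, f(x_new))
print("best x:", X[np.argmin(fv)])
```

A tighter gamma (such as one informed by similar past problems, in the spirit of the META extension) shrinks the uncertainty set and concentrates queries near the optimum sooner.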
Related papers
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled
Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
arXiv Detail & Related papers (2023-12-28T23:15:18Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and AML.
This paper proposes algorithms for conditional stochastic optimization in the distributed federated learning setting.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Robust expected improvement for Bayesian optimization [1.8130068086063336]
We propose a surrogate modeling and active learning technique called robust expected improvement (REI) that ports adversarial methodology into the BO/GP framework.
We illustrate and draw comparisons to several competitors on benchmark synthetic exercises and real problems of varying complexity.
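For context, the classical expected improvement acquisition that REI builds on has a closed form under a Gaussian posterior with mean mu and standard deviation sigma. The sketch below implements only this standard baseline for minimization; the paper's robust, adversarially hedged modification is not reproduced, and the function name is an illustrative choice:

```python
import math

def expected_improvement(mu, sigma, f_best):
    # Classical EI for minimization: E[max(f_best - f(x), 0)] under f(x) ~ N(mu, sigma^2).
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)  # degenerate posterior: improvement is deterministic
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal density
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal CDF
    return (f_best - mu) * cdf + sigma * pdf
```

In a BO loop, one would evaluate this at candidate inputs using the GP posterior and query the maximizer; a robust variant instead hedges against adversarial perturbations of the input before scoring it.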
arXiv Detail & Related papers (2023-02-16T22:34:28Z) - Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z) - Optimizer Amalgamation [124.33523126363728]
We are motivated to study a new problem named Optimizer Amalgamation: how can we best combine a pool of "teacher" optimizers into a single "student" optimizer that can have stronger problem-specific performance?
First, we define three differentiable mechanisms to amalgamate a pool of optimizers by gradient descent.
In order to reduce variance of the process, we also explore methods to stabilize the process by perturbing the target.
arXiv Detail & Related papers (2022-03-12T16:07:57Z) - Teaching Networks to Solve Optimization Problems [13.803078209630444]
We propose to replace the iterative solvers altogether with a trainable parametric set function.
We show the feasibility of learning such parametric (set) functions to solve various classic optimization problems.
arXiv Detail & Related papers (2022-02-08T19:13:13Z) - A unified surrogate-based scheme for black-box and preference-based
optimization [2.561649173827544]
We show that black-box and preference-based optimization problems are closely related and can be solved using the same family of approaches.
We propose the generalized Metric Response Surface (gMRS) algorithm, an optimization scheme that is a generalization of the popular MSRS framework.
arXiv Detail & Related papers (2022-02-03T08:47:54Z) - Outlier-Robust Sparse Estimation via Non-Convex Optimization [73.18654719887205]
We explore the connection between outlier-robust high-dimensional statistics and non-convex optimization in the presence of sparsity constraints.
We develop novel and simple optimization formulations for these problems.
As a corollary, we obtain that any first-order method that efficiently converges to stationarity yields an efficient algorithm for these tasks.
arXiv Detail & Related papers (2021-09-23T17:38:24Z) - Solving Inverse Problems by Joint Posterior Maximization with
Autoencoding Prior [0.0]
We address the problem of solving ill-posed inverse problems in imaging where the prior is a variational autoencoder (VAE).
We show that our technique is well suited to optimizing the proposed objective function.
Results also show that our approach provides robust estimates.
arXiv Detail & Related papers (2021-03-02T11:18:34Z) - Divide and Learn: A Divide and Conquer Approach for Predict+Optimize [50.03608569227359]
The predict+optimize problem combines machine learning of problem coefficients with an optimization problem that uses the predicted coefficients.
We show how to directly express the loss of the optimization problem in terms of the predicted coefficients as a piece-wise linear function.
We propose a novel divide-and-conquer algorithm to tackle optimization problems without this restriction and predict their coefficients using the optimization loss.
arXiv Detail & Related papers (2020-12-04T00:26:56Z) - Efficient Consensus Model based on Proximal Gradient Method applied to
Convolutional Sparse Problems [2.335152769484957]
We derive and detail a theoretical analysis of an efficient consensus algorithm based on the proximal gradient (PG) approach.
The proposed algorithm is also applied to another particular convolutional problem for the anomaly detection task.
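The proximal gradient iteration at the core of such schemes is standard: alternate a gradient step on the smooth data-fit term with the proximal operator of the non-smooth regularizer. A minimal single-agent sketch for the l1-regularized least-squares case (the paper's consensus/distributed structure and convolutional dictionaries are not reproduced; names and step counts are illustrative):

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (element-wise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, steps=500):
    # ISTA: minimize 0.5 * ||A x - b||^2 + lam * ||x||_1.
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)       # gradient of the smooth least-squares term
        x = soft_threshold(x - grad / L, lam / L)  # proximal (shrinkage) step
    return x

x_hat = proximal_gradient(np.eye(3), np.array([3.0, 0.05, -2.0]), lam=0.1)
print(x_hat)
```

A consensus version would run such updates in parallel over data partitions and average (or otherwise reconcile) the local iterates, which is where the analysis of such papers typically concentrates.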
arXiv Detail & Related papers (2020-11-19T20:52:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.