Fighting the curse of dimensionality: A machine learning approach to
finding global optima
- URL: http://arxiv.org/abs/2110.14985v1
- Date: Thu, 28 Oct 2021 09:50:29 GMT
- Title: Fighting the curse of dimensionality: A machine learning approach to
finding global optima
- Authors: Julian F. Schumann, Alejandro M. Aragón
- Abstract summary: This paper shows how to find global optima in structural optimization problems.
By exploiting the behavior of certain cost functions we either obtain the global optimum at best or obtain superior results at worst when compared to established optimization procedures.
- Score: 77.34726150561087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Finding global optima in high-dimensional optimization problems is extremely
challenging since the number of function evaluations required to sufficiently
explore the design space increases exponentially with its dimensionality.
Furthermore, non-convex cost functions render local gradient-based search
techniques ineffective. To overcome these difficulties, here we demonstrate the
use of machine learning to find global minima, whereby autoencoders are used to
drastically reduce the search space dimensionality, and optima are found by
surveying the lower-dimensional latent spaces. The methodology is tested on
benchmark functions and on a structural optimization problem, where we show
that by exploiting the behavior of certain cost functions we either obtain the
global optimum at best or obtain superior results at worst when compared to
established optimization procedures.
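The workflow described in the abstract can be illustrated with a short sketch. The code below is a minimal, hypothetical rendering of the idea, not the authors' implementation: sample the high-dimensional design space, train an autoencoder to compress the samples into a low-dimensional latent space, then survey that latent space on a grid and decode each latent point back to a full design for evaluation. The benchmark function, network architecture, sampling scheme, and hyperparameters are all illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def rastrigin(x):
    # Standard Rastrigin benchmark: highly multimodal, global minimum at the origin.
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

dim, latent_dim, n_samples = 20, 2, 5000
rng = np.random.default_rng(0)
X = rng.uniform(-5.12, 5.12, size=(n_samples, dim)).astype(np.float32)

# Autoencoder: the encoder compresses designs to `latent_dim` coordinates,
# the decoder maps latent coordinates back to full designs.
encoder = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, dim))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
data = torch.from_numpy(X)
for _ in range(200):
    recon = decoder(encoder(data))
    loss = ((recon - data) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Survey the 2-D latent space on a coarse grid, decode each latent point back
# to the full 20-D design space, and evaluate the true cost there.
grid = np.linspace(-3.0, 3.0, 50, dtype=np.float32)
zs = np.array([(a, b) for a in grid for b in grid], dtype=np.float32)
with torch.no_grad():
    candidates = decoder(torch.from_numpy(zs)).numpy()
costs = rastrigin(candidates)
best = candidates[np.argmin(costs)]
print("best surveyed cost:", costs.min(), "at design:", np.round(best, 2))
```

Uniform random sampling, the grid resolution, and the small two-layer networks are placeholders; how the design space is sampled and how the latent survey is refined would depend on the actual problem and on the paper's procedure.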
Related papers
- Super Gradient Descent: Global Optimization requires Global Gradient [0.0]
This article introduces a novel optimization method that guarantees convergence to the global minimum for any k-Lipschitz function defined on a closed interval.
Our approach addresses the limitations of traditional optimization algorithms, which often get trapped in local minima.
arXiv Detail & Related papers (2024-10-25T17:28:39Z)
- Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
arXiv Detail & Related papers (2024-06-12T16:58:41Z)
- Robust Entropy Search for Safe Efficient Bayesian Optimization [40.56709991743249]
We develop an efficient information-based acquisition function that we call Robust Entropy Search (RES).
RES reliably finds robust optima, outperforming state-of-the-art algorithms.
arXiv Detail & Related papers (2024-05-29T13:00:10Z)
- Derivative-free tree optimization for complex systems [19.359472033285865]
A tremendous range of design tasks in materials, physics, and biology can be formulated as finding the optimum of an objective function of many parameters whose derivatives are unavailable.
Here, we present a derivative-free tree search method for optimizing such complex systems.
arXiv Detail & Related papers (2024-04-05T12:37:08Z)
- Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO)
ZOPO incorporates a Neural Tangent Kernel-based derived Gaussian process into standard zeroth-order optimization for an efficient search of well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both the optimization performance and the query efficiency.
arXiv Detail & Related papers (2024-03-05T14:18:15Z)
- Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adopt an adaptation of Monte Carlo tree search, equipped with rejection sampling and equivalent-form detection.
arXiv Detail & Related papers (2022-09-27T17:51:31Z)
- Bayesian Optimization for auto-tuning GPU kernels [0.0]
Finding optimal parameter configurations for GPU kernels is a non-trivial exercise for large search spaces, even when automated.
We introduce a novel contextual exploration factor as well as new acquisition functions with improved scalability, combined with an informed function selection mechanism.
arXiv Detail & Related papers (2021-11-26T11:26:26Z)
- Why Do Local Methods Solve Nonconvex Problems? [54.284687261929115]
Nonconvex optimization is ubiquitous in modern machine learning.
We rigorously formalize it for instances of machine learning problems.
We hypothesize a unified explanation for this phenomenon.
arXiv Detail & Related papers (2021-03-24T19:34:11Z)
- An adaptive stochastic gradient-free approach for high-dimensional blackbox optimization [0.0]
We propose an adaptive stochastic gradient-free (ASGF) approach for high-dimensional non-smooth problems.
We illustrate the performance of this method on benchmark global optimization problems and learning tasks; a generic sketch of this class of smoothing-based method appears after this list.
arXiv Detail & Related papers (2020-06-18T22:47:58Z)
- Optimizing Wireless Systems Using Unsupervised and Reinforced-Unsupervised Deep Learning [96.01176486957226]
Resource allocation and transceivers in wireless networks are usually designed by solving optimization problems.
In this article, we introduce unsupervised and reinforced-unsupervised learning frameworks for solving both variable and functional optimization problems.
arXiv Detail & Related papers (2020-01-03T11:01:52Z)
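As noted in the adaptive stochastic gradient-free entry above, here is a generic sketch of a Gaussian-smoothing, derivative-free descent step of the kind that family of methods builds on. It is not the ASGF algorithm itself; the estimator, step size, smoothing radius, and annealing schedule are illustrative assumptions.

```python
import numpy as np

def smoothed_gradient(f, x, sigma=0.1, n_dirs=32, rng=None):
    """Estimate the gradient of the Gaussian-smoothed objective E_u[f(x + sigma*u)]
    with antithetic random directions; no analytical derivatives are needed."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal((n_dirs, x.size))
    fwd = np.array([f(x + sigma * ui) for ui in u])
    bwd = np.array([f(x - sigma * ui) for ui in u])
    return ((fwd - bwd)[:, None] * u).mean(axis=0) / (2.0 * sigma)

def gradient_free_minimize(f, x0, steps=500, lr=0.05, sigma=0.5, shrink=0.995):
    # Plain descent on the smoothed objective with a slowly shrinking smoothing
    # radius (this annealing schedule is purely illustrative).
    x = np.asarray(x0, dtype=float).copy()
    rng = np.random.default_rng(0)
    for _ in range(steps):
        x -= lr * smoothed_gradient(f, x, sigma, rng=rng)
        sigma *= shrink
    return x

# Example: a non-smooth 10-dimensional objective with its minimum at the origin.
objective = lambda x: np.sum(np.abs(x)) + 0.5 * np.sum(x ** 2)
print(gradient_free_minimize(objective, np.full(10, 3.0)))
```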
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.