An adaptive stochastic gradient-free approach for high-dimensional
blackbox optimization
- URL: http://arxiv.org/abs/2006.10887v2
- Date: Sat, 15 Jan 2022 19:24:40 GMT
- Title: An adaptive stochastic gradient-free approach for high-dimensional
blackbox optimization
- Authors: Anton Dereventsov, Clayton G. Webster, Joseph D. Daws Jr
- Abstract summary: We propose an adaptive stochastic gradient-free (ASGF) approach for high-dimensional nonconvex optimization problems.
We illustrate the performance of this method on benchmark global optimization problems and reinforcement learning tasks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we propose a novel adaptive stochastic gradient-free (ASGF)
approach for solving high-dimensional nonconvex optimization problems based on
function evaluations. We employ a directional Gaussian smoothing of the target
function that generates a surrogate of the gradient and assists in avoiding bad
local optima by utilizing nonlocal information of the loss landscape. Applying
a deterministic quadrature scheme results in a massively scalable technique
that is sample-efficient and achieves spectral accuracy. At each step we
randomly generate the search directions while primarily following the surrogate
of the smoothed gradient. This enables exploitation of the gradient direction
while maintaining sufficient space exploration, and accelerates convergence
towards the global extrema. In addition, we make use of a local approximation
of the Lipschitz constant in order to adaptively adjust the values of all
hyperparameters, thus removing the careful fine-tuning of current algorithms
that is often necessary to be successful when applied to a large class of
learning tasks. As such, the ASGF strategy offers significant improvements when
solving high-dimensional nonconvex optimization problems when compared to other
gradient-free methods (including the so-called "evolutionary strategies") as
well as iterative approaches that rely on the gradient information of the
objective function. We illustrate the improved performance of this method by
providing several comparative numerical studies on benchmark global
optimization problems and reinforcement learning tasks.
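To make the core mechanism concrete, below is a minimal sketch of a directional-Gaussian-smoothing gradient surrogate computed with Gauss-Hermite quadrature, as described in the abstract. This is a hedged illustration under stated assumptions, not the authors' implementation: the function name `dgs_gradient`, the default smoothing radius `sigma`, the quadrature size, and the use of an orthonormal direction set are illustrative choices, and the adaptive Lipschitz-based hyperparameter updates are omitted.

```python
import numpy as np

def dgs_gradient(f, x, directions, sigma=0.1, num_nodes=5):
    """Directional-Gaussian-smoothing gradient surrogate (illustrative sketch).

    For each direction d, the 1-D slice g(y) = f(x + y*d) is smoothed with a
    Gaussian of width sigma, and the derivative of the smoothed slice at y = 0
    is computed by Gauss-Hermite quadrature, which is spectrally accurate for
    smooth integrands. The directions are assumed to be (near-)orthonormal,
    e.g. a rotated basis whose first axis follows the previous smoothed gradient.
    """
    nodes, weights = np.polynomial.hermite.hermgauss(num_nodes)
    grad = np.zeros_like(x, dtype=float)
    for d in directions:
        # Derivative of the Gaussian-smoothed slice at the origin:
        #   g_sigma'(0) = sqrt(2)/(sigma*sqrt(pi)) * sum_k w_k * z_k * f(x + sigma*sqrt(2)*z_k*d)
        vals = np.array([f(x + sigma * np.sqrt(2.0) * z * d) for z in nodes])
        slope = np.sqrt(2.0) / (sigma * np.sqrt(np.pi)) * np.sum(weights * nodes * vals)
        grad += slope * d
    return grad

# Example usage on a simple quadratic with the coordinate basis as directions;
# for small sigma the surrogate is close to the true gradient 2*x.
f = lambda x: float(np.sum(x ** 2))
x = np.ones(5)
g = dgs_gradient(f, x, directions=np.eye(5))
x_next = x - 0.1 * g  # one (non-adaptive) descent step along the surrogate
```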
Related papers
- Gradient-Variation Online Learning under Generalized Smoothness [56.38427425920781]
Gradient-variation online learning aims to achieve regret guarantees that scale with the variation in the gradients of the online functions.
Recent efforts in neural network optimization suggest a generalized smoothness condition, allowing smoothness to correlate with gradient norms.
We provide applications to fast-rate convergence in games and to extended adversarial optimization.
arXiv Detail & Related papers (2024-08-17T02:22:08Z) - Using Stochastic Gradient Descent to Smooth Nonconvex Functions: Analysis of Implicit Graduated Optimization [0.6906005491572401]
We show that the noise in stochastic gradient descent (SGD) has the effect of smoothing the objective function.
We analyze a new graduated optimization algorithm that varies the degree of smoothing via the learning rate and batch size.
arXiv Detail & Related papers (2023-11-15T07:27:40Z) - ProGO: Probabilistic Global Optimizer [9.772380490791635]
In this paper we develop an algorithm that converges to the global optimum under some mild conditions.
We show that the proposed algorithm outperforms, by orders of magnitude, many existing state-of-the-art methods.
arXiv Detail & Related papers (2023-10-04T22:23:40Z) - Efficient Gradient Approximation Method for Constrained Bilevel
Optimization [2.0305676256390934]
Bilevel optimization has been developed to cope with large-scale, high-dimensional data.
This paper considers a constrained bilevel problem with convex and non-differentiable approximations.
arXiv Detail & Related papers (2023-02-03T19:34:56Z) - A Particle-based Sparse Gaussian Process Optimizer [5.672919245950197]
We present a new swarm-based framework utilizing the underlying dynamical process of gradient descent.
The biggest advantage of this approach is greater exploration around the current state before committing to a descent direction.
arXiv Detail & Related papers (2022-11-26T09:06:15Z) - An Empirical Evaluation of Zeroth-Order Optimization Methods on
AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD); a minimal sketch of this update appears after this list.
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z) - SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients [99.13839450032408]
It is desired to design a universal framework for adaptive algorithms to solve general problems.
In particular, our novel framework provides convergence guarantees for adaptive gradient methods in the nonconvex setting.
arXiv Detail & Related papers (2021-06-15T15:16:28Z) - Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box
Optimization Framework [100.36569795440889]
This work focuses on zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of complexity and function query cost.
arXiv Detail & Related papers (2020-12-21T17:29:58Z) - Towards Better Understanding of Adaptive Gradient Algorithms in
Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper we analyze a variant of the Optimistic Adagrad algorithm for nonconcave min-max problems.
Our experiments on GANs show that the advantage of adaptive over non-adaptive gradient algorithms can be observed empirically.
arXiv Detail & Related papers (2019-12-26T22:10:10Z) - On the Convergence of Adaptive Gradient Methods for Nonconvex Optimization [80.03647903934723]
We prove convergence of adaptive gradient methods to stationary points in expectation for nonconvex optimization.
Our analyses shed light on a better understanding of adaptive gradient methods in nonconvex optimization.
arXiv Detail & Related papers (2018-08-16T20:25:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.