Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box
Optimization Framework
- URL: http://arxiv.org/abs/2012.11518v1
- Date: Mon, 21 Dec 2020 17:29:58 GMT
- Title: Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box
Optimization Framework
- Authors: Pranay Sharma, Kaidi Xu, Sijia Liu, Pin-Yu Chen, Xue Lin and Pramod K.
Varshney
- Abstract summary: This work studies stochastic zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of iteration complexity and function query cost.
- Score: 100.36569795440889
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we focus on the study of stochastic zeroth-order (ZO)
optimization which does not require first-order gradient information and uses
only function evaluations. The problem of ZO optimization has emerged in many
recent machine learning applications, where the gradient of the objective
function is either unavailable or difficult to compute. In such cases, we can
approximate the full gradients or stochastic gradients through function value
based gradient estimates. Here, we propose a novel hybrid gradient estimator
(HGE), which takes advantage of the query-efficiency of random gradient
estimates as well as the variance-reduction of coordinate-wise gradient
estimates. We show that with a graceful design in coordinate importance
sampling, the proposed HGE-based ZO optimization method is efficient both in
terms of iteration complexity as well as function query cost. We provide a
thorough theoretical analysis of the convergence of our proposed method for
non-convex, convex, and strongly-convex optimization. We show that the
convergence rate that we derive generalizes the results for some prominent
existing methods in the nonconvex case, and matches the optimal result in the
convex case. We also corroborate the theory with a real-world black-box attack
generation application to demonstrate the empirical advantage of our method
over state-of-the-art ZO optimization approaches.
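To make the two ingredients of the estimator concrete, here is a minimal NumPy sketch that contrasts a random (Gaussian-direction) gradient estimate with a coordinate-wise finite-difference estimate and mixes them over a sampled coordinate subset. It is only an illustration of the general idea, not the authors' exact HGE: the smoothing parameter `mu`, the number of random directions, the importance weights `probs`, and the mixing rule are all assumptions made for this example.

```python
import numpy as np

def random_grad_est(f, x, mu=1e-3, num_dirs=10, rng=None):
    """Two-point random-direction (Gaussian smoothing) estimate.

    Uses 2 * num_dirs function queries: query-cheap, but higher variance."""
    rng = rng or np.random.default_rng()
    g = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.size)
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / num_dirs

def coord_grad_est(f, x, coords, mu=1e-3):
    """Coordinate-wise finite differences on a subset of coordinates.

    Uses 2 * len(coords) queries: low variance on the sampled coordinates."""
    g = np.zeros_like(x)
    for i in coords:
        e = np.zeros_like(x)
        e[i] = 1.0
        g[i] = (f(x + mu * e) - f(x - mu * e)) / (2.0 * mu)
    return g

def hybrid_grad_est(f, x, probs, num_coords=5, mu=1e-3, rng=None):
    """Illustrative hybrid (not the paper's exact HGE): coordinate-wise
    estimates on coordinates drawn with importance weights `probs`, and a
    random-direction estimate for the remaining coordinates."""
    rng = rng or np.random.default_rng()
    coords = rng.choice(x.size, size=num_coords, replace=False, p=probs)
    g = coord_grad_est(f, x, coords, mu)
    rest = np.setdiff1d(np.arange(x.size), coords)
    g[rest] = random_grad_est(f, x, mu, num_dirs=1, rng=rng)[rest]
    return g
```

A single descent step would then read `x = x - eta * hybrid_grad_est(f, x, probs)`, with `probs` concentrating queries on coordinates believed to matter most; the paper's analysis concerns how such coordinate importance sampling keeps both iteration complexity and query cost low.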
Related papers
- Stochastic Gradient Descent for Gaussian Processes Done Right [86.83678041846971]
We show that when done right -- by which we mean using specific insights from the optimisation and kernel communities -- gradient descent is highly effective.
We introduce a stochastic dual descent algorithm, explain its design in an intuitive manner and illustrate the design choices.
Our method places Gaussian process regression on par with state-of-the-art graph neural networks for molecular binding affinity prediction.
arXiv Detail & Related papers (2023-10-31T16:15:13Z) - Efficient Gradient Approximation Method for Constrained Bilevel
Optimization [2.0305676256390934]
Bilevel optimization has been developed to handle large-scale, high-dimensional data.
This paper considers a constrained bilevel problem with convex and non-differentiable approximations.
arXiv Detail & Related papers (2023-02-03T19:34:56Z) - COCO Denoiser: Using Co-Coercivity for Variance Reduction in Stochastic
Convex Optimization [4.970364068620608]
We exploit convexity and L-smoothness to improve the noisy estimates outputted by the gradient oracle.
We show that increasing the number and proximity of the queried points leads to better gradient estimates.
We also apply COCO in vanilla settings by plugging it into existing algorithms, such as SGD, Adam or STRSAGA.
arXiv Detail & Related papers (2021-09-07T17:21:09Z) - Constrained and Composite Optimization via Adaptive Sampling Methods [3.4219044933964944]
The motivation for this paper stems from the desire to develop an adaptive sampling method for solving constrained optimization problems.
The method proposed in this paper is a proximal gradient method that can also be applied to the composite optimization problem min f(x) + h(x), where f is convex (but not necessarily differentiable).
arXiv Detail & Related papers (2020-12-31T02:50:39Z) - Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a tool for many machine learning problems.
We propose a novel algorithm named stocBiO, featuring a sample-efficient hypergradient estimator.
arXiv Detail & Related papers (2020-10-15T18:09:48Z) - Projection-Free Adaptive Gradients for Large-Scale Optimization [22.0439695290991]
Frank-Wolfe algorithms occupy a unique position as they alleviate both computational burdens by querying only approximate first-order information from the objective.
We show that our method can improve the performance of adaptive algorithms for constrained optimization.
arXiv Detail & Related papers (2020-09-29T15:56:12Z) - An adaptive stochastic gradient-free approach for high-dimensional
blackbox optimization [0.0]
We propose an adaptive stochastic gradient-free (ASGF) approach for high-dimensional non-smooth problems.
We illustrate the performance of this method on benchmark global optimization problems and learning tasks.
arXiv Detail & Related papers (2020-06-18T22:47:58Z) - A Primer on Zeroth-Order Optimization in Signal Processing and Machine
Learning [95.85269649177336]
ZO optimization iteratively performs three major steps: gradient estimation, descent direction, and solution update (a minimal sketch of this loop appears after this list).
We demonstrate promising applications of ZO optimization, such as evaluating and generating explanations from black-box deep learning models, and efficient online sensor management.
arXiv Detail & Related papers (2020-06-11T06:50:35Z) - Towards Better Understanding of Adaptive Gradient Algorithms in
Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper we analyze a variant of the Optimistic Adagrad (OAdagrad) algorithm for nonconcave min-max problems.
Our experiments show that the advantage of adaptive gradient algorithms over non-adaptive ones in GAN training can be observed empirically.
arXiv Detail & Related papers (2019-12-26T22:10:10Z) - On the Convergence of Adaptive Gradient Methods for Nonconvex Optimization [80.03647903934723]
We prove convergence guarantees in expectation for adaptive gradient methods in nonconvex optimization.
Our analyses shed light on a better understanding of adaptive gradient methods in this setting.
arXiv Detail & Related papers (2018-08-16T20:25:28Z)
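For the three-step ZO template highlighted in the primer entry above, the following is a minimal sketch under assumed choices (a two-point Gaussian-direction estimator, a fixed step size, and a fixed iteration budget); it is not taken from any of the listed papers.

```python
import numpy as np

def zo_gradient_descent(f, x0, step=0.05, mu=1e-3, iters=500, seed=0):
    """Generic ZO loop: (1) gradient estimation from function values only,
    (2) descent direction, (3) solution update."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        # (1) gradient estimation via a two-point random difference
        u = rng.standard_normal(x.size)
        g = (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
        # (2) descent direction: plain negative estimated gradient
        d = -g
        # (3) solution update with a fixed step size
        x = x + step * d
    return x

# Example: minimize a simple quadratic treated as a black box.
f = lambda z: float(np.sum((z - 1.0) ** 2))
print(zo_gradient_descent(f, np.zeros(5)))  # approaches the all-ones vector
```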