A Primer on Zeroth-Order Optimization in Signal Processing and Machine
Learning
- URL: http://arxiv.org/abs/2006.06224v2
- Date: Sun, 21 Jun 2020 06:40:13 GMT
- Title: A Primer on Zeroth-Order Optimization in Signal Processing and Machine
Learning
- Authors: Sijia Liu, Pin-Yu Chen, Bhavya Kailkhura, Gaoyuan Zhang, Alfred Hero,
Pramod K. Varshney
- Abstract summary: ZO optimization iteratively performs three major steps: gradient estimation, descent direction computation, and solution update.
We demonstrate promising applications of ZO optimization, such as evaluating robustness of and generating explanations from black-box deep learning models, and efficient online sensor management.
- Score: 95.85269649177336
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Zeroth-order (ZO) optimization is a subset of gradient-free optimization that
emerges in many signal processing and machine learning applications. It is used
for solving optimization problems similarly to gradient-based methods. However,
it does not require the gradient, using only function evaluations.
Specifically, ZO optimization iteratively performs three major steps: gradient
estimation, descent direction computation, and solution update. In this paper,
we provide a comprehensive review of ZO optimization, with an emphasis on
showing the underlying intuition, optimization principles and recent advances
in convergence analysis. Moreover, we demonstrate promising applications of ZO
optimization, such as evaluating robustness and generating explanations from
black-box deep learning models, and efficient online sensor management.
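To make the three steps named in the abstract concrete, here is a minimal Python/NumPy sketch of ZO gradient descent built on a standard two-point random-direction gradient estimator. It is an illustration rather than the paper's algorithm; the objective f, smoothing radius mu, number of random directions q, and step size lr are all assumed, illustrative choices.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, q=20, rng=None):
    """Two-point random-direction ZO gradient estimate of f at x.

    Averages q directional finite differences along random unit vectors u_i:
    g ~= (1/q) * sum_i d * (f(x + mu*u_i) - f(x - mu*u_i)) / (2*mu) * u_i
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    g = np.zeros_like(x)
    for _ in range(q):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)                           # random unit direction
        fd = (f(x + mu * u) - f(x - mu * u)) / (2 * mu)  # directional finite difference
        g += d * fd * u
    return g / q

def zo_gradient_descent(f, x0, lr=0.1, steps=300, **est_kwargs):
    """Plain ZO gradient descent: estimate the gradient, pick a descent direction, update."""
    x = x0.copy()
    for _ in range(steps):
        g = zo_gradient(f, x, **est_kwargs)  # step 1: gradient estimation
        direction = -g                       # step 2: descent direction computation
        x = x + lr * direction               # step 3: solution update
    return x

# Toy usage: minimize a quadratic using only function evaluations.
f = lambda x: np.sum((x - 1.0) ** 2)
x_final = zo_gradient_descent(f, np.zeros(5))
print(x_final)  # should land close to the all-ones vector
```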
Related papers
- Two Optimizers Are Better Than One: LLM Catalyst Empowers Gradient-Based Optimization for Prompt Tuning [69.95292905263393]
We show that gradient-based optimization and large language models (LLMs) are complementary to each other, suggesting a collaborative optimization approach.
Our code is released at https://www.guozix.com/guozix/LLM-catalyst.
arXiv Detail & Related papers (2024-05-30T06:24:14Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD); a minimal sketch of this update follows this entry.
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
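As a rough illustration of the ZO-signGD update mentioned in the entry above (a sketch under assumed settings, not the authors' implementation), the step below uses only the element-wise sign of a two-point random-direction gradient estimate, so the update magnitude is controlled entirely by the step size lr; mu, q, lr, and steps are arbitrary illustrative values.

```python
import numpy as np

def zo_sign_gd(f, x0, mu=1e-3, q=10, lr=0.01, steps=500, rng=None):
    """ZO sign-based gradient descent: step along the sign of a ZO gradient estimate."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    d = x.size
    for _ in range(steps):
        g = np.zeros_like(x)
        for _ in range(q):                      # q two-point random-direction queries
            u = rng.standard_normal(d)
            u /= np.linalg.norm(u)
            g += d * (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
        x = x - lr * np.sign(g / q)             # sign step: magnitude-free update
    return x
```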
- Learning to Optimize Quasi-Newton Methods [22.504971951262004]
This paper introduces a novel machine learning optimizer called LODO, which meta-learns the best preconditioner online during optimization (a toy sketch of the idea appears after this entry).
Unlike other L2O methods, LODO does not require any meta-training on a training task distribution.
We show that the meta-learned preconditioner approximates the inverse Hessian in noisy loss landscapes and can represent a wide range of inverse Hessians.
arXiv Detail & Related papers (2022-10-11T03:47:14Z)
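For intuition about meta-learning a preconditioner online, here is a toy sketch. It is not LODO's actual parameterization or update rule, only the underlying idea: take a preconditioned step, then nudge the preconditioner P along the exact hypergradient of the loss evaluated at the new point; grad_f, lr_x, and lr_P are assumed names and values.

```python
import numpy as np

def preconditioner_learning_step(grad_f, x, P, lr_x=0.1, lr_P=0.01):
    """One online step of learning a preconditioner (illustrative only).

    Takes the preconditioned step x' = x - lr_x * P @ g, then updates P using
    the exact hypergradient d f(x') / dP = -lr_x * outer(grad_f(x'), g),
    so P drifts toward a matrix that makes the next step more effective
    (ideally toward an approximation of the inverse Hessian).
    """
    g = grad_f(x)
    x_new = x - lr_x * (P @ g)
    g_new = grad_f(x_new)
    P_new = P + lr_P * lr_x * np.outer(g_new, g)  # descend f(x') with respect to P
    return x_new, P_new
```

A full run would simply iterate this step, letting P adapt online without any meta-training on a separate task distribution.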
- Teaching Networks to Solve Optimization Problems [13.803078209630444]
We propose to replace the iterative solvers altogether with a trainable parametric set function.
We show the feasibility of learning such parametric (set) functions to solve various classic optimization problems.
arXiv Detail & Related papers (2022-02-08T19:13:13Z)
- First-Order Methods for Convex Optimization [2.578242050187029]
First-order methods have the potential to provide low-accuracy solutions at low computational complexity.
We give complete proofs for various key results, and highlight the unifying aspects of several optimization algorithms.
arXiv Detail & Related papers (2021-01-04T13:03:38Z)
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work studies zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that, with a graceful design of coordinate importance sampling, the proposed ZO optimization method is efficient in terms of both iteration complexity and function query cost; a minimal sketch of coordinate-wise ZO estimation with importance sampling follows this entry.
arXiv Detail & Related papers (2020-12-21T17:29:58Z)
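The coordinate importance sampling idea referenced above can be illustrated with the sketch below. It is a toy under assumed settings, not the paper's hybrid gradient descent method: coordinates are queried with non-uniform probabilities probs, and each finite-difference estimate is importance-weighted so the overall gradient estimate stays unbiased; budget and mu are arbitrary illustrative values.

```python
import numpy as np

def zo_coordinate_gradient(f, x, probs, budget=10, mu=1e-3, rng=None):
    """Coordinate-wise ZO gradient estimate with importance sampling.

    Samples `budget` coordinates i with probability probs[i], estimates each
    partial derivative by a two-point finite difference along the basis vector
    e_i, and reweights by 1 / (budget * probs[i]) so the estimate is unbiased.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    g = np.zeros_like(x)
    for i in rng.choice(d, size=budget, p=probs):   # importance-sampled coordinates
        e = np.zeros(d)
        e[i] = 1.0
        fd = (f(x + mu * e) - f(x - mu * e)) / (2 * mu)
        g[i] += fd / (budget * probs[i])            # importance weight keeps the estimate unbiased
    return g

# Illustrative choice of probs: favor coordinates believed to matter more,
# e.g. proportional to a running estimate of per-coordinate gradient magnitude.
```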
- Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a tool for many machine learning problems.
We propose a novel bilevel optimizer, stocBiO, built on a sample-efficient hypergradient estimator.
arXiv Detail & Related papers (2020-10-15T18:09:48Z)
- Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.