A New One-Point Residual-Feedback Oracle For Black-Box Learning and
Control
- URL: http://arxiv.org/abs/2006.10820v2
- Date: Wed, 8 Sep 2021 01:25:08 GMT
- Title: A New One-Point Residual-Feedback Oracle For Black-Box Learning and
Control
- Authors: Yan Zhang, Yi Zhou, Kaiyi Ji, Michael M. Zavlanos
- Abstract summary: We propose a novel one-point feedback scheme that queries the function value once at each iteration and estimates the gradient using the residual between two consecutive points.
We show that the proposed algorithm achieves the same convergence rate as the two-point scheme with uncontrollable data samples.
- Score: 28.679167097106813
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Zeroth-order optimization (ZO) algorithms have been recently used to solve
black-box or simulation-based learning and control problems, where the gradient
of the objective function cannot be easily computed but can be approximated
using the objective function values. Many existing ZO algorithms adopt
two-point feedback schemes due to their fast convergence rate compared to
one-point feedback schemes. However, two-point schemes require two evaluations
of the objective function at each iteration, which can be impractical in
applications where the data are not all available a priori, e.g., in online
optimization. In this paper, we propose a novel one-point feedback scheme that
queries the function value once at each iteration and estimates the gradient
using the residual between two consecutive points. When optimizing a
deterministic Lipschitz function, we show that the query complexity of ZO with
the proposed one-point residual feedback matches that of ZO with the existing
two-point schemes. Moreover, the query complexity of the proposed algorithm can
be improved when the objective function has Lipschitz gradient. Then, for
stochastic bandit optimization problems where only noisy objective function
values are given, we show that ZO with one-point residual feedback achieves the
same convergence rate as that of the two-point scheme with uncontrollable data
samples. We demonstrate the effectiveness of the proposed one-point residual
feedback via extensive numerical experiments.
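As a concrete illustration of the residual-feedback scheme described in the abstract, the following is a minimal sketch assuming a Gaussian perturbation direction and a toy quadratic objective; the step size, smoothing radius, and iteration count are illustrative choices rather than the paper's settings, and the exact scaling of the estimate depends on the sampling distribution (unit-sphere sampling adds a factor of the dimension).

```python
# Minimal sketch (not the paper's implementation) of zeroth-order gradient
# descent with one-point residual feedback: each iteration issues a single
# function query and forms the gradient estimate from the residual between
# the current and the previous query.
import numpy as np

def zo_residual_feedback(f, x0, delta=0.1, eta=1e-3, iters=3000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    prev_value = None                      # function value queried at the previous iteration
    for _ in range(iters):
        u = rng.standard_normal(d)         # Gaussian direction; unit-sphere sampling would
                                           # rescale the estimate by the dimension d
        value = f(x + delta * u)           # the single query of this iteration
        if prev_value is not None:
            g = ((value - prev_value) / delta) * u   # residual-feedback gradient estimate
            x = x - eta * g                          # standard ZO gradient step
        prev_value = value                 # reuse this query at the next iteration
    return x

if __name__ == "__main__":
    # Toy quadratic with minimizer at the origin; the hyperparameters above are
    # illustrative and only drive f close to, not exactly to, its minimum.
    f = lambda z: float(np.dot(z, z))
    x_final = zo_residual_feedback(f, x0=np.ones(5))
    print("final iterate:", x_final, " f(x):", f(x_final))
```

Reusing the previous query is what keeps the cost at one function evaluation per iteration; the two values entering the residual are taken at two consecutive, different iterates, which is precisely the trade-off the abstract's convergence analysis addresses.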
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z)
- Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm only has access to noisy evaluations of the objective function it queries.
We provide the first tight characterization of the rate of the minimax simple regret by developing matching upper and lower bounds.
arXiv Detail & Related papers (2024-06-28T02:56:22Z)
- Faster Convergence with Multiway Preferences [99.68922143784306]
We consider the sign-function-based comparison feedback model and analyze the convergence rates with batched and multiway comparisons.
Our work is the first to study the problem of convex optimization with multiway preferences and analyze the optimal convergence rates.
arXiv Detail & Related papers (2023-12-19T01:52:13Z)
- Practical First-Order Bayesian Optimization Algorithms [0.0]
We propose a class of practical FOBO algorithms that efficiently utilize the information from the gradient GP to identify potential query points with zero gradients.
We validate the performance of our proposed algorithms on several test functions and show that our algorithms outperform state-of-the-art FOBO algorithms.
arXiv Detail & Related papers (2023-06-19T10:05:41Z)
- Global and Preference-based Optimization with Mixed Variables using Piecewise Affine Surrogates [0.6083861980670925]
This paper proposes a novel surrogate-based global optimization algorithm to solve linearly constrained mixed-variable problems.
We assume the objective function is black-box and expensive to evaluate, while the linear constraints are quantifiable, unrelaxable, and known a priori.
We introduce two types of exploration functions to efficiently search the feasible domain via mixed-integer linear programming solvers.
arXiv Detail & Related papers (2023-02-09T15:04:35Z)
- Efficient Neural Network Analysis with Sum-of-Infeasibilities [64.31536828511021]
Inspired by sum-of-infeasibilities methods in convex optimization, we propose a novel procedure for analyzing verification queries on networks with extensive branching functions.
A canonical case-analysis-based complete search procedure can be extended by replacing the convex procedure executed at each search state with DeepSoI.
arXiv Detail & Related papers (2022-03-19T15:05:09Z)
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work studies zeroth-order (ZO) optimization, which does not require first-order information.
We show that, with a graceful design of coordinate importance sampling, the proposed ZO optimization method is efficient in terms of both complexity and function query cost.
arXiv Detail & Related papers (2020-12-21T17:29:58Z)
- Stochastic Compositional Gradient Descent under Compositional constraints [13.170519806372075]
We study constrained optimization problems where the objective and constraint functions are convex and expressed as compositions of functions.
The problem arises in fair classification/regression and in the design of queuing systems.
We prove that the proposed algorithm is guaranteed to find the optimal and feasible solution almost surely.
arXiv Detail & Related papers (2020-12-17T05:38:37Z)
- Boosting One-Point Derivative-Free Online Optimization via Residual Feedback [28.679167097106813]
We propose a new one-point feedback method for online optimization.
We show that using residual feedback produces gradient estimates with much smaller variance than conventional one-point feedback methods (a rough variance comparison is sketched after this list).
Our regret bounds are tighter than existing regret bounds for ZO with conventional one-point feedback.
arXiv Detail & Related papers (2020-10-14T19:52:25Z)
- Projection-Free Adaptive Gradients for Large-Scale Optimization [22.0439695290991]
Frank-Wolfe algorithms occupy a unique position as they alleviate both computational burdens (computing full gradients and projecting onto the constraint set) by querying only approximate first-order information from the objective.
We show that our method can improve the performance of adaptive algorithms for constrained optimization.
arXiv Detail & Related papers (2020-09-29T15:56:12Z)
- Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale, single-loop algorithms that require only one data point at each step.
Our results are expressed in the form of simultaneous primal- and dual-side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z)
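The entry above on boosting one-point derivative-free online optimization via residual feedback attributes the improvement to the lower variance of the residual-feedback estimate. The snippet below is a rough Monte Carlo check of that claim under assumed settings (a quadratic objective with a constant offset, Gaussian directions, and a previous iterate chosen close to the current one, as it would be after a small gradient step); it is an illustration, not the paper's experiment.

```python
# Rough Monte Carlo comparison (illustrative assumptions, not the paper's
# experiment) of the conventional one-point estimate f(x + delta*u) * u / delta
# against the residual-feedback estimate, which replaces f(x + delta*u) by the
# difference between two consecutive queries.
import numpy as np

rng = np.random.default_rng(0)
d, delta, n_samples = 10, 0.05, 20_000

f = lambda z: float(np.dot(z, z)) + 10.0   # constant offset inflates |f|, which is what
                                           # makes conventional one-point feedback noisy
x = np.full(d, 1.00)                       # current iterate
x_prev = np.full(d, 1.05)                  # assumed previous iterate, close to x
grad_true = 2.0 * x                        # exact gradient of the quadratic at x

mse_conv, mse_resid = [], []
u_prev = rng.standard_normal(d)            # direction assumed used at the previous step
for _ in range(n_samples):
    u = rng.standard_normal(d)
    g_conv = (f(x + delta * u) / delta) * u                                   # conventional one-point
    g_resid = ((f(x + delta * u) - f(x_prev + delta * u_prev)) / delta) * u   # residual feedback
    mse_conv.append(np.sum((g_conv - grad_true) ** 2))
    mse_resid.append(np.sum((g_resid - grad_true) ** 2))
    u_prev = u                             # the current direction becomes the previous one

print("mean squared error, conventional one-point:", np.mean(mse_conv))
print("mean squared error, residual feedback:     ", np.mean(mse_resid))
```

With these settings the residual-feedback error is substantially smaller, because the residual between two nearby queries is much smaller in magnitude than the raw function value divided by the smoothing radius.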