Dueling Convex Optimization with General Preferences
- URL: http://arxiv.org/abs/2210.02562v1
- Date: Tue, 27 Sep 2022 11:10:41 GMT
- Title: Dueling Convex Optimization with General Preferences
- Authors: Aadirupa Saha, Tomer Koren, Yishay Mansour
- Abstract summary: We address the problem of \emph{convex optimization with dueling feedback}, where the goal is to minimize a convex function given a weaker form of \emph{dueling} feedback.
Our main contribution is an efficient algorithm with convergence rate $\smash{\widetilde O}(\epsilon^{-4p})$ for a smooth convex objective function, and an efficient $\smash{\widetilde O}(\epsilon^{-2p})$ rate when the objective is smooth and strongly convex.
- Score: 85.14061196945599
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We address the problem of \emph{convex optimization with dueling feedback},
where the goal is to minimize a convex function given a weaker form of
\emph{dueling} feedback. Each query consists of two points and the dueling
feedback returns a (noisy) single-bit binary comparison of the function values
of the two queried points. The translation of the function values to the single
comparison bit is through a \emph{transfer function}. This problem has been
addressed previously for some restricted classes of transfer functions, but
here we consider a very general transfer function class which includes all
functions that can be approximated by a finite polynomial with a minimal degree
$p$. Our main contribution is an efficient algorithm with convergence rate of
$\smash{\widetilde O}(\epsilon^{-4p})$ for a smooth convex objective function,
and an optimal rate of $\smash{\widetilde O}(\epsilon^{-2p})$ when the
objective is smooth and strongly convex.
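
As a rough, self-contained illustration of the dueling feedback model described in the abstract (not the paper's algorithm), the Python sketch below simulates a noisy single-bit comparison oracle: the quadratic objective `f`, the odd monomial transfer function of degree `p`, and the mapping from the transfer value to a win probability are all assumptions made for this example.

```python
import numpy as np

def dueling_feedback(f, x, y, transfer, rng):
    """Noisy single-bit comparison of f(x) and f(y).

    Returns +1 if x is judged better (smaller value) than y, else -1.
    The probability of +1 is 0.5 + 0.5 * transfer(f(y) - f(x)),
    where transfer maps the value difference into [-1, 1].
    """
    diff = f(y) - f(x)                      # positive when x is better
    p_x_wins = 0.5 + 0.5 * transfer(diff)
    return 1 if rng.random() < p_x_wins else -1

# Hypothetical example: smooth convex objective, odd monomial transfer of degree p
p = 3
f = lambda z: np.sum(z ** 2)                                   # ||z||^2
transfer = lambda d: np.clip(np.sign(d) * np.abs(d) ** p, -1.0, 1.0)

rng = np.random.default_rng(0)
x, y = np.array([0.1, 0.2]), np.array([0.5, -0.4])
bits = [dueling_feedback(f, x, y, transfer, rng) for _ in range(1000)]
print("empirical P[x preferred] ~", np.mean(np.array(bits) == 1))
```

Repeating the comparison many times recovers the underlying preference probability, which is the only signal an optimizer in this setting can exploit; the degree of the transfer function governs how weak that signal is near the optimum.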
Related papers
- Two-Timescale Gradient Descent Ascent Algorithms for Nonconvex Minimax Optimization [77.3396841985172]
We provide a unified analysis of two-timescale gradient descent ascent (TTGDA) for solving structured nonconvex minimax optimization problems.
Our contribution is to design TTGDA algorithms that are effective beyond the convex-concave setting.
arXiv Detail & Related papers (2024-08-21T20:14:54Z) - Faster Convergence with Multiway Preferences [99.68922143784306]
We consider the sign-function-based comparison feedback model and analyze the convergence rates with batched and multiway comparisons.
Our work is the first to study the problem of convex optimization with multiway preferences and analyze the optimal convergence rates.
arXiv Detail & Related papers (2023-12-19T01:52:13Z) - ReSQueing Parallel and Private Stochastic Convex Optimization [59.53297063174519]
We introduce a new tool for stochastic convex optimization (SCO): a Reweighted Query (ReSQue) estimator for the gradient of a function convolved with a (Gaussian) probability density.
We develop algorithms achieving state-of-the-art complexities for SCO in parallel and private settings.
arXiv Detail & Related papers (2023-01-01T18:51:29Z) - Fast Stochastic Composite Minimization and an Accelerated Frank-Wolfe
Algorithm under Parallelization [9.166498349629157]
We consider the problem of minimizing the sum of two convex functions.
One has Lipschitz-continuous gradients and can be accessed via oracles, whereas the other is "simple".
We show that one can achieve an $\epsilon$ primal-dual gap (in expectation) in $\widetilde O(1/\sqrt{\epsilon})$ iterations.
arXiv Detail & Related papers (2022-05-25T13:01:09Z) - Optimal Algorithms for Stochastic Multi-Level Compositional Optimization [46.77664277596764]
We solve the problem of stochastic multi-level compositional optimization, where the objective function is a composition of multiple smooth but possibly non-convex functions.
We also make use of the Adaptive Multi-level Variance Reduction method (SMVR), which achieves the same optimal complexities but converges faster in practice.
arXiv Detail & Related papers (2022-02-15T16:02:32Z) - Submodular + Concave [53.208470310734825]
It has been well established that first-order optimization methods can converge to the maximal objective value of concave functions.
In this work, we initiate the study of maximizing smooth functions of the form $F(x) = G(x) + C(x)$ over a convex body.
This class of functions is an extension of both concave and continuous DR-submodular functions for which no guarantee is known.
arXiv Detail & Related papers (2021-06-09T01:59:55Z) - Parameter-free Stochastic Optimization of Variationally Coherent
Functions [19.468067110814808]
We design and analyze an algorithm for first-order optimization of a class of functions on $\mathbb{R}^d$.
It is the first to achieve both of these properties at the same time.
arXiv Detail & Related papers (2021-01-30T15:05:34Z) - Finding Global Minima via Kernel Approximations [90.42048080064849]
We consider the global minimization of smooth functions based solely on function evaluations.
In this paper, we consider an approach that jointly models the function to approximate and finds a global minimum.
arXiv Detail & Related papers (2020-12-22T12:59:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.