A Parameter-free Algorithm for Convex-concave Min-max Problems
- URL: http://arxiv.org/abs/2103.00284v1
- Date: Sat, 27 Feb 2021 18:10:06 GMT
- Title: A Parameter-free Algorithm for Convex-concave Min-max Problems
- Authors: Mingrui Liu, Francesco Orabona
- Abstract summary: Parameter-free optimization algorithms refer to algorithms whose convergence rate is optimal with respect to the initial point without any learning rate to tune.
As a by-product, we utilize the parameter-free algorithm to design a new algorithm, which obtains fast rates for min-max problems with a growth condition.
- Score: 33.39558541452866
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Parameter-free optimization algorithms refer to algorithms whose convergence
rate is optimal with respect to the initial point without any learning rate to
tune. They are proposed and well-studied in the online convex optimization
literature. However, all the existing parameter-free algorithms can only be
used for convex minimization problems. It remains unclear how to design a
parameter-free algorithm for convex-concave min-max problems. In fact, the best
known convergence rates of the algorithms for solving these problems depend on
the size of the domain, rather than on the distance between the initial point
and the optimal solution. In this paper, we provide the first parameter-free
algorithm for several classes of convex-concave problems, including
strictly-convex-strictly-concave min-max problems and min-max problems with
non-Euclidean geometry, and establish the corresponding state-of-the-art
convergence rates.
non-Euclidean geometry. As a by-product, we utilize the parameter-free
algorithm as a subroutine to design a new algorithm, which obtains fast rates
for min-max problems with a growth condition. Extensive experiments are
conducted to verify our theoretical findings and demonstrate the effectiveness
of the proposed algorithm.
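To make the parameter-free idea concrete, the following is a minimal, hedged sketch (not the authors' algorithm): a coin-betting (Krichevsky-Trofimov) online learner drives each player of a gradient descent-ascent loop on a toy strictly-convex-strictly-concave objective, with no learning rate anywhere. The toy objective, the coordinate-wise KT learner, and the gradient clipping used to satisfy its bounded-gradient assumption are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

# Toy strictly-convex-strictly-concave objective:
#   f(x, y) = 0.5 * ||x||^2 + x^T A y - 0.5 * ||y||^2 + b^T x - c^T y.
rng = np.random.default_rng(0)
d = 5
A = 0.5 * rng.standard_normal((d, d))
b = 0.2 * rng.standard_normal(d)
c = 0.2 * rng.standard_normal(d)

grad_x = lambda x, y: x + A @ y + b       # gradient of f in x
grad_y = lambda x, y: A.T @ x - y - c     # gradient of f in y

class KTBettor:
    """Coordinate-wise Krichevsky-Trofimov coin-betting online learner.

    "Parameter-free" in the sense of the abstract: there is no learning rate,
    only an initial wealth eps, and the regret adapts to the norm of the
    comparator. Assumes each fed gradient coordinate lies in [-1, 1].
    """
    def __init__(self, dim, eps=1.0):
        self.wealth = np.full(dim, eps)
        self.sum_neg_g = np.zeros(dim)
        self.t = 0

    def predict(self):
        beta = self.sum_neg_g / (self.t + 1)   # betting fraction, always in (-1, 1)
        return beta * self.wealth

    def update(self, g):
        w = self.predict()
        self.wealth -= g * w                   # wealth changes by the bet's payoff
        self.sum_neg_g -= g
        self.t += 1

# Parameter-free gradient descent-ascent: each player runs its own learner;
# the duality gap of the averaged iterates is controlled by the players' regrets.
px, py = KTBettor(d), KTBettor(d)
x_avg, y_avg = np.zeros(d), np.zeros(d)
T = 5000
for t in range(1, T + 1):
    x, y = px.predict(), py.predict()
    gx = np.clip(grad_x(x, y), -1, 1)   # clip to satisfy the bounded-gradient assumption
    gy = np.clip(grad_y(x, y), -1, 1)
    px.update(gx)                       # x-player minimizes f
    py.update(-gy)                      # y-player maximizes f, i.e. minimizes -f
    x_avg += (x - x_avg) / t            # running averages (online-to-batch)
    y_avg += (y - y_avg) / t

print("gradient norms at the averaged iterates:",
      np.linalg.norm(grad_x(x_avg, y_avg)), np.linalg.norm(grad_y(x_avg, y_avg)))
```

The only free quantity is the initial wealth `eps`; there is no step size to tune. By the standard online-to-batch argument for saddle points, the duality gap of the averaged iterates is bounded by the sum of the two players' average regrets, which is the kind of mechanism parameter-free min-max methods build on.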
Related papers
- Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z)
- A proximal-proximal majorization-minimization algorithm for nonconvex tuning-free robust regression problems [4.261680642170457]
We introduce a proximal-proximal majorization-minimization (PPMM) algorithm for nonconvex tuning-free robust regression problems.
Our proposed algorithm outperforms the existing state-of-the-art algorithms.
arXiv Detail & Related papers (2021-06-25T15:07:13Z)
- Recent Theoretical Advances in Non-Convex Optimization [56.88981258425256]
Motivated by the recent increased interest in the analysis of optimization algorithms for non-convex optimization in deep networks and other problems in data analysis, we give an overview of recent theoretical results on optimization algorithms for non-convex optimization.
arXiv Detail & Related papers (2020-12-11T08:28:51Z)
- Approximation Algorithms for Sparse Best Rank-1 Approximation to Higher-Order Tensors [1.827510863075184]
Sparse tensor best rank-1 approximation (BR1Approx) is a sparsity generalization of the dense tensor BR1Approx.
Four approximation algorithms are proposed, which are easily implemented and of low computational complexity.
We provide numerical experiments on synthetic and real data to illustrate the effectiveness of the proposed algorithms.
arXiv Detail & Related papers (2020-12-05T18:13:14Z)
- An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)
- Optimal and Practical Algorithms for Smooth and Strongly Convex Decentralized Optimization [21.555331273873175]
We consider the task of decentralized minimization of the sum of smooth strongly convex functions stored across the nodes of a network.
We propose two new algorithms for this decentralized optimization problem and equip them with complexity guarantees (a minimal sketch of the decentralized setting appears after this list).
arXiv Detail & Related papers (2020-06-21T11:23:20Z)
- Exploiting Higher Order Smoothness in Derivative-free Optimization and Continuous Bandits [99.70167985955352]
We study the problem of zero-order optimization of a strongly convex function.
We consider a randomized approximation of the projected gradient descent algorithm.
Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters (a sketch of such a zero-order estimator appears after this list).
arXiv Detail & Related papers (2020-06-14T10:42:23Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence (a sketch of the sketch-and-precondition idea appears after this list).
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
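For the decentralized-optimization entry above, the following is a minimal sketch of the problem setting only: plain decentralized gradient descent (gossip averaging plus a local gradient step) on hypothetical quadratic local objectives over a ring network. It is not one of that paper's two proposed algorithms, and the constant step size is a hand-picked illustrative value.

```python
import numpy as np

# Minimal decentralized gradient descent (DGD) over a ring of n nodes.
# Each node i holds a smooth, strongly convex local quadratic
#   f_i(x) = 0.5 * ||x - a_i||^2, so the global minimizer is mean(a_i).
rng = np.random.default_rng(1)
n, d = 8, 3
a = rng.standard_normal((n, d))

# Symmetric doubly stochastic gossip matrix for a ring graph.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

X = np.zeros((n, d))          # row i is node i's local iterate
step = 0.1                    # hand-picked constant step; with a constant step,
for _ in range(500):          # DGD only reaches a neighborhood of consensus
    grads = X - a             # local gradients of f_i at the local iterates
    X = W @ X - step * grads  # gossip averaging followed by a local gradient step

print("consensus error:", np.linalg.norm(X - X.mean(axis=0)))
print("distance of the network average to the optimum:",
      np.linalg.norm(X.mean(axis=0) - a.mean(axis=0)))
```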
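For the derivative-free-optimization entry, here is a generic sketch of a randomized zero-order approximation of projected gradient descent: a two-point finite-difference gradient estimate along a random direction, plugged into a projected step. The quadratic test function, ball constraint, and step/smoothing schedules are illustrative assumptions; that paper's estimator additionally exploits higher-order smoothness, which this sketch does not.

```python
import numpy as np

# Two-point zero-order gradient estimate plugged into projected gradient descent.
rng = np.random.default_rng(2)
d = 10
opt = 0.2 * rng.standard_normal(d)
f = lambda x: 0.5 * np.sum((x - opt) ** 2)   # strongly convex; queried only through values

def project_ball(x, radius=1.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

x = np.zeros(d)
T = 3000
for t in range(1, T + 1):
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                   # random direction on the unit sphere
    h = 1.0 / t ** 0.5                       # shrinking smoothing radius
    g = d * (f(x + h * u) - f(x - h * u)) / (2 * h) * u   # two-point gradient estimate
    x = project_ball(x - (1.0 / t) * g)      # projected step, 1/t step for strong convexity

print("final suboptimality:", f(x) - f(opt))
```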
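For the randomized least-squares entry, the following illustrates the sketch-and-precondition idea behind condition-number-free solvers: a Gaussian sketch of A yields a preconditioner R from a QR factorization, after which plain first-order iterations converge at a rate independent of the conditioning of A. The problem sizes, sketch size, and fixed step are illustrative assumptions, not that paper's prescriptions.

```python
import numpy as np

# Sketch-and-precondition for overdetermined least squares min_x ||A x - b||.
rng = np.random.default_rng(3)
n, d = 2000, 20
A = rng.standard_normal((n, d)) @ np.diag(np.logspace(0, 3, d))  # ill-conditioned
b = rng.standard_normal(n)

# Gaussian sketch (a subsampled randomized Hadamard transform could be used instead).
m = 10 * d
S = rng.standard_normal((m, n)) / np.sqrt(m)
_, R = np.linalg.qr(S @ A)          # R from the sketched matrix defines the preconditioner

# First-order iterations on the preconditioned system M = A R^{-1}, whose
# condition number is O(1) with high probability, independent of cond(A).
M = A @ np.linalg.inv(R)
z = np.zeros(d)
for _ in range(200):
    z -= 0.5 * M.T @ (M @ z - b)    # conservative fixed step; no tuning to cond(A)

x = np.linalg.solve(R, z)           # map back to the original variables
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
print("relative error vs. direct lstsq:",
      np.linalg.norm(x - x_exact) / np.linalg.norm(x_exact))
```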