Dualize, Split, Randomize: Toward Fast Nonsmooth Optimization Algorithms
- URL: http://arxiv.org/abs/2004.02635v4
- Date: Tue, 26 Jul 2022 11:21:34 GMT
- Title: Dualize, Split, Randomize: Toward Fast Nonsmooth Optimization Algorithms
- Authors: Adil Salim, Laurent Condat, Konstantin Mishchenko, Peter Richtárik
- Abstract summary: We consider the sum of three convex functions, where the first one F is smooth, the second one is nonsmooth and proximable, and the third one is the composition of a nonsmooth proximable function with a linear operator L.
This template problem has many applications, for instance, in image processing and machine learning.
We propose a new primal-dual algorithm, which we call PDDY, for this problem.
- Score: 21.904012114713428
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider minimizing the sum of three convex functions, where the first one
F is smooth, the second one is nonsmooth and proximable, and the third one is
the composition of a nonsmooth proximable function with a linear operator L.
This template problem has many applications, for instance, in image processing
and machine learning. First, we propose a new primal-dual algorithm, which we
call PDDY, for this problem. It is constructed by applying Davis-Yin splitting
to a monotone inclusion in a primal-dual product space, where the operators are
monotone under a specific metric depending on L. We show that three existing
algorithms (the two forms of the Condat-Vu algorithm and the PD3O algorithm)
have the same structure, so that PDDY is the fourth missing link in this
self-consistent class of primal-dual algorithms. This representation eases the
convergence analysis: it allows us to derive sublinear convergence rates in
general, and linear convergence results in the presence of strong convexity.
Moreover, within our broad and flexible analysis framework, we propose new
stochastic generalizations of the algorithms, in which a variance-reduced
random estimate of the gradient of F is used, instead of the true gradient.
Furthermore, we obtain, as a special case of PDDY, a linearly converging
algorithm for the minimization of a strongly convex function F under a linear
constraint; we discuss its important application to decentralized optimization.
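The abstract describes PDDY as obtained by applying Davis-Yin splitting in a primal-dual product space. As a rough illustration of the underlying three-operator scheme, here is a minimal pure-Python sketch on a toy problem of my own choosing (the toy objective, step size, and iteration count are assumptions, not the paper's setting; the full PDDY method additionally handles the linear operator L through a dual variable, which this sketch omits by taking L = I):

```python
# Minimal sketch of Davis-Yin three-operator splitting, the base scheme
# the abstract says PDDY is derived from. Toy problem (my own choice):
#   minimize 0.5*||x - a||^2 + lam*||x||_1  subject to  x >= 0
# Here F(x) = 0.5*||x - a||^2 is the smooth term, lam*||.||_1 is a
# proximable term, and the nonnegativity constraint plays the role of
# the third proximable term (with L = identity, so no dual variable).

def soft_threshold(v, t):
    """Prox of t*|.|: shrink v toward 0 by t."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def davis_yin(a, lam, gamma=1.0, iters=200):
    z = [0.0] * len(a)
    for _ in range(iters):
        # prox of the nonnegativity constraint: projection onto x >= 0
        xg = [max(zi, 0.0) for zi in z]
        # gradient of the smooth term F at xg: xg - a
        grad = [xi - ai for xi, ai in zip(xg, a)]
        # prox of gamma*lam*||.||_1 at the reflected, gradient-corrected point
        xh = [soft_threshold(2 * xi - zi - gamma * gi, gamma * lam)
              for xi, zi, gi in zip(xg, z, grad)]
        # update the driving sequence
        z = [zi + xi_h - xi_g for zi, xi_h, xi_g in zip(z, xh, xg)]
    return xg

# Closed-form solution of the toy problem is max(a - lam, 0) per coordinate.
x = davis_yin([2.0, -1.0, 0.5], lam=1.0)
print([round(v, 4) for v in x])  # -> [1.0, 0.0, 0.0]
```

The step size gamma must lie in (0, 2/L_F), where L_F is the Lipschitz constant of the gradient of F (here L_F = 1); the sequence xg converges to the minimizer.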
Related papers
- Variable Substitution and Bilinear Programming for Aligning Partially Overlapping Point Sets [48.1015832267945]
This research presents a method that meets these requirements by minimizing the objective function of the RPM algorithm.
A branch-and-bound (BnB) algorithm is devised, which branches only over the transformation parameters, thereby improving the convergence rate.
Empirical evaluations demonstrate better robustness of the proposed methodology against non-rigid deformation, positional noise, and outliers, compared with prevailing state-of-the-art methods.
arXiv Detail & Related papers (2024-05-14T13:28:57Z) - Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable components separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z) - Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
arXiv Detail & Related papers (2023-01-30T15:46:39Z) - Nearly Optimal Linear Convergence of Stochastic Primal-Dual Methods for
Linear Programming [5.126924253766052]
We show that the proposed method exhibits a linear convergence rate for solving sharp instances with a high probability.
We also propose an efficient coordinate-based oracle for unconstrained bilinear problems.
arXiv Detail & Related papers (2021-11-10T04:56:38Z) - Hybrid Trilinear and Bilinear Programming for Aligning Partially
Overlapping Point Sets [85.71360365315128]
In many applications, we need algorithms that can align partially overlapping point sets and are invariant to the corresponding transformations, as in the RPM algorithm.
We first show that the objective is a cubic polynomial function. We then utilize the convex envelopes of trilinear and bilinear monomials to derive its lower bound.
We next develop a branch-and-bound (BnB) algorithm which only branches over the transformation variables and runs efficiently.
arXiv Detail & Related papers (2021-01-19T04:24:23Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth
Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point per step.
Our results are expressed in the form of simultaneous convergence on both the primal and dual sides.
arXiv Detail & Related papers (2020-08-23T20:36:49Z) - A New Randomized Primal-Dual Algorithm for Convex Optimization with
Optimal Last Iterate Rates [16.54912614895861]
We develop a novel unified randomized block-coordinate primal-dual algorithm to solve a class of nonsmooth constrained convex optimization problems.
We prove that our algorithm achieves optimal convergence rates in two cases: general convexity and strong convexity.
Numerical results show that the proposed method performs well across different experiments.
arXiv Detail & Related papers (2020-03-03T03:59:26Z) - Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.