Primal-Dual Sequential Subspace Optimization for Saddle-point Problems
- URL: http://arxiv.org/abs/2008.09149v1
- Date: Thu, 20 Aug 2020 18:19:19 GMT
- Title: Primal-Dual Sequential Subspace Optimization for Saddle-point Problems
- Authors: Yoni Choukroun, Michael Zibulevsky, Pavel Kisilev
- Abstract summary: We introduce a new sequential subspace optimization method for large-scale saddle-point problems.
It solves auxiliary saddle-point problems in low-dimensional subspaces, spanned by directions derived from first-order information over the primal and dual variables.
Experimental results demonstrate significantly better convergence relative to popular first-order methods.
- Score: 3.9582154141918964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a new sequential subspace optimization method for large-scale
saddle-point problems. It solves iteratively a sequence of auxiliary
saddle-point problems in low-dimensional subspaces, spanned by directions
derived from first-order information over the primal \emph{and} dual variables.
Proximal regularization is further deployed to stabilize the optimization
process. Experimental results demonstrate significantly better convergence
relative to popular first-order methods. We analyze the influence of the
subspace on the convergence of the algorithm, and assess its performance in
various deterministic optimization scenarios, such as bi-linear games,
ADMM-based constrained optimization and generative adversarial networks.
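The abstract outlines the mechanism: gather first-order directions for both the primal and dual variables, solve a small proximally regularized saddle-point problem restricted to the resulting subspaces, and repeat. As a rough, non-authoritative illustration of that idea (not the authors' implementation), the sketch below applies it to a purely bilinear game min_x max_y x^T A y + b^T x - c^T y, using an assumed two-direction subspace per variable (current gradient plus previous step) and a fixed proximal weight mu; the function name, subspace choice, and parameter values are illustrative assumptions only.

```python
# Illustrative sketch of primal-dual subspace optimization on a bilinear game
#   min_x max_y  L(x, y) = x^T A y + b^T x - c^T y
# Assumptions (not from the paper): two-direction subspaces (current gradient
# plus previous step) and a fixed proximal weight mu that may need tuning.
import numpy as np

def pd_sesop_bilinear(A, b, c, n_iters=200, mu=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n, m = A.shape
    x, y = rng.standard_normal(n), rng.standard_normal(m)
    dx_prev, dy_prev = np.zeros(n), np.zeros(m)

    for _ in range(n_iters):
        g_x = A @ y + b      # gradient of L w.r.t. x (primal first-order info)
        g_y = A.T @ x - c    # gradient of L w.r.t. y (dual first-order info)

        # Subspace bases: current gradient and previous step for each variable.
        P = np.column_stack([g_x, dx_prev])   # n x 2
        Q = np.column_stack([g_y, dy_prev])   # m x 2

        # Auxiliary proximal saddle-point problem restricted to the subspaces:
        #   min_alpha max_beta  L(x + P alpha, y + Q beta)
        #                       + (mu/2)||P alpha||^2 - (mu/2)||Q beta||^2
        # For a bilinear L this reduces to a small linear system in (alpha, beta).
        d = P.shape[1]
        K = np.block([[mu * P.T @ P,      P.T @ A @ Q],
                      [Q.T @ A.T @ P, -mu * Q.T @ Q]])
        rhs = -np.concatenate([P.T @ g_x, Q.T @ g_y])
        coeffs, *_ = np.linalg.lstsq(K, rhs, rcond=None)  # tolerates rank-deficient bases
        alpha, beta = coeffs[:d], coeffs[d:]

        dx, dy = P @ alpha, Q @ beta
        x, y = x + dx, y + dy
        dx_prev, dy_prev = dx, dy
    return x, y

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 50))
    b, c = rng.standard_normal(50), rng.standard_normal(50)
    x, y = pd_sesop_bilinear(A, b, c)
    # At a saddle point both optimality residuals vanish: A y + b = 0, A^T x - c = 0.
    print(np.linalg.norm(A @ y + b), np.linalg.norm(A.T @ x - c))
```

When both subspaces span the full space, each iteration reduces to a proximal-point step for the bilinear game; the subspace restriction is what keeps the per-iteration cost low.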
Related papers
- Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z) - An Adaptive Dimension Reduction Estimation Method for High-dimensional
Bayesian Optimization [6.79843988450982]
We propose a two-step optimization framework to extend BO to high-dimensional settings.
Our algorithm offers the flexibility to operate these steps either concurrently or in sequence.
Numerical experiments validate the efficacy of our method in challenging scenarios.
arXiv Detail & Related papers (2024-03-08T16:21:08Z) - Faster Riemannian Newton-type Optimization by Subsampling and Cubic
Regularization [3.867143522757309]
This work is on constrained large-scale non-convex optimization where the constraint set implies a manifold structure.
We propose a new second-order Riemannian optimization algorithm, aiming at improving convergence and reducing computational cost.
arXiv Detail & Related papers (2023-02-22T00:37:44Z) - Adaptive Stochastic Optimisation of Nonconvex Composite Objectives [2.1700203922407493]
We propose and analyse a family of generalised composite mirror descent algorithms.
With adaptive step sizes, the proposed algorithms converge without requiring prior knowledge of the problem.
We exploit the low-dimensional structure of the decision sets for high-dimensional problems.
arXiv Detail & Related papers (2022-11-21T18:31:43Z) - Two-Stage Stochastic Optimization via Primal-Dual Decomposition and Deep
Unrolling [86.85697555068168]
Two-stage stochastic optimization plays a critical role in various engineering and scientific applications.
Efficient algorithms are still lacking, especially when the long-term and short-term variables are coupled in the constraints.
We show that PDD-SSCA can achieve superior performance over existing solutions.
arXiv Detail & Related papers (2021-05-05T03:36:00Z) - Unified Convergence Analysis for Adaptive Optimization with Moving Average Estimator [75.05106948314956]
We show that an increasing or large momentum parameter for the first-order moment is sufficient for adaptive scaling.
We also give insights for increasing the momentum in a stagewise manner in accordance with stagewise decreasing step size.
arXiv Detail & Related papers (2021-04-30T08:50:24Z) - Sequential Subspace Search for Functional Bayesian Optimization
Incorporating Experimenter Intuition [63.011641517977644]
Our algorithm generates a sequence of finite-dimensional random subspaces of functional space spanned by a set of draws from the experimenter's Gaussian Process.
Standard Bayesian optimisation is applied on each subspace, and the best solution found is used as a starting point (origin) for the next subspace.
We test our algorithm in simulated and real-world experiments, namely blind function matching, finding the optimal precipitation-strengthening function for an aluminium alloy, and learning rate schedule optimisation for deep networks.
arXiv Detail & Related papers (2020-09-08T06:54:11Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - Effective Dimension Adaptive Sketching Methods for Faster Regularized
Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
arXiv Detail & Related papers (2020-06-10T15:00:09Z) - Adaptive First-and Zeroth-order Methods for Weakly Convex Stochastic
Optimization Problems [12.010310883787911]
We analyze a new family of adaptive subgradient methods for solving an important class of weakly convex (possibly nonsmooth) optimization problems.
Experimental results indicate that the proposed algorithms empirically outperform zeroth-order gradient descent and its design variant.
arXiv Detail & Related papers (2020-05-19T07:44:52Z) - A Riemannian Primal-dual Algorithm Based on Proximal Operator and its
Application in Metric Learning [3.511851311025242]
We propose a primal-dual algorithm to optimize the primal and dual variables iteratively.
We prove convergence of the proposed algorithm and show its non-asymptotic convergence rate.
Preliminary experimental results on an optimal fund selection problem in fund of funds management showed its efficacy.
arXiv Detail & Related papers (2020-05-19T03:31:01Z)