Spingarn's Method and Progressive Decoupling Beyond Elicitable Monotonicity
- URL: http://arxiv.org/abs/2504.00836v1
- Date: Tue, 01 Apr 2025 14:26:01 GMT
- Title: Spingarn's Method and Progressive Decoupling Beyond Elicitable Monotonicity
- Authors: Brecht Evens, Puya Latafat, Panagiotis Patrinos
- Abstract summary: We develop a general local convergence analysis in a certain nonmonotone setting. We prove convergence under conditions that link the relaxation parameters to the nonmonotonicity of their respective subspaces.
- Score: 4.307128674848627
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spingarn's method of partial inverses and the progressive decoupling algorithm address inclusion problems involving the sum of an operator and the normal cone of a linear subspace, known as linkage problems. Despite their success, existing convergence results are limited to the so-called elicitable monotone setting, where nonmonotonicity is allowed only on the orthogonal complement of the linkage subspace. In this paper, we introduce progressive decoupling+, a generalized version of standard progressive decoupling that incorporates separate relaxation parameters for the linkage subspace and its orthogonal complement. We prove convergence under conditions that link the relaxation parameters to the nonmonotonicity of their respective subspaces and show that the special cases of Spingarn's method and standard progressive decoupling also extend beyond the elicitable monotone setting. Our analysis hinges upon an equivalence between progressive decoupling+ and the preconditioned proximal point algorithm, for which we develop a general local convergence analysis in a certain nonmonotone setting.
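For orientation, the linkage problem described in the abstract can be written out as below; the notation (T for the operator, V for the linkage subspace) is a generic choice made here, not necessarily the paper's.

```latex
% Linkage problem: T a set-valued operator on a Hilbert space \mathcal{H},
% V \subseteq \mathcal{H} a linear subspace with normal cone map N_V.
\[
  \text{find } x \in \mathcal{H} \quad \text{such that} \quad 0 \in T(x) + N_V(x)
\]
% Since N_V(x) = V^{\perp} whenever x \in V (and \emptyset otherwise),
% this is the same as
\[
  \text{find } x \in V,\ w \in V^{\perp} \quad \text{such that} \quad w \in T(x)
\]
```

In this notation, elicitable monotonicity allows T to be nonmonotone only along V^⊥, whereas, per the abstract, progressive decoupling+ attaches one relaxation parameter to V and a separate one to V^⊥.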
Related papers
- Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
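As a rough illustration of the nonexpansive-operator viewpoint mentioned in this summary, here is a minimal Krasnosel'skii-Mann-style sketch: linearly interpolating between the current iterate and the output of an update operator. The operator `T`, the weight `lam`, and the toy rotation problem are illustrative assumptions, not the paper's actual training setup.

```python
import numpy as np

def km_interpolation(T, x0, lam=0.5, iters=100):
    """Krasnosel'skii-Mann iteration: x <- (1 - lam) * x + lam * T(x).

    For a nonexpansive T, this damped (linearly interpolated) update
    converges to a fixed point of T even when the plain iteration
    x <- T(x) would oscillate or diverge.
    """
    x = x0
    for _ in range(iters):
        x = (1.0 - lam) * x + lam * T(x)
    return x

# Toy example: T is a 90-degree rotation (nonexpansive, fixed point 0).
# The plain iteration x <- T(x) circles forever; the interpolated one contracts.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x_star = km_interpolation(lambda x: R @ x, np.array([1.0, 0.0]))
print(x_star)  # close to [0, 0]
```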
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Curvature-Independent Last-Iterate Convergence for Games on Riemannian Manifolds [77.4346324549323]
We show that a step size agnostic to the curvature of the manifold achieves a curvature-independent and linear last-iterate convergence rate.
To the best of our knowledge, the possibility of curvature-independent rates and/or last-iterate convergence has not been considered before.
arXiv Detail & Related papers (2023-06-29T01:20:44Z) - The rate of convergence of Bregman proximal methods: Local geometry vs. regularity vs. sharpness [29.642830843568525]
We show that the convergence rate of a given method depends sharply on its associated Legendre exponent. In particular, we show that boundary solutions exhibit a stark separation between methods with a zero and non-zero Legendre exponent.
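For context, a generic Bregman proximal step has the form below; this is the standard textbook template, and the symbols (h, D_h, η) are generic notation chosen here, with the Legendre-exponent analysis being the paper's contribution.

```latex
% Bregman divergence induced by a convex distance-generating function h:
\[
  D_h(x, y) = h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle
\]
% Generic Bregman proximal step with step size \eta > 0:
\[
  x_{k+1} \in \operatorname*{arg\,min}_{x} \Bigl\{ f(x) + \tfrac{1}{\eta}\, D_h(x, x_k) \Bigr\}
\]
% h(x) = \tfrac{1}{2}\lVert x \rVert^2 recovers the classical proximal point
% step; other choices of h change the local geometry near the boundary,
% which is where the Legendre exponent enters.
```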
arXiv Detail & Related papers (2022-11-15T10:49:04Z) - Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-21T01:47:17Z) - Polynomial convergence of iterations of certain random operators in Hilbert space [2.732936573198251]
We study the convergence of random iterative sequences of a family of operators on infinite-dimensional spaces, inspired by the SGD algorithm.
We demonstrate that its convergence rate depends on the initial state, while randomness plays a role only in the choice of the best constant factor.
arXiv Detail & Related papers (2022-02-04T17:48:29Z) - On Asymptotic Linear Convergence of Projected Gradient Descent for Constrained Least Squares [22.851500417035947]
This manuscript presents a unified framework for the analysis of projected gradient descent in the context of constrained least squares.
We present a recipe for the convergence analysis of PGD and demonstrate it via a beginning-to-end application of the recipe on four fundamental problems.
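As a hedged sketch of the method being analyzed, projected gradient descent for constrained least squares alternates a gradient step on the least-squares objective with a Euclidean projection onto the constraint set. The nonnegativity constraint, step size rule, and random data below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def pgd_least_squares(A, b, project, iters=500):
    """Projected gradient descent for min (1/2)||Ax - b||^2 s.t. x in C.

    `project` maps a point to its Euclidean projection onto C.
    Step size 1/L with L = ||A||_2^2, the Lipschitz constant of the gradient.
    """
    L = np.linalg.norm(A, 2) ** 2
    x = project(np.zeros(A.shape[1]))
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = project(x - grad / L)
    return x

# Example constraint set: the nonnegative orthant (an illustrative choice).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
x = pgd_least_squares(A, b, project=lambda z: np.maximum(z, 0.0))
print(x)
```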
arXiv Detail & Related papers (2021-12-22T09:49:51Z) - On the Convergence of Stochastic Extragradient for Bilinear Games with Restarted Iteration Averaging [96.13485146617322]
We present an analysis of the stochastic ExtraGradient (SEG) method with constant step size, and propose variations of the method that yield favorable convergence.
We prove that when augmented with averaging, SEG provably converges to the Nash equilibrium, and such a rate is provably accelerated by incorporating a scheduled restarting procedure.
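A minimal sketch of stochastic extragradient with iterate averaging on a bilinear game min_x max_y xᵀAy follows; the noise model, step size, and identity-matrix game are illustrative assumptions, and the scheduled restarting is omitted for brevity.

```python
import numpy as np

def seg_averaged(A, eta=0.1, iters=2000, noise=0.01, seed=0):
    """Stochastic extragradient (SEG) with iterate averaging on
    the bilinear game min_x max_y x^T A y (Nash equilibrium at 0)."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    x, y = np.ones(n), np.ones(m)
    x_avg, y_avg = np.zeros(n), np.zeros(m)
    for k in range(1, iters + 1):
        # Extrapolation step with noisy gradients.
        gx = A @ y + noise * rng.standard_normal(n)
        gy = A.T @ x + noise * rng.standard_normal(m)
        x_half, y_half = x - eta * gx, y + eta * gy
        # Update step, with gradients evaluated at the extrapolated point.
        gx = A @ y_half + noise * rng.standard_normal(n)
        gy = A.T @ x_half + noise * rng.standard_normal(m)
        x, y = x - eta * gx, y + eta * gy
        # Running average of the iterates.
        x_avg += (x - x_avg) / k
        y_avg += (y - y_avg) / k
    return x_avg, y_avg

x_bar, y_bar = seg_averaged(np.eye(3))
print(np.linalg.norm(x_bar), np.linalg.norm(y_bar))  # both near 0
```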
arXiv Detail & Related papers (2021-06-30T17:51:36Z) - Towards Understanding Generalization via Decomposing Excess Risk Dynamics [13.4379473119565]
We analyze the generalization dynamics to derive algorithm-dependent bounds, e.g., stability-based bounds.
Inspired by the observation that neural networks show a slow convergence rate when fitting noise, we propose decomposing the excess risk dynamics.
Under the decomposition framework, the new bound accords better with the theoretical and empirical evidence compared to the stability-based bound and uniform convergence bound.
arXiv Detail & Related papers (2021-06-11T03:42:45Z) - Random extrapolation for primal-dual coordinate descent [61.55967255151027]
We introduce a randomly extrapolated primal-dual coordinate descent method that adapts to sparsity of the data matrix and the favorable structures of the objective function.
We show almost sure convergence of the sequence and optimal sublinear convergence rates for the primal-dual gap and objective values, in the general convex-concave case.
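For reference, the convex-concave template and the extrapolation device that primal-dual methods of this kind build on look as follows; this is the standard Chambolle-Pock-style setup with a generic extrapolation step, and, per the title, the paper randomizes this extrapolation inside a coordinate descent scheme, which is not reproduced here.

```latex
% Saddle-point formulation with convex f, g^* and a linear operator K:
\[
  \min_{x} \max_{y} \; f(x) + \langle K x,\, y \rangle - g^{*}(y)
\]
% A standard primal-dual iteration extrapolates the primal iterate before
% the dual step, with extrapolation weight \theta (often \theta = 1):
\[
  \bar{x}_{k+1} = x_{k+1} + \theta\,(x_{k+1} - x_k)
\]
```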
arXiv Detail & Related papers (2020-07-13T17:39:35Z) - A Relaxed Inertial Forward-Backward-Forward Algorithm for Solving Monotone Inclusions with Application to GANs [0.0]
We introduce a relaxed inertial forward-backward-forward (RIFBF) splitting algorithm for approaching the set of zeros of the sum of a maximally monotone operator and a single-valued monotone and Lipschitz continuous operator.
We illustrate the proposed method by applications to a bilinear saddle point problem, in the context of which we also emphasize the interplay between the inertial and the relaxation parameters.
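The following is a schematic paraphrase of one relaxed inertial Tseng-type forward-backward-forward step, written in generic notation chosen here; the exact parameter conditions and their interplay are the subject of the paper and are not reproduced.

```latex
% Schematic relaxed inertial FBF step for 0 \in A(x) + B(x), with A
% maximally monotone, B monotone and L-Lipschitz, step \gamma \in (0, 1/L):
\begin{align*}
  y_k     &= x_k + \alpha_k (x_k - x_{k-1})
             && \text{(inertial step)} \\
  z_k     &= (I + \gamma A)^{-1}\bigl(y_k - \gamma B(y_k)\bigr)
             && \text{(forward-backward step)} \\
  x_{k+1} &= (1 - \rho_k)\, y_k + \rho_k \bigl(z_k + \gamma\,(B(y_k) - B(z_k))\bigr)
             && \text{(correction and relaxation)}
\end{align*}
```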
arXiv Detail & Related papers (2020-03-17T18:52:54Z) - Operator-algebraic renormalization and wavelets [62.997667081978825]
We construct the continuum free field as the scaling limit of Hamiltonian lattice systems using wavelet theory.
A renormalization group step is determined by the scaling equation identifying lattice observables with the continuum field smeared by compactly supported wavelets.
arXiv Detail & Related papers (2020-02-04T18:04:51Z)