Online Optimization on Hadamard Manifolds: Curvature Independent Regret Bounds on Horospherically Convex Objectives
- URL: http://arxiv.org/abs/2509.11236v1
- Date: Sun, 14 Sep 2025 12:27:31 GMT
- Title: Online Optimization on Hadamard Manifolds: Curvature Independent Regret Bounds on Horospherically Convex Objectives
- Authors: Emre Sahinoglu, Shahin Shahrampour
- Abstract summary: We study online Riemannian optimization on Hadamard manifolds under horospherical convexity (h-convexity), establishing curvature-independent regret bounds for Riemannian online gradient descent. In particular, we investigate online Tyler's $M$-estimation and online Fr\'echet mean computation on the manifold of symmetric positive definite (SPD) matrices with the affine-invariant metric.
- Score: 9.940555460165545
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study online Riemannian optimization on Hadamard manifolds under the framework of horospherical convexity (h-convexity). Prior work mostly relies on the geodesic convexity (g-convexity), leading to regret bounds scaling poorly with the manifold curvature. To address this limitation, we analyze Riemannian online gradient descent for h-convex and strongly h-convex functions and establish $O(\sqrt{T})$ and $O(\log(T))$ regret guarantees, respectively. These bounds are curvature-independent and match the results in the Euclidean setting. We validate our approach with experiments on the manifold of symmetric positive definite (SPD) matrices equipped with the affine-invariant metric. In particular, we investigate online Tyler's $M$-estimation and online Fr\'echet mean computation, showing the application of h-convexity in practice.
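To make the online Fréchet mean experiment concrete, here is a minimal sketch of Riemannian online gradient descent on the SPD manifold with the affine-invariant metric. The helper names (`spd_exp`, `spd_log`, `online_frechet_mean`) and the $O(1/t)$ step schedule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def _sym_fun(X, fun):
    # Apply a scalar function to the eigenvalues of a symmetric matrix.
    w, V = np.linalg.eigh(X)
    return (V * fun(w)) @ V.T

def spd_log(X, A):
    # Riemannian log map Log_X(A) on the SPD manifold (affine-invariant metric):
    # X^{1/2} logm(X^{-1/2} A X^{-1/2}) X^{1/2}
    Xh = _sym_fun(X, np.sqrt)
    Xih = _sym_fun(X, lambda w: 1.0 / np.sqrt(w))
    return Xh @ _sym_fun(Xih @ A @ Xih, np.log) @ Xh

def spd_exp(X, V):
    # Riemannian exponential map Exp_X(V), the inverse operation of spd_log.
    Xh = _sym_fun(X, np.sqrt)
    Xih = _sym_fun(X, lambda w: 1.0 / np.sqrt(w))
    return Xh @ _sym_fun(Xih @ V @ Xih, np.exp) @ Xh

def online_frechet_mean(stream, base_step=0.5):
    # Riemannian OGD on the losses f_t(X) = d^2(X, A_t), whose Riemannian
    # gradient is -2 Log_X(A_t); an O(1/t) step size matches the strongly
    # convex (here: strongly h-convex) regime.
    X = stream[0].copy()
    for t, A in enumerate(stream[1:], start=2):
        eta = base_step / t
        X = spd_exp(X, 2.0 * eta * spd_log(X, A))
    return X
```

With `base_step=0.5` the update reduces to the geodesic interpolation `Exp_X((1/t) Log_X(A_t))`, i.e., the inductive mean, so the iterate stays on the SPD manifold by construction.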
Related papers
- Regularized Online RLHF with Generalized Bilinear Preferences [68.44113000390544]
We consider the problem of contextual online RLHF with general preferences. We adopt the Generalized Bilinear Preference Model to capture preferences via low-rank, skew-symmetric matrices. We prove that the dual gap of the greedy policy is bounded by the square of the estimation error.
arXiv Detail & Related papers (2026-02-26T15:27:53Z) - Graph-based Clustering Revisited: A Relaxation of Kernel $k$-Means Perspective [73.18641268511318]
We propose a graph-based clustering algorithm that relaxes only the orthonormal constraint to derive clustering results. To keep the solution doubly stochastic under gradient updates, we transform the non-negative constraint into a class probability parameter.
arXiv Detail & Related papers (2025-09-23T09:14:39Z) - Decentralized Online Riemannian Optimization Beyond Hadamard Manifolds [9.940555460165545]
We analyze a curvature-aware geodesic step that enables convergence beyond Hadamard manifolds. We employ gradient smoothing techniques and establish an $O(\sqrt{T})$ regret bound.
arXiv Detail & Related papers (2025-09-09T14:14:46Z) - Entropic Mirror Descent for Linear Systems: Polyak's Stepsize and Implicit Bias [55.72269695392027]
This paper focuses on applying entropic mirror descent to solve linear systems. The main challenge for the convergence analysis stems from the unboundedness of the domain. To overcome this without imposing restrictive assumptions, we introduce a variant of Polyak-type stepsizes.
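The entropic mirror descent update on the positive orthant can be sketched as follows. This uses the classical Polyak stepsize with $f^\star = 0$ as a stand-in (not the paper's specific Polyak-type variant), and the function name `entropic_md_polyak` is hypothetical.

```python
import numpy as np

def entropic_md_polyak(A, b, iters=500):
    # Minimize f(x) = 0.5 * ||Ax - b||^2 over x > 0 via mirror descent with
    # the entropic mirror map, which yields a multiplicative update.
    # Assumes the system admits a positive solution, so f* = 0 in the
    # classical Polyak stepsize eta = (f(x) - f*) / ||grad f(x)||^2.
    x = np.ones(A.shape[1])
    for _ in range(iters):
        r = A @ x - b
        g = A.T @ r                  # Euclidean gradient of f at x
        f = 0.5 * (r @ r)
        if f < 1e-16:
            break
        eta = f / (g @ g + 1e-12)    # Polyak step with f* = 0 (assumption)
        x = x * np.exp(-eta * g)     # entropic MD: x stays in the positive orthant
    return x
```

The multiplicative form is what keeps the iterates strictly positive without projection, which is exactly why the unbounded domain complicates the analysis.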
arXiv Detail & Related papers (2025-05-05T12:33:18Z) - Euclidean Distance Matrix Completion via Asymmetric Projected Gradient Descent [13.27202712518471]
This paper proposes and analyzes a gradient-type algorithm based on Burer-Monteiro factorization, called Asymmetric Projected Gradient Descent (APGD).
arXiv Detail & Related papers (2025-04-28T07:13:23Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Curvature-Independent Last-Iterate Convergence for Games on Riemannian Manifolds [77.4346324549323]
We show that a step size agnostic to the curvature of the manifold achieves a curvature-independent and linear last-iterate convergence rate.
To the best of our knowledge, the possibility of curvature-independent rates and/or last-iterate convergence has not been considered before.
arXiv Detail & Related papers (2023-06-29T01:20:44Z) - Hessian Based Smoothing Splines for Manifold Learning [0.228438857884398]
We propose a multidimensional smoothing spline algorithm in the context of manifold learning.
We generalize the bending energy penalty of thin-plate splines to a quadratic form on the Sobolev space of a flat manifold.
The existence and uniqueness of the solution is shown by applying the theory of reproducing kernel Hilbert spaces.
arXiv Detail & Related papers (2023-02-10T02:49:05Z) - First-Order Algorithms for Min-Max Optimization in Geodesic Metric Spaces [93.35384756718868]
Min-max algorithms have been analyzed in the Euclidean setting.
We prove that the Riemannian corrected extragradient (RCEG) method achieves last-iterate convergence at a linear rate.
arXiv Detail & Related papers (2022-06-04T18:53:44Z) - Curvature-Dependant Global Convergence Rates for Optimization on Manifolds of Bounded Geometry [6.85316573653194]
We give curvature-dependant convergence rates for weakly convex functions defined on a manifold of 1-bounded geometry.
We compute these bounds explicitly for some manifolds commonly used in the optimization literature.
We present self-contained proofs of fully general bounds on the norm of the differential of the exponential map.
arXiv Detail & Related papers (2020-08-06T08:30:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.