Guaranteed Noisy CP Tensor Recovery via Riemannian Optimization on the Segre Manifold
- URL: http://arxiv.org/abs/2510.00569v1
- Date: Wed, 01 Oct 2025 06:44:52 GMT
- Title: Guaranteed Noisy CP Tensor Recovery via Riemannian Optimization on the Segre Manifold
- Authors: Ke Xu, Yuefeng Han
- Abstract summary: We exploit the intrinsic geometry of rank-one tensors by casting the recovery task as an optimization problem over the Segre manifold. We prove that RGD converges at a local linear rate, while RGN exhibits an initial local quadratic convergence phase that transitions to a linear rate as the iterates approach the statistical noise floor.
- Score: 9.804487437104289
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recovering a low-CP-rank tensor from noisy linear measurements is a central challenge in high-dimensional data analysis, with applications spanning tensor PCA, tensor regression, and beyond. We exploit the intrinsic geometry of rank-one tensors by casting the recovery task as an optimization problem over the Segre manifold, the smooth Riemannian manifold of rank-one tensors. This geometric viewpoint yields two powerful algorithms: Riemannian Gradient Descent (RGD) and Riemannian Gauss-Newton (RGN), each of which preserves feasibility at every iteration. Under mild noise assumptions, we prove that RGD converges at a local linear rate, while RGN exhibits an initial local quadratic convergence phase that transitions to a linear rate as the iterates approach the statistical noise floor. Extensive synthetic experiments validate these convergence guarantees and demonstrate the practical effectiveness of our methods.
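The RGD scheme described in the abstract can be illustrated in its simplest setting, tensor PCA with an identity measurement operator: take gradient steps on the unit-norm CP factors and retract back to the Segre manifold by renormalizing. The sketch below is a hedged illustration, not the paper's implementation; the function name, spectral initialization, and step-size rescaling are assumptions.

```python
import numpy as np

def rgd_rank1(Y, iters=200, lr=0.2):
    """Gradient descent with retraction for the best rank-one (Segre) fit to Y.
    Illustrative sketch for the tensor-PCA special case; names and step-size
    rule are assumptions, not the paper's."""
    d1, d2, d3 = Y.shape
    # Spectral initialization: leading left singular vector of each unfolding.
    a = np.linalg.svd(Y.reshape(d1, -1))[0][:, 0]
    b = np.linalg.svd(np.moveaxis(Y, 1, 0).reshape(d2, -1))[0][:, 0]
    c = np.linalg.svd(np.moveaxis(Y, 2, 0).reshape(d3, -1))[0][:, 0]
    lam = np.einsum('ijk,i,j,k->', Y, a, b, c)
    if lam < 0:                      # resolve the sign ambiguity of the SVD init
        a, lam = -a, -lam
    for _ in range(iters):
        # Contractions of Y against two of the three unit factors.
        ya = np.einsum('ijk,j,k->i', Y, b, c)
        yb = np.einsum('ijk,i,k->j', Y, a, c)
        yc = np.einsum('ijk,i,j->k', Y, a, b)
        # Gradient of 0.5*||lam a@b@c - Y||^2 in each factor, rescaled by 1/lam
        # so a fixed step size is safe across signal strengths.
        a = a - lr * (lam * a - ya)
        b = b - lr * (lam * b - yb)
        c = c - lr * (lam * c - yc)
        # Retraction: renormalize so the iterate stays on the Segre manifold.
        a, b, c = a / np.linalg.norm(a), b / np.linalg.norm(b), c / np.linalg.norm(c)
        lam = np.einsum('ijk,i,j,k->', Y, a, b, c)  # optimal weight given factors
    return lam, a, b, c
```

Feasibility is preserved at every iteration, as in the abstract: each iterate is exactly rank one by construction, with the renormalization playing the role of the retraction.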
Related papers
- Kernel Regression of Multi-Way Data via Tensor Trains with Hadamard Overparametrization: The Dynamic Graph Flow Case [9.941965164307843]
Kernel Regression via Tensor Trains with Hadamard overparametrization (KReTTaH) is a regression-based framework for interpretable multi-way data imputation. KReTTaH consistently outperforms state-of-the-art alternatives.
arXiv Detail & Related papers (2025-09-26T11:00:05Z) - Normalized Iterative Hard Thresholding for Tensor Recovery [7.5277782201584085]
Low-rank recovery builds upon ideas from the theory of compressive sensing. We propose a tensor extension of NIHT, referred to as TNIHT, for the recovery of low-rank tensors.
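The TNIHT idea summarized above can be sketched by carrying the matrix-NIHT template over to tensors: a gradient step with a normalized (adaptive) step size, followed by hard thresholding back to the low-multilinear-rank set via truncated HOSVD. Function names, the step-size rule, and the initialization below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def hosvd_truncate(X, ranks):
    """Hard-threshold to multilinear rank <= ranks via truncated HOSVD."""
    factors = []
    for mode, r in enumerate(ranks):
        unfold = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        factors.append(np.linalg.svd(unfold, full_matrices=False)[0][:, :r])
    for mode, U in enumerate(factors):
        # Project the given mode onto span(U): X <- X x_mode (U U^T).
        X = np.moveaxis(np.tensordot(U @ U.T, np.moveaxis(X, mode, 0), axes=1),
                        0, mode)
    return X

def tniht(A, y, shape, ranks, iters=300):
    """Sketch of a tensor NIHT loop (illustrative, not the paper's code)."""
    X = hosvd_truncate(np.reshape(A.T @ y, shape), ranks)   # one-step init
    for _ in range(iters):
        # Gradient of 0.5*||y - A vec(X)||^2, reshaped back into a tensor.
        g = np.reshape(A.T @ (y - A @ X.ravel()), shape)
        denom = np.linalg.norm(A @ g.ravel()) ** 2
        if denom < 1e-30:
            break
        mu = np.linalg.norm(g) ** 2 / denom                 # normalized step
        X = hosvd_truncate(X + mu * g, ranks)               # hard thresholding
    return X
```

The normalized step size is the defining feature of NIHT: it adapts to the local conditioning of the measurement operator instead of requiring a hand-tuned constant.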
arXiv Detail & Related papers (2025-07-06T03:36:50Z) - Guaranteed Nonconvex Factorization Approach for Tensor Train Recovery [23.123362765639214]
We provide the first convergence guarantee for the factorization approach. We optimize over the so-called left-orthogonal TT format. We prove that RGD can reliably recover the ground truth at a linear rate.
arXiv Detail & Related papers (2024-01-05T01:17:16Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Curvature-Independent Last-Iterate Convergence for Games on Riemannian Manifolds [77.4346324549323]
We show that a step size agnostic to the curvature of the manifold achieves a curvature-independent and linear last-iterate convergence rate.
To the best of our knowledge, the possibility of curvature-independent rates and/or last-iterate convergence has not been considered before.
arXiv Detail & Related papers (2023-06-29T01:20:44Z) - Tensor-on-Tensor Regression: Riemannian Optimization, Over-parameterization, Statistical-computational Gap, and Their Interplay [9.427635404752936]
We study the tensor-on-tensor regression, where the goal is to connect tensor responses to tensor covariates with a low Tucker rank parameter tensor/matrix.
We propose two methods to cope with the challenge of unknown rank.
We provide the first convergence guarantee for the general tensor-on-tensor regression.
arXiv Detail & Related papers (2022-06-17T13:15:27Z) - First-Order Algorithms for Min-Max Optimization in Geodesic Metric Spaces [93.35384756718868]
Min-max algorithms have been analyzed in the Euclidean setting.
We prove that the Riemannian corrected extragradient (RCEG) method achieves last-iterate convergence at a linear rate.
arXiv Detail & Related papers (2022-06-04T18:53:44Z) - On the Convergence of Stochastic Extragradient for Bilinear Games with Restarted Iteration Averaging [96.13485146617322]
We present an analysis of the stochastic ExtraGradient (SEG) method with constant step size, and present variations of the method that yield favorable convergence.
We prove that when augmented with averaging, SEG provably converges to the Nash equilibrium, and such a rate is provably accelerated by incorporating a scheduled restarting procedure.
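The averaged, restarted SEG scheme summarized above can be sketched on an unconstrained bilinear game: each step extrapolates with one noisy gradient evaluation, updates with a second, accumulates a running average, and periodically restarts from that average. Step size, restart schedule, and noise model here are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def seg_restarted_avg(B, steps=2000, eta=0.2, restart_every=500,
                      noise=0.01, seed=0):
    """Stochastic extragradient on min_x max_y x^T B y with iterate averaging
    and scheduled restarts from the running average (illustrative sketch).
    `steps` is assumed to be a multiple of `restart_every`, so the returned
    point is the final epoch's average."""
    rng = np.random.default_rng(seed)
    n, m = B.shape
    x, y = np.ones(n), np.ones(m)
    sx, sy, k = np.zeros(n), np.zeros(m), 0
    for t in range(steps):
        # Extrapolation step with a noisy gradient field F(x, y) = (B y, -B^T x).
        xh = x - eta * (B @ y + noise * rng.normal(size=n))
        yh = y + eta * (B.T @ x + noise * rng.normal(size=m))
        # Update step, evaluated at the extrapolated point.
        x = x - eta * (B @ yh + noise * rng.normal(size=n))
        y = y + eta * (B.T @ xh + noise * rng.normal(size=m))
        sx, sy, k = sx + x, sy + y, k + 1
        if (t + 1) % restart_every == 0:
            # Restart: jump to the averaged iterate and reset the average.
            x, y = sx / k, sy / k
            sx, sy, k = np.zeros(n), np.zeros(m), 0
    return x, y
```

On this game the Nash equilibrium is the origin; restarting from the average makes the transient error contract geometrically across epochs, illustrating the acceleration claimed in the summary.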
arXiv Detail & Related papers (2021-06-30T17:51:36Z) - New Riemannian preconditioned algorithms for tensor completion via polyadic decomposition [10.620193291237262]
These algorithms exploit a non-Euclidean metric on the product space of the factor matrices of the low-rank tensor in the polyadic decomposition form.
We prove that the proposed gradient descent algorithm globally converges to a stationary point of the tensor completion problem.
Numerical results on synthetic and real-world data suggest that the proposed algorithms are more efficient in memory and time compared to state-of-the-art algorithms.
arXiv Detail & Related papers (2021-01-26T22:11:06Z) - Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping [69.9674326582747]
We propose a new accelerated first-order method called clipped-SSTM for smooth convex optimization with heavy-tailed distributed noise in gradients.
We prove new complexity bounds that outperform state-of-the-art results in this case.
We derive the first non-trivial high-probability complexity bounds for SGD with clipping without light-tails assumption on the noise.
arXiv Detail & Related papers (2020-05-21T17:05:27Z) - Multi-View Spectral Clustering Tailored Tensor Low-Rank Representation [105.33409035876691]
This paper explores the problem of multi-view spectral clustering (MVSC) based on tensor low-rank modeling.
We design a novel structured tensor low-rank norm tailored to MVSC.
We show that the proposed method outperforms state-of-the-art methods to a significant extent.
arXiv Detail & Related papers (2020-04-30T11:52:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.