Robust Least-Squares Optimization for Data-Driven Predictive Control: A Geometric Approach
- URL: http://arxiv.org/abs/2511.09242v1
- Date: Thu, 13 Nov 2025 01:42:24 GMT
- Title: Robust Least-Squares Optimization for Data-Driven Predictive Control: A Geometric Approach
- Authors: Shreyas Bharadwaj, Bamdev Mishra, Cyrus Mostajeran, Alberto Padoan, Jeremy Coulson, Ravi N. Banavar
- Abstract summary: The uncertainty in this geometric relation is modeled as a metric ball on the Grassmannian manifold. The inner maximization admits a closed-form solution, enabling an efficient algorithm with a transparent geometric interpretation.
- Score: 5.2456503500395275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The paper studies a geometrically robust least-squares problem that extends classical and norm-based robust formulations. Rather than minimizing residual error for fixed or perturbed data, we interpret least-squares as enforcing approximate subspace inclusion between measured and true data spaces. The uncertainty in this geometric relation is modeled as a metric ball on the Grassmannian manifold, leading to a min-max problem over Euclidean and manifold variables. The inner maximization admits a closed-form solution, enabling an efficient algorithm with a transparent geometric interpretation. Applied to robust finite-horizon linear-quadratic tracking in data-enabled predictive control, the method improves upon existing robust least-squares formulations, achieving stronger robustness and favorable scaling under small uncertainty.
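To make the min-max structure concrete, below is a minimal numpy sketch of the kind of program the abstract describes: a least-squares fit against a worst-case data subspace drawn from a metric ball on the Grassmannian. It is purely illustrative; the residual objective, the sampling-based inner step, and all names are assumptions, and the paper's closed-form inner maximization is not reproduced here.

```python
import numpy as np

def grassmann_geodesic(U, rho, rng):
    """Move an orthonormal basis U (n x k) along a random Grassmannian
    geodesic so the subspace distance from span(U) equals rho
    (Edelman-Arias-Smith geodesic formula)."""
    D = rng.standard_normal(U.shape)
    D -= U @ (U.T @ D)                      # project onto the tangent space at U
    Q, S, Vt = np.linalg.svd(D, full_matrices=False)
    S = rho * S / np.linalg.norm(S)         # principal angles with 2-norm rho
    return U @ Vt.T @ np.diag(np.cos(S)) @ Vt + Q @ np.diag(np.sin(S)) @ Vt

def robust_lstsq(U_meas, y, rho, n_samples=200, seed=0):
    """Crude surrogate for the min-max program: sample subspaces in the
    rho-ball around the measured subspace, keep the one with the largest
    best-fit residual, then solve least squares against it."""
    rng = np.random.default_rng(seed)
    worst_U, worst_res = U_meas, -np.inf
    for _ in range(n_samples):
        U = grassmann_geodesic(U_meas, rho, rng)
        res = np.linalg.norm(U @ np.linalg.lstsq(U, y, rcond=None)[0] - y)
        if res > worst_res:
            worst_U, worst_res = U, res
    return np.linalg.lstsq(worst_U, y, rcond=None)[0]

# Toy usage: a 6-dimensional measured subspace of R^20.
rng = np.random.default_rng(1)
U0 = np.linalg.qr(rng.standard_normal((20, 6)))[0]
x_rob = robust_lstsq(U0, rng.standard_normal(20), rho=0.1)
```

In the paper the inner maximization is solved exactly in closed form, which is what makes the overall algorithm efficient; the sampling loop above only conveys the geometry of the uncertainty set.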
Related papers
- Online Inference of Constrained Optimization: Primal-Dual Optimality and Sequential Quadratic Programming [55.848340925419286]
We study online statistical inference for the solutions of quadratic optimization problems with equality and inequality constraints. We develop a stochastic sequential quadratic programming (SSQP) method to solve these problems, where the step direction is computed by sequentially performing a quadratic approximation of the objective and a linear approximation of the constraints. We show that our method achieves global almost-sure convergence and exhibits local asymptotic normality with an optimal primal-dual limiting covariance matrix in the sense of Hájek and Le Cam.
arXiv Detail & Related papers (2025-11-27T06:16:17Z)
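The step-direction construction summarized above (a quadratic approximation of the objective plus a linear approximation of the constraints) reduces, in the equality-constrained deterministic case, to a single linear KKT solve. A minimal sketch under that simplification, with a hypothetical toy problem; the inequality-constrained, stochastic, and inference aspects of the paper are omitted:

```python
import numpy as np

def sqp_step(H, g, A, c):
    """One SQP step: minimize the quadratic model 0.5*d'Hd + g'd subject to
    the linearized constraints A d + c = 0, via the KKT system."""
    m = A.shape[0]
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-g, -c]))
    return sol[:g.size], sol[g.size:]            # step direction, multipliers

# Toy usage: minimize 0.5*||x||^2 subject to x0 + x1 = 1, starting at x = 0.
H, g = np.eye(2), np.zeros(2)
A, c = np.array([[1.0, 1.0]]), np.array([-1.0])  # constraint residual at x = 0
d, lam = sqp_step(H, g, A, c)
print(d)  # -> [0.5, 0.5], the minimum-norm feasible point
```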
- Topolow: Force-Directed Euclidean Embedding of Dissimilarity Data with Robustness Against Non-Metricity and Sparsity [0.8287206589886881]
Topolow is a physics-inspired, gradient-free optimization framework for such embedding problems. Topolow does not require the input dissimilarities to be metric, making it a robust solution for embedding non-metric measurements into a valid Euclidean space. This paper formalizes the algorithm, first introduced as Topolow in the context of antigenic mapping in (Arhami and Rohani, 2025).
arXiv Detail & Related papers (2025-08-03T12:19:17Z)
- Early-Stopped Mirror Descent for Linear Regression over Convex Bodies [14.30754799752932]
We study the setting of high-dimensional linear regression under additive Gaussian noise. We show that the worst-case risk of unconstrained early-stopped mirror descent with an appropriate potential is at most that of the least squares estimator constrained to the convex body.
arXiv Detail & Related papers (2025-03-05T11:59:31Z)
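For orientation, here is a minimal sketch of early-stopped mirror descent on a least-squares objective, assuming the (1/2)||x||_p^2 potential as one concrete example; the paper's choice of potential and convex body is not reproduced, and "early stopping" is modeled as a fixed iteration budget:

```python
import numpy as np

def mirror_descent_lsq(A, b, steps, eta, p=1.2):
    """Mirror descent on 0.5*||Ax - b||^2 with potential 0.5*||x||_p^2,
    stopped after `steps` iterations (implicit regularization)."""
    q = p / (p - 1)                            # dual exponent
    theta = np.zeros(A.shape[1])               # dual (mirror) variable
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        theta -= eta * (A.T @ (A @ x - b))     # gradient step in the dual
        nt = np.linalg.norm(theta, q)
        # Inverse mirror map for the 0.5*||.||_p^2 potential:
        x = np.zeros_like(theta) if nt == 0 else \
            np.sign(theta) * np.abs(theta) ** (q - 1) * nt ** (2 - q)
    return x

# Toy usage: overparameterized regression, 50 equations in 200 unknowns.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
b = A @ (np.arange(200) == 0)                  # sparse ground truth
x_hat = mirror_descent_lsq(A, b, steps=500, eta=1e-3)
```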
- Sample-Efficient Geometry Reconstruction from Euclidean Distances using Non-Convex Optimization [7.114174944371803]
The problem of finding a suitable point embedding from Euclidean distance information on point pairs arises both as a core task and as a sub-problem in machine learning.
In this paper, we aim to solve this problem given a minimal number of samples.
arXiv Detail & Related papers (2024-10-22T13:02:12Z)
- SPARE: Symmetrized Point-to-Plane Distance for Robust Non-Rigid 3D Registration [77.13381026159111]
We propose SPARE, a novel formulation that utilizes a symmetrized point-to-plane distance for robust non-rigid registration. The proposed method greatly improves the accuracy of non-rigid registration problems and maintains relatively high solution efficiency.
arXiv Detail & Related papers (2024-05-30T15:55:04Z)
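For intuition, one common way to symmetrize the point-to-plane distance is to project each displacement onto both the source and the target normal; the exact SPARE objective may differ, and the names below are illustrative:

```python
import numpy as np

def symmetric_point_to_plane(P, Q, Np, Nq):
    """Per-correspondence symmetrized point-to-plane residuals for points
    P, Q (n x 3) with unit normals Np, Nq: the displacement is measured
    along BOTH normals rather than the target normal alone."""
    d = P - Q                                  # (n, 3) displacements
    r_src = np.einsum("ij,ij->i", d, Np)       # component along source normals
    r_tgt = np.einsum("ij,ij->i", d, Nq)       # component along target normals
    return r_src**2 + r_tgt**2

# Toy usage with random correspondences and normalized normals.
rng = np.random.default_rng(0)
P, Q = rng.standard_normal((100, 3)), rng.standard_normal((100, 3))
Np = rng.standard_normal((100, 3)); Np /= np.linalg.norm(Np, axis=1, keepdims=True)
Nq = rng.standard_normal((100, 3)); Nq /= np.linalg.norm(Nq, axis=1, keepdims=True)
print(symmetric_point_to_plane(P, Q, Np, Nq).sum())
```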
- Riemannian stochastic optimization methods avoid strict saddle points [68.80251170757647]
We show that the policies under study avoid strict saddle points / submanifolds with probability 1.
This result provides an important sanity check as it shows that, almost always, the limit state of an algorithm can only be a local minimizer.
arXiv Detail & Related papers (2023-11-04T11:12:24Z)
- Multistage Stochastic Optimization via Kernels [3.7565501074323224]
We develop a non-parametric, data-driven, tractable approach for solving multistage optimization problems.
We show that the proposed method produces decision rules with near-optimal average performance.
arXiv Detail & Related papers (2023-03-11T23:19:32Z)
- Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
arXiv Detail & Related papers (2020-12-09T20:19:32Z)
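The policy-evaluation instance mentioned above is TD learning with linear function approximation, i.e., computing a value function inside a known low-dimensional feature subspace. A toy TD(0) sketch under that assumption; the chain MDP and features are hypothetical:

```python
import numpy as np

def td0_linear(transitions, phi, d, gamma=0.9, alpha=0.1, sweeps=200):
    """TD(0) with linear approximation V(s) ~ phi(s) @ w: a projected
    fixed-point iteration restricted to a d-dimensional subspace."""
    w = np.zeros(d)
    for _ in range(sweeps):
        for s, r, s_next in transitions:
            delta = r + gamma * (phi(s_next) @ w) - phi(s) @ w  # TD error
            w += alpha * delta * phi(s)                         # semi-gradient step
    return w

# Hypothetical 4-state chain: 0 -> 1 -> 2 -(reward 1)-> 3 (absorbing).
phi = lambda s: np.eye(4)[s]                 # one-hot features (tabular case)
transitions = [(0, 0.0, 1), (1, 0.0, 2), (2, 1.0, 3), (3, 0.0, 3)]
print(td0_linear(transitions, phi, d=4))     # approx. [0.81, 0.9, 1.0, 0.0]
```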
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
arXiv Detail & Related papers (2020-06-10T15:00:09Z)
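A minimal sketch-and-solve illustration with a Gaussian embedding: compress the row dimension of the data, then solve the small ridge problem. The paper's effective-dimension-adaptive choice of sketch size and its SRHT variant are not reproduced here:

```python
import numpy as np

def sketched_ridge(A, b, lam, m, rng=None):
    """Approximate argmin_x ||Ax - b||^2 + lam*||x||^2 by applying an
    m x n Gaussian embedding S (m << n rows) and solving the sketched
    normal equations."""
    rng = np.random.default_rng() if rng is None else rng
    S = rng.standard_normal((m, A.shape[0])) / np.sqrt(m)   # Gaussian embedding
    SA, Sb = S @ A, S @ b
    d = A.shape[1]
    return np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)

# Toy usage: 10,000 rows compressed to a 500-row sketch.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((10_000, 50)), rng.standard_normal(10_000)
x_sk = sketched_ridge(A, b, lam=1.0, m=500, rng=rng)
```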
- High-Dimensional Robust Mean Estimation via Gradient Descent [73.61354272612752]
We show that the problem of robust mean estimation in the presence of a constant fraction of adversarial outliers can be solved by gradient descent.
Our work establishes an intriguing connection between non-convex optimization and robust statistics.
arXiv Detail & Related papers (2020-05-04T10:48:04Z) - Geometry, Computation, and Optimality in Stochastic Optimization [24.154336772159745]
We study computational and statistical consequences of problem geometry in stochastic and online optimization. By focusing on constraint set and gradient geometry, we characterize the problem families for which stochastic- and adaptive-gradient methods are (minimax) optimal.
arXiv Detail & Related papers (2019-09-23T16:14:26Z)