Deep Learning for Subspace Regression
- URL: http://arxiv.org/abs/2509.23249v2
- Date: Wed, 01 Oct 2025 12:37:24 GMT
- Title: Deep Learning for Subspace Regression
- Authors: Vladimir Fanaskov, Vladislav Trifonov, Alexander Rudikov, Ekaterina Muravleva, Ivan Oseledets
- Abstract summary: A practical way to apply such a scheme is to compute subspaces for a selected set of parameters in the computationally demanding offline stage. For realistic problems the space of parameters is high-dimensional, which renders classical strategies infeasible or unreliable. We propose to relax the problem to regression, introduce several loss functions suitable for subspace data, and use a neural network as an approximation to the high-dimensional target function.
- Score: 42.94349364701736
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is often possible to perform reduced order modelling by specifying a linear subspace which accurately captures the dynamics of the system. This approach becomes especially appealing when the linear subspace explicitly depends on parameters of the problem. A practical way to apply such a scheme is to compute subspaces for a selected set of parameters in the computationally demanding offline stage and, in the online stage, approximate the subspace for unknown parameters by interpolation. For realistic problems the space of parameters is high-dimensional, which renders classical interpolation strategies infeasible or unreliable. We propose to relax the interpolation problem to regression, introduce several loss functions suitable for subspace data, and use a neural network as an approximation to the high-dimensional target function. To further simplify the learning problem we introduce redundancy: in place of predicting a subspace of a given dimension we predict a larger subspace. We show theoretically that this strategy decreases the complexity of the mapping for elliptic eigenproblems with constant coefficients and makes the mapping smoother for general smooth functions on the Grassmann manifold. Empirical results also show that accuracy significantly improves when larger-than-needed subspaces are predicted. With a set of numerical illustrations we demonstrate that subspace regression can be useful for a range of tasks including parametric eigenproblems, deflation techniques, relaxation methods, optimal control and solution of parametric partial differential equations.
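To make the scheme concrete, here is a minimal sketch of the idea (an illustration, not the authors' implementation), assuming PyTorch: a small network maps problem parameters to a basis matrix, QR orthonormalization turns it into a subspace, and a projection-metric loss compares predicted and precomputed subspaces in a basis-independent way. All sizes and layer choices are hypothetical.

```python
# Minimal subspace-regression sketch (illustrative, not the authors' code).
import torch
import torch.nn as nn

n, k, p = 64, 4, 3  # state dim, subspace dim, parameter dim (hypothetical sizes)

net = nn.Sequential(nn.Linear(p, 128), nn.Tanh(), nn.Linear(128, n * k))

def predict_basis(mu):                     # mu: (batch, p)
    raw = net(mu).reshape(-1, n, k)
    q, _ = torch.linalg.qr(raw)            # orthonormal basis per sample
    return q

def projection_loss(q_pred, q_true):
    # ||P_pred - P_true||_F^2 with P = Q Q^T: invariant to the choice of
    # basis within each subspace, so it is a valid loss on subspace data.
    p_pred = q_pred @ q_pred.transpose(-1, -2)
    p_true = q_true @ q_true.transpose(-1, -2)
    return ((p_pred - p_true) ** 2).sum(dim=(-2, -1)).mean()

# One training step on synthetic (mu_i, Q_i) pairs standing in for the
# offline-stage data.
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
mu = torch.randn(32, p)
q_true, _ = torch.linalg.qr(torch.randn(32, n, k))
loss = projection_loss(predict_basis(mu), q_true)
opt.zero_grad(); loss.backward(); opt.step()
```

The paper's redundancy trick corresponds to choosing the predicted dimension k larger than the dimension the downstream task actually needs.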
Related papers
- Solving Inverse Parametrized Problems via Finite Elements and Extreme Learning Networks [0.0]
We develop a parametric reduced-order modeling framework for parameter-dependent partial differential equations. We derive rigorous error estimates that explicitly quantify the interplay between spatial discretization and parameter approximation. The proposed framework is applied to inverse problems in quantitative photoacoustic tomography, where we derive potential and parameter reconstruction error estimates.
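As a rough illustration of the "extreme learning network" ingredient (a hedged sketch under the standard ELM recipe, not this paper's solver): the hidden layer is random and fixed, and only the linear output weights are fitted by least squares. All sizes are hypothetical.

```python
# Minimal extreme learning machine (ELM) sketch: random fixed hidden layer,
# least-squares output weights. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))           # training inputs (e.g., PDE parameters)
y = np.sin(X[:, 0]) * np.cos(X[:, 1])           # toy target

H = 100                                         # number of random hidden neurons
W, b = rng.normal(size=(2, H)), rng.normal(size=H)
Phi = np.tanh(X @ W + b)                        # fixed random features
beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # only the output layer is trained

y_hat = np.tanh(X @ W + b) @ beta
print("train RMSE:", np.sqrt(np.mean((y_hat - y) ** 2)))
```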
arXiv Detail & Related papers (2026-02-16T14:01:50Z)
- Random Subspace Cubic-Regularization Methods, with Applications to Low-Rank Functions [0.0]
We propose and analyze random subspace variants of second-order adaptive cubic regularization methods. Our methods iteratively restrict the search space to some random subspace of the parameters, constructing and minimizing a local model only within this subspace.
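A minimal sketch of one such step, assuming the standard subspace cubic-regularization template (hypothetical sizes and regularization weight, not the paper's exact method): sketch the gradient and Hessian into a random subspace, then minimize the cubic-regularized model there.

```python
# One random-subspace cubic-regularization step (sketch).
import numpy as np
from scipy.optimize import minimize

def subspace_arc_step(grad, hess, x, s=5, sigma=10.0, seed=0):
    n = x.size
    S = np.random.default_rng(seed).normal(size=(n, s)) / np.sqrt(s)  # random subspace
    g_s, H_s = S.T @ grad(x), S.T @ hess(x) @ S                       # projected model data
    m = lambda z: g_s @ z + 0.5 * z @ H_s @ z + sigma / 3 * np.linalg.norm(z) ** 3
    z = minimize(m, np.zeros(s)).x            # minimize the cubic model within the subspace
    return x + S @ z

f = lambda x: 0.5 * x @ x                     # toy strongly convex objective
x0 = np.ones(50)
x1 = subspace_arc_step(grad=lambda x: x, hess=lambda x: np.eye(x.size), x=x0)
print(f(x0), "->", f(x1))                     # objective decreases
```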
arXiv Detail & Related papers (2025-01-16T18:37:59Z)
- Adapting Projection-Based Reduced-Order Models using Projected Gaussian Process [5.492716202049269]
We propose a Projected Gaussian Process (pGP) to learn a mapping from the parameter space to the Grassmann manifold that contains the optimal subspaces. As a statistical learning approach, the proposed pGP allows us to optimally estimate (or tune) the model parameters from data and quantify the statistical uncertainty associated with the prediction.
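The Grassmann manifold structure this entry relies on can be made concrete with principal angles; the following is a standard distance computation (a sketch with hypothetical sizes, not the pGP implementation).

```python
# Geodesic distance on the Grassmann manifold via principal angles.
import numpy as np

def grassmann_distance(Q1, Q2):
    # Columns of Q1, Q2 are orthonormal bases of two k-dim subspaces of R^n.
    sigma = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    theta = np.arccos(np.clip(sigma, -1.0, 1.0))   # principal angles
    return np.linalg.norm(theta)                    # geodesic distance

rng = np.random.default_rng(0)
Q1, _ = np.linalg.qr(rng.normal(size=(50, 4)))
Q2, _ = np.linalg.qr(rng.normal(size=(50, 4)))
print(grassmann_distance(Q1, Q1), grassmann_distance(Q1, Q2))  # 0.0, then > 0
```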
arXiv Detail & Related papers (2024-10-18T00:02:43Z)
- On Learning Gaussian Multi-index Models with Gradient Flow [57.170617397894404]
We study gradient flow on the multi-index regression problem for high-dimensional Gaussian data.
We consider a two-timescale algorithm, whereby the low-dimensional link function is learnt with a non-parametric model infinitely faster than the subspace parametrizing the low-rank projection.
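A hedged sketch of the two-timescale idea (a discrete gradient-descent stand-in for the gradient flow analyzed in the paper; architecture and step sizes are hypothetical): the link network takes many fast steps per single slow update of the projection.

```python
# Two-timescale sketch for a multi-index model y = g(W^T x).
import torch
import torch.nn as nn

d, k = 20, 2
W = nn.Parameter(0.1 * torch.randn(d, k))                         # slow: projection
g = nn.Sequential(nn.Linear(k, 32), nn.ReLU(), nn.Linear(32, 1))  # fast: link function

opt_fast = torch.optim.SGD(g.parameters(), lr=1e-2)
opt_slow = torch.optim.SGD([W], lr=1e-4)          # much smaller step: two timescales

W_true = torch.randn(d, k)
for step in range(200):
    x = torch.randn(256, d)
    y = (x @ W_true).pow(2).sum(dim=1, keepdim=True)  # toy multi-index target
    for _ in range(10):                               # fast inner loop: link only
        loss = ((g((x @ W).detach()) - y) ** 2).mean()
        opt_fast.zero_grad(); loss.backward(); opt_fast.step()
    loss = ((g(x @ W) - y) ** 2).mean()               # one slow step on the subspace
    opt_slow.zero_grad(); loss.backward(); opt_slow.step()
```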
arXiv Detail & Related papers (2023-10-30T17:55:28Z)
- Nonparametric Linear Feature Learning in Regression Through Regularisation [0.0]
We propose a novel method for joint linear feature learning and non-parametric function estimation.
By using alternating minimisation, we iteratively rotate the data to improve alignment with leading directions.
We establish that the expected risk of our method converges to the minimal risk under minimal assumptions and with explicit rates.
arXiv Detail & Related papers (2023-07-24T12:52:55Z)
- Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z)
- Distributed Sketching for Randomized Optimization: Exact Characterization, Concentration and Lower Bounds [54.51566432934556]
We consider distributed optimization methods for problems where forming the Hessian is computationally challenging.
We leverage randomized sketches for reducing the problem dimensions as well as preserving privacy and improving straggler resilience in asynchronous distributed systems.
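A minimal sketch of the sketch-and-average pattern (Gaussian sketches assumed; not necessarily the paper's exact estimator): each worker solves a small sketched least-squares problem and the coordinator averages the solutions.

```python
# Distributed randomized sketching for least squares (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n, d, m, s = 2000, 20, 8, 200          # rows, cols, workers, sketch size
A, x_true = rng.normal(size=(n, d)), rng.normal(size=d)
b = A @ x_true + 0.01 * rng.normal(size=n)

def worker_solve(seed):
    S = np.random.default_rng(seed).normal(size=(s, n)) / np.sqrt(s)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)   # small sketched problem
    return x

x_avg = np.mean([worker_solve(i) for i in range(m)], axis=0)  # coordinator averages
print("error:", np.linalg.norm(x_avg - x_true))
```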
arXiv Detail & Related papers (2022-03-18T05:49:13Z)
- Gaussian Process Subspace Regression for Model Reduction [7.41244589428771]
Subspace-valued functions arise in a wide range of problems, including parametric reduced order modeling (PROM).
In PROM, each parameter point can be associated with a subspace, which is used for Petrov-Galerkin projections of large system matrices.
We propose a novel Bayesian nonparametric model for subspace prediction: the Gaussian Process Subspace regression (GPS) model.
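The Petrov-Galerkin projection mentioned above is easy to state concretely: given a trial basis V and a test basis W, the reduced system is (W^T A V) x_r = W^T b. A minimal sketch with random data (Galerkin choice W = V; sizes hypothetical):

```python
# Petrov-Galerkin model reduction of a linear system (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 10
A = rng.normal(size=(n, n)) + n * np.eye(n)    # well-conditioned toy system
b = rng.normal(size=n)
V, _ = np.linalg.qr(rng.normal(size=(n, k)))   # trial subspace basis
W = V                                          # Galerkin choice of test basis

A_r, b_r = W.T @ A @ V, W.T @ b                # k x k reduced system
x_r = np.linalg.solve(A_r, b_r)
x_approx = V @ x_r                             # lift back to the full space
print("relative residual:", np.linalg.norm(A @ x_approx - b) / np.linalg.norm(b))
```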
arXiv Detail & Related papers (2021-07-09T20:41:23Z)
- Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
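TD(0) with linear function approximation is the canonical instance of such a projected fixed-point method; a self-contained toy sketch (random-walk reward process; all sizes hypothetical):

```python
# TD(0) with linear function approximation on a toy random-walk chain.
import numpy as np

rng = np.random.default_rng(0)
n_states, d, gamma, alpha = 10, 4, 0.9, 0.05
Phi = rng.normal(size=(n_states, d))       # feature map, rows phi(s)
theta = np.zeros(d)

s = 0
for t in range(20000):
    s_next = (s + rng.choice([-1, 1])) % n_states   # random-walk transition
    r = 1.0 if s_next == 0 else 0.0                 # reward on returning to state 0
    td_error = r + gamma * Phi[s_next] @ theta - Phi[s] @ theta
    theta += alpha * td_error * Phi[s]              # stochastic projected fixed-point step
    s = s_next

print("estimated state values:", Phi @ theta)
```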
arXiv Detail & Related papers (2020-12-09T20:19:32Z)
- Implicit differentiation of Lasso-type models for hyperparameter optimization [82.73138686390514]
We introduce an efficient implicit differentiation algorithm, without matrix inversion, tailored for Lasso-type problems.
Our approach scales to high-dimensional data by leveraging the sparsity of the solutions.
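A sketch of the implicit-differentiation idea (under the standard Lasso optimality conditions and scikit-learn's 1/(2n) loss scaling; not the paper's exact algorithm): on the active set the hypergradient needs only a small linear solve, with no full matrix inversion.

```python
# Hypergradient of a validation loss w.r.t. the Lasso penalty via implicit diff.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d = 100, 30
X = rng.normal(size=(n, d))
beta_true = np.zeros(d); beta_true[:3] = [1.0, -2.0, 1.5]
y = X @ beta_true + 0.1 * rng.normal(size=n)
X_val = rng.normal(size=(50, d))
y_val = X_val @ beta_true + 0.1 * rng.normal(size=50)

lam = 0.1
beta = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
S = np.flatnonzero(beta)                     # active set of the Lasso solution
z = np.sign(beta[S])
# Optimality on S: X_S^T X_S beta_S = X_S^T y - n*lam*z,
# hence d beta_S / d lam = -n (X_S^T X_S)^{-1} z (small |S| x |S| solve only).
dbeta = np.linalg.solve(X[:, S].T @ X[:, S], -n * z)

resid = X_val @ beta - y_val                 # validation residual
hypergrad = (X_val[:, S].T @ resid) @ dbeta  # chain rule through the active set
print("d(val loss)/d(lambda):", hypergrad)
```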
arXiv Detail & Related papers (2020-02-20T18:43:42Z)