On efficient algorithms for computing near-best polynomial
approximations to high-dimensional, Hilbert-valued functions from limited
samples
- URL: http://arxiv.org/abs/2203.13908v2
- Date: Mon, 6 Nov 2023 21:03:28 GMT
- Title: On efficient algorithms for computing near-best polynomial
approximations to high-dimensional, Hilbert-valued functions from limited
samples
- Authors: Ben Adcock, Simone Brugiapaglia, Nick Dexter, Sebastian Moraga
- Abstract summary: Sparse polynomial approximation has become indispensable for approximating smooth, high-dimensional functions from limited samples.
This paper introduces algorithms and theoretical guarantees that assert exponential or algebraic rates of convergence, along with robustness to sampling, algorithmic, and physical discretization errors.
- Score: 1.0650780147044159
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sparse polynomial approximation has become indispensable for approximating
smooth, high- or infinite-dimensional functions from limited samples. This is a
key task in computational science and engineering, e.g., surrogate modelling in
uncertainty quantification where the function is the solution map of a
parametric or stochastic differential equation (DE). Yet, sparse polynomial
approximation lacks a complete theory. On the one hand, there is a
well-developed theory of best $s$-term polynomial approximation, which asserts
exponential or algebraic rates of convergence for holomorphic functions. On the
other, there are increasingly mature methods such as (weighted)
$\ell^1$-minimization for computing such approximations. While the sample
complexity of these methods has been analyzed with compressed sensing, whether
they achieve best $s$-term approximation rates is not fully understood.
Furthermore, these methods are not algorithms per se, as they involve exact
minimizers of nonlinear optimization problems.
This paper closes these gaps. Specifically, we consider the following
question: are there robust, efficient algorithms for computing approximations
to finite- or infinite-dimensional, holomorphic and Hilbert-valued functions
from limited samples that achieve best $s$-term rates? We answer this
affirmatively by introducing algorithms and theoretical guarantees that assert
exponential or algebraic rates of convergence, along with robustness to
sampling, algorithmic, and physical discretization errors. We tackle both
scalar- and Hilbert-valued functions, this being key to parametric or
stochastic DEs. Our results involve significant developments of existing
techniques, including a novel restarted primal-dual iteration for solving
weighted $\ell^1$-minimization problems in Hilbert spaces. Our theory is
supplemented by numerical experiments demonstrating the efficacy of these
algorithms.
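As a concrete illustration of the kind of optimization problem the abstract refers to, below is a minimal sketch of a plain (non-restarted) primal-dual iteration for a weighted square-root LASSO, one standard formulation for computing weighted $\ell^1$ polynomial approximations from samples. The matrix A (a polynomial basis evaluated at the sample points), the weights w, and the parameter lam are illustrative placeholders; this is not the paper's restarted primal-dual algorithm, only a Chambolle-Pock-style loop under those assumptions.

```python
# Hedged sketch (not the paper's restarted scheme): a plain primal-dual
# (Chambolle-Pock) iteration for the weighted square-root LASSO
#     min_z  lam * || w .* z ||_1 + || A z - b ||_2,
# a common way to compute weighted l^1 polynomial approximations from samples.
import numpy as np

def weighted_srlasso_primal_dual(A, b, w, lam, n_iter=2000):
    m, N = A.shape
    L = np.linalg.norm(A, 2)          # spectral norm of A
    tau = sigma = 0.99 / L            # step sizes with sigma * tau * L^2 < 1
    z = np.zeros(N)                   # primal iterate (coefficients)
    z_bar = z.copy()                  # extrapolated primal iterate
    y = np.zeros(m)                   # dual iterate
    for _ in range(n_iter):
        # Dual step: prox of the conjugate of ||. - b||_2, i.e. a shifted
        # projection onto the unit Euclidean ball.
        v = y + sigma * (A @ z_bar - b)
        y = v / max(1.0, np.linalg.norm(v))
        # Primal step: weighted soft-thresholding, the prox of lam * ||w .* z||_1.
        u = z - tau * (A.T @ y)
        z_new = np.sign(u) * np.maximum(np.abs(u) - tau * lam * w, 0.0)
        # Extrapolation (theta = 1).
        z_bar = 2.0 * z_new - z
        z = z_new
    return z

# Toy usage: approximate f(x) = 1/(1 + 25 x^2) on [-1, 1] in a Chebyshev basis
# from 30 random samples (all sizes and parameters here are arbitrary).
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 30)
N = 80
A = np.cos(np.outer(np.arccos(x), np.arange(N)))   # Chebyshev design matrix T_j(x_i)
b = 1.0 / (1.0 + 25.0 * x**2)
coeffs = weighted_srlasso_primal_dual(A, b, np.ones(N), lam=0.05)
```

Because the square-root LASSO data-fidelity term is not squared, its conjugate prox reduces to a projection onto the unit ball; in practice the weights, lam, and the number of iterations would be tuned to the problem.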
Related papers
- Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics (ARC) methods have proven to have very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - High-Probability Bounds for Stochastic Optimization and Variational
Inequalities: the Case of Unbounded Variance [59.211456992422136]
We propose algorithms with high-probability convergence results under less restrictive assumptions.
These results justify the usage of the considered methods for solving problems that do not fit standard functional classes in optimization.
arXiv Detail & Related papers (2023-02-02T10:37:23Z) - Stochastic Inexact Augmented Lagrangian Method for Nonconvex Expectation
Constrained Optimization [88.0031283949404]
Many real-world problems have complicated nonconvex functional constraints and use a large number of data points.
Our proposed method outperforms an existing method with the previously best-known result.
arXiv Detail & Related papers (2022-12-19T14:48:54Z) - Complexity of Inexact Proximal Point Algorithm for minimizing convex functions with Holderian Growth [1.9643748953805935]
We derive the nonasymptotic complexity of exact and inexact PPA for minimizing convex functions under $\gamma$-Hölderian growth (the growth condition is recalled after this list).
Our numerical tests show improvements over existing restarting versions of the Subgradient Method.
arXiv Detail & Related papers (2021-08-10T07:15:07Z) - A stochastic linearized proximal method of multipliers for convex
stochastic optimization with expectation constraints [8.133190610747974]
We present a computable approximation-type algorithm, namely the stochastic linearized proximal method of multipliers.
Some preliminary numerical results demonstrate the performance of the proposed algorithm.
arXiv Detail & Related papers (2021-06-22T07:24:17Z) - Finding Global Minima via Kernel Approximations [90.42048080064849]
We consider the global minimization of smooth functions based solely on function evaluations.
In this paper, we consider an approach that jointly models the function to approximate and finds a global minimum.
arXiv Detail & Related papers (2020-12-22T12:59:30Z) - Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
arXiv Detail & Related papers (2020-12-09T20:19:32Z) - Hybrid Variance-Reduced SGD Algorithms For Nonconvex-Concave Minimax
Problems [26.24895953952318]
We develop an algorithm to solve a class of nonconvex-concave minimax problems.
The algorithm can also work with either single or two mini-batch derivatives.
arXiv Detail & Related papers (2020-06-27T03:05:18Z) - Reinforcement Learning with General Value Function Approximation:
Provably Efficient Approach via Bounded Eluder Dimension [124.7752517531109]
We establish a provably efficient reinforcement learning algorithm with general value function approximation.
We show that our algorithm achieves a regret bound of $\widetilde{O}(\mathrm{poly}(dH)\sqrt{T})$ where $d$ is a complexity measure.
Our theory generalizes recent progress on RL with linear value function approximation and does not make explicit assumptions on the model of the environment.
arXiv Detail & Related papers (2020-05-21T17:36:09Z) - Inexact and Stochastic Generalized Conditional Gradient with Augmented
Lagrangian and Proximal Step [2.0196229393131726]
We analyze inexact and stochastic versions of the CGALP algorithm developed in the authors' previous paper.
This allows one to compute some gradients, proximal terms, and/or linear minimization oracles in an inexact fashion.
We show convergence of the Lagrangian to an optimum and feasibility of the affine constraint.
arXiv Detail & Related papers (2020-05-11T14:52:16Z) - Efficient algorithms for multivariate shape-constrained convex
regression problems [9.281671380673306]
We prove that the least squares estimator is computable via solving a constrained convex quadratic programming (QP) problem with $(n+1)d$ variables and at least $n(n-1)$ linear inequality constraints.
For solving the generally very large-scale convex QP, we design two efficient algorithms: one is the symmetric Gauss-Seidel based alternating direction method of multipliers (sGS-ADMM), and the other is the proximal augmented Lagrangian method (pALM) with the subproblems solved by the semismooth Newton method.
arXiv Detail & Related papers (2020-02-26T11:18:43Z)
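For reference, the $\gamma$-Hölderian growth condition mentioned in the inexact proximal point entry above is usually stated along the following lines; the constant and the set on which the bound is imposed vary between papers, so this is only a generic form:
$$ f(x) - f^\ast \;\ge\; \sigma_\gamma \, \operatorname{dist}(x, X^\ast)^{\gamma} \qquad \text{for all admissible } x, $$
where $f^\ast$ is the optimal value, $X^\ast$ the set of minimizers, $\sigma_\gamma > 0$, and $\gamma \ge 1$; $\gamma = 2$ corresponds to quadratic growth and $\gamma = 1$ to sharp minima.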
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.