Inverse learning in Hilbert scales
- URL: http://arxiv.org/abs/2002.10208v1
- Date: Mon, 24 Feb 2020 12:49:54 GMT
- Title: Inverse learning in Hilbert scales
- Authors: Abhishake Rastogi and Peter Mathé
- Abstract summary: We study the linear ill-posed inverse problem with noisy data in the statistical learning setting.
Approximate reconstructions from random noisy data are sought with general regularization schemes in Hilbert scale.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the linear ill-posed inverse problem with noisy data in the
statistical learning setting. Approximate reconstructions from random noisy
data are sought with general regularization schemes in Hilbert scale. We
discuss the rates of convergence for the regularized solution under the prior
assumptions and a certain link condition. We express the error in terms of
certain distance functions. For regression functions with smoothness given in
terms of source conditions, the error bound can then be established explicitly.
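As a finite-dimensional illustration (not taken from the paper), regularization in a Hilbert scale penalizes the reconstruction in a stronger norm generated by an operator $L$. The minimal numpy sketch below uses Tikhonov regularization with a first-difference operator standing in for the scale-generating operator; the discretization, the choice of $L$, and the parameter values are all assumptions for the example.

```python
import numpy as np

def tikhonov_hilbert_scale(A, y, L, alpha):
    """Minimise ||A x - y||^2 + alpha * ||L x||^2.

    Penalising ||L x|| enforces smoothness in a norm stronger than
    the ambient one, which is the idea behind Hilbert-scale
    regularization."""
    # Normal equations: (A^T A + alpha L^T L) x = A^T y
    lhs = A.T @ A + alpha * (L.T @ L)
    return np.linalg.solve(lhs, A.T @ y)

# Toy ill-posed problem: a smoothing operator A and noisy data y.
rng = np.random.default_rng(0)
n = 50
A = np.array([[np.exp(-abs(i - j) / 5.0) for j in range(n)]
              for i in range(n)]) / n
x_true = np.sin(np.linspace(0, np.pi, n))
y = A @ x_true + 1e-3 * rng.standard_normal(n)

# First-difference operator as a stand-in for the scale-generating operator.
L = np.eye(n) - np.eye(n, k=1)
x_hat = tikhonov_hilbert_scale(A, y, L, alpha=1e-4)
```

Because the objective is a convex quadratic, the returned `x_hat` is its exact minimizer; the regularization parameter `alpha` would in practice be chosen by an a-priori rule or discrepancy principle.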
Related papers
- Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate.
We stress that our estimators do not involve nonparametric function estimators and, in particular, do not rely on sample-size-dependent smoothing parameters.
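For context, a minimal sketch of the classical k-nearest-neighbor matching estimator of the average treatment effect (the baseline this paper modifies; the function name and toy data are assumptions, not the paper's construction):

```python
import numpy as np

def matching_ate(X, y, t, k=1):
    """Classical k-NN matching estimator of the average treatment effect.
    X: (n, d) covariates, y: (n,) outcomes, t: (n,) 0/1 treatment."""
    X, y, t = np.asarray(X, float), np.asarray(y, float), np.asarray(t)
    effects = np.empty(len(y))
    for i in range(len(y)):
        # Match unit i to its k nearest units in the opposite treatment group.
        opp = np.flatnonzero(t != t[i])
        dists = np.linalg.norm(X[opp] - X[i], axis=1)
        match = y[opp[np.argsort(dists)[:k]]].mean()
        # Imputed individual effect: treated outcome minus control outcome.
        effects[i] = (y[i] - match) if t[i] == 1 else (match - y[i])
    return effects.mean()

# Toy data with a constant treatment effect of 2.
rng = np.random.default_rng(0)
n = 400
x = rng.uniform(0, 1, (n, 1))
t = rng.integers(0, 2, n)
y = x[:, 0] + 2.0 * t
ate = matching_ate(x, y, t)
```

The paper's point is that suitably modified versions of such estimators attain the parametric rate without any sample-size-dependent smoothing parameter, unlike kernel-based alternatives.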
arXiv Detail & Related papers (2024-07-11T13:28:34Z)
- Uniform Inference for Subsampled Moment Regression
We present a method for constructing a confidence region for the solution to a conditional moment equation.
The method is applicable to the construction of confidence regions for conditional average treatment effects in randomized experiments.
arXiv Detail & Related papers (2024-05-13T15:46:11Z)
- Learning Memory Kernels in Generalized Langevin Equations
We introduce a novel approach for learning memory kernels in Generalized Langevin Equations.
This approach initially utilizes a regularized Prony method to estimate correlation functions from trajectory data, followed by regression over a Sobolev norm-based loss function with RKHS regularization.
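A hedged sketch of the regularized-Prony idea (a simplified stand-in for the paper's estimator): fit an estimated correlation function by a sum of exponential modes via ridge-regularized linear prediction. The function name, the ridge penalty, and the two-mode synthetic data are assumptions for illustration.

```python
import numpy as np

def prony_regularized(c, p, dt, ridge=1e-8):
    """Ridge-regularised Prony fit: c[k] ~ sum_j a_j * z_j**k with
    z_j = exp(lambda_j * dt)."""
    c = np.asarray(c, float)
    m = len(c) - p
    # Linear prediction c[k+p] = sum_j g[j] c[k+j], solved with a ridge penalty.
    H = np.column_stack([c[j:j + m] for j in range(p)])
    rhs = c[p:p + m]
    g = np.linalg.solve(H.T @ H + ridge * np.eye(p), H.T @ rhs)
    # Mode multipliers are roots of z**p - g[p-1] z**(p-1) - ... - g[0].
    z = np.roots(np.concatenate(([1.0], -g[::-1])))
    lam = np.log(z.astype(complex)) / dt
    # Amplitudes from a least-squares fit of the Vandermonde system.
    V = z[None, :] ** np.arange(len(c))[:, None]
    a, *_ = np.linalg.lstsq(V, c.astype(complex), rcond=None)
    return lam, a

# Synthetic correlation function with two exponential decay modes.
dt = 0.1
k = np.arange(40)
c = 2.0 * np.exp(-0.5 * k * dt) + 1.0 * np.exp(-2.0 * k * dt)
lam, a = prony_regularized(c, p=2, dt=dt)
```

The paper's second stage, regression over a Sobolev-norm loss with RKHS regularization, is not shown here.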
arXiv Detail & Related papers (2024-02-18T21:01:49Z)
- Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z)
- Statistical Inverse Problems in Hilbert Scales
We study the Tikhonov regularization scheme in Hilbert scales for the nonlinear statistical inverse problem with a general noise.
The regularizing norm in this scheme is stronger than the norm of the underlying Hilbert space.
arXiv Detail & Related papers (2022-08-28T21:06:05Z)
- Experimental Design for Linear Functionals in Reproducing Kernel Hilbert Spaces
We provide algorithms for constructing bias-aware designs for linear functionals.
We derive non-asymptotic confidence sets for fixed and adaptive designs under sub-Gaussian noise.
arXiv Detail & Related papers (2022-05-26T20:56:25Z)
- On the Double Descent of Random Features Models Trained with SGD
We study properties of random features (RF) regression in high dimensions optimized by stochastic gradient descent (SGD).
We derive precise non-asymptotic error bounds for RF regression under both constant and adaptive step-size SGD settings.
We observe the double descent phenomenon both theoretically and empirically.
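The setting can be sketched concretely: fix random Fourier features (Rahimi–Recht style) and train only the linear head with constant-step-size SGD. This is an illustrative toy, not the paper's experiment; the target function, dimensions, and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, D = 200, 5, 100          # samples, input dim, number of random features
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0])            # target depends on the first coordinate only

# Random Fourier features: a fixed random first layer, trained linear head.
W = rng.standard_normal((d, D))
b = rng.uniform(0, 2 * np.pi, D)
Phi = np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Constant-step-size SGD on the squared loss of the linear head.
theta = np.zeros(D)
step = 0.1
for epoch in range(200):
    for i in rng.permutation(n):
        grad = (Phi[i] @ theta - y[i]) * Phi[i]
        theta -= step * grad

mse = np.mean((Phi @ theta - y) ** 2)
```

Double descent would show up by sweeping the feature count `D` across the interpolation threshold and plotting test error; the paper's contribution is precise non-asymptotic bounds for exactly this kind of SGD-trained RF model.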
arXiv Detail & Related papers (2021-10-13T17:47:39Z)
- Uniform Function Estimators in Reproducing Kernel Hilbert Spaces
This paper addresses the problem of regression to reconstruct functions, which are observed with superimposed errors at random locations.
It is demonstrated that the estimator, which is often derived by employing Gaussian random fields, converges in mean, in the norm of the reproducing kernel Hilbert space, to the conditional expectation.
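The estimator in question is of the kernel ridge regression type; a minimal sketch (toy data, Gaussian kernel, and parameter values are assumptions, not the paper's setup):

```python
import numpy as np

def kernel_ridge(X, y, alpha, length=0.3):
    """Kernel ridge regression with a Gaussian kernel: a standard
    RKHS estimator of the conditional expectation E[y | x]."""
    sq = (X[:, None] - X[None, :]) ** 2
    K = np.exp(-sq / (2 * length ** 2))
    coef = np.linalg.solve(K + alpha * np.eye(len(X)), y)

    def predict(x):
        x = np.atleast_1d(x)
        k = np.exp(-((x[:, None] - X[None, :]) ** 2) / (2 * length ** 2))
        return k @ coef

    return predict

# Noisy observations of a smooth regression function at random locations.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, 300)
y = np.cos(2 * X) + 0.1 * rng.standard_normal(300)
f_hat = kernel_ridge(X, y, alpha=1e-2)
```

The paper's result concerns convergence of such estimators to the conditional expectation in the RKHS norm, which in particular controls the uniform (sup-norm) error.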
arXiv Detail & Related papers (2021-08-16T08:13:28Z)
- Benign Overfitting of Constant-Stepsize SGD for Linear Regression
Algorithmic inductive biases are central to preventing overfitting in practice.
This work considers this issue in arguably the most basic setting: constant-stepsize SGD for linear regression.
We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD in comparison to ordinary least squares.
arXiv Detail & Related papers (2021-03-23T17:15:53Z)
- Optimal oracle inequalities for solving projected fixed-point equations
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
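Temporal difference learning with linear function approximation is the canonical instance of such a projected fixed-point method. A minimal TD(0) sketch on a toy Markov chain (the chain, features, and step size are assumptions for illustration; with one-hot features the projected fixed point is the true value function):

```python
import numpy as np

rng = np.random.default_rng(3)
nS, gamma = 3, 0.9
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])   # deterministic 3-state cycle
r = np.array([1.0, 0.0, 2.0])    # reward received on leaving each state
V_true = np.linalg.solve(np.eye(nS) - gamma * P, r)

Phi = np.eye(nS)                  # one-hot features: tabular special case
theta = np.zeros(nS)
s = 0
for t in range(20000):
    s_next = rng.choice(nS, p=P[s])
    # TD(0) update toward the projected fixed point of the Bellman operator.
    td_err = r[s] + gamma * Phi[s_next] @ theta - Phi[s] @ theta
    theta += 0.05 * td_err * Phi[s]
    s = s_next
```

With a genuinely low-dimensional feature matrix `Phi`, the iterates instead approach the solution of the projected Bellman equation, whose error the paper's oracle inequalities characterize.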
arXiv Detail & Related papers (2020-12-09T20:19:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.