Statistical Inverse Problems in Hilbert Scales
- URL: http://arxiv.org/abs/2208.13289v1
- Date: Sun, 28 Aug 2022 21:06:05 GMT
- Title: Statistical Inverse Problems in Hilbert Scales
- Authors: Abhishake Rastogi
- Abstract summary: We study the Tikhonov regularization scheme in Hilbert scales for the nonlinear statistical inverse problem with general noise.
The regularizing norm in this scheme is stronger than the norm of the underlying Hilbert space.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the Tikhonov regularization scheme in Hilbert scales for the nonlinear statistical inverse problem with general noise. The regularizing norm in this scheme is stronger than the norm of the underlying Hilbert space. We focus on developing a theoretical analysis for this scheme based on conditional stability estimates. We utilize the concept of the distance function to establish high-probability estimates of the direct and reconstruction errors in the reproducing kernel Hilbert space setting. Further, explicit rates of convergence in terms of the sample size are established for the oversmoothing case and the regular case over the regularity class defined through an appropriate source condition. Our results improve and generalize previous results obtained in related settings.
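For orientation, the scheme described above can be written schematically as follows. This is only a sketch in standard Hilbert-scale notation; the symbols (nonlinear forward map $A$, scale-generating operator $L$, smoothness index $a$, regularization parameter $\lambda$) are our own shorthand and are not quoted from the paper.

```latex
% Schematic Tikhonov functional in a Hilbert scale (assumed notation, not the paper's):
% noisy observations y_i = A(f)(x_i) + eps_i for a nonlinear forward map A, and an
% unbounded self-adjoint operator L >= 1 generating the scale H_s = D(L^s).
\[
  \hat{f}_{\lambda}
  \;=\;
  \operatorname*{arg\,min}_{f \in \mathcal{D}(L^{a})}
  \left\{
    \frac{1}{n} \sum_{i=1}^{n} \bigl( y_i - A(f)(x_i) \bigr)^{2}
    \;+\;
    \lambda \, \lVert L^{a} f \rVert^{2}
  \right\},
  \qquad a > 0 .
\]
```

Loosely speaking, the penalty $\lVert L^{a} f \rVert^{2}$ is the "stronger norm" mentioned in the abstract; the oversmoothing case corresponds to a true solution lying outside $\mathcal{D}(L^{a})$, while the regular case places it inside.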
Related papers
- On regularized Radon-Nikodym differentiation [3.047411947074805]
We discuss the problem of estimating Radon-Nikodym derivatives.
We employ the general regularization scheme in reproducing kernel Hilbert spaces.
We find that the reconstruction of Radon-Nikodym derivatives at any particular point can be done with a high order of accuracy.
arXiv Detail & Related papers (2023-08-15T17:27:16Z)
- Bayesian Renormalization [68.8204255655161]
We present a fully information-theoretic approach to renormalization inspired by Bayesian statistical inference.
The main insight of Bayesian Renormalization is that the Fisher metric defines a correlation length that plays the role of an emergent RG scale.
We provide insight into how the Bayesian Renormalization scheme relates to existing methods for data compression and data generation.
arXiv Detail & Related papers (2023-05-17T18:00:28Z)
- On the Importance of Gradient Norm in PAC-Bayesian Bounds [92.82627080794491]
We propose a new generalization bound that exploits the contractivity of the log-Sobolev inequalities.
We empirically analyze the effect of this new loss-gradient norm term on different neural architectures.
arXiv Detail & Related papers (2022-10-12T12:49:20Z)
- Experimental Design for Linear Functionals in Reproducing Kernel Hilbert Spaces [102.08678737900541]
We provide algorithms for constructing bias-aware designs for linear functionals.
We derive non-asymptotic confidence sets for fixed and adaptive designs under sub-Gaussian noise.
arXiv Detail & Related papers (2022-05-26T20:56:25Z)
- Optimal Learning Rates for Regularized Least-Squares with a Fourier Capacity Condition [14.910167993978487]
We derive minimax adaptive rates for a new, broad class of Tikhonov-regularized learning problems in Hilbert scales.
We demonstrate that the spectrum of the Mercer operator can be inferred in the presence of "tight" embeddings of suitable Hilbert scales.
arXiv Detail & Related papers (2022-04-16T18:32:33Z)
- Nonconvex Stochastic Scaled-Gradient Descent and Generalized Eigenvector Problems [98.34292831923335]
Motivated by the problem of online correlation analysis, we propose the Stochastic Scaled-Gradient Descent (SSD) algorithm.
We bring these ideas together in an application to online correlation analysis, deriving for the first time an optimal one-time-scale algorithm with an explicit rate of local convergence to normality.
arXiv Detail & Related papers (2021-12-29T18:46:52Z)
- On the Double Descent of Random Features Models Trained with SGD [78.0918823643911]
We study properties of random features (RF) regression in high dimensions optimized by stochastic gradient descent (SGD).
We derive precise non-asymptotic error bounds for RF regression under both constant and adaptive step-size SGD settings.
We observe the double descent phenomenon both theoretically and empirically.
arXiv Detail & Related papers (2021-10-13T17:47:39Z)
- Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process.
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
arXiv Detail & Related papers (2021-09-24T14:48:20Z)
- Uniform Function Estimators in Reproducing Kernel Hilbert Spaces [0.0]
This paper addresses the problem of regression to reconstruct functions, which are observed with superimposed errors at random locations.
It is demonstrated that the estimator, which is often derived by employing Gaussian random fields, converges in the mean norm of the reproducing kernel Hilbert space to the conditional expectation.
arXiv Detail & Related papers (2021-08-16T08:13:28Z)
- Inverse learning in Hilbert scales [0.0]
We study the linear ill-posed inverse problem with noisy data in the statistical learning setting.
Approximate reconstructions from random noisy data are sought with general regularization schemes in Hilbert scales; a minimal numerical sketch of this kind of scheme follows this list.
arXiv Detail & Related papers (2020-02-24T12:49:54Z)
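As a loose, self-contained illustration of the last entry above, and of the Hilbert-scale penalty idea running through this page, here is a minimal numerical sketch of Tikhonov regularization with a smoothness-enforcing penalty. All names, operators, and parameter choices are hypothetical; the code is not taken from any of the listed papers.

```python
# Minimal sketch (hypothetical): Tikhonov reconstruction with a Hilbert-scale-style
# penalty ||L f||^2, where L is a discrete difference operator standing in for the
# scale-generating operator. Illustration only, not code from any paper listed above.
import numpy as np

def tikhonov_hilbert_scale(A, y, L, lam):
    """Solve min_f ||A f - y||^2 + lam * ||L f||^2 via the normal equations."""
    lhs = A.T @ A + lam * (L.T @ L)
    rhs = A.T @ y
    return np.linalg.solve(lhs, rhs)

# Toy example: recover a smooth signal observed through the identity operator
# under additive Gaussian noise, penalizing first differences of the estimate.
rng = np.random.default_rng(0)
n = 200
x = np.linspace(0.0, 1.0, n)
f_true = np.sin(2 * np.pi * x)
A = np.eye(n)                                    # forward operator (identity in this toy case)
y = A @ f_true + 0.1 * rng.standard_normal(n)    # noisy observations
L = np.eye(n) - np.eye(n, k=1)                   # first-difference penalty operator
f_hat = tikhonov_hilbert_scale(A, y, L, lam=1.0)
print("relative error:", np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
```

Here the first-difference matrix plays the role of the scale-generating operator, so the penalty measures the reconstruction in a norm stronger than the plain Euclidean norm, which is a discrete analogue of regularizing in a Hilbert scale.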
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.