Experimental Design for Linear Functionals in Reproducing Kernel Hilbert Spaces
- URL: http://arxiv.org/abs/2205.13627v1
- Date: Thu, 26 May 2022 20:56:25 GMT
- Title: Experimental Design for Linear Functionals in Reproducing Kernel Hilbert Spaces
- Authors: Mojmír Mutný and Andreas Krause
- Abstract summary: We provide algorithms for constructing bias-aware designs for linear functionals.
We derive non-asymptotic confidence sets for fixed and adaptive designs under sub-Gaussian noise.
- Score: 102.08678737900541
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Optimal experimental design seeks to determine the most informative
allocation of experiments to infer an unknown statistical quantity. In this
work, we investigate the optimal design of experiments for estimation of
linear functionals in reproducing kernel Hilbert spaces (RKHSs). This problem
has been extensively studied in the linear regression setting under an
estimability condition, which allows estimating parameters without bias. We
generalize this framework to RKHSs, and allow for the linear functional to be
only approximately inferred, i.e., with a fixed bias. This scenario captures
many important modern applications, such as estimation of gradient maps,
integrals, and solutions to differential equations. We provide algorithms for
constructing bias-aware designs for linear functionals. We derive
non-asymptotic confidence sets for fixed and adaptive designs under
sub-Gaussian noise, enabling us to certify estimation with bounded error with
high probability.
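To make the design objective concrete, here is a minimal sketch of the classical, bias-free special case that the paper generalizes: a c-optimal design for estimating a linear functional $c^\top \theta$ in finite-dimensional linear regression, computed by Frank-Wolfe over a finite candidate set. The candidate data and all function names are hypothetical, and the paper's bias-aware RKHS algorithms go well beyond this.

```python
# Minimal sketch (not the paper's algorithm): c-optimal design minimizing
# c^T M(lam)^{-1} c over design weights lam on the simplex,
# where M(lam) = sum_i lam_i x_i x_i^T.
import numpy as np

def c_optimal_design(X, c, n_iter=500, ridge=1e-8):
    m, d = X.shape
    lam = np.full(m, 1.0 / m)              # start from the uniform design
    for t in range(n_iter):
        M = X.T @ (lam[:, None] * X) + ridge * np.eye(d)
        u = np.linalg.solve(M, c)          # u = M(lam)^{-1} c
        grads = -(X @ u) ** 2              # d/d lam_i of the design criterion
        i = int(np.argmin(grads))          # best vertex of the simplex
        step = 2.0 / (t + 2.0)             # standard Frank-Wolfe step size
        lam *= 1.0 - step
        lam[i] += step
    return lam

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))               # hypothetical candidate experiments
c = np.array([1.0, 0.0, 0.0])              # functional: first coordinate of theta
lam = c_optimal_design(X, c)
print("design support:", np.flatnonzero(lam > 1e-2))
```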
Related papers
- A Structure-Preserving Kernel Method for Learning Hamiltonian Systems [3.594638299627404]
A structure-preserving kernel ridge regression method is presented that allows the recovery of potentially high-dimensional and nonlinear Hamiltonian functions.
The paper extends kernel regression methods to problems whose loss functions involve linear functions of gradients.
A full error analysis is conducted that provides convergence rates using fixed and adaptive regularization parameters.
arXiv Detail & Related papers (2024-03-15T07:20:21Z)
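As a hedged illustration of a loss involving gradients (a much-simplified, non-structure-preserving stand-in for the paper's method), the sketch below fits $H(x) = \sum_j \alpha_j k(x, x_j)$ with an RBF kernel from noisy observations of $H'(x)$, which only requires differentiating the kernel:

```python
# Hedged sketch: kernel ridge regression whose loss involves gradients.
import numpy as np

def rbf(x, y, ell=0.5):
    # Gram matrix k(x_i, y_j) for the RBF kernel
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * ell ** 2))

def rbf_dx(x, y, ell=0.5):
    # derivative of k in its first argument
    return -(x[:, None] - y[None, :]) / ell ** 2 * rbf(x, y, ell)

rng = np.random.default_rng(1)
xs = np.sort(rng.uniform(-2, 2, size=40))
dH = np.cos(xs) + 0.05 * rng.normal(size=xs.size)   # noisy H'(x) for H = sin

K = rbf(xs, xs)
G = rbf_dx(xs, xs)                                   # maps coefficients to H'(x_i)
lam = 1e-3
# ridge regression on the gradient loss ||G a - dH||^2 + lam * a^T K a
alpha = np.linalg.solve(G.T @ G + lam * K, G.T @ dH)

x_test = np.linspace(-2, 2, 5)
residual = rbf(x_test, xs) @ alpha - np.sin(x_test)
print(np.round(residual, 2))   # roughly constant: derivatives fix H only up to a shift
```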
arXiv Detail & Related papers (2024-03-15T07:20:21Z) - Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels [78.6096486885658]
We introduce lower bounds to the linearized Laplace approximation of the marginal likelihood.
These bounds are amenable to gradient-based optimization and allow trading off estimation accuracy against computational complexity.
arXiv Detail & Related papers (2023-06-06T19:02:57Z)
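For context, a hedged sketch of the quantity being bounded: the exact log marginal likelihood of a linearized model, i.e., Bayesian linear regression in Jacobian (NTK-style) features. The feature matrix below is a random stand-in, not a real network Jacobian:

```python
# Hedged sketch: exact log evidence of a linearized (Bayesian linear) model.
import numpy as np

rng = np.random.default_rng(5)
n, d = 50, 10
sigma2, prior_var = 0.1, 1.0
J = rng.normal(size=(n, d))          # stand-in for the network Jacobian at the MAP
y = rng.normal(size=n)

# under the linearized model, y ~ N(0, prior_var * J J^T + sigma2 * I)
C = prior_var * J @ J.T + sigma2 * np.eye(n)
_, logdet = np.linalg.slogdet(C)
log_evidence = -0.5 * (y @ np.linalg.solve(C, y) + logdet + n * np.log(2 * np.pi))
print(log_evidence)                  # the quantity the paper lower-bounds
```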
arXiv Detail & Related papers (2023-06-06T19:02:57Z) - Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z) - Kernel-based off-policy estimation without overlap: Instance optimality
beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z)
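As a hedged illustration of the estimation problem (not the paper's instance-optimal procedure), the sketch below computes two classical estimators of the linear functional $\psi = \mathbb{E}[\mathbb{E}[Y \mid X, A=1]]$ from simulated observational data, inverse propensity weighting and outcome regression:

```python
# Hedged sketch: two baseline estimators of a linear functional of the
# outcome regression, on simulated observational data.
import numpy as np

rng = np.random.default_rng(6)
n = 2000
X = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-X))                 # propensity P(A = 1 | X)
A = rng.uniform(size=n) < p
Y = 1.0 + 0.5 * X + rng.normal(size=n)       # E[Y | X, A = 1] = 1 + 0.5 X

psi_ipw = np.mean(A * Y / p)                 # inverse propensity weighting

# outcome regression: fit E[Y | X, A = 1] on treated units, average over all X
Z = np.column_stack([np.ones(A.sum()), X[A]])
beta = np.linalg.lstsq(Z, Y[A], rcond=None)[0]
psi_reg = np.mean(beta[0] + beta[1] * X)
print(psi_ipw, psi_reg)                      # both close to the true value 1.0
```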
arXiv Detail & Related papers (2023-01-16T02:57:37Z) - Functional Linear Regression of Cumulative Distribution Functions [20.96177061945288]
We propose functional ridge-regression-based estimation methods that estimate CDFs accurately everywhere.
We show estimation error upper bounds of $\widetilde{O}(\sqrt{d/n})$ for fixed design, random design, and adversarial context cases.
We formalize infinite-dimensional models where the parameter space is an infinite-dimensional Hilbert space, and establish a self-normalized estimation error upper bound for this setting.
arXiv Detail & Related papers (2022-05-28T23:59:50Z)
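A hedged sketch of the estimation idea (not the paper's exact estimator): in a mixture model where the conditional CDF is exactly linear in the context, $F(y \mid x) = \sum_j x_j F_j(y)$, ridge regression on indicator targets recovers the CDF on a grid. All data below are simulated:

```python
# Hedged sketch: ridge regression of conditional CDFs on a threshold grid.
import numpy as np
from math import erf

rng = np.random.default_rng(7)
n, d = 2000, 3
mus = np.array([-2.0, 0.0, 2.0])             # fixed mixture components
X = rng.dirichlet(np.ones(d), size=n)        # contexts on the simplex
comp = np.array([rng.choice(d, p=x) for x in X])
Y = mus[comp] + rng.normal(size=n)

grid = np.linspace(-5, 5, 21)
T = (Y[:, None] <= grid[None, :]).astype(float)   # indicator targets 1{Y <= y}
Theta = np.linalg.solve(X.T @ X + 1e-2 * np.eye(d), X.T @ T)

x0 = np.array([0.2, 0.3, 0.5])
F_hat = x0 @ Theta                                # estimated CDF of Y given x0
F_true = sum(x0[j] * np.array([0.5 * (1 + erf((g - mus[j]) / 2 ** 0.5))
                               for g in grid]) for j in range(d))
print(np.max(np.abs(F_hat - F_true)))             # small uniform error expected
```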
arXiv Detail & Related papers (2022-05-28T23:59:50Z) - Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
arXiv Detail & Related papers (2020-12-09T20:19:32Z)
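A minimal sketch of such a method: LSTD for policy evaluation with linear function approximation, which solves the projected fixed-point equation $\Phi^\top D (\Phi - \gamma P \Phi)\theta = \Phi^\top D r$ on a small synthetic Markov chain:

```python
# Minimal sketch: LSTD, a projected fixed-point method for policy evaluation.
import numpy as np

rng = np.random.default_rng(3)
nS, d, gamma = 20, 5, 0.9
P = rng.dirichlet(np.ones(nS), size=nS)          # transition matrix under the policy
r = rng.uniform(size=nS)
Phi = rng.normal(size=(nS, d))                   # feature matrix

v = np.linalg.solve(np.eye(nS) - gamma * P, r)   # exact value function, for reference

mu = np.full(nS, 1.0 / nS)                       # stationary distribution by power iteration
for _ in range(1000):
    mu = mu @ P
D = np.diag(mu)

A = Phi.T @ D @ (Phi - gamma * P @ Phi)          # LSTD system matrix
b = Phi.T @ D @ r
theta = np.linalg.solve(A, b)
print("relative error:", np.linalg.norm(Phi @ theta - v) / np.linalg.norm(v))
```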
arXiv Detail & Related papers (2020-12-09T20:19:32Z) - Early stopping and polynomial smoothing in regression with reproducing
kernels [2.132096006921048]
We study the problem of early stopping for iterative learning algorithms in a reproducing kernel Hilbert space (RKHS).
We present a data-driven rule, based on the so-called minimum discrepancy principle, for performing early stopping without a validation set.
The proposed rule is proved to be minimax-optimal over different types of kernel spaces.
arXiv Detail & Related papers (2020-07-14T05:27:18Z)
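A minimal sketch of the minimum discrepancy principle, assuming the noise level $\sigma$ is known: run kernel gradient descent and stop as soon as the training residual drops to the noise level $n\sigma^2$:

```python
# Minimal sketch: early stopping of kernel gradient descent via the
# minimum discrepancy principle (assumes sigma is known).
import numpy as np

rng = np.random.default_rng(4)
n, sigma = 100, 0.3
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + sigma * rng.normal(size=n)

K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.1 ** 2))
eta = 1.0 / np.linalg.eigvalsh(K)[-1]   # safe step size for the kernel iteration
alpha = np.zeros(n)
for t in range(10000):
    resid = y - K @ alpha
    if resid @ resid <= n * sigma ** 2: # minimum discrepancy stopping rule
        break
    alpha += eta * resid                # one step of kernel gradient descent
print("stopped at iteration", t)
```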
arXiv Detail & Related papers (2020-07-14T05:27:18Z) - SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for
Gaussian Process Regression with Derivatives [86.01677297601624]
We propose a novel approach for scaling GP regression with derivatives based on quadrature Fourier features.
We prove deterministic, non-asymptotic and exponentially fast decaying error bounds which apply to both the approximated kernel and the approximated posterior.
arXiv Detail & Related papers (2020-03-05T14:33:20Z)