Improving Kernel-Based Nonasymptotic Simultaneous Confidence Bands
- URL: http://arxiv.org/abs/2401.15791v1
- Date: Sun, 28 Jan 2024 22:43:33 GMT
- Title: Improving Kernel-Based Nonasymptotic Simultaneous Confidence Bands
- Authors: Balázs Csanád Csáji and Bálint Horváth
- Abstract summary: The paper studies the problem of constructing nonparametric simultaneous confidence bands with nonasymptotic and distribution-free guarantees.
The approach is based on the theory of Paley-Wiener reproducing kernel Hilbert spaces.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The paper studies the problem of constructing nonparametric simultaneous
confidence bands with nonasymptotic and distribution-free guarantees. The
target function is assumed to be band-limited and the approach is based on the
theory of Paley-Wiener reproducing kernel Hilbert spaces. The starting point of
the paper is a recently developed algorithm to which we propose three types of
improvements. First, we relax the assumptions on the noises by replacing the
symmetry assumption with a weaker distributional invariance principle.
Then, we propose a more efficient way to estimate the norm of the target
function, and finally we enhance the construction of the confidence bands by
tightening the constraints of the underlying convex optimization problems. The
refinements are also illustrated through numerical experiments.
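To make the setting concrete, below is a minimal sketch of the kind of kernel-based band construction the paper builds on: given noisy samples of a band-limited target, the band at a query point is obtained from two small convex programs over a ball of the Paley-Wiener RKHS. The function names, the norm bound B, the bounded-noise level delta, and the band limit eta are assumptions of this illustration; in particular, the paper's actual distribution-free guarantees rest on a distributional invariance argument that this sketch does not implement.

```python
# Illustrative sketch only: band-limited regression in the Paley-Wiener
# RKHS, with the band at a query point obtained from two convex programs.
import numpy as np
import cvxpy as cp

def pw_kernel(u, v, eta=np.pi):
    """Paley-Wiener kernel k(u, v) = sin(eta*(u - v)) / (pi*(u - v))."""
    d = np.subtract.outer(u, v)
    return (eta / np.pi) * np.sinc(eta * d / np.pi)  # np.sinc(t) = sin(pi t)/(pi t)

def band_at(x0, x, y, B, delta, eta=np.pi):
    """Lower/upper band values at query point x0 via two convex programs."""
    z = np.append(x, x0)                      # representer points: data + query
    K = pw_kernel(z, z, eta)
    L = np.linalg.cholesky(K + 1e-9 * np.eye(len(z)))  # K = L @ L.T (jittered)
    alpha = cp.Variable(len(z))
    f = K @ alpha                             # candidate function values on z
    constraints = [cp.norm(L.T @ alpha) <= B,           # RKHS norm ||f||_H <= B
                   cp.abs(f[:-1] - y) <= delta]         # consistency with the data
    lo = cp.Problem(cp.Minimize(f[-1]), constraints).solve()
    hi = cp.Problem(cp.Maximize(f[-1]), constraints).solve()
    return lo, hi

# Example: noisy samples of a band-limited target on [0, 10].
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 30)
y = np.sinc(x - 5.0) + rng.uniform(-0.1, 0.1, size=x.size)
print(band_at(5.0, x, y, B=2.0, delta=0.1))
```

By the representer theorem, it suffices to optimize over functions of the form f = sum_j alpha_j k(., z_j) with z ranging over the data points and the query point, which is what makes the two programs finite-dimensional.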
Related papers
- A Unified Theory of Stochastic Proximal Point Methods without Smoothness [52.30944052987393]
Proximal point methods have attracted considerable interest owing to their numerical stability and robustness against imperfect tuning.
This paper presents a comprehensive analysis of a broad range of variants of the stochastic proximal point method (SPPM).
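For concreteness, a single stochastic proximal point step has a closed form when the sampled loss is a scalar least-squares term; the following minimal sketch assumes that setting and is only an illustration, not the paper's general method.

```python
# Hypothetical sketch of one stochastic proximal point (SPPM) step for a
# least-squares loss f_i(z) = 0.5 * (a_i @ z - b_i)**2, where the proximal
# subproblem has a closed-form solution.
import numpy as np

def sppm_step(x, a_i, b_i, gamma):
    """argmin_z 0.5*(a_i @ z - b_i)**2 + ||z - x||^2 / (2*gamma)."""
    # Closed form via the Sherman-Morrison identity:
    # z = x - gamma * a_i * (a_i @ x - b_i) / (1 + gamma * ||a_i||^2)
    r = a_i @ x - b_i
    return x - gamma * a_i * r / (1.0 + gamma * a_i @ a_i)
```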
arXiv Detail & Related papers (2024-05-24T21:09:19Z)
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
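A standard way to make such likelihood-ratio confidence sequences concrete (a textbook construction consistent with this summary, not necessarily the paper's exact recipe) uses a plug-in alternative and Ville's inequality:

```latex
% A plug-in (prequential) alternative q_i is built from x_{1:i-1}; under
% the true parameter theta, M_t(theta) is a nonnegative martingale with
% unit mean, so Ville's inequality bounds its running maximum.
M_t(\theta) \;=\; \prod_{i=1}^{t} \frac{q_i(x_i)}{p_\theta(x_i)},
\qquad
\mathbb{P}\!\left(\exists\, t \ge 1 : M_t(\theta) \ge \tfrac{1}{\alpha}\right) \le \alpha .
```

The sets C_t = { theta : M_t(theta) < 1/alpha } then form a confidence sequence with simultaneous coverage 1 - alpha over all times t.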
arXiv Detail & Related papers (2023-11-08T00:10:21Z)
- Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
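A minimal sketch of the interpolation scheme referred to here (lam and the inner optimizer T are placeholders for this illustration):

```python
def interpolated_step(x, T, lam=0.5):
    """x_{t+1} = (1 - lam) * x_t + lam * T(x_t).

    For a nonexpansive operator T and lam in (0, 1), this is the
    Krasnosel'skii-Mann iteration, which is the stabilizing mechanism
    the summary alludes to.
    """
    return (1.0 - lam) * x + lam * T(x)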
arXiv Detail & Related papers (2023-10-20T12:45:12Z)
- Provable convergence guarantees for black-box variational inference [19.421222110188605]
Black-box variational inference is widely used in situations where there is no proof that its optimization succeeds.
We provide rigorous guarantees that methods similar to those used in practice converge on realistic inference problems.
arXiv Detail & Related papers (2023-06-04T11:31:41Z)
- Rockafellian Relaxation and Stochastic Optimization under Perturbations [0.056247917037481096]
We develop an optimistic" framework based on Rockafellian relaxations in which optimization is conducted not only over the original decision space but also jointly with a choice of model.
The framework centers on the novel concepts of exact and limit-exact Rockafellians, with interpretations of negative'' regularization emerging in certain settings.
arXiv Detail & Related papers (2022-04-10T20:02:41Z)
- Meta-Learning Hypothesis Spaces for Sequential Decision-making [79.73213540203389]
We propose to meta-learn a kernel from offline data (Meta-KeL).
Under mild conditions, we guarantee that our estimated RKHS yields valid confidence sets.
We also empirically evaluate the effectiveness of our approach on a Bayesian optimization task.
arXiv Detail & Related papers (2022-02-01T17:46:51Z)
- On the Convergence of Stochastic Extragradient for Bilinear Games with Restarted Iteration Averaging [96.13485146617322]
We present an analysis of the stochastic ExtraGradient (SEG) method with constant step size, together with variations of the method that yield favorable convergence.
We prove that, when augmented with averaging, SEG converges to the Nash equilibrium, and that this convergence is provably accelerated by incorporating a scheduled restarting procedure.
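The following is a minimal sketch of SEG with iteration averaging and scheduled restarts on a bilinear game min_x max_y x^T A y; the noise model, step size, and restart schedule are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative sketch: stochastic extragradient (SEG) with restarted
# iteration averaging on the bilinear game min_x max_y x^T A y.
import numpy as np

def seg_restarted(A, T=2000, restart_every=500, gamma=0.05, noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n, m = A.shape
    x, y = np.ones(n), np.ones(m)
    xs, ys = [], []
    for t in range(1, T + 1):
        gx = A @ y + noise * rng.standard_normal(n)    # stochastic grad in x
        gy = A.T @ x + noise * rng.standard_normal(m)  # stochastic grad in y
        xh, yh = x - gamma * gx, y + gamma * gy        # extrapolation step
        gxh = A @ yh + noise * rng.standard_normal(n)
        gyh = A.T @ xh + noise * rng.standard_normal(m)
        x, y = x - gamma * gxh, y + gamma * gyh        # update step
        xs.append(x); ys.append(y)
        if t % restart_every == 0:                     # restart from the average
            x, y = np.mean(xs, axis=0), np.mean(ys, axis=0)
            xs, ys = [], []
    return x, y  # approaches the Nash equilibrium (0, 0) up to noise

print(seg_restarted(np.eye(2)))
```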
arXiv Detail & Related papers (2021-06-30T17:51:36Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
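For reference, the clipping operation these methods rely on is the following standard norm rescaling; the clip level c and stepsize gamma are placeholders here, not the paper's proposed rules.

```python
# Illustrative sketch of one step of SGD with norm-based gradient clipping.
import numpy as np

def clipped_sgd_step(x, grad, gamma, c):
    """x <- x - gamma * clip(grad, c), with clip(g, c) = g * min(1, c/||g||)."""
    g_norm = np.linalg.norm(grad)
    scale = min(1.0, c / g_norm) if g_norm > 0 else 1.0
    return x - gamma * scale * grad
```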
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- A Study of Condition Numbers for First-Order Optimization [12.072067586666382]
We introduce a class of perturbations quantified via a new norm, called the *-norm.
We show that smoothness and strong convexity can be heavily impacted by arbitrarily small perturbations.
We propose a notion of continuity of the metrics, which is essential for a robust tuning strategy.
arXiv Detail & Related papers (2020-12-10T16:17:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.