Spatially Adaptive Online Prediction of Piecewise Regular Functions
- URL: http://arxiv.org/abs/2203.16587v1
- Date: Wed, 30 Mar 2022 18:15:56 GMT
- Title: Spatially Adaptive Online Prediction of Piecewise Regular Functions
- Authors: Sabyasachi Chatterjee and Subhajit Goswami
- Abstract summary: We consider the problem of estimating piecewise regular functions in an online setting.
We propose a modified version of a recently developed online learning algorithm called the sleeping experts aggregation algorithm.
- Score: 12.18340575383456
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of estimating piecewise regular functions in an
online setting, i.e., the data arrive sequentially and at any round our task is
to predict the value of the true function at the next revealed point using the
available data from past predictions. We propose a suitably modified version of
a recently developed online learning algorithm called the sleeping experts
aggregation algorithm. We show that this estimator satisfies oracle risk bounds
simultaneously for all local regions of the domain. As concrete instantiations
of the expert aggregation algorithm proposed here, we study an online mean
aggregation and an online linear regression aggregation algorithm where experts
correspond to the set of dyadic subrectangles of the domain. The resulting
algorithms are computable in near-linear time in the sample size. We specifically
focus on the performance of these online algorithms in the context of
estimating piecewise polynomial and bounded variation function classes in the
fixed design setup. The simultaneous oracle risk bounds we obtain for these
estimators in this context provide new and improved (in certain aspects)
guarantees even in the batch setting and are not available for state-of-the-art
batch learning estimators.
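
To make the construction concrete, below is a minimal Python sketch of sleeping-experts aggregation for online mean prediction in one dimension, with experts indexed by dyadic subintervals. All names here (`dyadic_intervals`, `SleepingExpertsMean`), the learning rate `eta`, and the convention of charging asleep experts the algorithm's own loss are assumptions of this sketch, not the authors' exact algorithm.

```python
import math

def dyadic_intervals(max_depth):
    """All dyadic subintervals [k/2^j, (k+1)/2^j) of [0, 1) up to max_depth."""
    return [(k / 2 ** j, (k + 1) / 2 ** j)
            for j in range(max_depth + 1)
            for k in range(2 ** j)]

class SleepingExpertsMean:
    """Exponential weights over dyadic-interval experts. An expert is awake at
    a query point x only if x lies in its interval; an awake expert predicts
    the running mean of the labels it has seen so far."""

    def __init__(self, max_depth=5, eta=0.5):
        self.intervals = dyadic_intervals(max_depth)
        n = len(self.intervals)
        self.log_w = [0.0] * n  # log-weights, uniform at the start
        self.sums = [0.0] * n   # running label sum per expert
        self.counts = [0] * n   # rounds each expert has been awake
        self.eta = eta          # learning rate (an assumed tuning choice)

    def _awake(self, x):
        return [i for i, (a, b) in enumerate(self.intervals) if a <= x < b]

    def _expert_pred(self, i):
        return self.sums[i] / self.counts[i] if self.counts[i] else 0.0

    def predict(self, x):
        awake = self._awake(x)
        m = max(self.log_w[i] for i in awake)  # shift for numerical stability
        w = [math.exp(self.log_w[i] - m) for i in awake]
        return sum(wi * self._expert_pred(i) for wi, i in zip(w, awake)) / sum(w)

    def update(self, x, y):
        yhat = self.predict(x)
        awake = set(self._awake(x))
        alg_loss = (yhat - y) ** 2
        for i in range(len(self.intervals)):
            # Awake experts pay their own squared loss; asleep experts are
            # charged the algorithm's loss, so their weight relative to the
            # awake pool is unchanged -- the standard sleeping-experts device.
            loss = (self._expert_pred(i) - y) ** 2 if i in awake else alg_loss
            self.log_w[i] -= self.eta * loss
        for i in awake:  # only intervals containing x update their mean
            self.sums[i] += y
            self.counts[i] += 1
        return yhat
```

For instance, on a piecewise-constant target with a single jump, the aggregate should track the local mean on each side of the jump:

```python
import random

random.seed(0)
learner = SleepingExpertsMean()
for _ in range(2000):
    x = random.random()
    y = 1.0 if x < 0.5 else -1.0  # two constant pieces
    learner.update(x, y + 0.1 * random.gauss(0.0, 1.0))
print(learner.predict(0.25), learner.predict(0.75))  # roughly 1.0 and -1.0
```
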
Related papers
- A naive aggregation algorithm for improving generalization in a class of learning problems [0.0]
We present a naive aggregation algorithm for a typical learning problem in the expert-advice setting.
In particular, we consider a class of learning problems involving point estimation for modeling high-dimensional nonlinear functions.
arXiv Detail & Related papers (2024-09-06T15:34:17Z)
- Score-based change point detection via tracking the best of infinitely many experts [5.156484100374059]
We suggest a novel algorithm for online change point detection based on sequential score function estimation and tracking the best expert approach.
The algorithm shows a promising performance in numerical experiments on artificial and real-world data sets.
arXiv Detail & Related papers (2024-08-26T07:56:17Z)
- Structured Prediction in Online Learning [66.36004256710824]
We study a theoretical and algorithmic framework for structured prediction in the online learning setting.
We show that our algorithm is a generalisation of optimal algorithms from the supervised learning setting.
We consider a second algorithm designed especially for non-stationary data distributions, including adversarial data.
arXiv Detail & Related papers (2024-06-18T07:45:02Z)
- Online Estimation via Offline Estimation: An Information-Theoretic Framework [75.80823630681323]
Motivated by connections between estimation and interactive decision making, we ask: is it possible to convert offline estimation algorithms into online estimation algorithms?
We introduce a new framework, Oracle-Efficient Online Estimation (OEOE), where the learner can only interact with the data stream indirectly through a sequence of offline estimators produced by a black-box algorithm operating on the stream.
arXiv Detail & Related papers (2024-04-15T20:19:18Z)
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
arXiv Detail & Related papers (2023-11-08T00:10:21Z)
- High-Probability Risk Bounds via Sequential Predictors [20.741036493022442]
We show that online-to-batch conversions applied to general online learning algorithms can bypass regret bounds.
We obtain nearly optimal high-probability risk bounds for several classical statistical estimation problems.
Our analysis relies on the fact that many online learning algorithms are improper, as they are not restricted to use predictors from a given reference class. (A generic sketch of an online-to-batch conversion appears after this list.)
arXiv Detail & Related papers (2023-08-15T06:19:31Z)
- A Regression Approach to Learning-Augmented Online Algorithms [17.803569868141647]
We introduce this approach in this paper, and explore it in the context of a general online search framework.
We show nearly tight bounds on the sample complexity of this regression problem, and extend our results to the agnostic setting.
From a technical standpoint, we show that the key is to incorporate online optimization benchmarks in the design of the loss function for the regression problem.
arXiv Detail & Related papers (2022-05-18T04:29:14Z)
- Scalable computation of prediction intervals for neural networks via matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z)
- Domain-Adjusted Regression or: ERM May Already Learn Features Sufficient for Out-of-Distribution Generalization [52.7137956951533]
We argue that devising simpler methods for learning predictors on existing features is a promising direction for future research.
We introduce Domain-Adjusted Regression (DARE), a convex objective for learning a linear predictor that is provably robust under a new model of distribution shift.
Under a natural model, we prove that the DARE solution is the minimax-optimal predictor for a constrained set of test distributions.
arXiv Detail & Related papers (2022-02-14T16:42:16Z)
- Smoothed Online Learning is as Easy as Statistical Learning [77.00766067963195]
We provide the first oracle-efficient, no-regret algorithms in this setting.
We show that if a function class is learnable in the classical setting, then there is an oracle-efficient, no-regret algorithm for contextual bandits.
arXiv Detail & Related papers (2022-02-09T19:22:34Z)
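
Several entries above, most directly "High-Probability Risk Bounds via Sequential Predictors", revolve around online-to-batch conversion. Below is a minimal sketch of the classical averaged conversion, reusing the `SleepingExpertsMean` sketch from earlier purely as a stand-in online learner; snapshotting by `deepcopy` and the plain averaging scheme are simplifications assumed here, not that paper's refined construction.

```python
import copy

def online_to_batch(learner, data):
    """Classical online-to-batch conversion: run an online learner once over
    the sample and return the average of its sequential predictors."""
    snapshots = []
    for x, y in data:
        snapshots.append(copy.deepcopy(learner))  # predictor before this round
        learner.update(x, y)

    def batch_predictor(x):
        # Average the predictions of all per-round snapshots at x.
        return sum(s.predict(x) for s in snapshots) / len(snapshots)

    return batch_predictor

# Example (hypothetical data): turn the online sleeping-experts learner
# into a batch estimator over a fixed sample.
# f_hat = online_to_batch(SleepingExpertsMean(), [(0.1, 1.0), (0.7, -1.0)])
```
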
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.