Tight Concentrations and Confidence Sequences from the Regret of
Universal Portfolio
- URL: http://arxiv.org/abs/2110.14099v1
- Date: Wed, 27 Oct 2021 00:44:32 GMT
- Title: Tight Concentrations and Confidence Sequences from the Regret of
Universal Portfolio
- Authors: Francesco Orabona and Kwang-Sung Jun
- Abstract summary: Jun and Orabona [COLT'19] have shown how to easily convert the regret guarantee of an online betting algorithm into a time-uniform concentration inequality.
We go even further: the regret of a minimax betting algorithm gives rise to a new implicit empirical time-uniform concentration.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A classic problem in statistics is the estimation of the expectation of
random variables from samples. This gives rise to the tightly connected
problems of deriving concentration inequalities and confidence sequences, that
is confidence intervals that hold uniformly over time. Jun and Orabona
[COLT'19] have shown how to easily convert the regret guarantee of an online
betting algorithm into a time-uniform concentration inequality. Here, we show
that we can go even further: the regret of a minimax betting algorithm gives
rise to a new implicit empirical time-uniform concentration. In
particular, we use a new data-dependent regret guarantee of the universal
portfolio algorithm. We then show how to invert the new concentration in two
different ways: in an exact way with a numerical algorithm and symbolically in
an approximate way. Finally, we show empirically that our algorithms have
state-of-the-art performance in terms of the width of the confidence sequences
up to a moderately large amount of samples. In particular, our numerically
obtained confidence sequences are never vacuous, even with a single sample.
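The betting-to-concentration recipe in the abstract can be illustrated with a minimal numerical sketch. This is not the authors' algorithm: it replaces the universal portfolio with a crude uniform mixture over a fixed grid of bet fractions, and inverts the resulting wealth process over a grid of candidate means via Ville's inequality. All names and parameters below are illustrative.

```python
import numpy as np

def betting_cs(x, alpha=0.05, n_lambda=50, n_grid=201):
    """Confidence sequence for the mean of [0, 1]-valued samples by betting.

    For each candidate mean m we track the wealth of a gambler who splits
    her capital uniformly over a grid of bet fractions lambda (a crude
    stand-in for the universal portfolio).  Under the true mean each
    wealth process is a nonnegative martingale, so by Ville's inequality
    it exceeds 1/alpha with probability at most alpha; the candidates
    whose running-maximum wealth stays below 1/alpha form the sequence.
    """
    x = np.asarray(x, dtype=float)
    ms = np.linspace(0.0, 1.0, n_grid)       # candidate means
    lams = np.linspace(-0.5, 0.5, n_lambda)  # bet fractions; |lam*(x-m)| <= 0.5 < 1
    # log(1 + lam * (x_t - m)) for every (t, candidate, bet) triple
    incr = np.log1p(lams[None, None, :] * (x[:, None, None] - ms[None, :, None]))
    logw = incr.cumsum(axis=0)               # running log-wealth per (m, lam)
    # log of the mixture wealth, computed stably (log-mean-exp over lam)
    peak = logw.max(axis=2, keepdims=True)
    mix = peak[..., 0] + np.log(np.exp(logw - peak).mean(axis=2))
    keep = mix.max(axis=0) < np.log(1.0 / alpha)  # wealth never crossed 1/alpha
    return ms[keep].min(), ms[keep].max()
```

With a few hundred bounded draws the interval concentrates around the true mean; at a single sample the mixture wealth is still close to one for every candidate, so the returned set is non-vacuous (trivially all of [0, 1]), loosely mirroring the abstract's observation about never-vacuous sequences.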
Related papers
- Simultaneous Inference for Local Structural Parameters with Random Forests [19.014535120129338]
We construct simultaneous confidence intervals for solutions to conditional moment equations.
As a by-product, we obtain several new order-explicit results on the concentration and normal approximation of high-dimensional U-statistics.
arXiv Detail & Related papers (2024-05-13T15:46:11Z) - An Improved Algorithm for Learning Drifting Discrete Distributions [2.2191203337341525]
We present a new adaptive algorithm for learning discrete distributions under distribution drift.
We observe a sequence of independent samples from a discrete distribution that is changing over time, and the goal is to estimate the current distribution.
To use more samples, we must resort to samples further in the past, and we incur a drift error due to the bias introduced by the change in distribution.
We present a novel adaptive algorithm that can solve this trade-off without any prior knowledge of the drift.
arXiv Detail & Related papers (2024-03-08T16:54:27Z) - Best Arm Identification with Fixed Budget: A Large Deviation Perspective [54.305323903582845]
We present sred, a truly adaptive algorithm that can reject arms in any round based on the observed empirical gaps between the rewards of various arms.
arXiv Detail & Related papers (2023-12-19T13:17:43Z) - Binary Classification with Confidence Difference [100.08818204756093]
This paper delves into a novel weakly supervised binary classification problem called confidence-difference (ConfDiff) classification.
We propose a risk-consistent approach to tackle this problem and show an estimation error bound that achieves the optimal convergence rate.
We also introduce a risk correction approach to mitigate overfitting problems, whose consistency and convergence rate are also proven.
arXiv Detail & Related papers (2023-10-09T11:44:50Z) - Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning [59.44422468242455]
We propose a novel method dubbed ShrinkMatch to learn uncertain samples.
For each uncertain sample, it adaptively seeks a shrunk class space, which merely contains the original top-1 class.
We then impose a consistency regularization between a pair of strongly and weakly augmented samples in the shrunk space to strive for discriminative representations.
arXiv Detail & Related papers (2023-08-13T14:05:24Z) - Model-Based Uncertainty in Value Functions [89.31922008981735]
We focus on characterizing the variance over values induced by a distribution over MDPs.
Previous work upper bounds the posterior variance over values by solving a so-called uncertainty Bellman equation.
We propose a new uncertainty Bellman equation whose solution converges to the true posterior variance over values.
arXiv Detail & Related papers (2023-02-24T09:18:27Z) - Huber-Robust Confidence Sequences [37.16361789841549]
Confidence sequences are confidence intervals that can be sequentially tracked, and are valid at arbitrary data-dependent stopping times.
We show that the resulting confidence sequences attain the optimal width achieved in the nonsequential setting.
Since confidence sequences are a common tool used within A/B/n testing and bandits, these results open the door to sequential experimentation that is robust to outliers and adversarial corruptions.
arXiv Detail & Related papers (2023-01-23T17:29:26Z) - Contrastive learning of strong-mixing continuous-time stochastic
processes [53.82893653745542]
Contrastive learning is a family of self-supervised methods where a model is trained to solve a classification task constructed from unlabeled data.
We show that a properly constructed contrastive learning task can be used to estimate the transition kernel for small-to-mid-range intervals in the diffusion case.
arXiv Detail & Related papers (2021-03-03T23:06:47Z) - Estimating means of bounded random variables by betting [39.98103047898974]
This paper derives confidence intervals (CI) and time-uniform confidence sequences (CS) for the classical problem of estimating an unknown mean from bounded observations.
We present a general approach for deriving concentration bounds that can be seen as a generalization and improvement of the celebrated Chernoff method.
We show how to extend these ideas to sampling without replacement, another heavily studied problem.
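The Chernoff connection mentioned in this entry can be made concrete: a single fixed bet fraction already yields a Hoeffding-style interval, because the wealth of a constant bet is exactly an exponential martingale. A minimal sketch with illustrative names (two one-sided bets via a union bound, so the nominal level is 2*alpha):

```python
import math

def fixed_bet_logwealth(xs, m, lam):
    # Log-wealth of a constant bet fraction lam against candidate mean m.
    # Each factor 1 + lam*(x - m) has expectation 1 when m is the true
    # mean, so the product is a nonnegative martingale and, by Ville's
    # inequality, exceeds 1/alpha with probability at most alpha.
    return sum(math.log1p(lam * (x - m)) for x in xs)

def fixed_bet_ci(xs, alpha=0.05, lam=0.3, grid=401):
    # Keep candidates in [0, 1] whose wealth stays below 1/alpha for both
    # the upward and the downward bet (union bound: nominal level 2*alpha).
    thresh = math.log(1.0 / alpha)
    ms = [i / (grid - 1) for i in range(grid)]
    inside = [m for m in ms
              if fixed_bet_logwealth(xs, m, lam) < thresh
              and fixed_bet_logwealth(xs, m, -lam) < thresh]
    return inside[0], inside[-1]
```

Mixing the wealth over many bet fractions instead of fixing one is precisely what upgrades this Chernoff-style interval to the empirical, time-uniform bounds discussed in the main abstract above.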
arXiv Detail & Related papers (2020-10-19T17:22:03Z) - Near-Optimal Confidence Sequences for Bounded Random Variables [5.901337162013615]
A fundamental problem for online inference is to provide a sequence of confidence intervals that are valid uniformly over the growing-into-infinity sample sizes.
We provide a near-optimal confidence sequence for bounded random variables by utilizing Bentkus' concentration results.
The resulting confidence sequence is confirmed to be favorable in both synthetic coverage problems and an application to adaptive stopping algorithms.
arXiv Detail & Related papers (2020-06-09T02:50:01Z) - The Simulator: Understanding Adaptive Sampling in the
Moderate-Confidence Regime [52.38455827779212]
We propose a novel technique for analyzing adaptive sampling called the Simulator.
We prove the first instance-based lower bounds for the top-k problem which incorporate the appropriate log-factors.
Our new analysis inspires a simple and near-optimal algorithm for best-arm and top-k identification, the first practical algorithm of its kind for the latter problem.
arXiv Detail & Related papers (2017-02-16T23:42:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.