Concentration inequality using unconfirmed knowledge
- URL: http://arxiv.org/abs/2002.04357v2
- Date: Thu, 20 Feb 2020 09:21:48 GMT
- Title: Concentration inequality using unconfirmed knowledge
- Authors: Go Kato
- Abstract summary: We give a concentration inequality based on the premise that random variables take values within a particular region.
Our inequality outperforms other well-known inequalities.
- Score: 2.538209532048867
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We give a concentration inequality based on the premise that random variables
take values within a particular region. The concentration inequality guarantees
that, for any sequence of correlated random variables, the difference between
the sum of conditional expectations and that of the observed values takes a
small value with high probability when the expected values are evaluated under
the condition that the past values are known. Our inequality outperforms other
well-known inequalities, e.g. the Azuma-Hoeffding inequality, especially in
terms of the convergence speed when the random variables are highly biased.
This strong performance is achieved by the key idea of predicting some
parameters and adopting the predicted values in the inequality.
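The regime the abstract highlights can be illustrated numerically. The sketch below is not the paper's inequality (its exact form is not given here); it only shows why the Azuma-Hoeffding bound is loose for highly biased variables, since it uses just the range of each increment and not its small variance. All parameter values (`n`, `p`, `t`, `trials`) are made up for the demo.

```python
import math
import random

# Highly biased Bernoulli(p) variables with small p; we bound the upward
# deviation of sum_i (X_i - p) and compare with an empirical estimate.
random.seed(0)
n, p, t = 2_000, 0.01, 20.0
trials = 500

# Azuma-Hoeffding: P(sum_i (X_i - p) >= t) <= exp(-t^2 / (2 n)),
# valid since each increment X_i - p is bounded by 1 in absolute value.
azuma_bound = math.exp(-t ** 2 / (2 * n))

# Empirical tail probability of the same deviation event.
hits = 0
for _ in range(trials):
    s = sum((1.0 if random.random() < p else 0.0) - p for _ in range(n))
    if s >= t:
        hits += 1
empirical = hits / trials

print(f"Azuma-Hoeffding bound: {azuma_bound:.4f}")
print(f"Empirical tail prob.:  {empirical:.4f}")
```

Here the bound is about 0.90 while the true tail probability is essentially zero (the deviation is roughly 4.5 standard deviations), which is the gap that variance-aware inequalities aim to close.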
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning [52.51430732904994]
In reinforcement learning problems, agents must consider long-term fairness while maximizing returns.
Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear.
We introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics.
arXiv Detail & Related papers (2024-04-16T22:47:59Z)
- Ranking a Set of Objects using Heterogeneous Workers: QUITE an Easy Problem [54.90613714264689]
We focus on the problem of ranking $N$ objects starting from a set of noisy pairwise comparisons provided by a crowd of unequal workers.
We propose QUITE, a non-adaptive ranking algorithm that jointly estimates workers' reliabilities and qualities of objects.
arXiv Detail & Related papers (2023-10-03T12:42:13Z)
- Monotonicity and Double Descent in Uncertainty Estimation with Gaussian Processes [52.92110730286403]
It is commonly believed that the marginal likelihood should be reminiscent of cross-validation metrics and that both should deteriorate with larger input dimensions.
We prove that by tuning hyperparameters, the performance, as measured by the marginal likelihood, improves monotonically with the input dimension.
We also prove that cross-validation metrics exhibit qualitatively different behavior that is characteristic of double descent.
arXiv Detail & Related papers (2022-10-14T08:09:33Z)
- Split-kl and PAC-Bayes-split-kl Inequalities [15.63537071742102]
We introduce the split-kl inequality, which combines the power of the kl inequality with the ability to exploit low variance.
For Bernoulli random variables the kl inequality is tighter than Empirical Bernstein, while for random variables taking values inside a bounded interval the Empirical Bernstein inequality is tighter than the kl.
We discuss an application of the split-kl inequality to bounding excess losses.
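The kl inequality this line of work builds on has a standard Bernoulli form: with probability at least 1 - delta, the true mean p satisfies kl(p_hat || p) <= log(1/delta)/n, which is inverted numerically to get an upper confidence bound (as in kl-UCB). A minimal sketch, with illustrative parameter values and no connection to the paper's specific split-kl construction:

```python
import math

def kl_bernoulli(p, q):
    """Binary KL divergence kl(p || q), with clamping to avoid log(0)."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb(p_hat, n, delta):
    """Largest q with kl(p_hat || q) <= log(1/delta)/n, found by bisection."""
    target = math.log(1.0 / delta) / n
    lo, hi = p_hat, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if kl_bernoulli(p_hat, mid) > target:
            hi = mid
        else:
            lo = mid
    return lo

# Upper confidence bound for an empirical mean of 0.05 over 1000 samples.
print(kl_ucb(0.05, 1_000, 0.05))
```

Because kl(p_hat || q) grows faster than the quadratic used by Hoeffding when p_hat is near 0 or 1, this bound is much tighter for biased variables.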
arXiv Detail & Related papers (2022-06-01T18:42:02Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- A Reverse Jensen Inequality Result with Application to Mutual Information Estimation [27.35611916229265]
In a probabilistic setting, the Jensen inequality describes the relationship between a convex function and the expected value.
We show that under minimal constraints and with a proper scaling, the Jensen inequality can be reversed.
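The forward direction being reversed here is easy to check numerically: for convex f, E[f(X)] >= f(E[X]). The sketch below only verifies this standard direction on a toy distribution (the distribution and function are made up); the cited paper's contribution is the conditions and scaling under which the opposite bound holds.

```python
# Toy discrete distribution and a convex function f(x) = x^2.
xs = [0.1, 0.5, 1.0, 2.0]
ps = [0.4, 0.3, 0.2, 0.1]          # probability mass function (sums to 1)
f = lambda x: x * x

e_x = sum(p * x for p, x in zip(ps, xs))            # E[X]
e_fx = sum(p * f(x) for p, x in zip(ps, xs))        # E[f(X)]

print(e_fx >= f(e_x))  # True: Jensen's inequality for convex f
```

For f(x) = x^2 the gap E[f(X)] - f(E[X]) is exactly the variance of X, which is why a reverse bound necessarily requires extra constraints or scaling.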
arXiv Detail & Related papers (2021-11-12T11:54:17Z)
- The Bell inequality, inviolable by data used consistently with its derivation, leads to quantum correlations that satisfy it, and probabilities that satisfy the Wigner inequality [0.0]
The inequality that Bell derived using three random variables must be identically satisfied by any three corresponding data sets of plus and minus 1s.
For laboratory data, the inequality is identically satisfied as a fact of pure algebra.
If predicted correlations violate the inequality, they correspond to no three cross-correlated data sets that exist experimentally or can be generated from valid probability models.
arXiv Detail & Related papers (2021-02-25T23:30:11Z)
- New-Type Hoeffding's Inequalities and Application in Tail Bounds [17.714164324169037]
We present a new type of Hoeffding inequality that takes higher-order moments of the random variables into account.
It yields considerable improvements in tail-bound evaluation compared with known results.
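A simple way to see how moment information sharpens tail bounds, sketched here with the classical Bernstein inequality (a second-moment refinement, not the higher-order bounds of the cited paper; the numbers are illustrative):

```python
import math

# n i.i.d. variables in [0, 1]; bound P(empirical mean - mu >= t).
n, t = 1_000, 0.05
var = 0.01                # per-variable variance, assumed known here

# Hoeffding uses only the range [0, 1]:
hoeffding = math.exp(-2 * n * t ** 2)

# Bernstein also uses the variance (increments bounded by 1):
bernstein = math.exp(-n * t ** 2 / (2 * (var + t / 3)))

print(f"Hoeffding bound: {hoeffding:.4g}")
print(f"Bernstein bound: {bernstein:.4g}")
```

With a small variance the Bernstein bound is many orders of magnitude tighter, and higher-order-moment inequalities push in the same direction.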
arXiv Detail & Related papers (2021-01-02T03:19:11Z)
- Concentration Inequalities for Statistical Inference [3.236217153362305]
This paper reviews concentration inequalities that are widely employed in non-asymptotic analyses in mathematical statistics.
We aim to illustrate the concentration inequalities with known constants and to improve existing bounds with sharper constants.
arXiv Detail & Related papers (2020-11-04T12:54:06Z)
- On conditional versus marginal bias in multi-armed bandits [105.07190334523304]
The bias of the sample means of the arms in multi-armed bandits is an important issue in adaptive data analysis.
We characterize the sign of the conditional bias of monotone functions of the rewards, including the sample mean.
Our results hold for arbitrary conditioning events and leverage natural monotonicity properties of the data collection policy.
arXiv Detail & Related papers (2020-02-19T20:16:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.