New-Type Hoeffding's Inequalities and Application in Tail Bounds
- URL: http://arxiv.org/abs/2101.00360v1
- Date: Sat, 2 Jan 2021 03:19:11 GMT
- Title: New-Type Hoeffding's Inequalities and Application in Tail Bounds
- Authors: Pingyi Fan
- Abstract summary: We present a new type of Hoeffding's inequalities in which higher-order moments of the random variables are taken into account.
These yield considerable improvements in tail-bound evaluation compared with known results.
- Score: 17.714164324169037
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is well known that Hoeffding's inequality has many applications in the
signal and information processing fields. How to improve Hoeffding's inequality
and refine its applications has always attracted much attention. An improvement
of Hoeffding's inequality was recently given by Hertz \cite{r1}. Even though the
improvement is not large, it can still be used to update many known results that
rely on the original Hoeffding's inequality, especially the Hoeffding-Azuma
inequality for martingales. However, the original Hoeffding's inequality and its
refinement by Hertz only consider the first-order moment of the random
variables. In this paper, we present a new type of Hoeffding's inequalities in
which higher-order moments of the random variables are taken into account. These
yield considerable improvements in tail-bound evaluation compared with known
results. It is expected that the new type of Hoeffding's inequalities developed
here will find further interesting applications in related fields that use
Hoeffding's results.
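As a quick numerical illustration (not taken from the paper), the classical Hoeffding bound that this work refines states that for n independent variables bounded in [a, b], the sample mean exceeds its expectation by t with probability at most exp(-2nt^2/(b-a)^2). The sketch below, using fair Bernoulli draws, compares the empirical tail probability against that bound; all names and parameters are illustrative choices.

```python
import math
import random

def hoeffding_bound(n, t, a=0.0, b=1.0):
    """Classical Hoeffding bound: P(mean - E[mean] >= t) <= exp(-2 n t^2 / (b-a)^2)."""
    return math.exp(-2.0 * n * t * t / (b - a) ** 2)

random.seed(0)
n, t, trials = 100, 0.1, 10000

# Empirical tail probability for the mean of n fair Bernoulli draws.
exceed = sum(
    1 for _ in range(trials)
    if sum(random.random() < 0.5 for _ in range(n)) / n - 0.5 >= t
)
empirical = exceed / trials
bound = hoeffding_bound(n, t)
print(f"empirical tail: {empirical:.4f}, Hoeffding bound: {bound:.4f}")
```

The gap between the empirical tail and the classical bound is exactly the slack that moment-aware refinements such as the ones proposed in this paper aim to shrink.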
Related papers
- What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning [52.51430732904994]
In reinforcement learning problems, agents must consider long-term fairness while maximizing returns.
Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear.
We introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics.
arXiv Detail & Related papers (2024-04-16T22:47:59Z) - Closing the Gap Between the Upper Bound and the Lower Bound of Adam's
Iteration Complexity [51.96093077151991]
We derive a new convergence guarantee of Adam, with only an $L$-smooth condition and a bounded noise variance assumption.
Our proof utilizes novel techniques to handle the entanglement between momentum and adaptive learning rate.
arXiv Detail & Related papers (2023-10-27T09:16:58Z) - Lower Bounds on the Bayesian Risk via Information Measures [17.698319441265223]
We show that one can lower bound the risk with any information measure by upper bounding its dual via Markov's inequality.
The behaviour of the lower bound in the number of samples is influenced by the choice of the information measure.
If the observations are subject to privatisation, stronger impossibility results can be obtained via Strong Data-Processing Inequalities.
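The Markov-inequality step mentioned above can be illustrated with a minimal sketch (not from the paper): for a nonnegative variable X, P(X >= a) <= E[X]/a, which is the generic device for turning an expectation bound into a tail bound. The distribution and parameters below are arbitrary choices for demonstration.

```python
import random

random.seed(1)
# Markov's inequality for a nonnegative variable: P(X >= a) <= E[X] / a.
samples = [random.expovariate(1.0) for _ in range(100000)]  # Exp(1), mean 1
a = 3.0
empirical_tail = sum(x >= a for x in samples) / len(samples)
markov_bound = (sum(samples) / len(samples)) / a
print(f"tail: {empirical_tail:.4f}, Markov bound: {markov_bound:.4f}")
```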
arXiv Detail & Related papers (2023-03-22T12:09:12Z) - Exact Non-Oblivious Performance of Rademacher Random Embeddings [79.28094304325116]
This paper revisits the performance of Rademacher random projections.
It establishes novel statistical guarantees that are numerically sharp and non-oblivious with respect to the input data.
arXiv Detail & Related papers (2023-03-21T11:45:27Z) - High-Probability Bounds for Stochastic Optimization and Variational
Inequalities: the Case of Unbounded Variance [59.211456992422136]
We propose algorithms with high-probability convergence results under less restrictive assumptions.
These results justify the usage of the considered methods for solving problems that do not fit standard functional classes in optimization.
arXiv Detail & Related papers (2023-02-02T10:37:23Z) - Clipped Stochastic Methods for Variational Inequalities with
Heavy-Tailed Noise [64.85879194013407]
We prove the first high-probability results with logarithmic dependence on the confidence level for methods for solving monotone and structured non-monotone VIPs.
Our results match the best-known ones in the light-tails case and are novel for structured non-monotone problems.
In addition, we numerically validate that the gradient noise of many practical formulations is heavy-tailed and show that clipping improves the performance of SEG/SGDA.
arXiv Detail & Related papers (2022-06-02T15:21:55Z) - Tight Last-Iterate Convergence of the Extragradient Method for
Constrained Monotone Variational Inequalities [4.6193503399184275]
We show the last-iterate convergence rate of the extragradient method for monotone and Lipschitz variational inequalities with constraints.
We develop a new approach that combines the power of the sum-of-squares programming with the low dimensionality of the update rule of the extragradient method.
arXiv Detail & Related papers (2022-04-20T05:12:11Z) - A Reverse Jensen Inequality Result with Application to Mutual
Information Estimation [27.35611916229265]
In a probabilistic setting, the Jensen inequality describes the relationship between a convex function and the expected value.
We show that under minimal constraints and with a proper scaling, the Jensen inequality can be reversed.
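The forward direction of Jensen's inequality referenced here, f(E[X]) <= E[f(X)] for convex f, can be checked numerically with a minimal sketch (not from the paper); the choice of f(x) = x^2 and the Gaussian samples are illustrative assumptions.

```python
import random

random.seed(2)
# Jensen's inequality: for convex f (here f(x) = x**2), f(E[X]) <= E[f(X)].
xs = [random.gauss(0.0, 1.0) for _ in range(100000)]
mean = sum(xs) / len(xs)
f_of_mean = mean ** 2
mean_of_f = sum(x * x for x in xs) / len(xs)
print(f"f(E[X]) = {f_of_mean:.4f} <= E[f(X)] = {mean_of_f:.4f}")
```

The paper's contribution is the reverse direction: showing that, under minimal constraints and suitable scaling, the inequality can be turned around.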
arXiv Detail & Related papers (2021-11-12T11:54:17Z) - Sharp Bounds for Federated Averaging (Local SGD) and Continuous
Perspective [49.17352150219212]
Federated Averaging (FedAvg), also known as Local SGD, is one of the most popular algorithms in Federated Learning (FL).
We show how to analyze this quantity from the Stochastic Differential Equation (SDE) perspective.
arXiv Detail & Related papers (2021-11-05T22:16:11Z) - Generalized Talagrand Inequality for Sinkhorn Distance using Entropy
Power Inequality [28.676190269627828]
We prove an HWI-type inequality making use of the infinitesimal displacement convexity of the optimal transport map.
We derive two Talagrand-type inequalities using the saturation of EPI that corresponds to a numerical term in our expression.
arXiv Detail & Related papers (2021-09-17T09:44:27Z) - Concentration inequality using unconfirmed knowledge [2.538209532048867]
We give a concentration inequality based on the premise that random variables take values within a particular region.
Our inequality outperforms other well-known inequalities.
arXiv Detail & Related papers (2020-02-11T13:02:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.