Some compact notations for concentration inequalities and user-friendly results
- URL: http://arxiv.org/abs/1912.13463v2
- Date: Sun, 26 Apr 2020 17:57:14 GMT
- Title: Some compact notations for concentration inequalities and user-friendly results
- Authors: Kaizheng Wang
- Abstract summary: The new expressions describe the typical sizes and tails of random variables.
They bridge classical notations and modern non-asymptotic tail bounds together.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents compact notations for concentration inequalities and
convenient results to streamline probabilistic analysis. The new expressions
describe the typical sizes and tails of random variables, allowing for simple
operations without heavy use of inessential constants. They bridge classical
asymptotic notations and modern non-asymptotic tail bounds together. Examples
of different kinds demonstrate their efficacy.
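The paper's own notations are not reproduced here; as a hedged illustration of the kind of non-asymptotic tail bound such notations compress, the sketch below checks Hoeffding's inequality for an average of Rademacher (uniform ±1) variables by simulation. The function names and parameters are illustrative, not from the paper.

```python
import math
import random

def empirical_tail(n, t, trials=20000, seed=0):
    """Estimate P(|mean of n Rademacher variables| >= t) by simulation."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(rng.choice((-1, 1)) for _ in range(n))
        if abs(s / n) >= t:
            hits += 1
    return hits / trials

def hoeffding_bound(n, t):
    """Hoeffding: for +/-1 terms, P(|mean| >= t) <= 2 exp(-n t^2 / 2)."""
    return 2.0 * math.exp(-n * t * t / 2.0)

n, t = 100, 0.3
print(f"empirical tail {empirical_tail(n, t):.4f} <= bound {hoeffding_bound(n, t):.4f}")
```

The empirical tail probability sits well under the bound; compact tail notations let such inequalities be manipulated without carrying the explicit constants through every step.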
Related papers
- A covariance representation and an elementary proof of the Gaussian concentration inequality [0.0]
Via a covariance representation based on characteristic functions, a known elementary proof of the Gaussian concentration inequality is presented.
A few other applications are briefly mentioned.
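This sketch does not follow the paper's covariance-representation proof; it is only a numerical sanity check of the inequality being proved. Gaussian concentration states that for a 1-Lipschitz function $f$ of a standard Gaussian vector $X$, $P(f(X) - \mathbb{E}f(X) \ge t) \le e^{-t^2/2}$; the coordinate maximum is 1-Lipschitz, so it serves as a test function. All names below are illustrative.

```python
import math
import random

def gaussian_concentration_demo(dim=50, t=1.0, trials=5000, seed=1):
    """Compare the empirical tail of f(X) = max_i X_i (a 1-Lipschitz function
    of a standard Gaussian vector) against the bound exp(-t^2 / 2)."""
    rng = random.Random(seed)
    samples = [max(rng.gauss(0, 1) for _ in range(dim)) for _ in range(trials)]
    mean = sum(samples) / trials
    tail = sum(1 for s in samples if s - mean >= t) / trials
    bound = math.exp(-t * t / 2.0)
    return tail, bound

tail, bound = gaussian_concentration_demo()
print(f"empirical tail {tail:.4f} <= Gaussian concentration bound {bound:.4f}")
```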
arXiv Detail & Related papers (2024-10-09T14:30:55Z) - A Heavy-Tailed Algebra for Probabilistic Programming [53.32246823168763]
We propose a systematic approach for analyzing the tails of random variables.
We show how this approach can be used during the static analysis (before drawing samples) pass of a probabilistic programming language compiler.
Our empirical results confirm that inference algorithms that leverage our heavy-tailed algebra attain superior performance across a number of density modeling and variational inference tasks.
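The paper's full algebra is not reproduced here; the sketch below illustrates one standard rule such a static analysis can propagate: the sum of independent regularly varying random variables has tail index equal to the minimum of the summands' indices (the heaviest tail dominates). The `Tail` class and its interface are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tail:
    """Tail class of a random variable: alpha is the regular-variation index,
    with float('inf') standing in for light (e.g. Gaussian) tails."""
    alpha: float

    def __add__(self, other):
        # Sum of independent variables: the heaviest tail dominates.
        return Tail(min(self.alpha, other.alpha))

    def scale(self, c):
        # Multiplying by a nonzero constant leaves the tail index unchanged.
        return Tail(self.alpha)

cauchy = Tail(1.0)            # Cauchy: regularly varying with index 1
pareto3 = Tail(3.0)           # Pareto with tail exponent 3
normal = Tail(float("inf"))   # light-tailed

print((cauchy + normal).alpha)   # Cauchy tail dominates: index 1
print((pareto3 + normal).alpha)  # index 3
```

A compiler pass could run such rules over a probabilistic program's expression graph before any sampling happens, flagging heavy-tailed intermediates for specialized inference.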
arXiv Detail & Related papers (2023-06-15T16:37:36Z) - Learning Linear Causal Representations from Interventions under General Nonlinear Mixing [52.66151568785088]

We prove strong identifiability results given unknown single-node interventions without access to the intervention targets.
This is the first instance of causal identifiability from non-paired interventions for deep neural network embeddings.
arXiv Detail & Related papers (2023-06-04T02:32:12Z) - A Measure-Theoretic Characterization of Tight Language Models [105.16477132329416]
In some pathological cases, probability mass can "leak" onto the set of infinite sequences.
This paper offers a measure-theoretic treatment of language modeling.
We prove that many popular language model families are in fact tight, meaning that they will not leak in this sense.
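A toy model (not from the paper) makes the leakage concrete. Suppose an autoregressive model emits EOS at step $t$ with probability $1/(t+2)^2$: the probability of never terminating is $\prod_{k \ge 2}(1 - 1/k^2) = 1/2 > 0$, so half the probability mass sits on infinite sequences and the model is not tight.

```python
def mass_on_infinite_sequences(steps=100000):
    """Toy autoregressive model that emits EOS at step t with probability
    1/(t+2)^2.  P(never emit EOS) = prod_{k>=2} (1 - 1/k^2), a telescoping
    product equal to (n+1)/(2n) after n factors, which converges to 1/2."""
    p_never = 1.0
    for t in range(steps):
        p_never *= 1.0 - 1.0 / (t + 2) ** 2
    return p_never

print(mass_on_infinite_sequences())  # -> approx 0.5
```

Any model whose per-step EOS probabilities are summable leaks in this way; tightness results rule such behavior out for the model families studied.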
arXiv Detail & Related papers (2022-12-20T18:17:11Z) - Instance-Dependent Generalization Bounds via Optimal Transport [51.71650746285469]
Existing generalization bounds fail to explain crucial factors that drive the generalization of modern neural networks.
We derive instance-dependent generalization bounds that depend on the local Lipschitz regularity of the learned prediction function in the data space.
We empirically analyze our generalization bounds for neural networks, showing that the bound values are meaningful and capture the effect of popular regularization methods during training.
arXiv Detail & Related papers (2022-11-02T16:39:42Z) - Correlation between Alignment-Uniformity and Performance of Dense Contrastive Representations [11.266613717084788]
We analyze the theoretical ideas of dense contrastive learning using a standard CNN and a straightforward feature matching scheme.
We identify the core principle for constructing a positive pair of dense features and empirically prove its validity.
We also introduce a new scalar metric that summarizes the correlation between alignment-uniformity and downstream performance.
arXiv Detail & Related papers (2022-10-17T08:08:37Z) - On the Normalizing Constant of the Continuous Categorical Distribution [24.015934908123928]
A novel family of simplex-valued distributions has recently been discovered: the continuous categorical.
In spite of its mathematical simplicity, our understanding of its normalizing constant remains far from complete.
We present theoretical and methodological advances that can, in turn, help to enable broader applications of the continuous categorical distribution.
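For the special case of two categories the normalizing constant has an elementary closed form, which the abstract's harder general-simplex question reduces to in dimension one: the density is proportional to $\lambda^x (1-\lambda)^{1-x}$ on $[0,1]$, and $\int_0^1 a^x b^{1-x}\,dx = (a-b)/\log(a/b)$. The sketch below (illustrative only, not the paper's methodology) checks this against numerical quadrature.

```python
import math

def normalizer_closed_form(lam):
    """C(lam) = integral_0^1 lam^x (1-lam)^(1-x) dx = (a-b)/log(a/b)."""
    a, b = lam, 1.0 - lam
    if abs(a - b) < 1e-12:
        return a  # limit as a -> b (the integrand is constant at lam = 1/2)
    return (a - b) / math.log(a / b)

def normalizer_quadrature(lam, n=20000):
    """Midpoint-rule approximation of the same integral."""
    a, b = lam, 1.0 - lam
    h = 1.0 / n
    return h * sum(a ** ((i + 0.5) * h) * b ** (1.0 - (i + 0.5) * h)
                   for i in range(n))

lam = 0.8
print(normalizer_closed_form(lam), normalizer_quadrature(lam))
```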
arXiv Detail & Related papers (2022-04-28T05:06:12Z) - Generic aspects of the resource theory of quantum coherence [0.0]
We prove that if two $n$-dimensional pure states are chosen independently according to the natural uniform distribution, then the probability that they are comparable vanishes as $n \rightarrow \infty$.
We also study the maximal success probability of incoherent conversions and find an explicit formula for its large-$n$ distribution.
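As a hedged illustration (assuming, as in the pure-state coherence literature, that comparability means the states' squared-amplitude vectors are related by majorization), the sketch below estimates the comparability probability for Haar-random states by simulation. All function names are hypothetical; the scaling trend, not the exact values, is the point.

```python
import random

def random_prob_vector(n, rng):
    """Squared amplitudes of a Haar-random pure state: |g_i|^2 for complex
    Gaussians g_i is exponential, and normalizing gives the flat Dirichlet
    distribution on the simplex.  Returned sorted in decreasing order."""
    e = [rng.expovariate(1.0) for _ in range(n)]
    s = sum(e)
    return sorted((x / s for x in e), reverse=True)

def majorizes(p, q):
    """True if p majorizes q (both sorted in decreasing order)."""
    cp = cq = 0.0
    for a, b in zip(p, q):
        cp += a
        cq += b
        if cp < cq - 1e-12:
            return False
    return True

def comparability_probability(n, trials=2000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        p, q = random_prob_vector(n, rng), random_prob_vector(n, rng)
        if majorizes(p, q) or majorizes(q, p):
            hits += 1
    return hits / trials

print(comparability_probability(2))   # always comparable in dimension 2
print(comparability_probability(16))  # far rarer in higher dimension
```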
arXiv Detail & Related papers (2020-10-13T16:38:52Z) - Learning Probabilistic Sentence Representations from Paraphrases [47.528336088976744]
We define probabilistic models that produce distributions for sentences.
We train our models on paraphrases and demonstrate that they naturally capture sentence specificity.
Our model captures sentential entailment and provides ways to analyze the specificity and preciseness of individual words.
arXiv Detail & Related papers (2020-05-16T21:10:28Z) - Convex Representation Learning for Generalized Invariance in Semi-Inner-Product Space [32.442549424823355]
In this work we develop an algorithm for learning a variety of generalized invariant representations in a semi-inner-product space, where a representer theorem and approximation bounds are established.
This allows representations to be learned efficiently and effectively, as confirmed in our experiments along with accurate predictions.
arXiv Detail & Related papers (2020-04-25T18:54:37Z) - Asymptotic Analysis of an Ensemble of Randomly Projected Linear Discriminants [94.46276668068327]
In [1], an ensemble of randomly projected linear discriminants is used to classify datasets.
We develop a consistent estimator of the misclassification probability as an alternative to the computationally-costly cross-validation estimator.
We also demonstrate the use of our estimator for tuning the projection dimension on both real and synthetic data.
arXiv Detail & Related papers (2020-04-17T12:47:04Z)
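The sketch below conveys only the ensemble idea, not the paper's estimator: each member classifies by nearest class mean in a random one-dimensional projection, and members vote. Data, dimensions, and function names are all illustrative assumptions.

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def train_vote_ensemble(X0, X1, n_proj=50, seed=0):
    """For each random 1-D projection, fit a nearest-class-mean discriminant
    (threshold at the midpoint of the projected class means); classify by
    majority vote over all projections."""
    rng = random.Random(seed)
    d = len(X0[0])
    members = []
    for _ in range(n_proj):
        w = [rng.gauss(0, 1) for _ in range(d)]
        m0 = sum(dot(w, x) for x in X0) / len(X0)
        m1 = sum(dot(w, x) for x in X1) / len(X1)
        members.append((w, (m0 + m1) / 2.0, 1 if m1 > m0 else -1))
    return members

def predict(members, x):
    votes = sum(s if dot(w, x) > thr else -s for (w, thr, s) in members)
    return 1 if votes > 0 else 0

# Two well-separated Gaussian blobs in 5 dimensions.
rng = random.Random(42)
X0 = [[rng.gauss(-1, 1) for _ in range(5)] for _ in range(100)]
X1 = [[rng.gauss(+1, 1) for _ in range(5)] for _ in range(100)]
members = train_vote_ensemble(X0, X1)
acc = (sum(predict(members, x) == 0 for x in X0)
       + sum(predict(members, x) == 1 for x in X1)) / 200.0
print(f"ensemble training accuracy: {acc:.3f}")
```

The paper replaces the cross-validation estimate of this ensemble's misclassification probability with a consistent closed-form estimator, which is what makes tuning the projection dimension cheap.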
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.