A note on the smallest eigenvalue of the empirical covariance of causal
Gaussian processes
- URL: http://arxiv.org/abs/2212.09508v2
- Date: Fri, 27 Oct 2023 22:48:58 GMT
- Title: A note on the smallest eigenvalue of the empirical covariance of causal
Gaussian processes
- Authors: Ingvar Ziemann
- Abstract summary: We present a proof for bounding the smallest eigenvalue of the empirical covariance in a causal Gaussian process.
We establish a one-sided tail inequality for Gaussian quadratic forms using a causal decomposition.
- Score: 1.223779595809275
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a simple proof for bounding the smallest eigenvalue of the
empirical covariance in a causal Gaussian process. Along the way, we establish
a one-sided tail inequality for Gaussian quadratic forms using a causal
decomposition. Our proof only uses elementary facts about the Gaussian
distribution and the union bound. We conclude with an example in which we
provide a performance guarantee for least squares identification of a vector
autoregression.
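As a rough numerical illustration of the concluding example (assumptions: a stable VAR(1) with standard Gaussian noise, arbitrary dimension and horizon; this is a sketch, not code from the paper), the snippet below simulates the process, computes the smallest eigenvalue of the empirical covariance of the regressors, and forms the least squares estimate of the transition matrix.

```python
# Minimal illustration (not from the paper): least squares identification of a
# VAR(1) process and the smallest eigenvalue of the empirical covariance of
# its regressors. Dimension d, horizon T and A_true are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
d, T = 3, 2000
A_true = 0.5 * np.eye(d) + 0.1 * rng.standard_normal((d, d))  # roughly stable transition matrix

# Simulate x_{t+1} = A_true x_t + w_t with standard Gaussian noise.
X = np.zeros((T + 1, d))
for t in range(T):
    X[t + 1] = A_true @ X[t] + rng.standard_normal(d)

regressors, targets = X[:-1], X[1:]

# Empirical covariance of the causal regressors and its smallest eigenvalue.
emp_cov = regressors.T @ regressors / T
lam_min = np.linalg.eigvalsh(emp_cov).min()

# Ordinary least squares estimate of the transition matrix.
A_hat = np.linalg.lstsq(regressors, targets, rcond=None)[0].T

print(f"smallest eigenvalue of empirical covariance: {lam_min:.3f}")
print(f"estimation error ||A_hat - A_true||_F: {np.linalg.norm(A_hat - A_true):.3f}")
```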
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantees with explicit dimensional dependence for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z) - A covariance representation and an elementary proof of the Gaussian concentration inequality [0.0]
Via a covariance representation based on characteristic functions, a known elementary proof of the Gaussian concentration inequality is presented.
A few other applications are briefly mentioned.
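For context, the inequality referred to is the standard Gaussian concentration bound for Lipschitz functions; the statement below is a general fact, not quoted from the paper.

```latex
% Gaussian concentration: if $X \sim \mathcal{N}(0, I_n)$ and
% $f : \mathbb{R}^n \to \mathbb{R}$ is $L$-Lipschitz, then for all $t \ge 0$,
\[
  \mathbb{P}\bigl( f(X) - \mathbb{E} f(X) \ge t \bigr)
  \;\le\;
  \exp\!\left( -\frac{t^2}{2L^2} \right).
\]
```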
arXiv Detail & Related papers (2024-10-09T14:30:55Z) - Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets.
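As a loose illustration of the setting (not the paper's experiments), the sketch below forms a rank-r approximation of an RBF kernel matrix from its truncated eigen-decomposition and reports the maximal entrywise error; the data, bandwidth, and rank are arbitrary choices.

```python
# Illustrative sketch: rank-r approximation of an RBF kernel matrix via its
# truncated eigen-decomposition, with the maximal entrywise error.
import numpy as np

rng = np.random.default_rng(1)
n, r = 200, 10
x = rng.standard_normal((n, 2))

# RBF kernel matrix K_ij = exp(-||x_i - x_j||^2 / 2).
sq_dists = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq_dists)

# Keep the r largest eigenvalues/eigenvectors (np.linalg.eigh sorts ascending).
evals, evecs = np.linalg.eigh(K)
U, lam = evecs[:, -r:], evals[-r:]
K_r = (U * lam) @ U.T

print("max entrywise error:", np.abs(K - K_r).max())
print("spectral-norm error:", np.linalg.norm(K - K_r, 2))
```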
arXiv Detail & Related papers (2024-05-23T12:26:25Z) - Statistically Optimal Generative Modeling with Maximum Deviation from the Empirical Distribution [2.1146241717926664]
We show that the Wasserstein GAN, constrained to left-invertible push-forward maps, generates distributions that avoid replication and significantly deviate from the empirical distribution.
Our most important contribution provides a finite-sample lower bound on the Wasserstein-1 distance between the generative distribution and the empirical one.
We also establish a finite-sample upper bound on the distance between the generative distribution and the true data-generating one.
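A minimal one-dimensional illustration of the quantity these bounds control, using arbitrary stand-in distributions rather than the paper's construction:

```python
# Wasserstein-1 distance between an "empirical" sample and a "generated" sample.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(2)
empirical = rng.normal(loc=0.0, scale=1.0, size=1000)   # observed data
generated = rng.normal(loc=0.1, scale=1.1, size=1000)   # generator output

print("W1(empirical, generated):", wasserstein_distance(empirical, generated))
```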
arXiv Detail & Related papers (2023-07-31T06:11:57Z) - On counterfactual inference with unobserved confounding [36.18241676876348]
Given an observational study with $n$ independent but heterogeneous units, our goal is to learn the counterfactual distribution for each unit.
We introduce a convex objective that pools all $n$ samples to jointly learn all $n$ parameter vectors.
We derive sufficient conditions for compactly supported distributions to satisfy the logarithmic Sobolev inequality.
arXiv Detail & Related papers (2022-11-14T04:14:37Z) - Robust Estimation for Nonparametric Families via Generative Adversarial
Networks [92.64483100338724]
We provide a framework for designing Generative Adversarial Networks (GANs) to solve high dimensional robust statistics problems.
Our work extends this framework to robust mean estimation, second moment estimation, and robust linear regression.
In terms of techniques, our proposed GAN losses can be viewed as a smoothed and generalized Kolmogorov-Smirnov distance.
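For reference, the classical (unsmoothed) two-sample Kolmogorov-Smirnov distance that this view generalizes can be computed as follows; the samples are arbitrary stand-ins, not the paper's setup.

```python
# Two-sample Kolmogorov-Smirnov distance: the largest gap between empirical CDFs.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
clean = rng.normal(size=500)
contaminated = np.concatenate([rng.normal(size=450), rng.normal(loc=5.0, size=50)])

print("KS distance:", ks_2samp(clean, contaminated).statistic)
```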
arXiv Detail & Related papers (2022-02-02T20:11:33Z) - Nearest neighbor empirical processes [7.034466417392574]
An empirical measure based on the responses from the nearest neighbors to a given point $x$ is introduced and studied as a central statistical quantity.
A uniform non-asymptotic bound is established under a well-known condition, often referred to as Vapnik-Chervonenkis, on the uniform entropy numbers.
This suggests the possibility of using standard formulas to estimate the variance by using only the nearest neighbors instead of the full data.
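A small sketch of this idea under arbitrary assumptions (synthetic data, Euclidean nearest neighbours, a fixed k; not the paper's estimator): the empirical measure over the k nearest-neighbour responses is used to form local mean and variance estimates.

```python
# Local estimates from the responses of the k nearest neighbours of a query x.
import numpy as np

rng = np.random.default_rng(4)
n, k = 500, 20
covariates = rng.uniform(-1.0, 1.0, size=(n, 2))
responses = np.sin(3.0 * covariates[:, 0]) + 0.1 * rng.standard_normal(n)

x = np.array([0.2, -0.3])                                   # query point
nn_idx = np.argsort(np.linalg.norm(covariates - x, axis=1))[:k]
nn_responses = responses[nn_idx]

# Standard formulas applied only to the nearest-neighbour responses.
print("local mean estimate:    ", nn_responses.mean())
print("local variance estimate:", nn_responses.var(ddof=1))
```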
arXiv Detail & Related papers (2021-10-27T08:15:20Z) - Learning Sparse Graph with Minimax Concave Penalty under Gaussian Markov
Random Fields [51.07460861448716]
This paper presents a convex-analytic framework to learn sparse graphs from data.
The graph Laplacian is replaced in the optimization by a linear transform of the vector corresponding to its upper triangular part, and overall convexity of the resulting cost is guaranteed.
arXiv Detail & Related papers (2021-09-17T17:46:12Z) - Concentration inequality for U-statistics of order two for uniformly
ergodic Markov chains [0.0]
We prove a concentration inequality for U-statistics of order two for uniformly ergodic Markov chains.
We show that we can recover the convergence rate of Arcones and Giné, who proved a concentration result for U-statistics of independent random variables and canonical kernels.
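For concreteness, an order-two U-statistic along a Markov chain trajectory can be computed as in the sketch below; the chain and the kernel h are arbitrary stand-ins, not those of the paper.

```python
# Order-two U-statistic U_n = 2/(n(n-1)) * sum_{i<j} h(X_i, X_j) along a chain.
import numpy as np

rng = np.random.default_rng(5)
n = 1000

# A simple autoregressive Markov chain x_{t+1} = 0.5 x_t + noise.
x = np.zeros(n)
for t in range(n - 1):
    x[t + 1] = 0.5 * x[t] + rng.standard_normal()

h = lambda a, b: np.abs(a - b)        # symmetric kernel of order two
i, j = np.triu_indices(n, k=1)        # all index pairs with i < j
u_stat = h(x[i], x[j]).mean()         # average over the n(n-1)/2 pairs

print("U-statistic:", u_stat)
```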
arXiv Detail & Related papers (2020-11-20T15:14:34Z) - Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z) - Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.