Diagonal Nonlinear Transformations Preserve Structure in Covariance and
Precision Matrices
- URL: http://arxiv.org/abs/2107.04136v1
- Date: Thu, 8 Jul 2021 22:31:48 GMT
- Title: Diagonal Nonlinear Transformations Preserve Structure in Covariance and
Precision Matrices
- Authors: Rebecca E Morrison, Ricardo Baptista, Estelle L Basor
- Abstract summary: For a certain class of non-Gaussian distributions, correspondences still hold, exactly for the covariance and approximately for the precision.
The distributions -- sometimes referred to as "nonparanormal" -- are given by diagonal transformations of multivariate normal random variables.
- Score: 3.652509571098291
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For a multivariate normal distribution, the sparsity of the covariance and
precision matrices encodes complete information about independence and
conditional independence properties. For general distributions, the covariance
and precision matrices reveal correlations and so-called partial correlations
between variables, but these do not, in general, have any correspondence with
respect to independence properties. In this paper, we prove that, for a certain
class of non-Gaussian distributions, these correspondences still hold, exactly
for the covariance and approximately for the precision. The distributions --
sometimes referred to as "nonparanormal" -- are given by diagonal
transformations of multivariate normal random variables. We provide several
analytic and numerical examples illustrating these results.
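The central claim can be checked numerically. The sketch below (our own illustration, not an example from the paper) builds a Gaussian whose covariance has a zero entry, applies monotone diagonal transformations (hypothetical choices: a cube, an exponential, and a tanh) to obtain a nonparanormal sample, and confirms that the zero of the covariance survives the transformation while the nonzero entries remain nonzero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse covariance for the underlying Gaussian: X1 and X3 are
# independent (zero entry), while X2 is correlated with both.
Sigma = np.array([[1.0, 0.5, 0.0],
                  [0.5, 1.0, 0.4],
                  [0.0, 0.4, 1.0]])

# Draw Gaussian samples, then apply monotone diagonal transformations
# (illustrative choices) to obtain a nonparanormal sample.
X = rng.multivariate_normal(np.zeros(3), Sigma, size=200_000)
Y = np.column_stack([X[:, 0] ** 3, np.exp(X[:, 1]), np.tanh(X[:, 2])])

C = np.cov(Y, rowvar=False)
print(C[0, 2])  # near zero: the (1,3) zero of Sigma survives the transformation
print(C[0, 1])  # clearly nonzero: the (1,2) dependence survives as well
```

The (1,3) entry stays (numerically) zero because zero covariance implies independence for jointly Gaussian pairs, and independence is preserved by any coordinate-wise transformation.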
Related papers
- Unsupervised Representation Learning from Sparse Transformation Analysis [79.94858534887801]
We propose to learn representations from sequence data by factorizing the transformations of the latent variables into sparse components.
Input data are first encoded as distributions of latent activations and subsequently transformed using a probability flow model.
arXiv Detail & Related papers (2024-10-07T23:53:25Z)
- Decomposing Gaussians with Unknown Covariance [3.734088413551237]
We present a general algorithm that encompasses all previous decomposition approaches for Gaussian data as special cases.
It yields a new and more flexible alternative to sample splitting when $n>1$.
We apply these decompositions to the tasks of model selection and post-selection inference in settings where alternative strategies are unavailable.
arXiv Detail & Related papers (2024-09-17T18:56:08Z)
- A class of 2 X 2 correlated random-matrix models with Brody spacing distribution [0.0]
A class of 2 X 2 random-matrix models is introduced for which the Brody distribution is the eigenvalue spacing distribution.
The random matrices introduced here differ from those of the Gaussian Orthogonal Ensemble (GOE) in three important ways.
arXiv Detail & Related papers (2023-08-03T03:11:54Z)
- Generalized Precision Matrix for Scalable Estimation of Nonparametric Markov Networks [11.77890309304632]
A Markov network characterizes the conditional independence structure, or Markov property, among a set of random variables.
In this work, we characterize the conditional independence structure in general distributions for all data types.
We also allow general functional relations among variables, thus giving rise to a Markov network structure learning algorithm.
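In the Gaussian special case of this correspondence, a zero in the precision matrix encodes conditional independence even when the variables are marginally dependent. A minimal sketch of that fact (our own illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Chain Markov structure X1 -- X2 -- X3: the precision matrix is
# tridiagonal, so Theta[0, 2] = 0 encodes "X1 independent of X3 given X2".
Theta = np.array([[1.0, 0.4, 0.0],
                  [0.4, 1.0, 0.4],
                  [0.0, 0.4, 1.0]])
Sigma = np.linalg.inv(Theta)

X = rng.multivariate_normal(np.zeros(3), Sigma, size=100_000)
Theta_hat = np.linalg.inv(np.cov(X, rowvar=False))

print(Sigma[0, 2])      # nonzero: X1 and X3 are marginally dependent
print(Theta_hat[0, 2])  # near zero: but conditionally independent given X2
```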
arXiv Detail & Related papers (2023-05-19T01:53:10Z)
- Quantitative deterministic equivalent of sample covariance matrices with a general dependence structure [0.0]
We prove quantitative bounds involving both the dimensions and the spectral parameter, in particular allowing it to get closer to the real positive semi-line.
As applications, we obtain a new bound for the convergence in Kolmogorov distance of the empirical spectral distributions of these general models.
arXiv Detail & Related papers (2022-11-23T15:50:31Z)
- Equivariant Disentangled Transformation for Domain Generalization under Combination Shift [91.38796390449504]
Combinations of domains and labels are not observed during training but appear in the test environment.
We provide a unique formulation of the combination shift problem based on the concepts of homomorphism, equivariance, and a refined definition of disentanglement.
arXiv Detail & Related papers (2022-08-03T12:31:31Z)
- On the Strong Correlation Between Model Invariance and Generalization [54.812786542023325]
Generalization captures a model's ability to classify unseen data.
Invariance measures consistency of model predictions on transformations of the data.
From a dataset-centric view, we find that a model's accuracy and invariance are linearly correlated across different test sets.
arXiv Detail & Related papers (2022-07-14T17:08:25Z)
- Equivariance Discovery by Learned Parameter-Sharing [153.41877129746223]
We study how to discover interpretable equivariances from data.
Specifically, we formulate this discovery process as an optimization problem over a model's parameter-sharing schemes.
Also, we theoretically analyze the method for Gaussian data and provide a bound on the mean squared gap between the studied discovery scheme and the oracle scheme.
arXiv Detail & Related papers (2022-04-07T17:59:19Z)
- Statistical Analysis from the Fourier Integral Theorem [9.619814126465206]
We look at Monte Carlo based estimators of conditional distribution functions.
We study a number of problems, such as prediction for Markov processes.
Estimators are explicit Monte Carlo based and require no iterative algorithms.
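A one-dimensional sketch of such a Monte Carlo estimator (our own illustration under stated assumptions, not code from the paper): truncating the Fourier integral theorem at a frequency `R` gives an explicit sinc-kernel average over the samples, with no bandwidth selection or iteration.

```python
import numpy as np

rng = np.random.default_rng(2)

def sinc_density(y, samples, R):
    """Monte Carlo density estimate from the truncated Fourier integral
    theorem: average sin(R*(y - Y_i)) / (pi*(y - Y_i)) over the samples."""
    d = y - samples
    # sin(R*x)/(pi*x) tends to R/pi as x -> 0, so guard the removable singularity.
    k = np.where(np.abs(d) < 1e-12, R / np.pi, np.sin(R * d) / (np.pi * d))
    return k.mean()

# Estimate a standard normal density at 0; the true value is 1/sqrt(2*pi) ~ 0.3989.
samples = rng.standard_normal(200_000)
est = sinc_density(0.0, samples, R=5.0)
print(est)
```

Here `R` plays the role of a smoothing parameter: larger `R` reduces bias but increases Monte Carlo variance.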
arXiv Detail & Related papers (2021-06-11T20:44:54Z)
- Benign Overfitting of Constant-Stepsize SGD for Linear Regression [122.70478935214128]
Inductive biases are central in preventing overfitting empirically.
This work considers this issue in arguably the most basic setting: constant-stepsize SGD for linear regression.
We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD in comparison to ordinary least squares.
arXiv Detail & Related papers (2021-03-23T17:15:53Z)
- Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data [52.78581260260455]
We propose a general method to construct a convolutional layer that is equivariant to transformations from any specified Lie group.
We apply the same model architecture to images, ball-and-stick molecular data, and Hamiltonian dynamical systems.
arXiv Detail & Related papers (2020-02-25T17:40:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.