Learning Large Causal Structures from Inverse Covariance Matrix via
Sparse Matrix Decomposition
- URL: http://arxiv.org/abs/2211.14221v3
- Date: Mon, 19 Feb 2024 21:05:31 GMT
- Title: Learning Large Causal Structures from Inverse Covariance Matrix via
Sparse Matrix Decomposition
- Authors: Shuyu Dong, Kento Uemura, Akito Fujii, Shuang Chang, Yusuke Koyanagi,
Koji Maruhashi, Michèle Sebag
- Abstract summary: This paper focuses on learning causal structures from the inverse covariance matrix.
The proposed method, called ICID, is based on continuous optimization of a matrix decomposition model.
We show that ICID efficiently identifies the sought directed acyclic graph (DAG) assuming knowledge of the noise variances.
- Score: 2.403264213118039
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning causal structures from observational data is a fundamental problem
facing important computational challenges when the number of variables is
large. In the context of linear structural equation models (SEMs), this paper
focuses on learning causal structures from the inverse covariance matrix. The
proposed method, called ICID for Independence-preserving Decomposition from
Inverse Covariance matrix, is based on continuous optimization of a matrix
decomposition model that preserves the nonzero patterns of the inverse
covariance matrix. Through theoretical and empirical evidence, we show that
ICID efficiently identifies the sought directed acyclic graph (DAG) assuming
knowledge of the noise variances. Moreover, ICID is shown empirically to be
robust under bounded misspecification of noise variances in the case where the
noise variances are non-equal. The proposed method enjoys a low complexity, as
reflected by its time efficiency in the experiments, and also enables a novel
regularization scheme that yields highly accurate solutions on the Simulated
fMRI data (Smith et al., 2011) in comparison with state-of-the-art algorithms.
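The premise behind ICID is the identity linking a linear SEM's DAG to its inverse covariance matrix: if X = B^T X + N with weighted adjacency matrix B and diagonal noise covariance D, then the precision matrix factorizes as Theta = (I - B) D^{-1} (I - B)^T. A minimal numerical sketch of this identity (the toy dimensions and random weights are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6

# Weighted adjacency matrix B of a DAG: strictly upper triangular in a
# topological order, so I - B is invertible (hypothetical toy graph).
B = np.triu(rng.uniform(0.5, 1.5, (d, d)) * (rng.random((d, d)) < 0.4), k=1)

# Diagonal noise variances of the linear SEM X = B^T X + N.
D = np.diag(rng.uniform(0.5, 2.0, d))

I = np.eye(d)
# Covariance of the SEM: Sigma = (I - B)^{-T} D (I - B)^{-1}.
Sigma = np.linalg.inv(I - B).T @ D @ np.linalg.inv(I - B)

# The inverse covariance then decomposes as Theta = (I - B) D^{-1} (I - B)^T;
# its nonzero pattern is exactly what ICID's decomposition model preserves.
Theta = np.linalg.inv(Sigma)
assert np.allclose(Theta, (I - B) @ np.linalg.inv(D) @ (I - B).T)
```

Finding B and D from Theta under this factorization, subject to acyclicity and sparsity, is the (nontrivial) optimization problem the paper addresses; the sketch only verifies the algebraic identity that makes it well posed.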
Related papers
- Induced Covariance for Causal Discovery in Linear Sparse Structures [55.2480439325792]
Causal models seek to unravel the cause-effect relationships among variables from observed data.
This paper introduces a novel causal discovery algorithm designed for settings in which variables exhibit linearly sparse relationships.
arXiv Detail & Related papers (2024-10-02T04:01:38Z)
- Recovering Linear Causal Models with Latent Variables via Cholesky Factorization of Covariance Matrix [21.698480201955213]
We propose a DAG structure recovering algorithm, which is based on the Cholesky factorization of the covariance matrix of the observed data.
On synthetic and real-world datasets, the algorithm is significantly faster than previous methods and achieves the state-of-the-art performance.
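The core idea of Cholesky-based recovery can be sketched as follows: with variables in a topological order and the noise variances known, the lower-triangular Cholesky factor of the covariance matrix encodes the SEM coefficients directly. The toy setup below assumes equal unit noise variances and a known ordering for illustration; it is not the paper's full algorithm (which also handles latent variables):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5

# Ground-truth SEM weights: strictly upper triangular B in a topological
# order, unit noise variances (hypothetical setup for illustration).
B = np.triu(rng.uniform(0.6, 1.2, (d, d)) * (rng.random((d, d)) < 0.5), k=1)
I = np.eye(d)
Sigma = np.linalg.inv(I - B).T @ np.linalg.inv(I - B)  # noise variances = 1

# Cholesky: Sigma = L L^T with L lower triangular.
L = np.linalg.cholesky(Sigma)

# Normalize columns so the diagonal is 1: then L0 = (I - B)^{-T},
# and inverting it reads off the SEM coefficients.
L0 = L / np.diag(L)
B_hat = I - np.linalg.inv(L0).T
assert np.allclose(B_hat, B)
```

The speed reported above is plausible from this structure: a single Cholesky factorization costs O(d^3), with no iterative search over graph structures.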
arXiv Detail & Related papers (2023-11-01T17:27:49Z)
- Recovering Simultaneously Structured Data via Non-Convex Iteratively Reweighted Least Squares [0.8702432681310401]
We propose a new algorithm for recovering data that adheres to multiple, heterogeneous low-dimensional structures from linear observations.
We show that the IRLS method performs favorably in identifying simultaneously structured (low-rank and row-sparse) data from linear measurements.
arXiv Detail & Related papers (2023-06-08T06:35:47Z)
- Classification of BCI-EEG based on augmented covariance matrix [0.0]
We propose a new framework based on the augmented covariance extracted from an autoregressive model to improve motor imagery classification.
We will test our approach on several datasets and several subjects using the MOABB framework.
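The augmented covariance idea can be illustrated by stacking time-lagged copies of the multichannel signal, as in an autoregressive embedding, before computing the covariance. The function name and the `order`/`lag` parameters below are assumptions for illustration, not the paper's API:

```python
import numpy as np

def augmented_covariance(X, order=2, lag=1):
    """Covariance of time-lagged copies of a multichannel signal.

    X: (channels, samples) array. The augmented signal stacks `order`
    delayed copies of X, capturing temporal (autoregressive) structure
    that a plain spatial covariance ignores.
    """
    c, t = X.shape
    span = t - (order - 1) * lag
    # Delayed copies, truncated to a common length, stacked channel-wise.
    parts = [X[:, k * lag : k * lag + span] for k in range(order)]
    Y = np.vstack(parts)          # shape (order * channels, span)
    return np.cov(Y)              # (order*c, order*c) augmented covariance

# Toy signal: 4 channels, 200 samples.
rng = np.random.default_rng(2)
X = rng.standard_normal((4, 200))
C = augmented_covariance(X, order=2, lag=1)
assert C.shape == (8, 8)
```

The resulting matrix is symmetric positive semi-definite, so it can feed directly into the Riemannian (SPD-manifold) classifiers commonly used in motor imagery pipelines.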
arXiv Detail & Related papers (2023-02-09T09:04:25Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- Benign Overfitting of Constant-Stepsize SGD for Linear Regression [122.70478935214128]
Inductive biases are central in preventing overfitting empirically.
This work considers this issue in arguably the most basic setting: constant-stepsize SGD for linear regression.
We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD in comparison to ordinary least squares.
arXiv Detail & Related papers (2021-03-23T17:15:53Z)
- Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method, by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
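As a rough sketch of how an $l_{2,p}$ row-norm penalty drives feature selection: the penalty pushes entire rows of the projection matrix to zero, and the surviving rows with large $l_2$ norm mark the selected features. The helper names and toy matrix below are hypothetical, not the paper's implementation:

```python
import numpy as np

def l2p_norm(W, p=1.0):
    """l_{2,p} norm of W: l2 norms of the rows, combined via an l_p norm.
    Small p (0 < p <= 1) promotes row sparsity of W."""
    row_norms = np.linalg.norm(W, axis=1)
    return np.sum(row_norms ** p) ** (1.0 / p)

def select_features(W, k):
    """Rank features by the l2 norm of their rows of W (larger = kept),
    the usual readout once a row-sparse W has been learned."""
    scores = np.linalg.norm(W, axis=1)
    return np.argsort(scores)[::-1][:k].tolist()

# Toy projection matrix: only rows 0 and 2 carry signal (hypothetical).
W = np.zeros((5, 3))
W[0] = [1.0, 0.5, 0.2]
W[2] = [0.8, 0.9, 0.1]
print(sorted(select_features(W, 2)))  # -> [0, 2]
```

Learning a row-sparse W (by minimizing reconstruction error plus this penalty) is the hard part the paper's optimization algorithm addresses; the sketch only shows the norm and the feature readout.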
arXiv Detail & Related papers (2020-12-29T04:08:38Z)
- Learning Mixtures of Low-Rank Models [89.39877968115833]
We study the problem of learning mixtures of low-rank models.
We develop an algorithm that is guaranteed to recover the unknown matrices with near-optimal sample complexity.
In addition, the proposed algorithm is provably stable against random noise.
arXiv Detail & Related papers (2020-09-23T17:53:48Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.