Noise-cleaning the precision matrix of fMRI time series
- URL: http://arxiv.org/abs/2302.02951v1
- Date: Mon, 6 Feb 2023 17:32:17 GMT
- Title: Noise-cleaning the precision matrix of fMRI time series
- Authors: Miguel Ibáñez-Berganza, Carlo Lucibello, Francesca Santucci,
Tommaso Gili, Andrea Gabrielli
- Abstract summary: We consider several standard noise-cleaning algorithms and compare them on two types of datasets.
The reliability of each algorithm is assessed in terms of test-set likelihood and, in the case of synthetic data, of the distance from the true precision matrix.
We propose a variant of the Optimal Rotationally Invariant Estimator in which one of its parameters is optimised by cross-validation.
- Score: 2.6399022396257794
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a comparison between various algorithms of inference of covariance
and precision matrices in small datasets of real vectors, of the typical length
and dimension of human brain activity time series retrieved by functional
Magnetic Resonance Imaging (fMRI). Assuming a Gaussian model underlying the
neural activity, the problem consists in denoising the empirically observed
matrices in order to obtain a better estimator of the true precision and
covariance matrices. We consider several standard noise-cleaning algorithms and
compare them on two types of datasets. The first consists of fMRI time series
of brain activity of human subjects at rest. The second consists of synthetic
time series sampled from a generative Gaussian model in which we can vary the
fraction of dimensions per sample q = N/T and the strength of off-diagonal
correlations. The reliability of each algorithm is assessed in terms of
test-set likelihood and, in the case of synthetic data, of the distance from
the true precision matrix. We observe that the so-called Optimal Rotationally
Invariant Estimator, based on Random Matrix Theory, leads to a significantly
lower distance from the true precision matrix in synthetic data, and higher
test likelihood in natural fMRI data. We propose a variant of the Optimal
Rotationally Invariant Estimator in which one of its parameters is optimised by
cross-validation. In the severe undersampling regime (large q) typical of fMRI
series, it outperforms all the other estimators. We furthermore propose a
simple algorithm based on an iterative likelihood gradient ascent, providing an
accurate estimation for weakly correlated datasets.
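As a concrete illustration of the protocol described in the abstract, the sketch below estimates a cleaned precision matrix from a short multivariate time series and scores it by held-out Gaussian log-likelihood. A simple linear eigenvalue shrinkage stands in for the Optimal Rotationally Invariant Estimator, and its shrinkage parameter plays the role of the cross-validated parameter; all names, dimensions, and the random data are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: eigenvalue-shrinkage "noise cleaning" of a sample covariance,
# with the shrinkage strength chosen by held-out Gaussian log-likelihood.
import numpy as np

def cleaned_precision(X_train, alpha):
    """Shrink the sample-covariance eigenvalues toward their mean, then invert.
    alpha=0 keeps the raw sample covariance; alpha=1 gives a fully
    regularized (isotropic) estimate."""
    C = np.cov(X_train, rowvar=False)            # N x N sample covariance
    lam, V = np.linalg.eigh(C)                   # rotationally invariant step
    lam_clean = (1 - alpha) * lam + alpha * lam.mean()
    return V @ np.diag(1.0 / lam_clean) @ V.T    # cleaned precision matrix

def gaussian_test_loglik(J, X_test):
    """Average log-likelihood of a zero-mean Gaussian with precision J."""
    _, logdet = np.linalg.slogdet(J)
    quad = np.einsum('ti,ij,tj->t', X_test, J, X_test)
    N = X_test.shape[1]
    return 0.5 * (logdet - quad.mean() - N * np.log(2 * np.pi))

rng = np.random.default_rng(0)
T, N = 200, 80                                   # undersampled: q = N/T = 0.4
X = rng.standard_normal((T, N))                  # placeholder for fMRI series
X_train, X_test = X[:150], X[150:]

# cross-validate the shrinkage parameter on held-out likelihood
alphas = np.linspace(0.01, 0.99, 25)
scores = [gaussian_test_loglik(cleaned_precision(X_train, a), X_test)
          for a in alphas]
print(f"best alpha = {alphas[int(np.argmax(scores))]:.2f}")
```

Replacing the grid search with gradient steps on the likelihood itself would give the flavour of the iterative likelihood gradient-ascent estimator also proposed in the paper.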
Related papers
- Information limits and Thouless-Anderson-Palmer equations for spiked matrix models with structured noise [19.496063739638924]
We consider a prototypical problem of Bayesian inference for a structured spiked model.
We show how to predict the statistical limits using an efficient algorithm inspired by the theory of adaptive Thouless-Anderson-Palmer equations.
arXiv Detail & Related papers (2024-05-31T16:38:35Z)
- Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have very appealing theoretical properties.
We show that TR and ARC methods can simultaneously allow inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss between the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
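For reference, here is a minimal sketch of the classical NCE objective in the logistic-loss form mentioned above, written for a toy one-dimensional unnormalized Gaussian with a learned log-normalizer; the model and all names are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: classify data vs. noise with the log-density ratio as logit.
import numpy as np

def log_model(x, theta):
    mu, log_prec, log_c = theta                  # log_c plays the role of -log Z
    return -0.5 * np.exp(log_prec) * (x - mu) ** 2 + log_c

def log_noise(x):                                # standard normal noise density
    return -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)

def nce_loss(theta, x_data, x_noise):
    """Logistic loss for classifying data vs. noise samples."""
    g_d = log_model(x_data, theta) - log_noise(x_data)    # logits on data
    g_n = log_model(x_noise, theta) - log_noise(x_noise)  # logits on noise
    return np.logaddexp(0.0, -g_d).mean() + np.logaddexp(0.0, g_n).mean()

rng = np.random.default_rng(0)
x_data = 1.5 + 0.7 * rng.standard_normal(1000)   # "real" samples
x_noise = rng.standard_normal(1000)              # noise samples
theta = np.array([0.0, 0.0, -0.5 * np.log(2 * np.pi)])
print(nce_loss(theta, x_data, x_noise))          # loss to be minimized in theta
```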
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
- Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models [69.22568644711113]
We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversions.
Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation.
In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance.
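As a generic illustration of the inverse-free idea, the sketch below applies C^{-1} to random probe vectors with a hand-rolled conjugate-gradient solver (Hutchinson-style trace estimation) instead of forming the inverse; it is a textbook stand-in under stated assumptions, not the paper's probabilistic-unrolling algorithm.

```python
# Hedged sketch: estimate trace(C^{-1}) without ever inverting C.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=500):
    """Solve A x = b for symmetric positive definite A, inversion-free."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(1)
N = 50
M = rng.standard_normal((N, N))
C = M @ M.T + N * np.eye(N)                      # SPD covariance-like matrix

# Hutchinson estimate of trace(C^{-1}) from CG solves on random sign probes
probes = rng.choice([-1.0, 1.0], size=(20, N))
est = np.mean([z @ conjugate_gradient(C, z) for z in probes])
print(est, np.trace(np.linalg.inv(C)))           # the two should be close
```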
arXiv Detail & Related papers (2023-06-05T21:08:34Z)
- Alternating Mahalanobis Distance Minimization for Stable and Accurate CP Decomposition [4.847980206213335]
We introduce a new formulation for deriving singular values and vectors of a tensor by considering the critical points of a function different from that used in previous work.
We show that a subsweep of this algorithm can achieve a superlinear convergence rate for exact CPD with known rank.
We then view the algorithm as optimizing a Mahalanobis distance with respect to each factor with ground metric dependent on the other factors.
arXiv Detail & Related papers (2022-04-14T19:56:36Z)
- Test Set Sizing Via Random Matrix Theory [91.3755431537592]
This paper uses techniques from Random Matrix Theory to find the ideal training-testing data split for a simple linear regression.
It defines "ideal" as satisfying the integrity metric, i.e. the empirical model error is the actual measurement noise.
This paper is the first to solve for the training and test size for any model in a way that is truly optimal.
arXiv Detail & Related papers (2021-12-11T13:18:33Z)
- Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
arXiv Detail & Related papers (2020-12-29T04:08:38Z)
- A Robust Matching Pursuit Algorithm Using Information Theoretic Learning [37.968665739578185]
A new orthogonal matching pursuit (OMP) algorithm is developed based on information theoretic learning (ITL).
The experimental results on both simulated and real-world data consistently demonstrate the superiority of the proposed OMP algorithm in data recovery, image reconstruction, and classification.
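For context, a textbook OMP loop is sketched below; the paper replaces the squared-error criterion implicit in the least-squares step with an information theoretic (correntropy-based) one, which is not reproduced here. All names and constants are illustrative.

```python
# Hedged sketch: classic greedy OMP for sparse recovery.
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x with y ~ A x via greedy residual correlation."""
    residual = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s           # orthogonal update
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
print(np.nonzero(omp(A, A @ x_true, 3))[0])          # expect indices 5, 50, 120
```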
arXiv Detail & Related papers (2020-05-10T01:36:00Z)
- Semi-analytic approximate stability selection for correlated data in generalized linear models [3.42658286826597]
We propose a novel approximate inference algorithm that can conduct stability selection without repeated fitting.
The algorithm is based on the replica method of statistical mechanics and vector approximate message passing of information theory.
Numerical experiments indicate that the algorithm exhibits fast convergence and high approximation accuracy for both synthetic and real-world data.
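For contrast, the standard repeated-fitting stability selection baseline that the semi-analytic method is designed to avoid looks roughly like the following; scikit-learn's Lasso and all constants are illustrative choices, not the paper's setup.

```python
# Hedged sketch: selection frequencies over random subsamples of the data.
import numpy as np
from sklearn.linear_model import Lasso

def stability_scores(X, y, alpha=0.1, n_resample=100, frac=0.5, seed=0):
    """Fraction of random subsamples on which each feature is selected."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_resample):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        coef = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
        counts += np.abs(coef) > 1e-10               # one full fit per resample
    return counts / n_resample                       # selection probabilities

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))
y = X[:, 0] - 2 * X[:, 1] + 0.5 * rng.standard_normal(200)
print(stability_scores(X, y).round(2))               # features 0, 1 near 1.0
```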
arXiv Detail & Related papers (2020-03-19T10:43:12Z)
- Optimal Iterative Sketching with the Subsampled Randomized Hadamard Transform [64.90148466525754]
We study the performance of iterative sketching for least-squares problems.
We show that the convergence rates for Haar and randomized Hadamard matrices are identical, and asymptotically improve upon random projections.
These techniques may be applied to other algorithms that employ randomized dimension reduction.
arXiv Detail & Related papers (2020-02-03T16:17:50Z)
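To make the sketching setup concrete, below is a small, generic iterative Hessian sketch for least squares using an explicit subsampled randomized Hadamard transform (SRHT); dimensions are kept to powers of two for the explicit Hadamard matrix, and this textbook version is a stand-in under stated assumptions, not the paper's exact algorithm.

```python
# Hedged sketch: iteratively refine a least-squares solution with a
# sketched Hessian, refreshing the SRHT sketch each iteration.
import numpy as np
from scipy.linalg import hadamard

def srht(n, m, rng):
    """An m x n subsampled randomized Hadamard transform."""
    D = rng.choice([-1.0, 1.0], size=n)              # random column signs
    H = hadamard(n) / np.sqrt(n)                     # orthonormal Hadamard
    rows = rng.choice(n, size=m, replace=False)      # uniform row subsampling
    return np.sqrt(n / m) * H[rows] * D              # scaled for unbiasedness

rng = np.random.default_rng(2)
n, d, m = 1024, 20, 128                              # rows, cols, sketch size
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

x = np.zeros(d)
for _ in range(20):                                  # iterative Hessian sketch
    SA = srht(n, m, rng) @ A                         # fresh sketch each step
    grad = A.T @ (b - A @ x)                         # exact gradient
    x = x + np.linalg.solve(SA.T @ SA, grad)         # sketched-Hessian step

print(np.linalg.norm(x - np.linalg.lstsq(A, b, rcond=None)[0]))
```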