Thresholded Graphical Lasso Adjusts for Latent Variables: Application to
Functional Neural Connectivity
- URL: http://arxiv.org/abs/2104.06389v1
- Date: Tue, 13 Apr 2021 17:50:26 GMT
- Title: Thresholded Graphical Lasso Adjusts for Latent Variables: Application to
Functional Neural Connectivity
- Authors: Minjie Wang, Genevera I. Allen
- Abstract summary: We propose a hard thresholding operator to address the problem of graph selection in the presence of latent variables.
We show that thresholding the graphical Lasso, neighborhood selection, or CLIME estimators yields superior theoretical properties in terms of graph selection consistency.
We also demonstrate the applicability of our approach through a neuroscience case study on calcium-imaging data to estimate functional neural connections.
- Score: 1.52292571922932
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In neuroscience, researchers seek to uncover the connectivity of neurons from
large-scale neural recordings or imaging, often employing graphical model selection
and estimation techniques for this purpose. But existing technologies can record from
only a small subset of neurons, leading to a challenging problem of graph selection in
the presence of extensive latent variables. Chandrasekaran et al. (2012) proposed a
convex program to address this problem, but it poses challenges from both a
computational and a statistical perspective. Instead, we propose an incredibly simple
solution: apply a hard thresholding operator to existing graph selection methods.
Conceptually simple and computationally attractive, thresholded versions of the
graphical Lasso, neighborhood selection, and CLIME estimators are shown to have
superior theoretical properties in terms of graph selection consistency, as well as
stronger empirical results than existing approaches for the latent variable
graphical model problem. We also demonstrate the applicability of our
approach through a neuroscience case study on calcium-imaging data to estimate
functional neural connections.
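The two-step recipe described in the abstract is simple enough to sketch directly. Below is a minimal illustration, assuming scikit-learn's GraphicalLasso as the base graph selection method, synthetic Gaussian data, and arbitrary values for the regularization strength and threshold; the paper's theory prescribes how the threshold should scale, which is not reproduced here.

```python
# A minimal sketch, assuming scikit-learn's GraphicalLasso as the base estimator.
# The data, regularization strength `alpha`, and threshold `tau` below are
# illustrative choices, not the settings used in the paper.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))  # n samples x p recorded neurons (synthetic stand-in)

# Step 1: fit a standard graphical lasso to the observed data.
glasso = GraphicalLasso(alpha=0.1).fit(X)
Theta = glasso.precision_           # estimated precision (inverse covariance) matrix

# Step 2: hard-threshold the off-diagonal entries; small spurious entries induced
# by unrecorded (latent) neurons are zeroed out.
tau = 0.05                          # hypothetical threshold; the paper gives theory for its scale
adjacency = np.abs(Theta) > tau
np.fill_diagonal(adjacency, False)

edges = np.argwhere(np.triu(adjacency, k=1))
print(f"selected {len(edges)} edges after thresholding")
```

The same two-step pattern applies with neighborhood selection or CLIME in place of the graphical Lasso: estimate a sparse graph, then hard-threshold the small entries that latent confounding tends to induce.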
Related papers
- Graph Neural Networks for Brain Graph Learning: A Survey [53.74244221027981]
Graph neural networks (GNNs) have demonstrated a significant advantage in mining graph-structured data.
The use of GNNs to learn brain graph representations for brain disorder analysis has recently gained increasing attention.
In this paper, we aim to bridge this gap by reviewing brain graph learning works that utilize GNNs.
arXiv Detail & Related papers (2024-06-01T02:47:39Z) - NeuroGraph: Benchmarks for Graph Machine Learning in Brain Connectomics [9.803179588247252]
We introduce NeuroGraph, a collection of graph-based neuroimaging datasets.
We demonstrate its utility for predicting multiple categories of behavioral and cognitive traits.
arXiv Detail & Related papers (2023-06-09T19:10:16Z) - Nonparanormal Graph Quilting with Applications to Calcium Imaging [1.1470070927586016]
We study two approaches for nonparanormal Graph Quilting based on the Gaussian copula graphical model.
Our approaches yield more scientifically meaningful functional connectivity estimates compared to existing Gaussian graph quilting methods for this calcium imaging data set.
arXiv Detail & Related papers (2023-05-22T21:16:01Z) - Low-Rank Covariance Completion for Graph Quilting with Applications to Functional Connectivity [8.500141848121782]
In many calcium imaging data sets, the full population of neurons is not recorded simultaneously, but instead in partially overlapping blocks.
In this paper, we introduce a novel two-step approach to Graph Quilting, which first imputes the full covariance matrix using low-rank covariance completion.
We show the efficacy of these methods for estimating connectivity from calcium imaging data.
arXiv Detail & Related papers (2022-09-17T08:03:46Z) - Contrastive Brain Network Learning via Hierarchical Signed Graph Pooling
Model [64.29487107585665]
Graph representation learning techniques on brain functional networks can facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
Here, we propose an interpretable hierarchical signed graph representation learning model to extract graph-level representations from brain functional networks.
In order to further improve the model performance, we also propose a new strategy to augment functional brain network data for contrastive learning.
arXiv Detail & Related papers (2022-07-14T20:03:52Z) - Self-Supervised Graph Representation Learning for Neuronal Morphologies [75.38832711445421]
We present GraphDINO, a data-driven approach to learn low-dimensional representations of 3D neuronal morphologies from unlabeled datasets.
We show, in two different species and across multiple brain areas, that this method yields morphological cell type clusterings on par with manual feature-based classification by experts.
Our method could potentially enable data-driven discovery of novel morphological features and cell types in large-scale datasets.
arXiv Detail & Related papers (2021-12-23T12:17:47Z) - Overcoming the Domain Gap in Contrastive Learning of Neural Action
Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z) - Gaussian Graphical Model Selection for Huge Data via Minipatch Learning [1.2891210250935146]
We propose the Minipatch Graph (MPGraph) estimator to solve the problem of graphical model selection.
MPGraph is a generalization of thresholded graph estimators fit to tiny, random subsets of both the observations and the nodes; a conceptual sketch of this minipatch idea appears after this list.
We prove that our algorithm achieves finite-sample graph selection consistency.
arXiv Detail & Related papers (2021-10-22T21:06:48Z) - Learning identifiable and interpretable latent models of
high-dimensional neural activity using pi-VAE [10.529943544385585]
We propose a method that integrates key ingredients from latent models and traditional neural encoding models.
Our method, pi-VAE, is inspired by recent progress on identifiable variational auto-encoder.
We validate pi-VAE using synthetic data, and apply it to analyze neurophysiological datasets from rat hippocampus and macaque motor cortex.
arXiv Detail & Related papers (2020-11-09T22:00:38Z) - Differentiable Causal Discovery from Interventional Data [141.41931444927184]
We propose a theoretically-grounded method based on neural networks that can leverage interventional data.
We show that our approach compares favorably to the state of the art in a variety of settings.
arXiv Detail & Related papers (2020-07-03T15:19:17Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in a structure suitable for neural networks.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.