Gaussian Mixture Graphical Lasso with Application to Edge Detection in
Brain Networks
- URL: http://arxiv.org/abs/2101.05348v1
- Date: Wed, 13 Jan 2021 21:15:30 GMT
- Title: Gaussian Mixture Graphical Lasso with Application to Edge Detection in
Brain Networks
- Authors: Hang Yin, Xinyue Liu, Xiangnan Kong
- Abstract summary: This work is inspired by Latent Dirichlet Allocation (LDA).
We propose a novel model called Gaussian Mixture Graphical Lasso (MGL).
MGL learns the proportions of signals generated by each mixture component and their parameters iteratively via an EM framework.
- Score: 21.49394455839253
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sparse inverse covariance estimation (i.e., edge detection) has been an important
research problem in recent years, where the goal is to discover the direct
connections between a set of nodes in a networked system based upon the observed
node activities. Existing works mainly focus on unimodal distributions, where it
is usually assumed that the observed activities are generated from a single
Gaussian distribution (i.e., one graph). However, this assumption is too
strong for many real-world applications. In many real-world applications (e.g.,
brain networks), the node activities usually exhibit much more complex patterns
that are difficult to capture with one single Gaussian distribution. In this
work, we are inspired by Latent Dirichlet Allocation (LDA) [4] and consider
modeling the edge detection problem as estimating a mixture of multiple Gaussian
distributions, where each corresponds to a separate sub-network. To address
this problem, we propose a novel model called Gaussian Mixture Graphical Lasso
(MGL). It learns the proportions of signals generated by each mixture component
and their parameters iteratively via an EM framework. To obtain more
interpretable networks, MGL imposes a special regularization, called Mutual
Exclusivity Regularization (MER), to minimize the overlap between different
sub-networks. MER also addresses the common issues in real-world data sets, i.e.,
noisy observations and small sample size. Through the extensive experiments
on synthetic and real brain data sets, the results demonstrate that MGL can
effectively discover multiple connectivity structures from the observed node
activities.
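The abstract describes an EM-style alternation: an E-step that assigns responsibilities of each mixture component to the observed signals, and an M-step that re-estimates the mixture proportions and a sparse precision (inverse covariance) matrix per component. The sketch below is a minimal, hypothetical illustration of that alternation using scikit-learn's graphical_lasso for the sparse step; it omits the paper's Mutual Exclusivity Regularization (MER), and the function name, K, alpha, and the initialization are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.covariance import graphical_lasso

def mixture_graphical_lasso(X, K=2, alpha=0.1, n_iter=50, seed=0):
    """EM sketch: fit K Gaussian components with sparse precisions to X (n x p)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    pi = np.full(K, 1.0 / K)                                  # mixture proportions
    means = X[rng.choice(n, size=K, replace=False)].astype(float)  # random initial means
    covs = np.stack([np.cov(X, rowvar=False) + 1e-3 * np.eye(p)] * K)
    precisions = [np.linalg.inv(covs[k]) for k in range(K)]

    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        dens = np.column_stack([
            pi[k] * multivariate_normal.pdf(X, mean=means[k], cov=covs[k])
            for k in range(K)
        ])
        resp = dens / dens.sum(axis=1, keepdims=True)

        # M-step: weighted moments, then one graphical-lasso step per component
        Nk = resp.sum(axis=0)
        pi = Nk / n
        for k in range(K):
            means[k] = resp[:, k] @ X / Nk[k]
            diff = X - means[k]
            emp_cov = (resp[:, k][:, None] * diff).T @ diff / Nk[k]
            covs[k], precisions[k] = graphical_lasso(emp_cov, alpha=alpha)

    return pi, means, precisions  # nonzeros in each precision matrix = edges
```

In practice one would evaluate the densities in log space for numerical stability and add an MER-style penalty when updating the precision matrices so that the recovered sub-networks do not overlap; the sketch only conveys the overall EM structure described in the abstract.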
Related papers
- LGU-SLAM: Learnable Gaussian Uncertainty Matching with Deformable Correlation Sampling for Deep Visual SLAM [11.715999663401591]
A learnable 2D Gaussian uncertainty model is designed to associate matching-frame pairs.
A multi-scale deformable correlation strategy is devised to adaptively fine-tune the sampling of each direction.
Experiments on real-world and synthetic datasets are conducted to validate the effectiveness and superiority of our method.
arXiv Detail & Related papers (2024-10-30T17:20:08Z) - Generalization of Graph Neural Networks is Robust to Model Mismatch [84.01980526069075]
Graph neural networks (GNNs) have demonstrated their effectiveness in various tasks supported by their generalization capabilities.
In this paper, we examine GNNs that operate on geometric graphs generated from manifold models.
Our analysis reveals the robustness of the GNN generalization in the presence of such model mismatch.
arXiv Detail & Related papers (2024-08-25T16:00:44Z) - Differentially Private Non-convex Learning for Multi-layer Neural
Networks [35.24835396398768]
This paper focuses on the problem of Differentially Private Optimization for (multi-layer) fully connected neural networks with a single output node.
By utilizing recent advances in Neural Tangent Kernel theory, we provide the first excess population risk bound when both the sample size and the width of the network are sufficiently large.
arXiv Detail & Related papers (2023-10-12T15:48:14Z) - Distributed Learning over Networks with Graph-Attention-Based
Personalization [49.90052709285814]
We propose a graph-based personalized algorithm (GATTA) for distributed deep learning.
In particular, the personalized model in each agent is composed of a global part and a node-specific part.
By treating each agent as one node in a graph and the node-specific parameters as its features, the benefits of the graph attention mechanism can be inherited.
arXiv Detail & Related papers (2023-05-22T13:48:30Z) - On the Effective Number of Linear Regions in Shallow Univariate ReLU
Networks: Convergence Guarantees and Implicit Bias [50.84569563188485]
We show that gradient flow converges in direction when labels are determined by the sign of a target network with $r$ neurons.
Our result may already hold for mild over-parameterization, where the width is $\tilde{\mathcal{O}}(r)$ and independent of the sample size.
arXiv Detail & Related papers (2022-05-18T16:57:10Z) - Graph-Augmented Normalizing Flows for Anomaly Detection of Multiple Time
Series [12.745860899424532]
Anomaly detection is a widely studied task for a broad variety of data types.
We propose a graph-augmented normalizing flow (GANF) approach for anomaly detection.
We conduct experiments on real-world datasets and demonstrate the effectiveness of GANF.
arXiv Detail & Related papers (2022-02-16T04:42:53Z) - GANs with Variational Entropy Regularizers: Applications in Mitigating
the Mode-Collapse Issue [95.23775347605923]
Building on the success of deep learning, Generative Adversarial Networks (GANs) provide a modern approach to learn a probability distribution from observed samples.
GANs often suffer from the mode collapse issue where the generator fails to capture all existing modes of the input distribution.
We take an information-theoretic approach and maximize a variational lower bound on the entropy of the generated samples to increase their diversity.
arXiv Detail & Related papers (2020-09-24T19:34:37Z) - Multi-Level Local SGD for Heterogeneous Hierarchical Networks [11.699472346137739]
We propose Multi-Level Local SGD, a distributed gradient method for learning a non-convex objective in a heterogeneous network.
We first provide a unified mathematical framework that describes the Multi-Level Local SGD algorithm.
We then present a theoretical analysis of the algorithm.
arXiv Detail & Related papers (2020-07-27T19:14:23Z) - A Multi-Semantic Metapath Model for Large Scale Heterogeneous Network
Representation Learning [52.83948119677194]
We propose a multi-semantic metapath (MSM) model for large-scale heterogeneous network representation learning.
Specifically, we generate multi-semantic metapath-based random walks to construct the heterogeneous neighborhood to handle the unbalanced distributions.
We conduct systematical evaluations for the proposed framework on two challenging datasets: Amazon and Alibaba.
arXiv Detail & Related papers (2020-07-19T22:50:20Z) - Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximative framework to such non-trivial ERGs that result in dyadic independence (i.e., edge independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z) - Community Detection on Mixture Multi-layer Networks via Regularized
Tensor Decomposition [12.244594819580831]
We study the problem of community detection in multi-layer networks, where pairs of nodes can be related in multiple modalities.
We propose a tensor-based algorithm (TWIST) to reveal both global/local memberships of nodes, and memberships of layers.
arXiv Detail & Related papers (2020-02-10T06:19:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.