Gaussian Mixture Graphical Lasso with Application to Edge Detection in
Brain Networks
- URL: http://arxiv.org/abs/2101.05348v1
- Date: Wed, 13 Jan 2021 21:15:30 GMT
- Title: Gaussian Mixture Graphical Lasso with Application to Edge Detection in
Brain Networks
- Authors: Hang Yin, Xinyue Liu, Xiangnan Kong
- Abstract summary: This work is inspired by Latent Dirichlet Allocation (LDA).
We propose a novel model called Gaussian Mixture Graphical Lasso (MGL).
MGL learns the proportions of signals generated by each mixture component and their parameters iteratively via an EM framework.
- Score: 21.49394455839253
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sparse inverse covariance estimation (i.e., edge detection) is an important
research problem in recent years, where the goal is to discover the direct
connections between a set of nodes in a networked system based upon the observed
node activities. Existing works mainly focus on unimodal distributions, where it
is usually assumed that the observed activities are generated from a single
Gaussian distribution (i.e., one graph). However, this assumption is too
strong for many real-world applications. In many real-world applications (e.g.,
brain networks), the node activities usually exhibit much more complex patterns
that are difficult to capture with one single Gaussian distribution. In this
work, we are inspired by Latent Dirichlet Allocation (LDA) [4] and consider
modeling the edge detection problem as estimating a mixture of multiple Gaussian
distributions, where each corresponds to a separate sub-network. To address
this problem, we propose a novel model called Gaussian Mixture Graphical Lasso
(MGL). It learns the proportions of signals generated by each mixture component
and their parameters iteratively via an EM framework. To obtain more
interpretable networks, MGL imposes a special regularization, called Mutual
Exclusivity Regularization (MER), to minimize the overlap between different
sub-networks. MER also addresses the common issues in real-world data sets, i.e.,
noisy observations and small sample size. Through extensive experiments on
synthetic and real brain data sets, the results demonstrate that MGL can
effectively discover multiple connectivity structures from the observed node
activities.
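The EM framework described in the abstract can be sketched in a few dozen lines. The following is a hypothetical toy implementation, not the authors' code: the E-step computes per-component responsibilities under each Gaussian, and the M-step re-estimates mixing proportions, means, and precision matrices. For simplicity this sketch substitutes a diagonal-shrinkage precision estimate with crude thresholding in place of a full graphical-lasso solve, and omits the paper's Mutual Exclusivity Regularization (MER).

```python
import numpy as np

def em_mixture_precision(X, K, n_iter=20, shrink=0.1, seed=0):
    """Toy EM for a K-component Gaussian mixture with (roughly) sparse
    precision matrices. Illustrative sketch only: the M-step uses a
    shrinkage-plus-thresholding precision estimate instead of the
    graphical lasso, and MER is omitted."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(K, 1.0 / K)                  # mixing proportions
    mu = X[rng.choice(n, K, replace=False)]   # means, init from data
    Theta = np.stack([np.eye(d)] * K)         # precision matrices
    for _ in range(n_iter):
        # E-step: log responsibility of each component for each sample
        logp = np.empty((n, K))
        for k in range(K):
            diff = X - mu[k]
            _, logdet = np.linalg.slogdet(Theta[k])
            quad = np.einsum('ni,ij,nj->n', diff, Theta[k], diff)
            logp[:, k] = np.log(pi[k]) + 0.5 * logdet - 0.5 * quad
        logp -= logp.max(axis=1, keepdims=True)   # numerical stability
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted moments, then a sparse-ish precision estimate
        for k in range(K):
            w = r[:, k]
            pi[k] = w.mean()
            mu[k] = w @ X / w.sum()
            diff = X - mu[k]
            cov = (w[:, None] * diff).T @ diff / w.sum()
            prec = np.linalg.inv(cov + shrink * np.eye(d))
            prec[np.abs(prec) < 1e-2] = 0.0       # crude sparsification
            Theta[k] = prec
    return pi, mu, Theta
```

Each recovered precision matrix encodes one sub-network: a nonzero entry Theta[k][i, j] is a direct edge between nodes i and j in component k.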
Related papers
- Alleviating Structural Distribution Shift in Graph Anomaly Detection [70.1022676681496]
Graph anomaly detection (GAD) is a challenging binary classification problem.
Graph neural networks (GNNs) benefit the classification of normal nodes by aggregating homophilous neighbors.
We propose a framework to mitigate the effect of heterophilous neighbors and make them invariant.
arXiv Detail & Related papers (2024-01-25T13:07:34Z) - Differentially Private Non-convex Learning for Multi-layer Neural
Networks [35.24835396398768]
This paper focuses on the problem of Differentially Private (DP) optimization for (multi-layer) fully connected neural networks with a single output node.
By utilizing recent advances in Neural Tangent Kernel (NTK) theory, we provide the first excess population risk bound when both the sample size and the width of the network are sufficiently large.
arXiv Detail & Related papers (2023-10-12T15:48:14Z) - Distributed Learning over Networks with Graph-Attention-Based
Personalization [49.90052709285814]
We propose a graph-based personalized algorithm (GATTA) for distributed deep learning.
In particular, the personalized model in each agent is composed of a global part and a node-specific part.
By treating each agent as one node in a graph and the node-specific parameters as its features, the benefits of the graph attention mechanism can be inherited.
arXiv Detail & Related papers (2023-05-22T13:48:30Z) - Bounding generalization error with input compression: An empirical study
with infinite-width networks [16.17600110257266]
Estimating the Generalization Error (GE) of Deep Neural Networks (DNNs) is an important task that often relies on availability of held-out data.
In search of a quantity relevant to GE, we investigate the Mutual Information (MI) between the input and final layer representations.
An existing input compression-based GE bound is used to link MI and GE.
arXiv Detail & Related papers (2022-07-19T17:05:02Z) - On the Effective Number of Linear Regions in Shallow Univariate ReLU
Networks: Convergence Guarantees and Implicit Bias [50.84569563188485]
We show that gradient flow converges in direction when labels are determined by the sign of a target network with $r$ neurons.
Our result may already hold for mild over-parameterization, where the width is $\tilde{\mathcal{O}}(r)$ and independent of the sample size.
arXiv Detail & Related papers (2022-05-18T16:57:10Z) - Graph-Augmented Normalizing Flows for Anomaly Detection of Multiple Time
Series [12.745860899424532]
Anomaly detection is a widely studied task for a broad variety of data types.
We propose a graph-augmented normalizing flow approach for anomaly detection.
We conduct experiments on real-world datasets and demonstrate the effectiveness of GANF.
arXiv Detail & Related papers (2022-02-16T04:42:53Z) - GANs with Variational Entropy Regularizers: Applications in Mitigating
the Mode-Collapse Issue [95.23775347605923]
Building on the success of deep learning, Generative Adversarial Networks (GANs) provide a modern approach to learn a probability distribution from observed samples.
GANs often suffer from the mode collapse issue where the generator fails to capture all existing modes of the input distribution.
We take an information-theoretic approach and maximize a variational lower bound on the entropy of the generated samples to increase their diversity.
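The idea of penalizing low-entropy generators can be made concrete with a plug-in entropy estimate on a generated batch. As a stand-in for the paper's variational lower bound (which this sketch does not reproduce), the hypothetical snippet below uses the classical Kozachenko-Leonenko nearest-neighbor entropy estimator; subtracting lambda times this quantity from the generator loss rewards diverse samples.

```python
import math
import numpy as np

def knn_entropy(x, eps_floor=1e-12):
    """Kozachenko-Leonenko (1-nearest-neighbor) differential entropy
    estimate for a batch of samples x with shape (n, d). A simple
    plug-in stand-in for a variational entropy bound."""
    n, d = x.shape
    # Pairwise Euclidean distances; exclude self-distances.
    dists = np.sqrt(((x[:, None] - x[None]) ** 2).sum(-1))
    np.fill_diagonal(dists, np.inf)
    eps = dists.min(axis=1)                      # 1-NN distance per sample
    vd = math.pi ** (d / 2) / math.gamma(d / 2 + 1)  # unit-ball volume
    return (d * np.mean(np.log(eps + eps_floor))
            + math.log(n - 1) + math.log(vd) + np.euler_gamma)
```

A collapsed generator concentrates its samples, shrinking nearest-neighbor distances and hence the estimated entropy, so the penalty term grows exactly when mode collapse sets in.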
arXiv Detail & Related papers (2020-09-24T19:34:37Z) - Multi-Level Local SGD for Heterogeneous Hierarchical Networks [11.699472346137739]
We propose Multi-Level Local SGD, a distributed gradient method for learning a non-convex objective in a heterogeneous network.
We first provide a unified mathematical framework that describes the Multi-Level Local SGD algorithm.
We then present a theoretical analysis of the algorithm.
arXiv Detail & Related papers (2020-07-27T19:14:23Z) - A Multi-Semantic Metapath Model for Large Scale Heterogeneous Network
Representation Learning [52.83948119677194]
We propose a multi-semantic metapath (MSM) model for large-scale heterogeneous network representation learning.
Specifically, we generate multi-semantic metapath-based random walks to construct the heterogeneous neighborhood to handle the unbalanced distributions.
We conduct systematic evaluations of the proposed framework on two challenging datasets: Amazon and Alibaba.
arXiv Detail & Related papers (2020-07-19T22:50:20Z) - Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximative framework for such non-trivial ERGs that results in dyadic-independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z) - Community Detection on Mixture Multi-layer Networks via Regularized
Tensor Decomposition [12.244594819580831]
We study the problem of community detection in multi-layer networks, where pairs of nodes can be related in multiple modalities.
We propose a tensor-based algorithm (TWIST) to reveal both global/local memberships of nodes, and memberships of layers.
arXiv Detail & Related papers (2020-02-10T06:19:50Z)
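The multi-layer community detection setting above can be illustrated with a simplified tensor-unfolding baseline. This is a hypothetical sketch in the spirit of TWIST, not the TWIST algorithm itself (which uses regularized tensor power iteration): stack the layer adjacency matrices into an L x n x n tensor, unfold along the node mode, take a rank-r spectral embedding, and cluster the node rows.

```python
import numpy as np

def multilayer_communities(A, r, n_clusters, n_iter=50):
    """Simplified multi-layer community detection: unfold the adjacency
    tensor A of shape (L, n, n) along the node mode, embed nodes via a
    rank-r SVD, and run a small Lloyd's k-means on the embeddings."""
    L, n, _ = A.shape
    # Mode-1 unfolding: each node gets an (L * n)-dimensional profile.
    M = A.transpose(1, 0, 2).reshape(n, L * n)
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    emb = U[:, :r] * s[:r]
    # Farthest-point initialization, then Lloyd's k-means.
    centers = [emb[0]]
    for _ in range(n_clusters - 1):
        d = np.min([((emb - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(emb[d.argmax()])
    centers = np.array(centers)
    for _ in range(n_iter):
        d = ((emb[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for c in range(n_clusters):
            if (labels == c).any():
                centers[c] = emb[labels == c].mean(0)
    return labels
```

On a planted two-block network observed across layers, nodes in the same block share unfolded profiles and therefore cluster together; the full TWIST method additionally recovers layer memberships, which this sketch does not attempt.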
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.