Learning High-Dimensional Differential Graphs From Multi-Attribute Data
- URL: http://arxiv.org/abs/2312.03761v1
- Date: Tue, 5 Dec 2023 18:54:46 GMT
- Title: Learning High-Dimensional Differential Graphs From Multi-Attribute Data
- Authors: Jitendra K Tugnait
- Abstract summary: We consider the problem of estimating differences in two Gaussian graphical models (GGMs) which are known to have similar structure.
Existing methods for differential graph estimation are based on single-attribute (SA) models.
In this paper, we analyze a group lasso penalized D-trace loss function approach for differential graph learning from multi-attribute data.
- Score: 12.94486861344922
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of estimating differences in two Gaussian graphical
models (GGMs) which are known to have similar structure. The GGM structure is
encoded in its precision (inverse covariance) matrix. In many applications one
is interested in estimating the difference in two precision matrices to
characterize underlying changes in conditional dependencies of two sets of
data. Existing methods for differential graph estimation are based on
single-attribute (SA) models where one associates a scalar random variable with
each node. In multi-attribute (MA) graphical models, each node represents a
random vector. In this paper, we analyze a group lasso penalized D-trace loss
function approach for differential graph learning from multi-attribute data. An
alternating direction method of multipliers (ADMM) algorithm is presented to
optimize the objective function. Theoretical analysis establishing consistency
in support recovery and estimation in high-dimensional settings is provided.
Numerical results based on synthetic as well as real data are presented.
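The penalized D-trace approach in the abstract can be illustrated with a small sketch. Everything below is an illustrative assumption, not the paper's implementation: it uses a simplified D-trace loss 0.5*tr(D' S1 D S2) - tr(D' (S1 - S2)) with a group-lasso penalty on the m x m blocks of D (one block per node pair), and an ADMM loop whose D-update is solved in closed form via eigendecompositions of the two sample covariances.

```python
import numpy as np

def group_soft_threshold(M, thr, m):
    """Shrink each m x m block of M toward zero by its Frobenius norm (group-lasso prox)."""
    p = M.shape[0] // m
    Z = np.zeros_like(M)
    for k in range(p):
        for l in range(p):
            B = M[k*m:(k+1)*m, l*m:(l+1)*m]
            nrm = np.linalg.norm(B)
            if nrm > thr:
                Z[k*m:(k+1)*m, l*m:(l+1)*m] = (1.0 - thr / nrm) * B
    return Z

def differential_graph_admm(S1, S2, m=1, lam=0.1, rho=1.0, n_iter=200):
    """Hypothetical ADMM sketch for a group-lasso penalized D-trace loss.

    Minimizes 0.5*tr(D' S1 D S2) - tr(D' (S1 - S2)) + lam * sum_kl ||D_kl||_F
    over the difference D of the two precision matrices.
    """
    d = S1.shape[0]
    w1, Q1 = np.linalg.eigh(S1)
    w2, Q2 = np.linalg.eigh(S2)
    denom = np.outer(w1, w2) + rho        # eigenvalues of the map D -> S1 D S2 + rho*D
    Z = np.zeros((d, d))
    U = np.zeros((d, d))
    for _ in range(n_iter):
        C = (S1 - S2) + rho * (Z - U)     # right-hand side of the D-update
        D = Q1 @ ((Q1.T @ C @ Q2) / denom) @ Q2.T   # closed-form solve of S1 D S2 + rho*D = C
        Z = group_soft_threshold(D + U, lam / rho, m)
        U = U + D - Z
    return 0.5 * (Z + Z.T)                # symmetrize the final estimate
```

The group soft-threshold is what distinguishes the multi-attribute case (m > 1, whole m x m blocks shrunk jointly, so an edge is present or absent for all attribute pairs at once) from the single-attribute case (m = 1, which reduces to the elementwise lasso).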
Related papers
- Efficient learning of differential network in multi-source non-paranormal graphical models [2.5905193932831585]
This paper addresses learning of sparse structural changes or differential network between two classes of non-paranormal graphical models.
Our strategy in combining datasets from multiple sources is shown to be very effective in inferring differential network in real-world problems.
arXiv Detail & Related papers (2024-10-03T13:59:38Z)
- Graph Fourier MMD for Signals on Graphs [67.68356461123219]
We propose a novel distance between distributions and signals on graphs.
GFMMD is defined via an optimal witness function that is both smooth on the graph and maximizes difference in expectation.
We showcase it on graph benchmark datasets as well as on single cell RNA-sequencing data analysis.
arXiv Detail & Related papers (2023-06-05T00:01:17Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- Graph Polynomial Convolution Models for Node Classification of Non-Homophilous Graphs [52.52570805621925]
We investigate efficient learning from higher-order graph convolutions and learning directly from the adjacency matrix for node classification.
We show that the resulting model leads to new graph representations and a residual scaling parameter.
We demonstrate that the proposed methods obtain improved accuracy for node classification on non-homophilous graphs.
arXiv Detail & Related papers (2022-09-12T04:46:55Z)
- uGLAD: Sparse graph recovery by optimizing deep unrolled networks [11.48281545083889]
We present a novel technique to perform sparse graph recovery by optimizing deep unrolled networks.
Our model, uGLAD, builds upon and extends the state-of-the-art model GLAD to the unsupervised setting.
We evaluate model results on synthetic Gaussian data, non-Gaussian data generated from Gene Regulatory Networks, and present a case study in anaerobic digestion.
arXiv Detail & Related papers (2022-05-23T20:20:27Z)
- Learning Shared Kernel Models: the Shared Kernel EM algorithm [0.0]
Expectation maximisation (EM) is an unsupervised learning method for estimating the parameters of a finite mixture distribution.
We first present a rederivation of the standard EM algorithm using data association ideas from the field of multiple target tracking.
The same method is then applied to a little-known but much more general type of supervised EM algorithm for shared kernel models.
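The standard EM iteration this entry builds on can be sketched in a few lines for a one-dimensional Gaussian mixture. The function name, the 1-D restriction, and the initialization scheme are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def em_gmm_1d(x, K=2, n_iter=100, seed=0):
    """Standard EM for a 1-D Gaussian mixture: alternate responsibilities (E) and parameters (M)."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, K)                          # initialize means from random data points
    var = np.full(K, x.var())
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: posterior probability that each point came from each component
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood updates of the mixture parameters
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return pi, mu, var
```

The paper's contribution is a rederivation of this loop via data-association ideas and its extension to shared kernel models; the sketch above shows only the baseline unsupervised iteration.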
arXiv Detail & Related papers (2022-05-15T10:10:08Z)
- Inference of Multiscale Gaussian Graphical Model [0.0]
We propose a new method that simultaneously infers a hierarchical clustering structure and the graphs describing the structure of independence at each level of the hierarchy.
Results on real and synthetic data are presented.
arXiv Detail & Related papers (2022-02-11T17:11:20Z)
- Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers and achieved performance comparable to pure data-driven networks while using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z)
- Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method, by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
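As an illustration of the regularizer in this entry, the following sketch (function names and the p = 1 proximal case are assumptions, not the paper's code) computes the $l_{2,p}$ norm of a matrix's rows and the row-wise shrinkage operator for p = 1, the building block that drives whole feature rows to zero:

```python
import numpy as np

def l2p_norm(W, p=1.0):
    """l_{2,p} norm: p-norm of the vector of row-wise l2 norms (p <= 1 promotes row sparsity)."""
    row_norms = np.linalg.norm(W, axis=1)
    return (row_norms ** p).sum() ** (1.0 / p)

def prox_l21(W, thr):
    """Proximal operator of thr * ||W||_{2,1}: shrink entire rows of W toward zero."""
    row_norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - thr / np.maximum(row_norms, 1e-12), 0.0)
    return scale * W
```

Rows whose l2 norm falls below the threshold are zeroed out entirely, which is what makes the penalty suitable for selecting (or discarding) whole features rather than individual matrix entries.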
arXiv Detail & Related papers (2020-12-29T04:08:38Z)
- Multilayer Clustered Graph Learning [66.94201299553336]
We use contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method leads to graphs with a well-clustered structure with respect to the ground-truth clusters.
We also derive a clustering algorithm for the learned representative graph.
arXiv Detail & Related papers (2020-10-29T09:58:02Z)
- Learnable Graph-regularization for Matrix Decomposition [5.9394103049943485]
We propose a learnable graph-regularization model for matrix decomposition.
It builds a bridge between graph-regularized methods and probabilistic matrix decomposition models.
It learns two graphical structures in real-time in an iterative manner via sparse precision matrix estimation.
arXiv Detail & Related papers (2020-10-16T17:12:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.