Learning Exponential Family Graphical Models with Latent Variables using
Regularized Conditional Likelihood
- URL: http://arxiv.org/abs/2010.09386v1
- Date: Mon, 19 Oct 2020 11:16:26 GMT
- Title: Learning Exponential Family Graphical Models with Latent Variables using
Regularized Conditional Likelihood
- Authors: Armeen Taeb, Parikshit Shah, Venkat Chandrasekaran
- Abstract summary: We present a new convex relaxation framework based on regularized conditional likelihood for latent-variable graphical modeling.
We demonstrate the utility and flexibility of our framework via a series of numerical experiments on synthetic as well as real data.
- Score: 10.21814909876358
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fitting a graphical model to a collection of random variables given sample
observations is a challenging task if the observed variables are influenced by
latent variables, which can induce significant confounding statistical
dependencies among the observed variables. We present a new convex relaxation
framework based on regularized conditional likelihood for latent-variable
graphical modeling in which the conditional distribution of the observed
variables conditioned on the latent variables is given by an exponential family
graphical model. In comparison to previously proposed tractable methods that
proceed by characterizing the marginal distribution of the observed variables,
our approach is applicable in a broader range of settings as it does not
require knowledge about the specific form of distribution of the latent
variables and it can be specialized to yield tractable approaches to problems
in which the observed data are not well-modeled as Gaussian. We demonstrate the
utility and flexibility of our framework via a series of numerical experiments
on synthetic as well as real data.
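To make the flavor of such a convex program concrete, the sketch below sets up a generic sparse-plus-low-rank estimator: a smooth logistic (Ising-style) pseudolikelihood for binary data stands in for the exponential family conditional likelihood, an l1 penalty on a sparse matrix S encodes conditional interactions among the observed variables, and a nuclear-norm penalty on a positive semidefinite matrix L absorbs the confounding effect of the latent variables; the composite objective is minimized by proximal gradient descent. The abstract does not specify the paper's estimator, so the parameterization, the penalties, and every name in this sketch are illustrative assumptions rather than the authors' actual method.

```python
import numpy as np

def soft_threshold(A, tau):
    """Entrywise soft-thresholding: prox of tau * ||.||_1."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def psd_nuclear_prox(A, tau):
    """Prox of tau * ||.||_* restricted to symmetric PSD matrices:
    soft-threshold the eigenvalues of the symmetrized input."""
    sym = 0.5 * (A + A.T)
    w, V = np.linalg.eigh(sym)
    w = np.maximum(w - tau, 0.0)
    return (V * w) @ V.T

def pseudolik_grad(theta, X):
    """Gradient of a logistic (Ising-style) pseudolikelihood for X in {-1,+1}^{n x p}.
    The diagonal of theta is held at zero (no self-interactions)."""
    n = X.shape[0]
    np.fill_diagonal(theta, 0.0)
    M = X @ theta                        # linear predictor for each node given the others
    P = 1.0 / (1.0 + np.exp(X * M))      # sigmoid(-x_ij * m_ij), entrywise
    G = -(X.T @ (X * P)) / n
    np.fill_diagonal(G, 0.0)             # loss ignores the diagonal, so does its gradient
    return G

def fit_sparse_plus_lowrank(X, lam_sparse=0.05, lam_lowrank=0.5,
                            step=0.1, n_iter=500):
    """Proximal gradient (ISTA) for
        min_{S, L PSD}  f(S - L) + lam_sparse*||S||_1 + lam_lowrank*||L||_*
    where f is the smooth pseudolikelihood above.  S models direct interactions
    among the observed variables; L absorbs the effect of latent variables."""
    p = X.shape[1]
    S = np.zeros((p, p))
    L = np.zeros((p, p))
    for _ in range(n_iter):
        G = pseudolik_grad(S - L, X)
        S = soft_threshold(S - step * G, step * lam_sparse)      # df/dS =  G
        L = psd_nuclear_prox(L + step * G, step * lam_lowrank)   # df/dL = -G
    return S, L

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.choice([-1.0, 1.0], size=(200, 15))   # toy binary data
    S, L = fit_sparse_plus_lowrank(X)
    print("nonzeros in S:", np.count_nonzero(np.abs(S) > 1e-6))
    print("rank of L:", np.linalg.matrix_rank(L, tol=1e-6))
```

A fixed step size is used for brevity; in practice one would use a backtracking line search and select the two regularization parameters in a data-driven way.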
Related papers
- Estimating Causal Effects from Learned Causal Networks [56.14597641617531]
We propose an alternative paradigm for answering causal-effect queries over discrete observable variables.
We learn the causal Bayesian network and its confounding latent variables directly from the observational data.
We show that this model completion learning approach can be more effective than estimand approaches.
arXiv Detail & Related papers (2024-08-26T08:39:09Z)
- Learning Divergence Fields for Shift-Robust Graph Representations [73.11818515795761]
In this work, we propose a geometric diffusion model with learnable divergence fields for the challenging problem with interdependent data.
We derive a new learning objective through causal inference, which can guide the model to learn generalizable patterns of interdependence that are insensitive to distribution shifts across domains.
arXiv Detail & Related papers (2024-06-07T14:29:21Z)
- Extremal graphical modeling with latent variables [0.0]
We propose a tractable convex program for learning extremal graphical models in the presence of latent variables.
Our approach decomposes the Hüsler-Reiss precision matrix into a sparse component encoding the graphical structure among the observed variables and a low-rank component encoding the effect of the latent variables.
We show that it consistently recovers the conditional graph as well as the number of latent variables.
arXiv Detail & Related papers (2024-03-14T17:45:24Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- High-Dimensional Undirected Graphical Models for Arbitrary Mixed Data [2.2871867623460207]
In many applications data span variables of different types, whose principled joint analysis is nontrivial.
Recent advances have shown how the binary-continuous case can be tackled, but the general mixed variable type regime remains challenging.
We propose flexible and scalable methodology for data with variables of entirely general mixed type.
arXiv Detail & Related papers (2022-11-21T18:21:31Z)
- Amortised Inference in Structured Generative Models with Explaining Away [16.92791301062903]
We extend the output of amortised variational inference to incorporate structured factors over multiple variables.
We show that appropriately parameterised factors can be combined efficiently with variational message passing in elaborate graphical structures.
We then fit the structured model to high-dimensional neural spiking time-series from the hippocampus of freely moving rodents.
arXiv Detail & Related papers (2022-09-12T12:52:15Z)
- Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- A Graphical Model for Fusing Diverse Microbiome Data [2.385985842958366]
We introduce a flexible multinomial-Gaussian generative model for jointly modeling such count data.
We present a computationally scalable variational Expectation-Maximization (EM) algorithm for inferring the latent variables and the parameters of the model.
arXiv Detail & Related papers (2022-08-21T17:54:39Z)
- Linear Discriminant Analysis with High-dimensional Mixed Variables [10.774094462083843]
This paper develops a novel approach for classifying high-dimensional observations with mixed variables.
We overcome the challenge of having to split data into exponentially many cells.
Results on the estimation accuracy and the misclassification rates are established.
arXiv Detail & Related papers (2021-12-14T03:57:56Z)
- Learning Conditional Invariance through Cycle Consistency [60.85059977904014]
We propose a novel approach to identify meaningful and independent factors of variation in a dataset.
Our method involves two separate latent subspaces for the target property and the remaining input information.
We demonstrate on synthetic and molecular data that our approach identifies more meaningful factors which lead to sparser and more interpretable models.
arXiv Detail & Related papers (2021-11-25T17:33:12Z)
- Learning Disentangled Representations with Latent Variation Predictability [102.4163768995288]
This paper defines the variation predictability of latent disentangled representations.
Within an adversarial generation process, we encourage variation predictability by maximizing the mutual information between latent variations and corresponding image pairs.
We develop an evaluation metric that does not rely on the ground-truth generative factors to measure the disentanglement of latent representations.
arXiv Detail & Related papers (2020-07-25T08:54:26Z)