Mixture Representation Learning with Coupled Autoencoders
- URL: http://arxiv.org/abs/2007.09880v3
- Date: Tue, 13 Apr 2021 02:02:27 GMT
- Title: Mixture Representation Learning with Coupled Autoencoders
- Authors: Yeganeh M. Marghi, Rohan Gala, Uygar Sümbül
- Abstract summary: We propose an unsupervised variational framework using multiple interacting networks called cpl-mixVAE.
In this framework, the mixture representation of each network is regularized by imposing a consensus constraint on the discrete factor.
We use the proposed method to jointly uncover discrete and continuous factors of variability describing gene expression in a single-cell transcriptomic dataset.
- Score: 1.589915930948668
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Jointly identifying a mixture of discrete and continuous factors of
variability without supervision is a key problem in unraveling complex
phenomena. Variational inference has emerged as a promising method to learn
interpretable mixture representations. However, posterior approximation in
high-dimensional latent spaces, particularly for discrete factors, remains
challenging. Here, we propose an unsupervised variational framework using
multiple interacting networks called cpl-mixVAE that scales well to
high-dimensional discrete settings. In this framework, the mixture
representation of each network is regularized by imposing a consensus
constraint on the discrete factor. We justify the use of this framework by
providing both theoretical and experimental results. Finally, we use the
proposed method to jointly uncover discrete and continuous factors of
variability describing gene expression in a single-cell transcriptomic dataset
profiling more than a hundred cortical neuron types.
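The consensus constraint at the heart of cpl-mixVAE can be made concrete: each network (arm) produces a categorical posterior over the discrete factor, and a penalty on the disagreement between arms regularizes the mixture representation. Below is a minimal sketch of one such penalty, a symmetric KL divergence between the two arms' categorical posteriors; the function names, the choice of divergence, and the loss composition in the comment are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def consensus_penalty(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    # Categorical posteriors of the two coupled arms over the discrete factor.
    log_qa = F.log_softmax(logits_a, dim=-1)
    log_qb = F.log_softmax(logits_b, dim=-1)
    qa, qb = log_qa.exp(), log_qb.exp()
    # Symmetric KL: minimizing it drives the two arms toward a consensus
    # assignment of the discrete (e.g., cell type) factor.
    kl_ab = (qa * (log_qa - log_qb)).sum(dim=-1)
    kl_ba = (qb * (log_qb - log_qa)).sum(dim=-1)
    return 0.5 * (kl_ab + kl_ba).mean()

# Hypothetical usage: each arm encodes its own augmented copy of a batch x,
# and the coupled objective adds the consensus term to the per-arm ELBOs:
#   loss = elbo_a + elbo_b + lam * consensus_penalty(enc_a(x1), enc_b(x2))
```

Coupling the arms only through this term leaves each network free to learn its own continuous factor while the discrete factor is pushed toward agreement.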
Related papers
- Collaborative Heterogeneous Causal Inference Beyond Meta-analysis [68.4474531911361]
We propose a collaborative inverse propensity score estimator for causal inference with heterogeneous data.
Our method shows significant improvements over the methods based on meta-analysis when heterogeneity increases.
arXiv Detail & Related papers (2024-04-24T09:04:36Z)
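As a pointer to what the collaborative estimator above builds on, here is a minimal sketch of per-site inverse propensity score (IPW) estimation followed by a simple pooled combination; the sample-size weighting is an illustrative stand-in, not the paper's actual collaborative weighting.

```python
import numpy as np

def ipw_ate(y: np.ndarray, t: np.ndarray, e: np.ndarray) -> float:
    """y: outcomes, t: binary treatment, e: estimated propensity P(t=1|x)."""
    # Horvitz-Thompson style inverse propensity weighted treatment effect.
    return float(np.mean(t * y / e - (1 - t) * y / (1 - e)))

def pooled_ate(site_estimates, site_sizes) -> float:
    # Simple size-weighted pooling across heterogeneous sites (a stand-in
    # for the paper's collaborative weighting scheme).
    return float(np.average(site_estimates, weights=np.asarray(site_sizes, float)))
```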
- A Generalized Multiscale Bundle-Based Hyperspectral Sparse Unmixing Algorithm [8.616208042031877]
In hyperspectral sparse unmixing, a successful approach employs spectral bundles to address the variability of the endmembers in the spatial domain.
We generalize a multiscale spatial regularization approach to solve the unmixing problem by incorporating group sparsity-inducing mixed norms.
arXiv Detail & Related papers (2024-01-24T00:37:14Z)
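The group-sparsity-inducing mixed norm mentioned above is typically an l_{2,1} norm over abundance coefficients grouped by spectral bundle; a minimal sketch follows, where the row grouping is an assumption for illustration rather than this paper's exact formulation.

```python
import numpy as np

def l21_norm(A: np.ndarray) -> float:
    # Sum of the l2 norms of the rows of A: penalizing this drives entire
    # rows (here, whole spectral bundles) to zero at once.
    return float(np.sum(np.linalg.norm(A, axis=1)))
```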
- Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse Actions, Interventions and Sparse Temporal Dependencies [58.179981892921056]
This work introduces a novel principle for disentanglement we call mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
arXiv Detail & Related papers (2024-01-10T02:38:21Z)
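One common way to "regularize the learned causal graph to be sparse", as the summary above puts it, is an L1-style penalty on a learned soft adjacency matrix; the sketch below illustrates only the principle and is not this paper's estimator.

```python
import torch

def sparse_graph_penalty(adj_logits: torch.Tensor, lam: float = 1e-2) -> torch.Tensor:
    # Sigmoid maps unconstrained parameters to edge probabilities; their sum
    # (an L1 norm, since entries are nonnegative) is penalized to favor
    # sparse learned graphs.
    return lam * torch.sigmoid(adj_logits).sum()
```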
- Learning Linear Causal Representations from Interventions under General Nonlinear Mixing [52.66151568785088]
We prove strong identifiability results given unknown single-node interventions without access to the intervention targets.
This is the first instance of causal identifiability from non-paired interventions for deep neural network embeddings.
arXiv Detail & Related papers (2023-06-04T02:32:12Z)
- Structure-preserving GANs [6.438897276587413]
We introduce structure-preserving GANs as a data-efficient framework for learning distributions.
We show that we can reduce the discriminator space to its projection on the invariant discriminator space.
We contextualize our framework by building symmetry-preserving GANs for distributions with intrinsic group symmetry.
arXiv Detail & Related papers (2022-02-02T16:40:04Z)
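The projection onto the invariant discriminator space described above can be realized by group averaging (a Reynolds-operator-style symmetrization). A minimal sketch follows for the example group of 90-degree image rotations, which is an assumed choice for illustration.

```python
import torch

def invariant_discriminator(disc, x: torch.Tensor) -> torch.Tensor:
    # Average the discriminator over the four rotations of x (NCHW layout
    # assumed), yielding a rotation-invariant discriminator without
    # changing its parameters.
    outs = [disc(torch.rot90(x, k, dims=(-2, -1))) for k in range(4)]
    return torch.stack(outs, dim=0).mean(dim=0)
```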
- Fluctuations, Bias, Variance & Ensemble of Learners: Exact Asymptotics for Convex Losses in High-Dimension [25.711297863946193]
We develop a theory for the study of fluctuations in an ensemble of generalised linear models trained on different, but correlated, features.
We provide a complete description of the joint distribution of the empirical risk minimiser for generic convex loss and regularisation in the high-dimensional limit.
arXiv Detail & Related papers (2022-01-31T17:44:58Z)
- Sparse Communication via Mixed Distributions [29.170302047339174]
We build theoretical foundations for "mixed random variables".
Our framework suggests two strategies for representing and sampling mixed random variables.
We experiment with both approaches on an emergent communication benchmark.
arXiv Detail & Related papers (2021-08-05T14:49:03Z)
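A "mixed random variable" of the kind the paper above studies combines point masses with a continuous density; the sketch below samples one on [0, 1] with atoms at the endpoints. The particular atom probabilities and the Beta density are illustrative assumptions, not the paper's parameterization.

```python
import numpy as np

def sample_mixed(p0: float = 0.3, p1: float = 0.3, size: int = 1, rng=None):
    # With probability p0 return exactly 0, with probability p1 exactly 1,
    # otherwise draw from a continuous Beta(2, 2) density on (0, 1).
    if rng is None:
        rng = np.random.default_rng()
    u = rng.random(size)
    out = rng.beta(2.0, 2.0, size)
    out[u < p0] = 0.0
    out[(u >= p0) & (u < p0 + p1)] = 1.0
    return out
```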
- Decentralized Local Stochastic Extra-Gradient for Variational Inequalities [125.62877849447729]
We consider distributed variational inequalities (VIs) on domains with the problem data that is heterogeneous (non-IID) and distributed across many devices.
We make a very general assumption on the computational network that covers the settings of fully decentralized calculations.
We theoretically analyze its convergence rate in the strongly-monotone, monotone, and non-monotone settings.
arXiv Detail & Related papers (2021-06-15T17:45:51Z)
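For reference, one round of a decentralized extra-gradient scheme like the one above consists of an extrapolation step on each node's local operator, an update evaluated at the extrapolated point, and gossip averaging with a mixing matrix. The operators and mixing matrix below are placeholders, not the paper's exact method.

```python
import numpy as np

def extragradient_round(x: np.ndarray, ops, W: np.ndarray, step: float) -> np.ndarray:
    """x: (n_nodes, d) iterates; ops: list of local operators F_i;
    W: doubly stochastic mixing matrix encoding the network."""
    # Extrapolation: x_half = x - step * F(x), node by node.
    x_half = x - step * np.stack([F(xi) for F, xi in zip(ops, x)])
    # Update: evaluate the operators at the extrapolated point.
    x_new = x - step * np.stack([F(xi) for F, xi in zip(ops, x_half)])
    # Communication: average iterates with neighbors via the mixing matrix.
    return W @ x_new
```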
- Learning Disentangled Representations with Latent Variation Predictability [102.4163768995288]
This paper defines the variation predictability of latent disentangled representations.
Within an adversarial generation process, we encourage variation predictability by maximizing the mutual information between latent variations and corresponding image pairs.
We develop an evaluation metric that does not rely on the ground-truth generative factors to measure the disentanglement of latent representations.
arXiv Detail & Related papers (2020-07-25T08:54:26Z)
- Accounting for Unobserved Confounding in Domain Generalization [107.0464488046289]
This paper investigates the problem of learning robust, generalizable prediction models from a combination of datasets.
Part of the challenge of learning robust models lies in the influence of unobserved confounders.
We demonstrate the empirical performance of our approach on healthcare data from different modalities.
arXiv Detail & Related papers (2020-07-21T08:18:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.