Variational Interpretable Learning from Multi-view Data
- URL: http://arxiv.org/abs/2202.13503v2
- Date: Tue, 1 Mar 2022 20:35:50 GMT
- Title: Variational Interpretable Learning from Multi-view Data
- Authors: Lin Qiu, Lynn Lin, Vernon M. Chinchilli
- Abstract summary: DICCA is designed to disentangle both the shared and view-specific variations for multi-view data.
Empirical results on real-world datasets show that our methods are competitive across domains.
- Score: 2.687817337319978
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The main idea of canonical correlation analysis (CCA) is to map different
views onto a common latent space with maximum correlation. We propose a deep
interpretable variational canonical correlation analysis (DICCA) for multi-view
learning. The developed model extends the existing latent variable model for
linear CCA to nonlinear models through the use of deep generative networks.
DICCA is designed to disentangle both the shared and view-specific variations
for multi-view data. To make the model more interpretable, we place a
sparsity-inducing prior on the latent weights within a structured variational
autoencoder composed of view-specific generators. Empirical results on
real-world datasets show that our methods are competitive across domains.
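The abstract describes the architecture only at a high level. As a rough illustration of the idea, and not the authors' implementation, here is a minimal sketch of a two-view VAE with a shared latent block, view-specific latent blocks, and view-specific generators; the layer sizes, the averaging-based fusion of the shared block, and the L1 penalty standing in for the sparsity-inducing prior are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoViewVAE(nn.Module):
    """Sketch of a DICCA-style model: a shared latent block plus
    view-specific blocks, each view decoded by its own generator."""

    def __init__(self, d1, d2, dz_shared=8, dz_view=4, hidden=64):
        super().__init__()
        dz = dz_shared + dz_view
        self.dz_shared = dz_shared
        # Each encoder outputs mean and log-variance for [shared; specific].
        self.enc1 = nn.Sequential(nn.Linear(d1, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 2 * dz))
        self.enc2 = nn.Sequential(nn.Linear(d2, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 2 * dz))
        # View-specific generators map [shared; specific] back to each view.
        self.dec1 = nn.Sequential(nn.Linear(dz, hidden), nn.ReLU(),
                                  nn.Linear(hidden, d1))
        self.dec2 = nn.Sequential(nn.Linear(dz, hidden), nn.ReLU(),
                                  nn.Linear(hidden, d2))

    @staticmethod
    def sample(mu, logvar):
        # Reparameterization trick.
        return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

    def forward(self, x1, x2):
        mu1, lv1 = self.enc1(x1).chunk(2, dim=-1)
        mu2, lv2 = self.enc2(x2).chunk(2, dim=-1)
        s = self.dz_shared
        # Fuse the shared block by averaging both views' parameters
        # (a crude stand-in for a structured variational posterior).
        mu_s = 0.5 * (mu1[:, :s] + mu2[:, :s])
        lv_s = 0.5 * (lv1[:, :s] + lv2[:, :s])
        z_s = self.sample(mu_s, lv_s)
        z1 = self.sample(mu1[:, s:], lv1[:, s:])
        z2 = self.sample(mu2[:, s:], lv2[:, s:])
        xhat1 = self.dec1(torch.cat([z_s, z1], dim=-1))
        xhat2 = self.dec2(torch.cat([z_s, z2], dim=-1))
        # KL divergence of each Gaussian block against a standard normal prior.
        kl = sum(-0.5 * torch.sum(1 + lv - mu ** 2 - lv.exp())
                 for mu, lv in [(mu_s, lv_s),
                                (mu1[:, s:], lv1[:, s:]),
                                (mu2[:, s:], lv2[:, s:])])
        return xhat1, xhat2, kl


def loss(model, x1, x2, l1_weight=1e-3):
    xhat1, xhat2, kl = model(x1, x2)
    recon = (F.mse_loss(xhat1, x1, reduction="sum")
             + F.mse_loss(xhat2, x2, reduction="sum"))
    # L1 on the generators' first-layer weights: a rough proxy for the
    # sparsity-inducing prior on latent weights described in the abstract.
    l1 = model.dec1[0].weight.abs().sum() + model.dec2[0].weight.abs().sum()
    return recon + kl + l1_weight * l1


if __name__ == "__main__":
    model = TwoViewVAE(d1=20, d2=30)
    x1, x2 = torch.randn(8, 20), torch.randn(8, 30)
    print(loss(model, x1, x2).item())
```

Splitting the latent vector into a shared block and per-view blocks is what lets a model of this shape attribute variation either to the common signal or to a single view.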
Related papers
- Learning Divergence Fields for Shift-Robust Graph Representations [73.11818515795761]
In this work, we propose a geometric diffusion model with learnable divergence fields for the challenging setting of interdependent data.
We derive a new learning objective through causal inference, which guides the model to learn generalizable patterns of interdependence that remain insensitive across domains.
arXiv Detail & Related papers (2024-06-07T14:29:21Z)
- Learning multi-modal generative models with permutation-invariant encoders and tighter variational objectives [5.549794481031468]
Devising deep latent variable models for multi-modal data has been a long-standing theme in machine learning research.
In this work, we consider a variational objective that can tightly approximate the data log-likelihood.
We develop more flexible aggregation schemes that avoid the inductive biases in product-of-experts (PoE) or mixture-of-experts (MoE) approaches, contrasted with the standard fusion rules shown below.
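For background (a standard construction in multi-modal VAEs, not this paper's proposal): the product of $M$ diagonal-Gaussian experts $q_m(z \mid x_m) = \mathcal{N}(\mu_m, \sigma_m^2)$ is again Gaussian, with variance and mean

```latex
\sigma^{2} = \Big( \sum_{m=1}^{M} \sigma_m^{-2} \Big)^{-1},
\qquad
\mu = \sigma^{2} \sum_{m=1}^{M} \frac{\mu_m}{\sigma_m^{2}}
```

applied elementwise, while MoE instead averages the experts' densities. The inductive biases the summary refers to stem from committing to one of these fixed fusion rules.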
arXiv Detail & Related papers (2023-09-01T10:32:21Z)
- Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z)
- Encoding Domain Knowledge in Multi-view Latent Variable Models: A Bayesian Approach with Structured Sparsity [7.811916700683125]
MuVI is a novel approach for domain-informed multi-view latent variable models.
We demonstrate that our model is able to integrate noisy domain expertise in the form of feature sets.
arXiv Detail & Related papers (2022-04-13T08:22:31Z)
- Dynamically-Scaled Deep Canonical Correlation Analysis [77.34726150561087]
Canonical Correlation Analysis (CCA) is a method for feature extraction of two views by finding maximally correlated linear projections of them.
We introduce a novel dynamic scaling method for training an input-dependent canonical correlation model.
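For reference (standard background, not specific to this paper), the linear CCA objective that such deep variants extend seeks projections $\mathbf{u}, \mathbf{v}$ maximizing the correlation between the projected views:

```latex
(\mathbf{u}^{*}, \mathbf{v}^{*})
  = \arg\max_{\mathbf{u},\,\mathbf{v}}
    \frac{\mathbf{u}^{\top} \Sigma_{XY}\, \mathbf{v}}
         {\sqrt{\mathbf{u}^{\top} \Sigma_{XX}\, \mathbf{u}}\,
          \sqrt{\mathbf{v}^{\top} \Sigma_{YY}\, \mathbf{v}}}
```

where $\Sigma_{XX}$, $\Sigma_{YY}$, and $\Sigma_{XY}$ are the within- and cross-view covariance matrices.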
arXiv Detail & Related papers (2022-03-23T12:52:49Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- Bayesian Sparse Factor Analysis with Kernelized Observations [67.60224656603823]
Multi-view problems can be addressed with latent variable models.
High dimensionality and non-linearity are traditionally handled by kernel methods.
We propose merging both approaches into a single model.
arXiv Detail & Related papers (2020-06-01T14:25:38Z)
- Variational Inference for Deep Probabilistic Canonical Correlation Analysis [49.36636239154184]
We propose a deep probabilistic multi-view model that is composed of a linear multi-view layer and deep generative networks as observation models.
An efficient variational inference procedure is developed that approximates the posterior distributions of the latent probabilistic multi-view layer.
A generalization to models with an arbitrary number of views is also proposed.
arXiv Detail & Related papers (2020-03-09T17:51:15Z)
- Multiview Representation Learning for a Union of Subspaces [38.68763142172997]
We show that the proposed model and a set of simple mixtures yield improvements over standard CCA.
Our correlation-based objective meaningfully generalizes the CCA objective to a mixture of CCA models.
arXiv Detail & Related papers (2019-12-30T00:44:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.