Multi-view hierarchical Variational AutoEncoders with Factor Analysis
latent space
- URL: http://arxiv.org/abs/2207.09185v1
- Date: Tue, 19 Jul 2022 10:46:02 GMT
- Title: Multi-view hierarchical Variational AutoEncoders with Factor Analysis
latent space
- Authors: Alejandro Guerrero-López, Carlos Sevilla-Salcedo, Vanessa
Gómez-Verdejo, Pablo M. Olmos
- Abstract summary: We propose a novel method to combine multiple Variational AutoEncoders with a Factor Analysis latent space.
We create an interpretable hierarchical dependency between private and shared information.
This way, the novel model is able to simultaneously: (i) learn from multiple heterogeneous views, (ii) obtain an interpretable hierarchical shared space, and (iii) perform transfer learning between generative models.
- Score: 67.60224656603823
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Real-world databases are complex: they usually present redundancy
and shared correlations between multiple heterogeneous representations of the
same data. Exploiting and disentangling the information shared between views is
therefore critical. For this purpose, recent studies often fuse all views into
a shared nonlinear latent space, but in doing so they lose interpretability. To overcome
this limitation, here we propose a novel method to combine multiple Variational
AutoEncoders (VAE) architectures with a Factor Analysis latent space (FA-VAE).
Concretely, we use a VAE to learn a private representation of each
heterogeneous view in a continuous latent space. Then, we model the shared
latent space by projecting every private variable to a low-dimensional latent
space using a linear projection matrix. Thus, we create an interpretable
hierarchical dependency between private and shared information. This way, the
novel model is able to simultaneously: (i) learn from multiple heterogeneous
views, (ii) obtain an interpretable hierarchical shared space, and (iii)
perform transfer learning between generative models.
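The hierarchy the abstract describes (per-view private latents tied together by a linear Factor Analysis projection) can be sketched numerically. The snippet below is a simplified illustration, not the paper's implementation: the dimensions, the two simulated views, and the SVD-based factor estimate are all assumptions standing in for the VAE encoders and the learned projection matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): two views with private
# latent sizes 8 and 12, and a shared FA space of dimension 3.
n, d1, d2, k = 200, 8, 12, 3

# Stand-ins for the private latent codes each view's VAE encoder would
# produce; here they are generated from a common low-rank shared factor,
# mimicking the paper's hierarchical dependency.
Z_shared = rng.normal(size=(n, k))                   # true shared factors
W1 = rng.normal(size=(k, d1))                        # linear projection, view 1
W2 = rng.normal(size=(k, d2))                        # linear projection, view 2
z1 = Z_shared @ W1 + 0.1 * rng.normal(size=(n, d1))  # view-1 private latents
z2 = Z_shared @ W2 + 0.1 * rng.normal(size=(n, d2))  # view-2 private latents

# FA-style shared space: recover k shared factors from the concatenated
# private latents via a truncated SVD (a simplification of fitting the
# linear projection matrices jointly with the VAEs).
Z_cat = np.hstack([z1, z2])
Z_centered = Z_cat - Z_cat.mean(axis=0)
U, S, Vt = np.linalg.svd(Z_centered, full_matrices=False)
Z_hat = U[:, :k] * S[:k]                             # estimated shared factors

# If the hierarchy holds, k shared factors explain most of the
# private-latent variance across both views.
recon = Z_hat @ Vt[:k]
explained = 1.0 - np.var(Z_centered - recon) / np.var(Z_centered)
print(f"variance explained by {k} shared factors: {explained:.3f}")
```

Because the private latents were generated from a rank-3 shared source plus small noise, the three estimated factors explain nearly all of their variance, which is the interpretable low-dimensional structure the FA latent space is meant to expose.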
Related papers
- Learning Divergence Fields for Shift-Robust Graph Representations [73.11818515795761]
In this work, we propose a geometric diffusion model with learnable divergence fields for the challenging problem with interdependent data.
We derive a new learning objective through causal inference, which can guide the model to learn generalizable patterns of interdependence that are insensitive across domains.
arXiv Detail & Related papers (2024-06-07T14:29:21Z) - Generalizable Heterogeneous Federated Cross-Correlation and Instance
Similarity Learning [60.058083574671834]
This paper presents a novel FCCL+, federated correlation and similarity learning with non-target distillation.
For the heterogeneity issue, we leverage irrelevant unlabeled public data for communication.
For catastrophic forgetting in the local updating stage, FCCL+ introduces Federated Non Target Distillation.
arXiv Detail & Related papers (2023-09-28T09:32:27Z) - Subspace-Contrastive Multi-View Clustering [0.0]
We propose a novel Subspace-Contrastive Multi-View Clustering (SCMC) approach.
We employ view-specific auto-encoders to map the original multi-view data into compact features perceiving its nonlinear structures.
To demonstrate the effectiveness of the proposed model, we conduct extensive comparative experiments on eight challenging datasets.
arXiv Detail & Related papers (2022-10-13T07:19:37Z) - Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z) - Adaptively-weighted Integral Space for Fast Multiview Clustering [54.177846260063966]
We propose an Adaptively-weighted Integral Space for Fast Multiview Clustering (AIMC) with nearly linear complexity.
Specifically, view generation models are designed to reconstruct the view observations from the latent integral space.
Experiments conducted on several real-world datasets confirm the superiority of the proposed AIMC method.
arXiv Detail & Related papers (2022-08-25T05:47:39Z) - Tensor-based Multi-view Spectral Clustering via Shared Latent Space [14.470859959783995]
Multi-view Spectral Clustering (MvSC) attracts increasing attention due to diverse data sources.
A new method for MvSC is proposed via a shared latent space derived from the Restricted Kernel Machine framework.
arXiv Detail & Related papers (2022-07-23T17:30:54Z) - Encoding Domain Knowledge in Multi-view Latent Variable Models: A
Bayesian Approach with Structured Sparsity [7.811916700683125]
MuVI is a novel approach for domain-informed multi-view latent variable models.
We demonstrate that our model is able to integrate noisy domain expertise in form of feature sets.
arXiv Detail & Related papers (2022-04-13T08:22:31Z) - Variational Interpretable Learning from Multi-view Data [2.687817337319978]
DICCA is designed to disentangle both the shared and view-specific variations for multi-view data.
Empirical results on real-world datasets show that our methods are competitive across domains.
arXiv Detail & Related papers (2022-02-28T01:56:44Z) - Latent Correlation-Based Multiview Learning and Self-Supervision: A
Unifying Perspective [41.80156041871873]
This work puts forth a theory-backed framework for unsupervised multiview learning.
Our development starts with proposing a multiview model, where each view is a nonlinear mixture of shared and private components.
In addition, the private information in each view can be provably disentangled from the shared using proper regularization design.
arXiv Detail & Related papers (2021-06-14T00:12:36Z) - Variational Inference for Deep Probabilistic Canonical Correlation
Analysis [49.36636239154184]
We propose a deep probabilistic multi-view model that is composed of a linear multi-view layer and deep generative networks as observation models.
An efficient variational inference procedure is developed that approximates the posterior distributions of the latent probabilistic multi-view layer.
A generalization to models with arbitrary number of views is also proposed.
arXiv Detail & Related papers (2020-03-09T17:51:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.