C$^2$VAE: Gaussian Copula-based VAE Differing Disentangled from Coupled
Representations with Contrastive Posterior
- URL: http://arxiv.org/abs/2309.13303v1
- Date: Sat, 23 Sep 2023 08:33:48 GMT
- Title: C$^2$VAE: Gaussian Copula-based VAE Differing Disentangled from Coupled
Representations with Contrastive Posterior
- Authors: Zhangkai Wu and Longbing Cao
- Abstract summary: We present a self-supervised variational autoencoder (VAE) to jointly learn disentangled and dependent hidden factors.
We then enhance disentangled representation learning by a self-supervised classifier to eliminate coupled representations in a contrastive manner.
- Score: 36.2531431458649
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a self-supervised variational autoencoder (VAE) to jointly learn
disentangled and dependent hidden factors and then enhance disentangled
representation learning by a self-supervised classifier to eliminate coupled
representations in a contrastive manner. To this end, a Contrastive Copula VAE
(C$^2$VAE) is introduced; it neither relies on prior knowledge about the data
in its probabilistic principle nor imposes strong modeling assumptions on the
posterior in its neural architecture. C$^2$VAE simultaneously factorizes the
posterior (evidence lower bound, ELBO) with total correlation (TC)-driven
decomposition for learning factorized disentangled representations and extracts
the dependencies between hidden features by a neural Gaussian copula for
copula-coupled representations. Then, a self-supervised contrastive classifier
differentiates the disentangled representations from the coupled
representations, where a contrastive loss regularizes this contrastive
classification together with the TC loss for eliminating entangled factors and
strengthening disentangled representations. C$^2$VAE demonstrates a strong
effect in enhancing disentangled representation learning. It further improves
optimization by addressing the instability of TC-based VAEs and the trade-off
between reconstruction and representation.
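To make the objective concrete, here is a minimal PyTorch sketch of the kind of loss the abstract describes: a TC-decomposed ELBO, a learned coupling layer standing in for the neural Gaussian copula, and a self-supervised classifier that contrasts disentangled codes against coupled ones. The network sizes, the loss weights `beta` and `lam`, the Cholesky-based `CopulaLayer`, and the choice to decode from the coupled code are all illustrative assumptions, not the authors' implementation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class CopulaLayer(nn.Module):
    """Toy stand-in for the paper's neural Gaussian copula: latents are
    linearly re-coupled through the Cholesky factor of a learned correlation
    matrix (rows normalized so the implied matrix has a unit diagonal)."""
    def __init__(self, latent_dim):
        super().__init__()
        self.tril = nn.Parameter(torch.eye(latent_dim))

    def forward(self, z):
        L = torch.tril(self.tril)
        L = L / L.norm(dim=1, keepdim=True)  # unit rows -> unit diag of L @ L.T
        return z @ L.T                       # "coupled" latents carrying dependencies

def total_correlation(z, mu, logvar):
    """Minibatch-weighted estimate of TC = E[log q(z) - sum_j log q(z_j)],
    in the style of beta-TC-VAE."""
    b = z.size(0)
    # log q(z_i | x_k) per latent dimension, for all (i, k) pairs: [b, b, d]
    log_qz = -0.5 * ((z.unsqueeze(1) - mu.unsqueeze(0)) ** 2 / logvar.exp().unsqueeze(0)
                     + logvar.unsqueeze(0) + math.log(2 * math.pi))
    log_joint = torch.logsumexp(log_qz.sum(2), dim=1) - math.log(b)
    log_marginals = (torch.logsumexp(log_qz, dim=1) - math.log(b)).sum(1)
    return (log_joint - log_marginals).mean()

class C2VAESketch(nn.Module):
    def __init__(self, x_dim=784, latent_dim=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim))
        self.copula = CopulaLayer(latent_dim)
        # self-supervised classifier contrasting disentangled vs. coupled codes
        self.classifier = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                        nn.Linear(64, 1))

    def loss(self, x, beta=6.0, lam=1.0):
        mu, logvar = self.enc(x).chunk(2, dim=1)
        z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)  # reparameterization
        z_c = self.copula(z)                                  # copula-coupled code
        recon = F.mse_loss(self.dec(z_c), x, reduction='sum') / x.size(0)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(1).mean()
        tc = total_correlation(z, mu, logvar)
        # contrastive classification: z labeled 1 (disentangled), z_c labeled 0
        logits = torch.cat([self.classifier(z), self.classifier(z_c.detach())])
        labels = torch.cat([torch.ones(x.size(0), 1),
                            torch.zeros(x.size(0), 1)]).to(x.device)
        contrast = F.binary_cross_entropy_with_logits(logits, labels)
        return recon + kl + beta * tc + lam * contrast
```

One design note on the sketch: detaching z_c in the classifier branch keeps the contrastive gradient from collapsing the copula's learned dependencies; whether C$^2$VAE does this is not stated in the abstract.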
Related papers
- Canonical Correlation Guided Deep Neural Network [14.188285111418516]
We present a canonical correlation guided learning framework realized by deep neural networks (CCDNN).
In the proposed method, the optimization formulation is not restricted to maximizing correlation; instead, canonical correlation is used as a constraint.
To reduce the redundancy induced by correlation, a redundancy filter is designed.
arXiv Detail & Related papers (2024-09-28T16:08:44Z)
- Disentanglement with Factor Quantized Variational Autoencoders [11.086500036180222]
We propose a discrete variational autoencoder (VAE) based model where the ground truth information about the generative factors is not provided to the model.
We demonstrate the advantages of learning discrete representations over learning continuous representations in facilitating disentanglement.
Our method, FactorQVAE, is the first to combine optimization-based disentanglement approaches with discrete representation learning.
arXiv Detail & Related papers (2024-09-23T09:33:53Z)
- Siamese Representation Learning for Unsupervised Relation Extraction [5.776369192706107]
Unsupervised relation extraction (URE) aims at discovering underlying relations between named entity pairs from open-domain plain text.
Existing URE models use contrastive learning, pulling positive samples together and pushing negative samples apart to promote better separation, and have achieved decent results.
We propose Siamese Representation Learning for Unsupervised Relation Extraction -- a novel framework that simply leverages positive pairs for representation learning.
arXiv Detail & Related papers (2023-10-01T02:57:43Z)
- Learning Disentangled Discrete Representations [22.5004558029479]
We show the relationship between discrete latent spaces and disentangled representations by replacing the standard Gaussian variational autoencoder with a tailored categorical variational autoencoder.
We provide both analytical and empirical findings that demonstrate the advantages of discrete VAEs for learning disentangled representations.
arXiv Detail & Related papers (2023-07-26T12:29:58Z)
- Understanding Augmentation-based Self-Supervised Representation Learning via RKHS Approximation and Regression [53.15502562048627]
Recent work has established a connection between self-supervised learning and the approximation of the top eigenspace of a graph Laplacian operator.
This work delves into a statistical analysis of augmentation-based pretraining.
arXiv Detail & Related papers (2023-06-01T15:18:55Z)
- Learning Partial Correlation based Deep Visual Representation for Image Classification [61.0532370259644]
We formulate sparse inverse covariance estimation (SICE) as a novel structured layer of a CNN.
Our work obtains a partial correlation based deep visual representation and mitigates the small-sample problem.
Experiments show the efficacy and superior classification performance of our model.
arXiv Detail & Related papers (2023-04-23T10:09:01Z)
- Causal Disentangled Variational Auto-Encoder for Preference Understanding in Recommendation [50.93536377097659]
This paper introduces the Causal Disentangled Variational Auto-Encoder (CaD-VAE), a novel approach for learning causal disentangled representations from interaction data in recommender systems.
The approach utilizes structural causal models to generate causal representations that describe the causal relationship between latent factors.
arXiv Detail & Related papers (2023-04-17T00:10:56Z)
- HiURE: Hierarchical Exemplar Contrastive Learning for Unsupervised Relation Extraction [60.80849503639896]
Unsupervised relation extraction aims to extract the relationship between entities from natural language sentences without prior information on relational scope or distribution.
We propose a novel contrastive learning framework named HiURE, which derives hierarchical signals from the relational feature space using cross-hierarchy attention.
Experimental results on two public datasets demonstrate the effectiveness and robustness of HiURE on unsupervised relation extraction compared with state-of-the-art models.
arXiv Detail & Related papers (2022-05-04T17:56:48Z)
- Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
The Variational Autoencoder (VAE) approximates the posterior over latent variables via amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z)
- Learning disentangled representations with the Wasserstein Autoencoder [22.54887526392739]
We propose TCWAE (Total Correlation Wasserstein Autoencoder) to penalize the total correlation in latent variables.
We show that working in the WAE paradigm naturally enables the separation of the total-correlation term, thus providing disentanglement control over the learned representation.
We further study the trade-off between disentanglement and reconstruction on more difficult datasets with unknown generative factors, where the flexibility of the WAE paradigm in the reconstruction term improves reconstructions (a sketch of this kind of TC-separated objective appears after this list).
arXiv Detail & Related papers (2020-10-07T14:52:06Z)
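Both C$^2$VAE above and TCWAE hinge on separating a total-correlation term from the training objective. As referenced in the TCWAE entry, below is a hedged sketch of such a WAE-style objective: a deterministic reconstruction term, an MMD match of the aggregate posterior to the prior, and an MMD surrogate for total correlation computed against dimension-shuffled latents. The RBF kernel, the weights `gamma` and `lam`, and the use of MMD in place of a KL-based TC term are illustrative assumptions; TCWAE's actual estimator differs.

```python
import torch

def rbf_mmd2(x, y, sigma=1.0):
    """Biased MMD^2 estimate with an RBF kernel between two sample sets."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

def tcwae_style_loss(x, recon, z, gamma=5.0, lam=1.0):
    rec = (x - recon).pow(2).sum(1).mean()        # deterministic WAE reconstruction
    mmd_prior = rbf_mmd2(z, torch.randn_like(z))  # match aggregate q(z) to N(0, I)
    # approximate samples from prod_j q(z_j): shuffle each dimension across the batch
    shuffled = torch.stack([z[torch.randperm(z.size(0)), j]
                            for j in range(z.size(1))], dim=1)
    tc_like = rbf_mmd2(z, shuffled)               # penalize q(z) far from prod_j q(z_j)
    return rec + gamma * tc_like + lam * mmd_prior
```

Here `z` is a batch of encoded latents of shape [batch, dim]; shuffling each dimension independently across the batch is the standard minibatch trick for drawing approximate samples from the product of marginals, which is what a total-correlation penalty compares the joint against.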