Evaluation of Latent Space Disentanglement in the Presence of
Interdependent Attributes
- URL: http://arxiv.org/abs/2110.05587v1
- Date: Mon, 11 Oct 2021 20:01:14 GMT
- Title: Evaluation of Latent Space Disentanglement in the Presence of
Interdependent Attributes
- Authors: Karn N. Watcharasupat and Alexander Lerch
- Abstract summary: Controllable music generation with deep generative models has become increasingly reliant on disentanglement learning techniques.
We propose a dependency-aware information metric as a drop-in replacement for MIG that accounts for the inherent relationship between semantic attributes.
- Score: 78.8942067357231
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Controllable music generation with deep generative models has become
increasingly reliant on disentanglement learning techniques. However, current
disentanglement metrics, such as mutual information gap (MIG), are often
inadequate and misleading when used for evaluating latent representations in
the presence of interdependent semantic attributes often encountered in
real-world music datasets. In this work, we propose a dependency-aware
information metric as a drop-in replacement for MIG that accounts for the
inherent relationship between semantic attributes.
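For context, the baseline metric being replaced can be sketched as follows. This is a minimal numpy implementation of the standard mutual information gap for discrete attributes; the variable names and the plug-in discrete-MI estimator are our own illustrative choices, not code from the paper:

```python
import numpy as np

def discrete_mi(x, y):
    """Plug-in estimate of mutual information I(x; y), in nats,
    for discrete 1-D arrays."""
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((xs.size, ys.size))
    np.add.at(joint, (xi, yi), 1.0)        # joint count table
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px * py)[nz])).sum())

def mig(latents, attrs):
    """Mutual information gap: for each attribute v_k, the gap between
    the two most informative latent dimensions, normalized by H(v_k),
    averaged over attributes.
    latents: (n_samples, n_latents) discrete codes
    attrs:   (n_samples, n_attrs) discrete attribute labels
    """
    n_latents, n_attrs = latents.shape[1], attrs.shape[1]
    mi = np.array([[discrete_mi(latents[:, j], attrs[:, k])
                    for k in range(n_attrs)] for j in range(n_latents)])
    ent = np.array([discrete_mi(attrs[:, k], attrs[:, k])  # H(v_k)
                    for k in range(n_attrs)])
    top = np.sort(mi, axis=0)[::-1]        # per-attribute MI, descending
    return float(((top[0] - top[1]) / ent).mean())
```

Note that each term I(z_j; v_k) also counts information that v_k shares with other attributes, which is why a plain gap of this form can mislead when attributes are interdependent, as the abstract argues; the proposed dependency-aware metric is a drop-in replacement at this point in an evaluation pipeline.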
Related papers
- A Hybrid Framework for Spatial Interpolation: Merging Data-driven with Domain Knowledge [2.6819326095717764]
We propose a hybrid framework that integrates data-driven spatial dependency feature extraction with rule-assisted spatial dependency function mapping.
We demonstrate the superior performance of our framework in two comparative application scenarios.
arXiv Detail & Related papers (2024-08-28T22:02:42Z) - Intrinsic Dimension Correlation: uncovering nonlinear connections in multimodal representations [0.4223422932643755]
This paper exploits the entanglement between intrinsic dimensionality and correlation to propose a metric that quantifies the correlation.
We first validate our method on synthetic data in controlled environments, showcasing its advantages and drawbacks compared to existing techniques.
We extend our analysis to large-scale applications in neural network representations.
arXiv Detail & Related papers (2024-06-22T10:36:04Z) - Spurious Correlations in Machine Learning: A Survey [27.949532561102206]
Machine learning systems are sensitive to spurious correlations between non-essential features of the inputs and labels.
These features and their correlations with the labels are known as "spurious" because they tend to change with shifts in real-world data distributions.
We provide a review of this issue, along with a taxonomy of current state-of-the-art methods for addressing spurious correlations in machine learning models.
arXiv Detail & Related papers (2024-02-20T04:49:34Z) - Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
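The quantization step behind this inductive bias can be illustrated with a small sketch. This is our own illustrative code assuming a per-dimension scalar codebook; it is not the authors' implementation, and training details such as the straight-through gradient are omitted:

```python
import numpy as np

def quantize_latents(z, codebooks):
    """Snap each latent dimension to the nearest entry of that
    dimension's scalar codebook.
    z:         (n, d) continuous latent codes
    codebooks: (d, k) candidate scalar values per dimension
    Returns the quantized latents, shape (n, d).
    """
    # |z - code| for every sample, dimension, and codebook entry
    dist = np.abs(z[:, :, None] - codebooks[None, :, :])  # (n, d, k)
    idx = dist.argmin(axis=-1)                            # (n, d)
    d = z.shape[1]
    # pick codebooks[j, idx[i, j]] for each sample i and dimension j
    return codebooks[np.arange(d), idx]
```

Restricting each dimension to a small discrete set of values is the organizing pressure on the latent space that the summary above refers to.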
arXiv Detail & Related papers (2023-05-28T06:30:29Z) - Disentanglement and Generalization Under Correlation Shifts [22.499106910581958]
Correlations between factors of variation are prevalent in real-world data.
Machine learning algorithms may benefit from exploiting such correlations, as they can increase predictive performance on noisy data.
We aim to learn representations which capture different factors of variation in latent subspaces.
arXiv Detail & Related papers (2021-12-29T18:55:17Z) - Learning Conditional Invariance through Cycle Consistency [60.85059977904014]
We propose a novel approach to identify meaningful and independent factors of variation in a dataset.
Our method involves two separate latent subspaces for the target property and the remaining input information.
We demonstrate on synthetic and molecular data that our approach identifies more meaningful factors which lead to sparser and more interpretable models.
arXiv Detail & Related papers (2021-11-25T17:33:12Z) - Learning Multimodal VAEs through Mutual Supervision [72.77685889312889]
MEME combines information between modalities implicitly through mutual supervision.
We demonstrate that MEME outperforms baselines on standard metrics across both partial and complete observation schemes.
arXiv Detail & Related papers (2021-06-23T17:54:35Z) - OR-Net: Pointwise Relational Inference for Data Completion under Partial
Observation [51.083573770706636]
This work uses relational inference to fill in the incomplete data.
We propose Omni-Relational Network (OR-Net) to model the pointwise relativity in two aspects.
arXiv Detail & Related papers (2021-05-02T06:05:54Z) - On Disentangled Representations Learned From Correlated Data [59.41587388303554]
We bridge the gap to real-world scenarios by analyzing the behavior of the most prominent disentanglement approaches on correlated data.
We show that systematically induced correlations in the dataset are being learned and reflected in the latent representations.
We also demonstrate how to resolve these latent correlations, either using weak supervision during training or by post-hoc correcting a pre-trained model with a small number of labels.
arXiv Detail & Related papers (2020-06-14T12:47:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.