NestedVAE: Isolating Common Factors via Weak Supervision
- URL: http://arxiv.org/abs/2002.11576v1
- Date: Wed, 26 Feb 2020 15:49:57 GMT
- Authors: Matthew J. Vowels, Necati Cihan Camgoz and Richard Bowden
- Abstract summary: We identify the connection between the task of bias reduction and that of isolating factors common between domains.
To isolate the common factors we combine the theory of deep latent variable models with information bottleneck theory.
Two outer VAEs with shared weights attempt to reconstruct the input and infer a latent space, whilst a nested VAE attempts to reconstruct the latent representation of one image from the latent representation of its paired image.
- Score: 45.366986365879505
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fair and unbiased machine learning is an important and active field of
research, as decision processes are increasingly driven by models that learn
from data. Unfortunately, any biases present in the data may be learned by the
model, thereby inappropriately transferring that bias into the decision making
process. We identify the connection between the task of bias reduction and that
of isolating factors common between domains whilst encouraging domain-specific
invariance. To isolate the common factors we combine the theory of deep latent
variable models with information bottleneck theory for scenarios whereby data
may be naturally paired across domains and no additional supervision is
required. The result is the Nested Variational AutoEncoder (NestedVAE). Two
outer VAEs with shared weights attempt to reconstruct the input and infer a
latent space, whilst a nested VAE attempts to reconstruct the latent
representation of one image from the latent representation of its paired
image. In so doing, the nested VAE isolates the common latent factors/causes
and becomes invariant to unwanted factors that are not shared between paired
images. We also propose a new metric to provide a balanced method of evaluating
consistency and classifier performance across domains, which we refer to as the
Adjusted Parity metric. An evaluation of NestedVAE on both domain and attribute
invariance, change detection, and learning common factors for the prediction of
biological sex demonstrates that NestedVAE significantly outperforms
alternative methods.
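The paired architecture described in the abstract can be sketched in code. The following is a minimal, untrained NumPy illustration of the idea only, not the authors' implementation: the layer sizes (`x_dim`, `z_dim`, `u_dim`), the `linear` helper, and the equal loss weighting are all assumptions, and the KL-divergence terms of the full VAE objectives are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    # Hypothetical helper: a random, untrained linear layer.
    W = rng.standard_normal((in_dim, out_dim)) * 0.1
    return lambda x: x @ W

# Outer VAE: the same encoder/decoder weights are shared across
# both inputs of a pair.
x_dim, z_dim, u_dim = 64, 16, 8  # assumed sizes, not from the paper
enc_mu, enc_logvar = linear(x_dim, z_dim), linear(x_dim, z_dim)
dec = linear(z_dim, x_dim)

# Nested VAE: maps the latent of one image to the latent of its pair
# through a narrower bottleneck u, which can only carry shared factors.
nest_enc_mu, nest_enc_logvar = linear(z_dim, u_dim), linear(z_dim, u_dim)
nest_dec = linear(u_dim, z_dim)

def reparam(mu, logvar):
    # Standard reparameterisation trick: z = mu + sigma * eps.
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def forward(x1, x2):
    # Outer VAEs reconstruct each input and infer a latent space.
    z1 = reparam(enc_mu(x1), enc_logvar(x1))
    z2 = reparam(enc_mu(x2), enc_logvar(x2))
    x1_rec, x2_rec = dec(z1), dec(z2)
    # Nested VAE reconstructs z2 from z1 via the bottleneck u; factors
    # not shared between the paired images cannot help this prediction.
    u = reparam(nest_enc_mu(z1), nest_enc_logvar(z1))
    z2_from_z1 = nest_dec(u)
    # Reconstruction losses only (KL terms omitted in this sketch).
    outer = np.mean((x1 - x1_rec) ** 2) + np.mean((x2 - x2_rec) ** 2)
    nested = np.mean((z2 - z2_from_z1) ** 2)
    return outer + nested

x1 = rng.standard_normal((4, x_dim))
x2 = rng.standard_normal((4, x_dim))
loss = forward(x1, x2)
```

In a trained model, minimising the nested reconstruction term is what drives the bottleneck representation `u` towards the common factors and away from the unshared, pair-specific ones.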
Related papers
- Counterfactual Fairness through Transforming Data Orthogonal to Bias [7.109458605736819]
We propose a novel data pre-processing algorithm, Orthogonal to Bias (OB).
OB is designed to eliminate the influence of a group of continuous sensitive variables, thus promoting counterfactual fairness in machine learning applications.
OB is model-agnostic, making it applicable to a wide range of machine learning models and tasks.
arXiv Detail & Related papers (2024-03-26T16:40:08Z)
- Causal Inference via Style Transfer for Out-of-distribution Generalisation [10.998592702137858]
Out-of-distribution generalisation aims to build a model that can generalise well on an unseen target domain.
We propose a novel method that effectively deals with hidden confounders by successfully implementing front-door adjustment.
arXiv Detail & Related papers (2022-12-06T15:43:54Z)
- On the Strong Correlation Between Model Invariance and Generalization [54.812786542023325]
Generalization captures a model's ability to classify unseen data.
Invariance measures the consistency of model predictions on transformations of the data.
From a dataset-centric view, we find that a given model's accuracy and invariance are linearly correlated across different test sets.
arXiv Detail & Related papers (2022-07-14T17:08:25Z)
- Learning Conditional Invariance through Cycle Consistency [60.85059977904014]
We propose a novel approach to identify meaningful and independent factors of variation in a dataset.
Our method involves two separate latent subspaces for the target property and the remaining input information.
We demonstrate on synthetic and molecular data that our approach identifies more meaningful factors, which lead to sparser and more interpretable models.
arXiv Detail & Related papers (2021-11-25T17:33:12Z)
- Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z)
- Instrumental Variable-Driven Domain Generalization with Unobserved Confounders [53.735614014067394]
Domain generalization (DG) aims to learn from multiple source domains a model that can generalize well on unseen target domains.
We propose an instrumental variable-driven DG method (IV-DG) that removes the bias of the unobserved confounders with two-stage learning.
In the first stage, it learns the conditional distribution of the input features of one domain given the input features of another domain.
In the second stage, it estimates the relationship by predicting labels with the learned conditional distribution.
arXiv Detail & Related papers (2021-10-04T13:32:57Z)
- Learning Disentangled Representations with Latent Variation Predictability [102.4163768995288]
This paper defines the variation predictability of latent disentangled representations.
Within an adversarial generation process, we encourage variation predictability by maximizing the mutual information between latent variations and corresponding image pairs.
We develop an evaluation metric that does not rely on the ground-truth generative factors to measure the disentanglement of latent representations.
arXiv Detail & Related papers (2020-07-25T08:54:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.