Rethinking Content and Style: Exploring Bias for Unsupervised
Disentanglement
- URL: http://arxiv.org/abs/2102.10544v1
- Date: Sun, 21 Feb 2021 08:04:33 GMT
- Title: Rethinking Content and Style: Exploring Bias for Unsupervised
Disentanglement
- Authors: Xuanchi Ren, Tao Yang, Yuwang Wang, Wenjun Zeng
- Abstract summary: We propose a formulation for unsupervised C-S disentanglement based on our assumption that different factors are of different importance and popularity for image reconstruction.
The corresponding model inductive bias is introduced by our proposed C-S Disentanglement Module (C-S DisMo).
Experiments on several popular datasets demonstrate that our method achieves state-of-the-art unsupervised C-S disentanglement.
- Score: 59.033559925639075
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Content and style (C-S) disentanglement intends to decompose the underlying
explanatory factors of objects into two independent subspaces. From the
unsupervised disentanglement perspective, we rethink content and style and
propose a formulation for unsupervised C-S disentanglement based on our
assumption that different factors are of different importance and popularity
for image reconstruction, which serves as a data bias. The corresponding model
inductive bias is introduced by our proposed C-S Disentanglement Module (C-S
DisMo), which assigns different and independent roles to content and style when
approximating the real data distributions. Specifically, each content embedding
from the dataset, which encodes the most dominant factors for image
reconstruction, is assumed to be sampled from a shared distribution across the
dataset. The style embedding for a particular image, encoding the remaining
factors, is used to customize the shared distribution through an affine
transformation. The experiments on several popular datasets demonstrate that
our method achieves the state-of-the-art unsupervised C-S disentanglement,
which is comparable or even better than supervised methods. We verify the
effectiveness of our method by downstream tasks: domain translation and
single-view 3D reconstruction. Project page at
https://github.com/xrenaa/CS-DisMo.
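The core mechanism lends itself to a short illustration. Below is a minimal PyTorch sketch of the idea described in the abstract, not the authors' implementation: content codes are normalized toward a distribution shared across the dataset, and a per-image style code produces the scale and shift of an affine transform that customizes that shared distribution. The module name, dimensions, and the batch normalization used to realize the "shared distribution" assumption are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class CSDisMoSketch(nn.Module):
    """Illustrative sketch of the C-S disentanglement idea (not the
    authors' code): content embeddings are pushed toward a distribution
    shared across the dataset, and a per-image style embedding
    customizes that distribution via an affine transform."""

    def __init__(self, content_dim=64, style_dim=16):
        super().__init__()
        # Hypothetical style head: maps a style code to per-channel
        # scale (gamma) and shift (beta) for the affine transform.
        self.style_to_affine = nn.Linear(style_dim, 2 * content_dim)

    def forward(self, content, style):
        # Normalize content so the batch of content codes resembles a
        # shared zero-mean, unit-variance distribution (one simple way
        # to realize the shared-distribution assumption).
        content = (content - content.mean(0)) / (content.std(0) + 1e-5)
        gamma, beta = self.style_to_affine(style).chunk(2, dim=-1)
        # Style customizes the shared content distribution.
        return (1 + gamma) * content + beta

# Usage: in the paper's pipeline the transformed code would be fed to
# a generator/decoder; only the affine customization step is shown.
z = CSDisMoSketch()(torch.randn(8, 64), torch.randn(8, 16))
print(z.shape)  # torch.Size([8, 64])
```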
Related papers
- Common-Sense Bias Discovery and Mitigation for Classification Tasks [16.8259488742528]
We propose a framework to extract feature clusters in a dataset based on image descriptions.
The analyzed features and correlations are human-interpretable, so we name the method Common-Sense Bias Discovery (CSBD).
Experiments show that our method discovers novel biases on multiple classification tasks for two benchmark image datasets.
arXiv Detail & Related papers (2024-01-24T03:56:07Z)
- Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
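For readers unfamiliar with latent quantization, here is a generic, self-contained sketch of the underlying operation (nearest-neighbor snapping of each latent coordinate to a small learned codebook, with a straight-through gradient); the scalar shared codebook and all sizes are simplifying assumptions, not the paper's exact scheme.

```python
import torch
import torch.nn as nn

class LatentQuantizerSketch(nn.Module):
    """Generic sketch of quantizing a continuous latent code with a
    small learned codebook; illustrative only."""

    def __init__(self, num_codes=16):
        super().__init__()
        # One scalar codebook shared by all latent dimensions
        # (a simplifying assumption for this sketch).
        self.codebook = nn.Parameter(torch.linspace(-1.0, 1.0, num_codes))

    def forward(self, z):  # z: (batch, latent_dim)
        # Snap each latent coordinate to its nearest codebook value.
        dists = (z.unsqueeze(-1) - self.codebook) ** 2
        z_q = self.codebook[dists.argmin(-1)]
        # Straight-through estimator: forward pass uses the quantized
        # values, gradients flow back to the continuous code z.
        return z + (z_q - z).detach()

quant = LatentQuantizerSketch()
z = torch.randn(4, 8, requires_grad=True)
print(quant(z).shape)  # torch.Size([4, 8])
```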
arXiv Detail & Related papers (2023-05-28T06:30:29Z)
- Self-Conditioned Generative Adversarial Networks for Image Editing [61.50205580051405]
Generative Adversarial Networks (GANs) are susceptible to bias, learned either from unbalanced data or through mode collapse.
We argue that this bias is responsible not only for fairness concerns, but also plays a key role in the collapse of latent-traversal editing methods when deviating from the distribution's core.
arXiv Detail & Related papers (2022-02-08T18:08:24Z)
- Self-supervised Correlation Mining Network for Person Image Generation [9.505343361614928]
Person image generation aims to perform non-rigid deformation on source images.
We propose a Self-supervised Correlation Mining Network (SCM-Net) to rearrange the source images in the feature space.
To improve the fidelity of cross-scale pose transformation, we propose a graph-based Body Structure Retaining Loss.
arXiv Detail & Related papers (2021-11-26T03:57:46Z)
- Multi-Attribute Balanced Sampling for Disentangled GAN Controls [0.0]
Various controls over the generated data can be extracted from the latent space of a pre-trained GAN.
We show that this approach outperforms state-of-the-art classifier-based methods while avoiding the need for disentanglement-enforcing post-processing.
arXiv Detail & Related papers (2021-10-28T08:44:13Z)
- There and back again: Cycle consistency across sets for isolating factors of variation [43.59036597872957]
We operate in the setting where limited information is known about the data in the form of groupings.
Our goal is to learn representations which isolate the factors of variation that are common across the groupings.
arXiv Detail & Related papers (2021-03-04T18:58:45Z)
- Out-of-distribution Generalization via Partial Feature Decorrelation [72.96261704851683]
We present a novel Partial Feature Decorrelation Learning (PFDL) algorithm, which jointly optimizes a feature decomposition network and the target image classification model.
The experiments on real-world datasets demonstrate that our method can improve the backbone model's accuracy on OOD image classification datasets.
arXiv Detail & Related papers (2020-07-30T05:48:48Z)
- Understanding Adversarial Examples from the Mutual Influence of Images and Perturbations [83.60161052867534]
We study adversarial examples by disentangling clean images from adversarial perturbations and analyzing their influence on each other.
Our results suggest a new perspective towards the relationship between images and universal perturbations.
We are the first to achieve the challenging task of a targeted universal attack without utilizing original training data.
arXiv Detail & Related papers (2020-07-13T05:00:09Z)
- Learning to Manipulate Individual Objects in an Image [71.55005356240761]
We describe a method to train a generative model with latent factors that are independent and localized.
This means that perturbing the latent variables affects only local regions of the synthesized image, corresponding to objects.
Unlike other unsupervised generative models, ours enables object-centric manipulation, without requiring object-level annotations.
arXiv Detail & Related papers (2020-04-11T21:50:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.