Why Are Conditional Generative Models Better Than Unconditional Ones?
- URL: http://arxiv.org/abs/2212.00362v1
- Date: Thu, 1 Dec 2022 08:44:21 GMT
- Title: Why Are Conditional Generative Models Better Than Unconditional Ones?
- Authors: Fan Bao, Chongxuan Li, Jiacheng Sun, Jun Zhu
- Abstract summary: We propose self-conditioned diffusion models (SCDM), which are trained conditioned on cluster indices obtained by running the k-means algorithm on features extracted by a model pre-trained in a self-supervised manner.
SCDM significantly improves on the unconditional model across various datasets and achieves a record-breaking FID of 3.94 on ImageNet 64x64 without labels.
- Score: 36.870497480570776
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Extensive empirical evidence demonstrates that conditional generative models
are easier to train and perform better than unconditional ones by exploiting
the labels of data. So do score-based diffusion models. In this paper, we
analyze the phenomenon formally and identify that the key of conditional
learning is to partition the data properly. Inspired by the analyses, we
propose self-conditioned diffusion models (SCDM), which is trained conditioned
on indices clustered by the k-means algorithm on the features extracted by a
model pre-trained in a self-supervised manner. SCDM significantly improves the
unconditional model across various datasets and achieves a record-breaking FID
of 3.94 on ImageNet 64x64 without labels. Besides, SCDM achieves a slightly
better FID than the corresponding conditional model on CIFAR10.
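The recipe the abstract describes (extract features with a self-supervised pre-trained model, cluster them with k-means, and train a diffusion model conditioned on the resulting cluster index just as one would condition on a class label) can be sketched roughly as follows. This is a minimal sketch under stated assumptions rather than the authors' released code: the frozen `encoder`, the placeholder `ConditionalDiffusionModel`, and the number of clusters are hypothetical.

```python
# Minimal sketch of the SCDM recipe from the abstract (not the authors' code).
# Assumptions: `encoder` is any frozen self-supervised feature extractor,
# `dataset` yields (image, _) pairs, and k is a hyperparameter to tune;
# `ConditionalDiffusionModel` below is a hypothetical placeholder class.
import torch
from torch.utils.data import DataLoader
from sklearn.cluster import KMeans


def assign_pseudo_labels(encoder, dataset, k=100, device="cuda"):
    """Extract features with the frozen encoder and cluster them with k-means."""
    encoder.eval()
    encoder.to(device)
    feats = []
    with torch.no_grad():
        for x, _ in DataLoader(dataset, batch_size=256):
            feats.append(encoder(x.to(device)).cpu())
    feats = torch.cat(feats).numpy()
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)
    return torch.as_tensor(labels, dtype=torch.long)  # one cluster index per image


# Training then proceeds like ordinary label-conditional diffusion, with the
# cluster index standing in for the class label, e.g.:
#   pseudo_labels = assign_pseudo_labels(encoder, dataset, k=100)
#   model = ConditionalDiffusionModel(num_classes=100)   # hypothetical
#   loss = model.denoising_loss(x_batch, cond=pseudo_labels[batch_indices])
```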
Related papers
- Towards a Theoretical Understanding of Memorization in Diffusion Models [76.85077961718875]
Diffusion probabilistic models (DPMs) are being employed as mainstream models for Generative Artificial Intelligence (GenAI).
We provide a theoretical understanding of memorization in both conditional and unconditional DPMs under the assumption of model convergence.
We propose a novel data extraction method named Surrogate condItional Data Extraction (SIDE) that leverages a time-dependent classifier trained on the generated data as a surrogate condition to extract training data from unconditional DPMs.
arXiv Detail & Related papers (2024-10-03T13:17:06Z) - Don't drop your samples! Coherence-aware training benefits Conditional diffusion [17.349357521783062]
Coherence-Aware Diffusion (CAD) is a novel method that integrates coherence in conditional information into diffusion models.
We show that CAD is theoretically sound and empirically effective on various conditional generation tasks.
arXiv Detail & Related papers (2024-05-30T17:57:26Z) - Training Data Protection with Compositional Diffusion Models [99.46239561159953]
Compartmentalized Diffusion Models (CDM) are a method to train different diffusion models (or prompts) on distinct data sources.
Individual models can be trained in isolation, at different times, and on different distributions and domains.
Each model only contains information about a subset of the data it was exposed to during training, enabling several forms of training data protection.
arXiv Detail & Related papers (2023-08-02T23:27:49Z) - Consistent Diffusion Models: Mitigating Sampling Drift by Learning to be Consistent [97.64313409741614]
We propose to enforce a consistency property which states that predictions of the model on its own generated data are consistent across time.
We show that our novel training objective yields state-of-the-art results for conditional and unconditional generation in CIFAR-10 and baseline improvements in AFHQ and FFHQ.
arXiv Detail & Related papers (2023-02-17T18:45:04Z) - SRoUDA: Meta Self-training for Robust Unsupervised Domain Adaptation [25.939292305808934]
Unsupervised domain adaptation (UDA) can transfer knowledge learned from a rich-label dataset to an unlabeled target dataset.
In this paper, we present a new meta self-training pipeline, named SRoUDA, for improving adversarial robustness of UDA models.
arXiv Detail & Related papers (2022-12-12T14:25:40Z) - Autoregressive Diffusion Models [34.125045462636386]
We introduce Autoregressive Diffusion Models (ARDMs), a model class encompassing and generalizing order-agnostic autoregressive models.
ARDMs are simple to implement and easy to train, and can be trained using an efficient objective similar to modern probabilistic diffusion models.
We show that ARDMs obtain compelling results not only on complete datasets, but also on compressing single data points.
arXiv Detail & Related papers (2021-10-05T13:36:55Z) - Contrastive Model Inversion for Data-Free Knowledge Distillation [60.08025054715192]
We propose Contrastive Model Inversion (CMI), where the data diversity is explicitly modeled as an optimizable objective.
Our main observation is that, under the constraint of the same amount of data, higher data diversity usually indicates stronger instance discrimination.
Experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that CMI achieves significantly superior performance when the generated data are used for knowledge distillation.
arXiv Detail & Related papers (2021-05-18T15:13:00Z) - Autoregressive Score Matching [113.4502004812927]
We propose autoregressive conditional score models (AR-CSM), where we parameterize the joint distribution in terms of the derivatives of univariate log-conditionals (scores); a sketch of this factorization is given after the list.
For AR-CSM models, this divergence between data and model distributions can be computed and optimized efficiently, requiring no expensive sampling or adversarial training.
We show with extensive experimental results that it can be applied to density estimation on synthetic data, image generation, image denoising, and training latent variable models with implicit encoders.
arXiv Detail & Related papers (2020-10-24T07:01:24Z)
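As a reading aid for the AR-CSM entry above, the parameterization it mentions (the joint distribution expressed through derivatives of univariate log-conditionals) amounts to the standard autoregressive factorization below; the notation is a hedged sketch, not the paper's exact formulation.

```latex
% Sketch of the AR-CSM parameterization (assumed notation, not the paper's own):
% the joint factorizes into univariate conditionals, and each conditional is
% represented only through its score, i.e. the derivative of its log-density.
\log p(x_1, \dots, x_D) = \sum_{i=1}^{D} \log p(x_i \mid x_{<i}),
\qquad
s_i(x_i ; x_{<i}) = \frac{\partial}{\partial x_i} \log p(x_i \mid x_{<i}).
```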