Refining Generative Process with Discriminator Guidance in Score-based
Diffusion Models
- URL: http://arxiv.org/abs/2211.17091v4
- Date: Sun, 4 Jun 2023 22:19:27 GMT
- Title: Refining Generative Process with Discriminator Guidance in Score-based
Diffusion Models
- Authors: Dongjun Kim, Yeongmin Kim, Se Jung Kwon, Wanmo Kang, Il-Chul Moon
- Abstract summary: Discriminator Guidance aims to improve sample generation of pre-trained diffusion models.
Unlike GANs, our approach does not require joint training of score and discriminator networks.
We achieve state-of-the-art results on ImageNet 256x256 with FID 1.83 and recall 0.64, similar to the validation data's FID (1.68) and recall (0.66).
- Score: 15.571673352656264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The proposed method, Discriminator Guidance, aims to improve sample
generation of pre-trained diffusion models. The approach introduces a
discriminator that gives explicit supervision to a denoising sample path as to
whether it is realistic or not. Unlike GANs, our approach does not require
joint training of score and discriminator networks. Instead, we train the
discriminator after score training, making discriminator training stable and
fast to converge. In sample generation, we add an auxiliary term to the
pre-trained score to deceive the discriminator. This term corrects the model
score to the data score at the optimal discriminator, which implies that the
discriminator helps better score estimation in a complementary way. Using our
algorithm, we achieve state-of-the-art results on ImageNet 256x256 with FID 1.83
and recall 0.64, similar to the validation data's FID (1.68) and recall (0.66).
We release the code at https://github.com/alsdudrla10/DG.
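As a rough illustration of the correction described in the abstract (our own notation; the paper's exact time weighting and density definitions may differ), let $s_\theta$ be the pre-trained score, $d_\phi$ the post-hoc discriminator, and $p_\theta$, $p_{\mathrm{data}}$ the model and data densities of noisy samples $x_t$. The guided score used at sampling time can be written as

  $s_{\mathrm{guided}}(x_t, t) = s_\theta(x_t, t) + \nabla_{x_t} \log \frac{d_\phi(x_t, t)}{1 - d_\phi(x_t, t)}$.

At the optimal discriminator $d_\phi^{*}(x_t, t) = \frac{p_{\mathrm{data}}(x_t)}{p_{\mathrm{data}}(x_t) + p_\theta(x_t)}$, the auxiliary term reduces to $\nabla_{x_t} \log \frac{p_{\mathrm{data}}(x_t)}{p_\theta(x_t)}$, so the guided score equals the data score $\nabla_{x_t} \log p_{\mathrm{data}}(x_t)$, which is the sense in which the discriminator corrects the model score toward the data score.

A minimal sketch of how the auxiliary term could be computed with automatic differentiation; score_model and discriminator are assumed callables (not taken from the released code), and the discriminator is assumed to output a probability in (0, 1):

    import torch

    def guided_score(score_model, discriminator, x_t, t):
        # Pre-trained score s_theta(x_t, t)
        s = score_model(x_t, t)
        # Auxiliary term: gradient of log[d / (1 - d)] with respect to x_t
        x = x_t.detach().requires_grad_(True)
        d = discriminator(x, t).clamp(1e-6, 1 - 1e-6)  # probability that x_t lies on a realistic path
        log_ratio = torch.log(d) - torch.log(1.0 - d)
        (grad,) = torch.autograd.grad(log_ratio.sum(), x)
        # Corrected score used by the sampler
        return s + grad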
Related papers
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z) - Discriminator Guidance for Autoregressive Diffusion Models [12.139222986297264]
We introduce discriminator guidance in the setting of Autoregressive Diffusion Models.
We derive ways of using a discriminator together with a pretrained generative model in the discrete case.
arXiv Detail & Related papers (2023-10-24T13:14:22Z) - Bi-discriminator Domain Adversarial Neural Networks with Class-Level
Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment.
BACG leverages gradient signals and second-order probability estimation to better align domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, i.e. Fast-BACG, which can greatly shorten the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z) - Dynamically Masked Discriminator for Generative Adversarial Networks [71.33631511762782]
Training Generative Adversarial Networks (GANs) remains a challenging problem.
The discriminator trains the generator by learning the distribution of real and generated data.
We propose a novel method for GANs from the viewpoint of online continual learning.
arXiv Detail & Related papers (2023-06-13T12:07:01Z) - Reusing the Task-specific Classifier as a Discriminator:
Discriminator-free Adversarial Domain Adaptation [55.27563366506407]
We introduce a discriminator-free adversarial learning network (DALN) for unsupervised domain adaptation (UDA).
DALN achieves explicit domain alignment and category distinguishment through a unified objective.
DALN compares favorably against the existing state-of-the-art (SOTA) methods on a variety of public datasets.
arXiv Detail & Related papers (2022-04-08T04:40:18Z) - ELECRec: Training Sequential Recommenders as Discriminators [94.93227906678285]
Sequential recommendation is often considered a generative task, i.e., training a sequential encoder to generate the next item a user is interested in.
We propose to train the sequential recommenders as discriminators rather than generators.
Our method trains a discriminator to distinguish whether a sampled item is the 'real' target item or not.
arXiv Detail & Related papers (2022-04-05T06:19:45Z) - Re-using Adversarial Mask Discriminators for Test-time Training under
Distribution Shifts [10.647970046084916]
We argue that training stable discriminators produces expressive loss functions that we can re-use at inference to detect and correct segmentation mistakes.
We show that we can combine discriminators with image reconstruction costs (via decoders) to further improve the model.
Our method is simple and improves the test-time performance of pre-trained GANs.
arXiv Detail & Related papers (2021-08-26T17:31:46Z) - Exploring Dropout Discriminator for Domain Adaptation [27.19677042654432]
Adaptation of a classifier to new domains is one of the challenging problems in machine learning.
We propose a curriculum-based dropout discriminator that gradually increases the variance of the sample-based distribution.
An ensemble of discriminators helps the model to learn the data distribution efficiently.
arXiv Detail & Related papers (2021-07-09T06:11:34Z) - Out-of-Scope Intent Detection with Self-Supervision and Discriminative
Training [20.242645823965145]
Out-of-scope intent detection is of practical importance in task-oriented dialogue systems.
We propose a method to train an out-of-scope intent classifier in a fully end-to-end manner by simulating the test scenario in training.
We evaluate our method extensively on four benchmark dialogue datasets and observe significant improvements over state-of-the-art approaches.
arXiv Detail & Related papers (2021-06-16T08:17:18Z) - Data-Efficient Instance Generation from Instance Discrimination [40.71055888512495]
We propose a data-efficient Instance Generation (InsGen) method based on instance discrimination.
arXiv Detail & Related papers (2021-06-08T17:52:59Z) - On Positive-Unlabeled Classification in GAN [130.43248168149432]
This paper defines a positive and unlabeled classification problem for standard GANs.
It then leads to a novel technique to stabilize the training of the discriminator in GANs.
arXiv Detail & Related papers (2020-02-04T05:59:37Z)