DisDiff: Unsupervised Disentanglement of Diffusion Probabilistic Models
- URL: http://arxiv.org/abs/2301.13721v3
- Date: Sat, 28 Oct 2023 11:21:47 GMT
- Title: DisDiff: Unsupervised Disentanglement of Diffusion Probabilistic Models
- Authors: Tao Yang, Yuwang Wang, Yan Lv, Nanning Zheng
- Abstract summary: We propose a new task, the disentanglement of Diffusion Probabilistic Models (DPMs).
The task is to automatically discover the inherent factors behind the observations and to disentangle the gradient fields of the DPM into sub-gradient fields.
We devise an unsupervised approach named DisDiff, achieving disentangled representation learning in the framework of DPMs.
- Score: 42.58375679841317
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Aiming to understand the underlying explainable factors behind
observations and to model the conditional generation process on these factors,
we connect disentangled representation learning to Diffusion Probabilistic
Models (DPMs) to take advantage of the remarkable modeling ability of DPMs. We
propose a new task, disentanglement of DPMs: given a pre-trained DPM, without
any annotations of the factors, the task is to automatically discover the
inherent factors behind the observations and disentangle the gradient fields of
DPM into sub-gradient fields, each conditioned on the representation of each
discovered factor. With disentangled DPMs, those inherent factors can be
automatically discovered, explicitly represented, and clearly injected into the
diffusion process via the sub-gradient fields. To tackle this task, we devise
an unsupervised approach named DisDiff, achieving disentangled representation
learning in the framework of DPMs. Extensive experiments on synthetic and
real-world datasets demonstrate the effectiveness of DisDiff.
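To make the task statement concrete, here is a minimal sketch, in our own words rather than the authors' code, of what a disentangled gradient field could look like: the pre-trained DPM's score plus one sub-gradient field per discovered factor, each conditioned on that factor's representation. All module names and sizes (SubGradientField, N_FACTORS, etc.) are hypothetical.

```python
import torch
import torch.nn as nn

N_FACTORS, LATENT_DIM, X_DIM = 4, 16, 64   # toy sizes, chosen arbitrarily

class SubGradientField(nn.Module):
    """One conditional score term, conditioned on a single factor's code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(X_DIM + LATENT_DIM + 1, 128), nn.SiLU(),
            nn.Linear(128, X_DIM),
        )

    def forward(self, x_t, t, z_k):
        t_col = t.expand(x_t.shape[0], 1)          # broadcast scalar timestep
        return self.net(torch.cat([x_t, z_k, t_col], dim=-1))

class DisentangledScore(nn.Module):
    """Total gradient field = pre-trained score + sum of per-factor terms."""
    def __init__(self, pretrained_score):
        super().__init__()
        self.pretrained = pretrained_score
        self.fields = nn.ModuleList(SubGradientField() for _ in range(N_FACTORS))

    def forward(self, x_t, t, z):                  # z: (B, N_FACTORS, LATENT_DIM)
        s = self.pretrained(x_t, t)
        for k, field in enumerate(self.fields):    # inject each factor separately
            s = s + field(x_t, t, z[:, k])
        return s

# Usage: a zero function stands in for the pre-trained DPM's score network.
score = DisentangledScore(lambda x, t: torch.zeros_like(x))
x_t = torch.randn(8, X_DIM)
z = torch.randn(8, N_FACTORS, LATENT_DIM)          # factor codes from an encoder
out = score(x_t, torch.tensor(0.5), z)             # (8, X_DIM)
```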
Related papers
- Towards a Theoretical Understanding of Memorization in Diffusion Models [76.85077961718875]
Diffusion probabilistic models (DPMs) are being employed as mainstream models for Generative Artificial Intelligence (GenAI).
We provide a theoretical understanding of memorization in both conditional and unconditional DPMs under the assumption of model convergence.
We propose a novel data extraction method named Surrogate condItional Data Extraction (SIDE) that leverages a time-dependent classifier trained on the generated data as a surrogate condition to extract training data from unconditional DPMs.
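The summary does not spell out SIDE's training details, but the mechanism such a surrogate classifier plugs into is standard classifier guidance: a time-dependent classifier's gradient steers an otherwise unconditional DPM. A hedged sketch of that guidance step, with eps_model and classifier as stand-ins for trained networks:

```python
import torch

def guided_epsilon(eps_model, classifier, x_t, t, target, scale=1.0):
    """Shift an unconditional noise prediction toward class `target` using
    the gradient of a time-dependent classifier (classifier guidance)."""
    x_t = x_t.detach().requires_grad_(True)
    log_probs = torch.log_softmax(classifier(x_t, t), dim=-1)
    selected = log_probs[torch.arange(len(x_t)), target]
    grad = torch.autograd.grad(selected.sum(), x_t)[0]
    eps = eps_model(x_t, t)
    # The noise-scale coefficient (e.g. sqrt(1 - alpha_bar_t) in a VP model)
    # is folded into `scale` here for brevity.
    return eps - scale * grad
```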
arXiv Detail & Related papers (2024-10-03T13:17:06Z)
- Contractive Diffusion Probabilistic Models [5.217870815854702]
Diffusion probabilistic models (DPMs) have emerged as a promising technique in generative modeling.
We propose a new criterion for the design of DPMs -- the contraction property of backward sampling -- leading to a novel class of contractive DPMs (CDPMs).
We show that CDPM can leverage weights of pretrained DPMs by a simple transformation, and does not need retraining.
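The contraction property can be read operationally: one backward-sampling step f_t is contractive if ||f_t(x) - f_t(y)|| <= c * ||x - y|| with c < 1. Below is a sketch, under our own assumptions rather than the paper's construction, of estimating that factor empirically; reverse_step is a hypothetical stand-in for one backward update:

```python
import torch

def empirical_contraction_ratio(reverse_step, t, n_pairs=256, dim=64, eps=1e-3):
    """Estimate the local contraction factor of one backward-sampling step."""
    x = torch.randn(n_pairs, dim)
    y = x + eps * torch.randn(n_pairs, dim)     # nearby perturbed points
    noise = torch.randn(n_pairs, dim)           # share one Gaussian draw so the
                                                # comparison isolates the map itself
    fx, fy = reverse_step(x, t, noise), reverse_step(y, t, noise)
    ratios = (fx - fy).norm(dim=-1) / (x - y).norm(dim=-1)
    return ratios.max().item()                  # < 1 suggests contraction here
```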
arXiv Detail & Related papers (2024-01-23T21:51:51Z)
- Mitigating Shortcut Learning with Diffusion Counterfactuals and Diverse Ensembles [95.49699178874683]
We propose DiffDiv, an ensemble diversification framework exploiting Diffusion Probabilistic Models (DPMs).
We show that DPMs can generate images with novel feature combinations, even when trained on samples displaying correlated input features.
We show that DPM-guided diversification is sufficient to remove dependence on shortcut cues, without a need for additional supervised signals.
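As a rough illustration of how such DPM-guided diversification might be wired up (our assumption, not the DiffDiv release): members fit the labeled data as usual, plus a penalty that pushes their predictions apart on DPM-generated samples whose feature combinations break the training-set correlations.

```python
import torch
import torch.nn.functional as F

def diffdiv_style_loss(members, x_real, y_real, x_synth, div_weight=0.1):
    """Cross-entropy on real data + pairwise agreement penalty on synthetic."""
    task = sum(F.cross_entropy(m(x_real), y_real) for m in members)
    probs = [F.softmax(m(x_synth), dim=-1) for m in members]
    agree = 0.0
    for i in range(len(probs)):
        for j in range(i + 1, len(probs)):
            # high when two members make the same call on a synthetic sample
            agree = agree + (probs[i] * probs[j]).sum(dim=-1).mean()
    return task + div_weight * agree
```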
arXiv Detail & Related papers (2023-11-23T15:47:33Z)
- Energy-Based Models for Anomaly Detection: A Manifold Diffusion Recovery Approach [12.623417770432146]
We present a new method of training energy-based models (EBMs) for anomaly detection that leverages low-dimensional structures within data.
The proposed algorithm, Manifold Projection-Diffusion Recovery (MPDR), first perturbs a data point along a low-dimensional manifold that approximates the training dataset.
Experimental results show that MPDR exhibits strong performance across various anomaly detection tasks involving diverse data types.
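A hedged sketch of the perturbation step as the summary describes it: project a point onto a learned low-dimensional manifold (here approximated by an autoencoder, our choice), diffuse it in latent space, and decode back, so the perturbation stays near the data manifold. encoder, decoder, and sigma are illustrative stand-ins:

```python
import torch

def manifold_perturb(encoder, decoder, x, sigma=0.3):
    """Perturb x along the learned manifold instead of in the ambient space."""
    z = encoder(x)                               # project onto the latent manifold
    z_noisy = z + sigma * torch.randn_like(z)    # diffuse within the manifold
    return decoder(z_noisy)                      # decoded point stays near the data
```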
arXiv Detail & Related papers (2023-10-28T11:18:39Z)
- Diffusion Model as Representation Learner [86.09969334071478]
Diffusion Probabilistic Models (DPMs) have recently demonstrated impressive results on various generative tasks.
We propose a novel knowledge transfer method that leverages the knowledge acquired by DPMs for recognition tasks.
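One common way to reuse a DPM for recognition, shown here as an assumption rather than the paper's exact method, is to run a (possibly noised) input through the denoiser once and read out an intermediate activation with a forward hook for a linear probe:

```python
import torch

def extract_dpm_features(unet, x, t, layer):
    """Capture one intermediate feature map from a single denoising pass."""
    feats = {}

    def hook(module, inputs, output):
        feats["h"] = output                 # assumes this layer returns a tensor

    handle = layer.register_forward_hook(hook)
    with torch.no_grad():
        unet(x, t)                          # one forward pass at timestep t
    handle.remove()
    return feats["h"].flatten(1)            # (B, D) features for a linear probe
```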
arXiv Detail & Related papers (2023-08-21T00:38:39Z)
- AdjointDPM: Adjoint Sensitivity Method for Gradient Backpropagation of Diffusion Probabilistic Models [103.41269503488546]
Existing customization methods require access to multiple reference examples to align pre-trained diffusion probabilistic models with user-provided concepts.
This paper aims to address the challenge of DPM customization when the only available supervision is a differentiable metric defined on the generated content.
We propose a novel method AdjointDPM, which first generates new samples from diffusion models by solving the corresponding probability-flow ODEs.
It then uses the adjoint sensitivity method to backpropagate the gradients of the loss to the models' parameters.
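The two steps the summary describes map naturally onto an off-the-shelf adjoint ODE solver. The sketch below uses the torchdiffeq library (not the authors' code) with a toy score network; ToyScore, beta, and the loss are stand-ins:

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint_adjoint

class ToyScore(nn.Module):
    """Tiny stand-in for a pre-trained noise/score network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 2))

    def forward(self, x, t):
        t_col = t.expand(x.shape[0], 1)
        return self.net(torch.cat([x, t_col], dim=-1))

class ProbabilityFlow(nn.Module):
    """dx/dt for the probability-flow ODE of a toy VP diffusion."""
    def __init__(self, score, beta=1.0):
        super().__init__()
        self.score, self.beta = score, beta

    def forward(self, t, x):
        # VP probability flow: dx/dt = -0.5 * beta * (x + score(x, t))
        return -0.5 * self.beta * (x + self.score(x, t))

flow = ProbabilityFlow(ToyScore())
x_T = torch.randn(8, 2)                      # start from the prior at t = 1
ts = torch.tensor([1.0, 0.0])                # integrate backward to t = 0

x_0 = odeint_adjoint(flow, x_T, ts)[-1]      # step 1: sample by solving the ODE
loss = x_0.pow(2).mean()                     # step 2: any differentiable metric
loss.backward()                              # adjoint method backpropagates to
                                             # flow's parameters in O(1) memory
```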
arXiv Detail & Related papers (2023-07-20T09:06:21Z)
- Robust Face Anti-Spoofing with Dual Probabilistic Modeling [49.14353429234298]
We propose a unified framework called Dual Probabilistic Modeling (DPM), with two dedicated modules: DPM-LQ (Label Quality aware learning) and DPM-DQ (Data Quality aware learning).
DPM-LQ is able to produce robust feature representations without overfitting to the distribution of noisy semantic labels.
DPM-DQ can eliminate data noise from 'False Reject' and 'False Accept' during inference by correcting the prediction confidence of noisy data based on its quality distribution.
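Taking the DPM-DQ description at face value, confidence correction could look like tempering each sample's softmax toward uniform in proportion to its estimated quality. This is our speculative reading, with quality as a stand-in for the learned quality distribution:

```python
import torch

def quality_corrected_probs(logits, quality):
    """Temper the softmax of low-quality samples toward the uniform distribution."""
    probs = torch.softmax(logits, dim=-1)
    uniform = torch.full_like(probs, 1.0 / probs.shape[-1])
    q = quality.view(-1, 1)                  # per-sample quality in [0, 1]
    return q * probs + (1.0 - q) * uniform   # low quality => lower confidence
```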
arXiv Detail & Related papers (2022-04-27T03:44:18Z)