Diffusion Autoencoders: Toward a Meaningful and Decodable Representation
- URL: http://arxiv.org/abs/2111.15640v2
- Date: Wed, 1 Dec 2021 15:28:29 GMT
- Title: Diffusion Autoencoders: Toward a Meaningful and Decodable Representation
- Authors: Konpat Preechakul, Nattanat Chatthee, Suttisak Wizadwongsa, Supasorn
Suwajanakorn
- Abstract summary: Diffusion models (DPMs) have achieved remarkable quality in image generation that rivals GANs'.
Unlike GANs, DPMs use a set of latent variables that lack semantic meaning and cannot serve as a useful representation for other tasks.
This paper explores the possibility of using DPMs for representation learning and seeks to extract a meaningful and decodable representation of an input image via autoencoding.
- Score: 1.471992435706872
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion probabilistic models (DPMs) have achieved remarkable quality in
image generation that rivals GANs'. But unlike GANs, DPMs use a set of latent
variables that lack semantic meaning and cannot serve as a useful
representation for other tasks. This paper explores the possibility of using
DPMs for representation learning and seeks to extract a meaningful and
decodable representation of an input image via autoencoding. Our key idea is to
use a learnable encoder for discovering the high-level semantics, and a DPM as
the decoder for modeling the remaining stochastic variations. Our method can
encode any image into a two-part latent code, where the first part is
semantically meaningful and linear, and the second part captures stochastic
details, allowing near-exact reconstruction. This capability enables
challenging applications that currently foil GAN-based methods, such as
attribute manipulation on real images. We also show that this two-level
encoding improves denoising efficiency and naturally facilitates various
downstream tasks including few-shot conditional sampling. Please visit our
project page: https://Diff-AE.github.io/
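The two-part code described above can be summarized in a short training sketch. This is a minimal reading of the abstract in PyTorch, not the authors' code: `SemanticEncoder` is an illustrative stand-in, and `denoiser` is assumed to be any noise-prediction UNet that accepts a conditioning vector.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticEncoder(nn.Module):
    """Maps x0 to z_sem, the compact, semantically meaningful half of the code.
    Illustrative stand-in architecture, not the paper's."""
    def __init__(self, z_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.SiLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, z_dim),
        )

    def forward(self, x):
        return self.net(x)

def diffae_loss(encoder, denoiser, x0, alphas_bar):
    """Standard DDPM noise-prediction loss, with the denoiser conditioned on z_sem.
    alphas_bar: 1-D tensor of cumulative products of (1 - beta_t)."""
    t = torch.randint(0, alphas_bar.numel(), (x0.size(0),), device=x0.device)
    a = alphas_bar[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps   # forward diffusion to step t
    z_sem = encoder(x0)                          # semantic half of the two-part code
    return F.mse_loss(denoiser(x_t, t, z_sem), eps)
```

The stochastic part of the code, x_T, is then recovered by running the conditional DDIM deterministically forward from x0, so the pair (z_sem, x_T) supports the near-exact reconstruction claimed above.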
Related papers
- Reinforcement Learning from Diffusion Feedback: Q* for Image Search [2.5835347022640254]
We present two models for image generation using model-agnostic learning.
RLDF is a singular approach for visual imitation through prior-preserving reward function guidance.
It generates high-quality images over varied domains, showcasing class-consistency and strong visual diversity.
arXiv Detail & Related papers (2023-11-27T09:20:12Z)
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs [49.88461345825586]
This paper proposes a new framework to enhance the fine-grained image understanding abilities of MLLMs.
We present a new method for constructing the instruction tuning dataset at a low cost by leveraging annotations in existing datasets.
We show that our model exhibits a 5.2% accuracy improvement over Qwen-VL and surpasses the accuracy of Kosmos-2 by 24.7%.
arXiv Detail & Related papers (2023-10-01T05:53:15Z)
- DiffuseGAE: Controllable and High-fidelity Image Manipulation from Disentangled Representation [14.725538019917625]
Diffusion probabilistic models (DPMs) have shown remarkable results on various image synthesis tasks.
DPMs lack a low-dimensional, interpretable, and well-decoupled latent code.
Diff-AE has been proposed to explore the potential of DPMs for representation learning via autoencoding.
arXiv Detail & Related papers (2023-07-12T04:11:08Z)
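Since the parent paper describes its semantic code as meaningful and linear, the kind of manipulation DiffuseGAE targets can be sketched as a shift of z_sem along an attribute direction, re-decoded with the saved stochastic code. `ddim_decode`, `direction`, and the scale below are assumptions for illustration:

```python
import torch

@torch.no_grad()
def manipulate(encoder, ddim_decode, x0, x_T, direction, scale=0.3):
    z = encoder(x0)                                     # semantic code (linear space)
    z_edit = z + scale * direction / direction.norm()   # hypothetical attribute direction
    return ddim_decode(x_T, z_edit)                     # x_T keeps the stochastic details
```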
- Towards Accurate Image Coding: Improved Autoregressive Image Generation with Dynamic Vector Quantization [73.52943587514386]
Existing vector quantization (VQ) based autoregressive models follow a two-stage generation paradigm.
We propose a novel two-stage framework: (1) Dynamic-Quantization VAE (DQ-VAE), which encodes image regions into variable-length codes based on their information densities for accurate representation.
arXiv Detail & Related papers (2023-05-19T14:56:05Z)
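As a toy illustration of "variable-length codes based on their information densities", the sketch below spends more quantizer tokens on higher-variance regions. The grid size, budgets, and thresholds are invented for illustration, and variance is only a crude stand-in for the paper's density measure:

```python
import torch

def codes_per_region(images, grid=8, budgets=(1, 4, 16), thresholds=(0.01, 0.05)):
    B, C, H, W = images.shape
    ph, pw = H // grid, W // grid
    patches = images.unfold(2, ph, ph).unfold(3, pw, pw)   # B, C, grid, grid, ph, pw
    density = patches.var(dim=(-1, -2)).mean(dim=1)        # per-region variance proxy
    codes = torch.full_like(density, float(budgets[0]))
    codes[density > thresholds[0]] = budgets[1]            # denser regions get more
    codes[density > thresholds[1]] = budgets[2]            # VQ tokens to spend
    return codes
```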
- Unsupervised Representation Learning from Pre-trained Diffusion Probabilistic Models [83.75414370493289]
Diffusion Probabilistic Models (DPMs) have shown a powerful capacity of generating high-quality image samples.
Diff-AE has been proposed to explore DPMs for representation learning via autoencoding.
We propose Pre-trained DPM AutoEncoding (PDAE), which adapts existing pre-trained DPMs to serve as decoders for image reconstruction.
arXiv Detail & Related papers (2022-12-26T02:37:38Z)
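One way to read "adapts existing pre-trained DPMs to serve as decoders" is to freeze the pre-trained noise predictor and train only an encoder plus a small correction network driven by its code z. A hedged sketch; all module interfaces here are assumptions:

```python
import torch
import torch.nn.functional as F

def pdae_style_loss(frozen_eps, corrector, encoder, x0, alphas_bar):
    t = torch.randint(0, alphas_bar.numel(), (x0.size(0),), device=x0.device)
    a = alphas_bar[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps
    z = encoder(x0)                           # semantic code from the clean image
    with torch.no_grad():
        eps_base = frozen_eps(x_t, t)         # frozen pre-trained DPM prediction
    eps_hat = eps_base + corrector(x_t, t, z) # learned, z-conditioned correction
    return F.mse_loss(eps_hat, eps)
```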
- Rethinking the Paradigm of Content Constraints in Unpaired Image-to-Image Translation [9.900050049833986]
We propose EnCo, a simple but efficient way to maintain the content by constraining the representational similarity in the latent space of patch-level features.
For the similarity function, we use a simple MSE loss instead of the contrastive loss that is currently widely used in I2I tasks.
In addition, we rethink the role played by discriminators in sampling patches and propose a discriminative attention-guided (DAG) patch sampling strategy to replace random sampling.
arXiv Detail & Related papers (2022-11-20T04:39:57Z)
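The constraint above amounts to a plain MSE between patch-level latent features of the input and its translation. A minimal sketch, assuming `feat_extractor` exposes such features (layer choice and patch sampling omitted):

```python
import torch.nn.functional as F

def content_loss(feat_extractor, x_src, x_out):
    f_src = feat_extractor(x_src).detach()   # patch-level features of the source
    f_out = feat_extractor(x_out)            # features of the translated image
    return F.mse_loss(f_out, f_src)          # plain MSE, no contrastive pairs needed
```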
- Lossy Image Compression with Conditional Diffusion Models [25.158390422252097]
This paper outlines an end-to-end optimized lossy image compression framework using diffusion generative models.
In contrast to VAE-based neural compression, where the (mean) decoder is a deterministic neural network, our decoder is a conditional diffusion model.
Our approach yields stronger reported FID scores than the GAN-based model, while remaining competitive with VAE-based models on several distortion metrics.
arXiv Detail & Related papers (2022-09-14T21:53:27Z)
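A hedged sketch of the setup above: the encoder emits a latent that would be entropy-coded, and the decoder is a diffusion model conditioned on it. The straight-through rounding and the omitted rate term are simplifications for illustration, not the paper's design:

```python
import torch
import torch.nn.functional as F

def compression_step(encoder, denoiser, x0, alphas_bar):
    y = encoder(x0)                               # content latent to transmit
    y_hat = y + (torch.round(y) - y).detach()     # straight-through quantization stand-in
    t = torch.randint(0, alphas_bar.numel(), (x0.size(0),), device=x0.device)
    a = alphas_bar[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps
    return F.mse_loss(denoiser(x_t, t, y_hat), eps)  # decoder = conditional diffusion model
```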
- Masked Autoencoders Are Scalable Vision Learners [60.97703494764904]
Masked autoencoders (MAE) are scalable self-supervised learners for computer vision.
Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels.
Coupling these two designs enables us to train large models efficiently and effectively.
arXiv Detail & Related papers (2021-11-11T18:46:40Z)
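The masking step above is easy to make concrete. A sketch using the common MAE settings (16-pixel patches, 75% mask ratio); the flattening into per-patch tokens is simplified:

```python
import torch

def random_mask(imgs, patch=16, mask_ratio=0.75):
    B, C, H, W = imgs.shape
    n = (H // patch) * (W // patch)
    keep = int(n * (1 - mask_ratio))
    keep_idx = torch.rand(B, n, device=imgs.device).argsort(dim=1)[:, :keep]
    tokens = (imgs.unfold(2, patch, patch).unfold(3, patch, patch)
                  .permute(0, 2, 3, 1, 4, 5).reshape(B, n, -1))   # one token per patch
    visible = torch.gather(tokens, 1,
                           keep_idx[:, :, None].expand(-1, -1, tokens.size(-1)))
    return visible, keep_idx  # encoder sees `visible`; decoder reconstructs the masked rest
```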
- Layout-to-Image Translation with Double Pooling Generative Adversarial Networks [76.83075646527521]
We propose a novel Double Pooling GAN (DPGAN) for generating photo-realistic and semantically consistent results from the input layout.
We also propose a novel Double Pooling Module (DPM), which consists of a Square-shape Pooling Module (SPM) and a Rectangle-shape Pooling Module (RPM).
arXiv Detail & Related papers (2021-08-29T19:55:14Z)
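The blurb names square-shape and rectangle-shape pooling but not their exact design, so the block below is only one plausible reading: pooled square and strip contexts added back onto the features. The shapes and the fusion rule are guesses:

```python
import torch.nn.functional as F

def double_pool(feat):
    B, C, H, W = feat.shape
    sq = F.adaptive_avg_pool2d(feat, (H // 2, W // 2))   # square-shape pooling (SPM-like)
    sq = F.interpolate(sq, size=(H, W), mode="bilinear", align_corners=False)
    rows = F.adaptive_avg_pool2d(feat, (H, 1))           # rectangle-shape pooling (RPM-like):
    cols = F.adaptive_avg_pool2d(feat, (1, W))           # horizontal and vertical strips
    return feat + sq + rows.expand(-1, -1, -1, W) + cols.expand(-1, -1, H, -1)
```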
- Unpaired Image-to-Image Translation via Latent Energy Transport [61.62293304236371]
Image-to-image translation aims to preserve source content while translating to discriminative target styles between two visual domains.
In this paper, we propose to deploy an energy-based model (EBM) in the latent space of a pretrained autoencoder for this task.
Our model is the first to be applicable to 1024×1024-resolution unpaired image translation.
arXiv Detail & Related papers (2020-12-01T17:18:58Z)
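A minimal sketch of the latent-space idea above: translation becomes a few Langevin steps on the source latent under a learned energy function, followed by decoding. The `encode`/`decode` interface, step size, and step count are assumptions:

```python
import torch

def translate(autoencoder, energy, x_src, steps=20, step_size=0.1):
    z = autoencoder.encode(x_src).detach().requires_grad_(True)
    for _ in range(steps):                     # Langevin dynamics in latent space
        grad = torch.autograd.grad(energy(z).sum(), z)[0]
        z = (z - 0.5 * step_size * grad
             + step_size ** 0.5 * torch.randn_like(z)).detach().requires_grad_(True)
    return autoencoder.decode(z)
```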
This list is automatically generated from the titles and abstracts of the papers in this site.