Diffusion Models Without Attention
- URL: http://arxiv.org/abs/2311.18257v1
- Date: Thu, 30 Nov 2023 05:15:35 GMT
- Title: Diffusion Models Without Attention
- Authors: Jing Nathan Yan, Jiatao Gu, Alexander M. Rush
- Abstract summary: Diffusion State Space Model (DiffuSSM) is an architecture that supplants attention mechanisms with a more scalable state space model backbone.
Our focus on FLOP-efficient architectures in diffusion training marks a significant step forward.
- Score: 110.5623058129782
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent advancements in high-fidelity image generation, Denoising Diffusion Probabilistic Models (DDPMs) have emerged as a key player. However, their application at high resolutions presents significant computational challenges. Current methods, such as patchifying, expedite processes in UNet and Transformer architectures but at the expense of representational capacity. Addressing this, we introduce the Diffusion State Space Model (DiffuSSM), an architecture that supplants attention mechanisms with a more scalable state space model backbone. This approach effectively handles higher resolutions without resorting to global compression, thus preserving detailed image representation throughout the diffusion process. Our focus on FLOP-efficient architectures in diffusion training marks a significant step forward. Comprehensive evaluations on both ImageNet and LSUN datasets at two resolutions demonstrate that DiffuSSMs are on par with or even outperform existing diffusion models with attention modules in FID and Inception Score metrics while significantly reducing total FLOP usage.
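To make the architectural idea concrete, the following is a minimal, hypothetical sketch (not the authors' released DiffuSSM code) of an attention-free diffusion backbone block in PyTorch: token mixing over the flattened image sequence is performed by a toy diagonal state space recurrence instead of quadratic-cost self-attention. All class names, the state dimension, and the layer choices here are illustrative assumptions.

```python
# Minimal sketch of the core idea behind an attention-free diffusion block:
# replace the O(L^2) self-attention mixer with a linear-time state space scan.
# Names and hyperparameters are assumptions, not the authors' implementation.

import torch
import torch.nn as nn


class DiagonalSSM(nn.Module):
    """Toy diagonal state space mixer: h_t = a * h_{t-1} + B x_t, y_t = C h_t."""

    def __init__(self, dim: int, state_dim: int = 16):
        super().__init__()
        self.in_proj = nn.Linear(dim, state_dim)
        self.out_proj = nn.Linear(state_dim, dim)
        # Per-channel decay kept in (0, 1) via sigmoid for a stable recurrence.
        self.log_decay = nn.Parameter(torch.zeros(state_dim))

    def forward(self, x):                      # x: (batch, seq_len, dim)
        u = self.in_proj(x)                    # (batch, seq_len, state_dim)
        a = torch.sigmoid(self.log_decay)
        h = torch.zeros_like(u[:, 0])
        outputs = []
        for t in range(u.shape[1]):            # sequential scan: O(L), not O(L^2)
            h = a * h + u[:, t]
            outputs.append(h)
        return self.out_proj(torch.stack(outputs, dim=1))


class AttentionFreeDiffusionBlock(nn.Module):
    """Diffusion backbone block: SSM token mixing + MLP, no self-attention."""

    def __init__(self, dim: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mixer = DiagonalSSM(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x):                      # x: (batch, seq_len, dim)
        x = x + self.mixer(self.norm1(x))      # linear-time mixing over tokens
        return x + self.mlp(self.norm2(x))
```

With image tokens arranged as a length H*W sequence of channel vectors, stacking such blocks yields a denoiser whose mixing cost grows linearly rather than quadratically with resolution, which is the scaling argument the abstract makes for avoiding patch-based global compression.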
Related papers
- Binarized Diffusion Model for Image Super-Resolution [61.963833405167875]
Binarization, an ultra-compression algorithm, offers the potential to effectively accelerate advanced diffusion models (DMs).
Existing binarization methods result in significant performance degradation.
We introduce a novel binarized diffusion model, BI-DiffSR, for image SR.
arXiv Detail & Related papers (2024-06-09T10:30:25Z)
- DeeDSR: Towards Real-World Image Super-Resolution via Degradation-Aware Stable Diffusion [27.52552274944687]
We introduce a novel two-stage, degradation-aware framework that enhances the diffusion model's ability to recognize content and degradation in low-resolution images.
In the first stage, we employ unsupervised contrastive learning to obtain representations of image degradations.
In the second stage, we integrate a degradation-aware module into a simplified ControlNet, enabling flexible adaptation to various degradations.
arXiv Detail & Related papers (2024-03-31T12:07:04Z)
- LLDiffusion: Learning Degradation Representations in Diffusion Models for Low-Light Image Enhancement [118.83316133601319]
Current deep learning methods for low-light image enhancement (LLIE) typically rely on pixel-wise mapping learned from paired data.
We propose a degradation-aware learning scheme for LLIE using diffusion models, which effectively integrates degradation and image priors into the diffusion process.
arXiv Detail & Related papers (2023-07-27T07:22:51Z)
- Low-Light Image Enhancement with Wavelet-based Diffusion Models [50.632343822790006]
Diffusion models have achieved promising results in image restoration tasks, yet they suffer from time-consuming inference, excessive computational resource consumption, and unstable restoration.
We propose a robust and efficient Diffusion-based Low-Light image enhancement approach, dubbed DiffLL.
arXiv Detail & Related papers (2023-06-01T03:08:28Z)
- Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced for image deblurring and have exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff) for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z)
- Implicit Diffusion Models for Continuous Super-Resolution [65.45848137914592]
This paper introduces an Implicit Diffusion Model (IDM) for high-fidelity continuous image super-resolution.
IDM integrates an implicit neural representation and a denoising diffusion model in a unified end-to-end framework.
The scaling factor regulates the resolution and accordingly modulates the proportion of the LR information and generated features in the final output.
arXiv Detail & Related papers (2023-03-29T07:02:20Z)