Spatial-Frequency U-Net for Denoising Diffusion Probabilistic Models
- URL: http://arxiv.org/abs/2307.14648v1
- Date: Thu, 27 Jul 2023 06:53:16 GMT
- Title: Spatial-Frequency U-Net for Denoising Diffusion Probabilistic Models
- Authors: Xin Yuan, Linjie Li, Jianfeng Wang, Zhengyuan Yang, Kevin Lin, Zicheng
Liu and Lijuan Wang
- Abstract summary: We study the denoising diffusion probabilistic model (DDPM) in wavelet space, instead of pixel space, for visual synthesis.
By explicitly modeling the wavelet signals, we find our model is able to generate images with higher quality on several datasets.
- Score: 89.76587063609806
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the denoising diffusion probabilistic model (DDPM) in
wavelet space, instead of pixel space, for visual synthesis. Considering the
wavelet transform represents the image in spatial and frequency domains, we
carefully design a novel architecture SFUNet to effectively capture the
correlation for both domains. Specifically, in the standard denoising U-Net for
pixel data, we supplement the 2D convolutions and spatial-only attention layers
with our spatial frequency-aware convolution and attention modules to jointly
model the complementary information from spatial and frequency domains in
wavelet data. Our new architecture can be used as a drop-in replacement to the
pixel-based network and is compatible with the vanilla DDPM training process.
By explicitly modeling the wavelet signals, we find our model is able to
generate images of higher quality than the pixel-based counterpart on the
CIFAR-10, FFHQ, LSUN-Bedroom, and LSUN-Church datasets.
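As a rough illustration of the wavelet-space setup (not the authors' code), the sketch below applies a single-level Haar DWT to an image batch, stacks the four subbands along the channel axis, and runs a toy "spatial-frequency-aware" block: a 3x3 spatial convolution followed by a 1x1 convolution that mixes the stacked subband channels. Names such as haar_dwt and SFBlock are illustrative assumptions.

```python
import torch
import torch.nn as nn

def haar_dwt(x):
    """Single-level Haar DWT: (N, C, H, W) -> (N, 4*C, H/2, W/2), subbands stacked as LL|LH|HL|HH."""
    a, b = x[..., 0::2, 0::2], x[..., 0::2, 1::2]
    c, d = x[..., 1::2, 0::2], x[..., 1::2, 1::2]
    ll, lh = (a + b + c + d) / 2, (a - b + c - d) / 2
    hl, hh = (a + b - c - d) / 2, (a - b - c + d) / 2
    return torch.cat([ll, lh, hl, hh], dim=1)

class SFBlock(nn.Module):
    """Toy spatial-frequency block (illustrative, not the paper's module):
    a 3x3 conv mixes spatially, then a 1x1 conv mixes across the stacked subband channels."""
    def __init__(self, channels):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, 3, padding=1)
        self.subband = nn.Conv2d(channels, channels, 1)
        self.act = nn.SiLU()

    def forward(self, x):
        return x + self.act(self.subband(self.spatial(x)))

img = torch.randn(2, 3, 32, 32)     # e.g. CIFAR-10-sized input
coeffs = haar_dwt(img)              # (2, 12, 16, 16) wavelet-space signal
print(SFBlock(12)(coeffs).shape)    # torch.Size([2, 12, 16, 16])
```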
Related papers
- MDNF: Multi-Diffusion-Nets for Neural Fields on Meshes [5.284425534494986]
We propose a novel framework for representing neural fields on triangle meshes that is multi-resolution across both spatial and frequency domains.
Inspired by the Neural Fourier Filter Bank (NFFB), our architecture decomposes the spatial and frequency domains by associating finer spatial resolution levels with higher frequency bands.
We demonstrate the effectiveness of our approach through its application to diverse neural fields, such as synthetic RGB functions, UV texture coordinates, and normals.
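A heavily simplified sketch of the "finer level, higher frequency band" idea: several small MLPs, each fed sinusoidal features at a different frequency scale, whose outputs are summed. It ignores the triangle-mesh machinery of MDNF entirely and treats the field as a function of 3D points; class names and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class BandLevel(nn.Module):
    """One resolution level: random Fourier features at a given frequency scale + a small MLP."""
    def __init__(self, in_dim, hidden, out_dim, freq):
        super().__init__()
        self.register_buffer("B", torch.randn(in_dim, hidden // 2) * freq)
        self.mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))

    def forward(self, x):                              # x: (N, in_dim) query points
        feats = torch.cat([torch.sin(x @ self.B), torch.cos(x @ self.B)], dim=-1)
        return self.mlp(feats)

class MultiBandField(nn.Module):
    """Sum of per-band predictions; level i uses frequencies scaled by 2**i."""
    def __init__(self, in_dim=3, out_dim=3, levels=4):
        super().__init__()
        self.levels = nn.ModuleList(
            BandLevel(in_dim, 64, out_dim, freq=2.0 ** i) for i in range(levels))

    def forward(self, x):
        return sum(level(x) for level in self.levels)

field = MultiBandField()
print(field(torch.rand(1024, 3)).shape)                # torch.Size([1024, 3])
```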
arXiv Detail & Related papers (2024-09-04T19:08:13Z)
- Hybrid Convolutional and Attention Network for Hyperspectral Image Denoising [54.110544509099526]
Hyperspectral image (HSI) denoising is critical for the effective analysis and interpretation of hyperspectral data.
We propose a hybrid convolution and attention network (HCANet) to enhance HSI denoising.
Experimental results on mainstream HSI datasets demonstrate the rationality and effectiveness of the proposed HCANet.
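A minimal sketch of the generic hybrid convolution-plus-attention pattern (not HCANet itself): a convolutional branch models local spatial structure while a self-attention branch over per-pixel spectral tokens models global dependencies, and the two are fused residually. All shapes and names are assumptions.

```python
import torch
import torch.nn as nn

class HybridConvAttnBlock(nn.Module):
    """Illustrative hybrid block: local conv branch + global self-attention over pixel tokens."""
    def __init__(self, bands, heads=4):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(bands, bands, 3, padding=1),
                                  nn.GELU(),
                                  nn.Conv2d(bands, bands, 3, padding=1))
        self.attn = nn.MultiheadAttention(bands, heads, batch_first=True)
        self.norm = nn.LayerNorm(bands)

    def forward(self, x):                                    # x: (N, bands, H, W) hyperspectral cube
        local = self.conv(x)                                 # local spatial structure
        n, c, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))     # (N, H*W, bands), one token per pixel
        glob, _ = self.attn(tokens, tokens, tokens)          # global dependencies
        glob = glob.transpose(1, 2).reshape(n, c, h, w)
        return x + local + glob                              # residual fusion of both branches

block = HybridConvAttnBlock(bands=32)
print(block(torch.randn(1, 32, 16, 16)).shape)               # torch.Size([1, 32, 16, 16])
```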
arXiv Detail & Related papers (2024-03-15T07:18:43Z)
- SpACNN-LDVAE: Spatial Attention Convolutional Latent Dirichlet Variational Autoencoder for Hyperspectral Pixel Unmixing [1.8024397171920885]
This work extends the Latent Dirichlet Variational Autoencoder (LDVAE) pixel unmixing scheme by taking into account local spatial context.
The proposed method uses an isotropic convolutional neural network with spatial attention to encode pixels as a Dirichlet distribution over endmembers.
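A rough sketch of the general scheme, not the paper's implementation: a small convolutional encoder with a sigmoid spatial-attention map produces Dirichlet concentration parameters per pixel, abundances are drawn with the reparameterization trick, and the pixel is reconstructed as a convex combination of learned endmember spectra. All layer sizes are assumptions.

```python
import torch
import torch.nn as nn
from torch.distributions import Dirichlet

class DirichletUnmixer(nn.Module):
    """Illustrative encoder: attention-weighted features -> Dirichlet concentrations over endmembers."""
    def __init__(self, bands=64, endmembers=5):
        super().__init__()
        self.encoder = nn.Conv2d(bands, 32, 3, padding=1)
        self.attn = nn.Sequential(nn.Conv2d(32, 1, 1), nn.Sigmoid())   # spatial attention map
        self.to_alpha = nn.Conv2d(32, endmembers, 1)
        self.endmembers = nn.Parameter(torch.rand(endmembers, bands))  # learned spectral signatures

    def forward(self, x):                                  # x: (N, bands, H, W)
        h = torch.relu(self.encoder(x))
        h = h * self.attn(h)                               # re-weight features by spatial attention
        alpha = nn.functional.softplus(self.to_alpha(h)) + 1e-3        # concentrations > 0
        abund = Dirichlet(alpha.permute(0, 2, 3, 1)).rsample()         # (N, H, W, endmembers)
        recon = abund @ self.endmembers                    # convex combination of endmember spectra
        return recon.permute(0, 3, 1, 2), abund

recon, abund = DirichletUnmixer()(torch.rand(1, 64, 8, 8))
print(recon.shape, float(abund.sum(-1).mean()))            # abundances sum to 1 per pixel
```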
arXiv Detail & Related papers (2023-11-17T18:45:00Z)
- Harnessing the Spatial-Temporal Attention of Diffusion Models for High-Fidelity Text-to-Image Synthesis [59.10787643285506]
Diffusion-based models have achieved state-of-the-art performance on text-to-image synthesis tasks.
One critical limitation of these models is the low fidelity of generated images with respect to the text description.
We propose a new text-to-image algorithm that adds explicit control over spatial-temporal cross-attention in diffusion models.
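A simplified, generic illustration of steering cross-attention in a text-to-image diffusion model (not the paper's spatial-temporal algorithm): an additive per-token spatial bias is injected into the cross-attention logits so that a chosen text token dominates a chosen image region. All names and shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def controlled_cross_attention(q, k, v, token_bias):
    """q: (N, P, d) image-patch queries; k, v: (N, T, d) text-token keys/values;
    token_bias: (N, P, T) additive logit bias acting as the explicit control signal."""
    logits = q @ k.transpose(1, 2) / q.shape[-1] ** 0.5   # (N, P, T)
    attn = F.softmax(logits + token_bias, dim=-1)
    return attn @ v                                       # (N, P, d)

# Toy usage: bias text token 2 toward the first half of an 8x8 latent's patches.
N, P, T, d = 1, 64, 6, 32
q, k, v = torch.randn(N, P, d), torch.randn(N, T, d), torch.randn(N, T, d)
bias = torch.zeros(N, P, T)
bias[:, :32, 2] = 4.0                                     # hypothetical control signal
print(controlled_cross_attention(q, k, v, bias).shape)    # torch.Size([1, 64, 32])
```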
arXiv Detail & Related papers (2023-04-07T23:49:34Z)
- Efficient Frequency Domain-based Transformers for High-Quality Image Deblurring [39.720032882926176]
We present an effective and efficient method that explores the properties of Transformers in the frequency domain for high-quality image deblurring.
We formulate the proposed frequency domain-based self-attention solver (FSAS) and discriminative frequency domain-based feed-forward network (DFFN) into an asymmetrical network based on an encoder-decoder architecture.
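A rough sketch of the frequency-domain self-attention idea only: the query-key correlation is computed as an element-wise product in the Fourier domain (via the convolution theorem) rather than as an explicit attention matrix, and the result modulates the values. This is not the paper's FSAS implementation; every layer choice is an assumption.

```python
import torch
import torch.nn as nn

class FreqSelfAttention(nn.Module):
    """Illustrative frequency-domain attention: query-key correlation via FFT products."""
    def __init__(self, channels):
        super().__init__()
        self.to_q = nn.Conv2d(channels, channels, 1)
        self.to_k = nn.Conv2d(channels, channels, 1)
        self.to_v = nn.Conv2d(channels, channels, 1)
        self.proj = nn.Conv2d(channels, channels, 1)

    def forward(self, x):                                    # x: (N, C, H, W)
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        corr = torch.fft.irfft2(torch.fft.rfft2(q) * torch.conj(torch.fft.rfft2(k)),
                                s=x.shape[-2:])              # correlation via the convolution theorem
        weight = torch.softmax(corr.flatten(2), dim=-1).view_as(corr)
        return self.proj(weight * v) + x                     # modulate values, residual connection

layer = FreqSelfAttention(16)
print(layer(torch.randn(1, 16, 64, 64)).shape)               # torch.Size([1, 16, 64, 64])
```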
arXiv Detail & Related papers (2022-11-22T13:08:03Z)
- DPFNet: A Dual-branch Dilated Network with Phase-aware Fourier Convolution for Low-light Image Enhancement [1.2645663389012574]
Low-light image enhancement is a classical computer vision problem aiming to recover normal-exposure images from low-light images.
Convolutional neural networks commonly used in this field are good at sampling low-frequency local structural features in the spatial domain, but tend to miss high-frequency texture details.
We propose a novel module using the Fourier coefficients, which can recover high-quality texture details under the constraint of semantics in the frequency phase.
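A minimal sketch of a phase-aware Fourier step in the same spirit (not DPFNet's module): features are transformed with an FFT, the amplitude is processed by a small convolution while the phase, which carries structural and semantic information, is left intact, and the result is transformed back. Layer choices are assumptions.

```python
import torch
import torch.nn as nn

class PhaseAwareFourierConv(nn.Module):
    """Illustrative Fourier step: enhance the amplitude spectrum, keep the phase untouched."""
    def __init__(self, channels):
        super().__init__()
        self.amp_conv = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.ReLU(),
                                      nn.Conv2d(channels, channels, 1))

    def forward(self, x):                                    # x: (N, C, H, W) features
        spec = torch.fft.rfft2(x, norm="ortho")
        amp, phase = torch.abs(spec), torch.angle(spec)
        spec = torch.polar(self.amp_conv(amp), phase)        # new amplitude, original phase
        return torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")

layer = PhaseAwareFourierConv(8)
print(layer(torch.randn(2, 8, 32, 32)).shape)                # torch.Size([2, 8, 32, 32])
```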
arXiv Detail & Related papers (2022-09-16T13:56:09Z)
- FreqNet: A Frequency-domain Image Super-Resolution Network with Discrete Cosine Transform [16.439669339293747]
Single image super-resolution (SISR) is an ill-posed problem that aims to obtain high-resolution (HR) output from low-resolution (LR) input.
Despite high peak signal-to-noise ratio (PSNR) results, it is difficult to determine whether the model correctly adds desired high-frequency details.
We propose FreqNet, an intuitive pipeline from the frequency domain perspective, to solve this problem.
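As a generic illustration of the frequency-domain perspective (not FreqNet itself), the sketch below builds an orthonormal DCT-II basis and compares the DCT spectra of a prediction and its ground truth, so errors in specific high-frequency bins are exposed rather than averaged into a single PSNR-style number. The helper names are hypothetical.

```python
import math
import torch

def dct_matrix(n):
    """Orthonormal DCT-II basis of size (n, n)."""
    k = torch.arange(n, dtype=torch.float32).unsqueeze(1)
    i = torch.arange(n, dtype=torch.float32).unsqueeze(0)
    m = torch.cos(math.pi * (i + 0.5) * k / n) * math.sqrt(2.0 / n)
    m[0] /= math.sqrt(2.0)
    return m

def dct2(x):                                   # x: (..., H, W) with H == W assumed
    c = dct_matrix(x.shape[-1]).to(x)
    return c @ x @ c.T

def freq_loss(pred, target):
    """L1 distance between the DCT spectra of prediction and target."""
    return (dct2(pred) - dct2(target)).abs().mean()

hr = torch.rand(4, 3, 32, 32)                  # ground-truth high-resolution patches
sr = hr + 0.05 * torch.randn_like(hr)          # stand-in for a super-resolution network's output
print(freq_loss(sr, hr).item())
```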
arXiv Detail & Related papers (2021-11-21T11:49:12Z)
- Wavelet-Based Network For High Dynamic Range Imaging [64.66969585951207]
Existing methods, such as optical flow based and end-to-end deep learning based solutions, are error-prone either in detail restoration or ghosting artifacts removal.
In this work, we propose a novel frequency-guided end-to-end deep neural network (FNet) to conduct HDR fusion in the frequency domain, and the Discrete Wavelet Transform (DWT) is used to decompose inputs into different frequency bands.
The low-frequency signals are used to avoid specific ghosting artifacts, while the high-frequency signals are used for preserving details.
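A hand-crafted toy version of this frequency-guided fusion (not the learned FNet): two exposures are decomposed with a Haar DWT, the low-frequency bands are blended with a smooth weight where ghosting tends to appear, and for each high-frequency band the coefficient with larger magnitude is kept before inverting the transform. The blending rule is an assumption for illustration.

```python
import torch

def haar(x):                                   # (N, C, H, W) -> (LL, LH, HL, HH)
    a, b = x[..., 0::2, 0::2], x[..., 0::2, 1::2]
    c, d = x[..., 1::2, 0::2], x[..., 1::2, 1::2]
    return (a + b + c + d) / 2, (a - b + c - d) / 2, (a + b - c - d) / 2, (a - b - c + d) / 2

def ihaar(ll, lh, hl, hh):                     # exact inverse of haar()
    a, b = (ll + lh + hl + hh) / 2, (ll - lh + hl - hh) / 2
    c, d = (ll + lh - hl - hh) / 2, (ll - lh - hl + hh) / 2
    out = ll.new_zeros(*ll.shape[:-2], ll.shape[-2] * 2, ll.shape[-1] * 2)
    out[..., 0::2, 0::2], out[..., 0::2, 1::2] = a, b
    out[..., 1::2, 0::2], out[..., 1::2, 1::2] = c, d
    return out

def fuse_exposures(exp0, exp1, w=0.5):
    s0, s1 = haar(exp0), haar(exp1)
    ll = w * s0[0] + (1 - w) * s1[0]                        # blend low frequencies (ghost-prone)
    highs = [torch.where(h0.abs() > h1.abs(), h0, h1)       # keep the stronger detail coefficient
             for h0, h1 in zip(s0[1:], s1[1:])]
    return ihaar(ll, *highs)

print(fuse_exposures(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)).shape)
```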
arXiv Detail & Related papers (2021-08-03T12:26:33Z)
- Learning Spatial and Spatio-Temporal Pixel Aggregations for Image and Video Denoising [104.59305271099967]
We present a pixel aggregation network and learn the pixel sampling and averaging strategies for image denoising.
We develop a pixel aggregation network for video denoising to sample pixels across the spatial-temporal space.
Our method is able to solve the misalignment issues caused by large motion in dynamic scenes.
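A compact sketch of the pixel-aggregation idea under simplifying assumptions: a small network predicts, for every output pixel, normalized weights over a fixed K x K neighbourhood of the noisy input, and the denoised value is the weighted average of those pixels. The paper additionally learns where to sample and extends the scheme across frames for video, which this sketch omits.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelAggregation(nn.Module):
    """Illustrative single-image variant: predict normalized weights over a fixed K x K grid."""
    def __init__(self, channels=3, k=5):
        super().__init__()
        self.k = k
        self.weight_net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, k * k, 3, padding=1))

    def forward(self, noisy):                                 # (N, C, H, W)
        n, c, h, w = noisy.shape
        weights = F.softmax(self.weight_net(noisy), dim=1)    # (N, K*K, H, W), sums to 1 per pixel
        patches = F.unfold(noisy, self.k, padding=self.k // 2)
        patches = patches.view(n, c, self.k * self.k, h, w)   # K*K neighbours of every pixel
        return (patches * weights.unsqueeze(1)).sum(dim=2)    # learned weighted average

net = PixelAggregation()
print(net(torch.rand(1, 3, 48, 48)).shape)                    # torch.Size([1, 3, 48, 48])
```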
arXiv Detail & Related papers (2021-01-26T13:00:46Z)
- Wavelet Integrated CNNs for Noise-Robust Image Classification [51.18193090255933]
We enhance CNNs by replacing max-pooling, strided-convolution, and average-pooling with the Discrete Wavelet Transform (DWT).
WaveCNets, the wavelet integrated versions of VGG, ResNets, and DenseNet, achieve higher accuracy and better noise-robustness than their vanilla versions.
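A minimal sketch of the downsampling swap (wavelet choice simplified to Haar): the 2x2 max-pooling step is replaced by keeping only the low-frequency LL subband, so high-frequency noise is discarded during downsampling. The module name is illustrative.

```python
import torch
import torch.nn as nn

class HaarDownsample(nn.Module):
    """Drop-in replacement for 2x2 max-pooling that keeps only the Haar low-pass (LL) band."""
    def forward(self, x):                          # (N, C, H, W) -> (N, C, H/2, W/2)
        a, b = x[..., 0::2, 0::2], x[..., 0::2, 1::2]
        c, d = x[..., 1::2, 0::2], x[..., 1::2, 1::2]
        return (a + b + c + d) / 2                 # LL subband; high-frequency noise is discarded

net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), HaarDownsample())
print(net(torch.rand(1, 3, 32, 32)).shape)          # torch.Size([1, 16, 16, 16])
```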
arXiv Detail & Related papers (2020-05-07T09:10:41Z)