SinDiffusion: Learning a Diffusion Model from a Single Natural Image
- URL: http://arxiv.org/abs/2211.12445v1
- Date: Tue, 22 Nov 2022 18:00:03 GMT
- Title: SinDiffusion: Learning a Diffusion Model from a Single Natural Image
- Authors: Weilun Wang, Jianmin Bao, Wengang Zhou, Dongdong Chen, Dong Chen, Lu
Yuan, Houqiang Li
- Abstract summary: We present SinDiffusion, leveraging denoising diffusion models to capture the internal distribution of patches from a single natural image.
It is based on two core designs. First, SinDiffusion is trained with a single model at a single scale instead of multiple models with progressive growing of scales.
Second, we identify that a patch-level receptive field of the diffusion network is crucial and effective for capturing the image's patch statistics.
- Score: 159.4285444680301
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present SinDiffusion, which leverages denoising diffusion
models to capture the internal distribution of patches from a single natural
image. SinDiffusion significantly improves the quality and diversity of
generated samples compared with existing GAN-based approaches. It is based on
two core designs. First, SinDiffusion is trained with a single model at a
single scale, rather than with the multiple models and progressive growing of
scales that are the default setting in prior work. This avoids the
accumulation of errors that causes characteristic artifacts in generated
results. Second, we identify that a patch-level receptive field of the
diffusion network is crucial and effective for capturing the image's patch
statistics; we therefore redesign the network structure of the diffusion
model. Coupling these two designs enables us to generate photorealistic and
diverse images from a single image. Furthermore, owing to the inherent
capabilities of diffusion models, SinDiffusion can be applied to various
tasks such as text-guided image generation and image outpainting. Extensive
experiments on a wide range of images demonstrate the superiority of our
proposed method for modeling the patch distribution.
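As a concrete illustration, here is a minimal PyTorch sketch of the single-model, single-scale training idea, assuming a standard DDPM noise-prediction objective. The shallow, fully convolutional `PatchDenoiser` (few layers, no attention, hence a patch-level receptive field) is a stand-in for the paper's redesigned network, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

class PatchDenoiser(nn.Module):
    """Shallow, fully convolutional: a small receptive field keeps the model
    focused on patch statistics instead of the global image layout."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )
    def forward(self, x, t):
        # Crude timestep conditioning for brevity: broadcast-add t/T.
        return self.net(x + (t.float() / T).view(-1, 1, 1, 1))

def train_step(model, opt, x0):
    """One DDPM training step on the single image x0, shape [1, 3, H, W]."""
    t = torch.randint(0, T, (x0.shape[0],))
    noise = torch.randn_like(x0)
    ab = alphas_bar[t].view(-1, 1, 1, 1)
    xt = ab.sqrt() * x0 + (1 - ab).sqrt() * noise  # forward diffusion q(x_t|x_0)
    loss = F.mse_loss(model(xt, t), noise)         # predict the added noise
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

Because the receptive field covers only patches, repeated `train_step` calls on the one training image fit its patch distribution rather than memorizing its global layout.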
Related papers
- DiffMorpher: Unleashing the Capability of Diffusion Models for Image
Morphing [28.593023489682654]
We present DiffMorpher, the first approach enabling smooth and natural image morphing using diffusion models.
Our key idea is to capture the semantics of the two images by fitting a LoRA to each, and to interpolate between both the LoRA parameters and the latent noises to ensure a smooth semantic transition.
In addition, we propose an attention interpolation and injection technique and a new sampling schedule to further enhance the smoothness between consecutive images.
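A minimal sketch of the interpolation step, assuming two LoRA parameter dicts (one fitted per image) and two inverted latent noises are already available; spherical interpolation for latents and per-key linear interpolation for LoRA weights are common choices here, and the paper's attention interpolation, injection, and sampling schedule are omitted.

```python
import torch

def slerp(z0, z1, alpha, eps=1e-8):
    """Spherical interpolation, the usual choice for Gaussian latents."""
    z0f, z1f = z0.flatten(), z1.flatten()
    cos = torch.dot(z0f, z1f) / (z0f.norm() * z1f.norm() + eps)
    omega = torch.acos(torch.clamp(cos, -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:                      # nearly parallel: fall back to lerp
        return (1 - alpha) * z0 + alpha * z1
    return (torch.sin((1 - alpha) * omega) / so) * z0 \
         + (torch.sin(alpha * omega) / so) * z1

def lerp_lora(lora_a, lora_b, alpha):
    """Key-by-key linear interpolation of two LoRA parameter dicts."""
    return {k: (1 - alpha) * lora_a[k] + alpha * lora_b[k] for k in lora_a}

# For each alpha in [0, 1]: load lerp_lora(lora_a, lora_b, alpha) into the
# base model, then denoise from slerp(noise_a, noise_b, alpha) for one frame.
```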
arXiv Detail & Related papers (2023-12-12T16:28:08Z)
- DiffDis: Empowering Generative Diffusion Model with Cross-Modal Discrimination Capability [75.9781362556431]
We propose DiffDis to unify the cross-modal generative and discriminative pretraining into one single framework under the diffusion process.
We show that DiffDis outperforms single-task models on both the image generation and the image-text discriminative tasks.
arXiv Detail & Related papers (2023-08-18T05:03:48Z)
- DIRE for Diffusion-Generated Image Detection [128.95822613047298]
We propose a novel representation called DIffusion Reconstruction Error (DIRE).
DIRE measures the error between an input image and its reconstruction by a pre-trained diffusion model.
Diffusion-generated images are reconstructed far more faithfully than real ones, so DIRE can serve as a bridge for distinguishing generated from real images.
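A minimal sketch of the DIRE computation, assuming `invert` (e.g., DDIM inversion to noise) and `reconstruct` (denoising back) are placeholder wrappers around the same pre-trained diffusion model, not APIs from the paper's code:

```python
import torch

def dire(x, invert, reconstruct):
    """Per-pixel reconstruction error: diffusion-generated images are
    reconstructed far better than real ones, so the error separates them."""
    with torch.no_grad():
        x_rec = reconstruct(invert(x))   # image -> noise -> image
    return (x - x_rec).abs()             # DIRE map; feed to a binary classifier
```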
arXiv Detail & Related papers (2023-03-16T13:15:03Z)
- Diffusion Models Generate Images Like Painters: an Analytical Theory of Outline First, Details Later [1.8416014644193066]
We observe that the reverse diffusion process that underlies image generation has the following property: individual trajectories tend to be low-dimensional and resemble 2D rotations.
We find that this analytical solution accurately describes the initial phase of image generation for pretrained models.
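One hypothetical way to check the low-dimensionality observation numerically (not taken from the paper): stack the states of a reverse-diffusion trajectory and measure how much variance its top two principal components explain.

```python
import numpy as np

def planar_variance(trajectory):
    """trajectory: array of shape [T, D], flattened states x_T ... x_0.
    Returns the variance fraction captured by the top two principal
    components; values near 1.0 mean the path is nearly planar."""
    X = trajectory - trajectory.mean(axis=0, keepdims=True)
    s = np.linalg.svd(X, compute_uv=False)  # singular values
    var = s**2 / (s**2).sum()
    return var[:2].sum()
```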
arXiv Detail & Related papers (2023-03-04T20:08:57Z)
- ADIR: Adaptive Diffusion for Image Reconstruction [46.838084286784195]
We propose a conditional sampling scheme that exploits the prior learned by diffusion models.
We then combine it with a novel approach for adapting pretrained diffusion denoising networks to their input.
We show that our proposed 'adaptive diffusion for image reconstruction' (ADIR) approach achieves a significant improvement in super-resolution, deblurring, and text-based editing tasks.
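A rough sketch of the adaptation idea, assuming a pretrained noise-prediction model and standard DDPM notation; ADIR's actual scheme adapts on images related to the input rather than the measurement alone, and then runs conditional sampling with the degradation operator, both of which are omitted here.

```python
import torch
import torch.nn.functional as F

def adapt_to_input(model, y, alphas_bar, steps=100, lr=1e-5):
    """Test-time fine-tuning: nudge the pretrained denoiser toward the
    measurement y (shape [1, 3, H, W]) before running reconstruction."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    T = alphas_bar.shape[0]
    for _ in range(steps):
        t = torch.randint(0, T, (y.shape[0],))
        ab = alphas_bar[t].view(-1, 1, 1, 1)
        noise = torch.randn_like(y)
        yt = ab.sqrt() * y + (1 - ab).sqrt() * noise  # noise the input
        loss = F.mse_loss(model(yt, t), noise)        # standard DDPM loss on y
        opt.zero_grad(); loss.backward(); opt.step()
    return model
```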
arXiv Detail & Related papers (2022-12-06T18:39:58Z)
- Unifying Diffusion Models' Latent Space, with Applications to CycleDiffusion and Guidance [95.12230117950232]
We show that a common latent space emerges from two diffusion models trained independently on related domains.
Applying CycleDiffusion to text-to-image diffusion models, we show that large-scale text-to-image diffusion models can be used as zero-shot image-to-image editors.
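A minimal sketch of the zero-shot image-to-image recipe this enables, assuming `ddim_invert` and `ddim_sample` are placeholder wrappers around one pretrained text-to-image diffusion model; the shared latent is the inverted noise (collapsed to a single tensor here for brevity):

```python
import torch

def zero_shot_edit(x, src_prompt, tgt_prompt, ddim_invert, ddim_sample):
    """Encode the image into the diffusion latent under the source prompt,
    then decode the same latent under the target prompt."""
    with torch.no_grad():
        z = ddim_invert(x, prompt=src_prompt)       # image -> latent noise
        return ddim_sample(z, prompt=tgt_prompt)    # latent noise -> edit
```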
arXiv Detail & Related papers (2022-10-11T15:53:52Z)
- On Distillation of Guided Diffusion Models [94.95228078141626]
We propose an approach to distilling classifier-free guided diffusion models into models that are fast to sample from.
For standard diffusion models trained in pixel space, our approach is able to generate images visually comparable to those of the original model.
For diffusion models trained in latent space (e.g., Stable Diffusion), our approach is able to generate high-fidelity images using as few as 1 to 4 denoising steps.
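A minimal sketch of the distillation target, assuming placeholder `teacher`/`student` noise-prediction callables: the student learns to reproduce the teacher's classifier-free-guided output in one forward pass, so guidance no longer costs two evaluations per step. The paper's progressive step-count halving is omitted.

```python
import torch
import torch.nn.functional as F

def teacher_cfg(teacher, xt, t, cond, w=3.0):
    """Classifier-free guidance: two teacher passes per step."""
    eps_c = teacher(xt, t, cond)   # conditional prediction
    eps_u = teacher(xt, t, None)   # unconditional prediction
    return eps_u + w * (eps_c - eps_u)

def distill_step(student, teacher, opt, xt, t, cond, w=3.0):
    """The student matches the guided output with a single pass."""
    with torch.no_grad():
        target = teacher_cfg(teacher, xt, t, cond, w)
    loss = F.mse_loss(student(xt, t, cond), target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```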
arXiv Detail & Related papers (2022-10-06T18:03:56Z)
- Diffusion Models in Vision: A Survey [80.82832715884597]
A diffusion model is a deep generative model that is based on two stages, a forward diffusion stage and a reverse diffusion stage.
Diffusion models are widely appreciated for the quality and diversity of the generated samples, despite their known computational burdens.
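In standard DDPM notation (an assumption, not notation taken from the survey itself), the two stages can be written as:

```latex
% Forward stage: the data x_0 is gradually noised with schedule \bar\alpha_t.
% Reverse stage: a learned Gaussian denoises step by step.
\[
q(x_t \mid x_0) = \mathcal{N}\!\big(x_t;\ \sqrt{\bar\alpha_t}\, x_0,\ (1 - \bar\alpha_t)\mathbf{I}\big),
\qquad
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\!\big(x_{t-1};\ \mu_\theta(x_t, t),\ \sigma_t^2 \mathbf{I}\big).
\]
```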
arXiv Detail & Related papers (2022-09-10T22:00:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.