Blind Motion Deblurring through SinGAN Architecture
- URL: http://arxiv.org/abs/2011.03705v1
- Date: Sat, 7 Nov 2020 06:09:16 GMT
- Title: Blind Motion Deblurring through SinGAN Architecture
- Authors: Harshil Jain, Rohit Patil, Indra Deep Mastan, and Shanmuganathan Raman
- Abstract summary: Blind motion deblurring involves reconstructing a sharp image from an observation that is blurry.
SinGAN is a generative model that is unconditional and could be learned from a single natural image.
- Score: 21.104218472462907
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Blind motion deblurring involves reconstructing a sharp image from a
blurry observation. It is an ill-posed problem that belongs to the category of
image restoration problems. Training-data-based methods for image deblurring
mostly involve models that take a long time to train. These models are also
data-hungry, i.e., they require a large amount of training data to generate
satisfactory results. Recently, various image feature learning methods have
been developed that remove the need for training data while still performing
image restoration and image synthesis, e.g., DIP, InGAN, and SinGAN. SinGAN is
an unconditional generative model that can be learned from a single natural
image. The model primarily captures the internal distribution of the patches
present in the image and is capable of generating diverse samples while
preserving the visual content of the image. Images generated from the model
closely resemble real natural images. In this paper, we focus on blind motion
deblurring through the SinGAN architecture.
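The standard forward model behind blind motion deblurring treats the blurry observation as a sharp image convolved with an unknown motion kernel plus noise; the task is ill-posed because both the sharp image and the kernel must be recovered. The sketch below simulates this forward model only (it is not the paper's SinGAN method); the helper names `motion_blur_kernel` and `blur` are hypothetical illustrations.

```python
import numpy as np

def motion_blur_kernel(length=9, angle_deg=0.0):
    """Build a normalized linear motion-blur kernel (hypothetical helper)."""
    k = np.zeros((length, length))
    center = length // 2
    angle = np.deg2rad(angle_deg)
    dx, dy = np.cos(angle), np.sin(angle)
    # Draw a line of unit weights through the kernel center.
    for t in np.linspace(-center, center, length):
        x = int(round(center + t * dx))
        y = int(round(center + t * dy))
        if 0 <= x < length and 0 <= y < length:
            k[y, x] = 1.0
    return k / k.sum()  # normalize so the kernel preserves mean intensity

def blur(image, kernel, noise_std=0.0, seed=0):
    """Forward model: blurry = kernel (*) sharp + noise, by direct 2-D convolution."""
    kh, kw = kernel.shape
    padded = np.pad(image, ((kh // 2,), (kw // 2,)), mode="edge")
    out = np.zeros_like(image, dtype=float)
    flipped = kernel[::-1, ::-1]  # flip for true convolution (vs. correlation)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    if noise_std > 0:
        out += np.random.default_rng(seed).normal(0.0, noise_std, out.shape)
    return out
```

Blurring a delta image with this model spreads its energy along the motion direction while preserving total intensity, which is why recovering the sharp image requires a prior, such as the internal patch statistics that SinGAN learns from a single image.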
Related papers
- Photo-Realistic Image Restoration in the Wild with Controlled Vision-Language Models [14.25759541950917]
This work leverages a capable vision-language model and a synthetic degradation pipeline to learn image restoration in the wild (wild IR).
Our base diffusion model is the image restoration SDE (IR-SDE).
arXiv Detail & Related papers (2024-04-15T12:34:21Z) - Unlocking Pre-trained Image Backbones for Semantic Image Synthesis [29.688029979801577]
We propose a new class of GAN discriminators for semantic image synthesis that enables generating highly realistic images.
Our model, which we dub DP-SIMS, achieves state-of-the-art results in terms of image quality and consistency with the input label maps on ADE-20K, COCO-Stuff, and Cityscapes.
arXiv Detail & Related papers (2023-12-20T09:39:19Z) - Person Image Synthesis via Denoising Diffusion Model [116.34633988927429]
We show how denoising diffusion models can be applied for high-fidelity person image synthesis.
Our results on two large-scale benchmarks and a user study demonstrate the photorealism of our proposed approach under challenging scenarios.
arXiv Detail & Related papers (2022-11-22T18:59:50Z) - A comparison of different atmospheric turbulence simulation methods for image restoration [64.24948495708337]
Atmospheric turbulence deteriorates the quality of images captured by long-range imaging systems.
Various deep learning-based atmospheric turbulence mitigation methods have been proposed in the literature.
We systematically evaluate the effectiveness of various turbulence simulation methods on image restoration.
arXiv Detail & Related papers (2022-04-19T16:21:36Z) - InvGAN: Invertible GANs [88.58338626299837]
InvGAN, short for Invertible GAN, successfully embeds real images to the latent space of a high quality generative model.
This allows us to perform image inpainting, merging, and online data augmentation.
arXiv Detail & Related papers (2021-12-08T21:39:00Z) - Meta Internal Learning [88.68276505511922]
Internal learning for single-image generation is a framework in which a generator is trained to produce novel images based on a single image.
We propose a meta-learning approach that enables training over a collection of images, in order to model the internal statistics of the sample image more effectively.
Our results show that the models obtained are as suitable as single-image GANs for many common image applications.
arXiv Detail & Related papers (2021-10-06T16:27:38Z) - Image-to-image Transformation with Auxiliary Condition [0.0]
We propose to introduce the label information of subjects, e.g., the pose and type of objects, into the training of CycleGAN, leading it to obtain label-wise transformation models.
We evaluate our proposed method called Label-CycleGAN, through experiments on the digit image transformation from SVHN to MNIST and the surveillance camera image transformation from simulated to real images.
arXiv Detail & Related papers (2021-06-25T15:33:11Z) - Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z) - Self-Supervised Linear Motion Deblurring [112.75317069916579]
Deep convolutional neural networks are state-of-the-art for image deblurring.
We present a differentiable reblur model for self-supervised motion deblurring.
Our experiments demonstrate that self-supervised single-image deblurring is feasible.
arXiv Detail & Related papers (2020-02-10T20:15:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.