A Generative Adversarial Approach with Residual Learning for Dust and
Scratches Artifacts Removal
- URL: http://arxiv.org/abs/2009.10663v1
- Date: Tue, 22 Sep 2020 16:32:57 GMT
- Title: A Generative Adversarial Approach with Residual Learning for Dust and
Scratches Artifacts Removal
- Authors: Ionuț Mironică
- Abstract summary: We present a GAN-based method that removes dust and scratch artifacts from film scans.
Residual learning is utilized to speed up the training process and to boost the denoising performance.
We significantly outperform state-of-the-art methods and software applications, providing superior results.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Retouching can significantly elevate the visual appeal of photos, but many
casual photographers lack the expertise to retouch in a professional manner.
One particularly challenging task in old photo retouching remains the removal
of dust and scratch artifacts. Traditionally, this task has been performed
manually with specialized image enhancement software, a tedious process that
requires expert knowledge of photo editing applications.
However, recent research utilizing Generative Adversarial Networks (GANs) has
shown good results in various automated image enhancement tasks compared to
traditional methods. This motivated us to explore the use of GANs in the
context of film photo editing. In this paper, we present a GAN-based method
that removes dust and scratch artifacts from film scans. Specifically,
residual learning is utilized both to speed up the training process and to
boost the denoising performance.
An extensive evaluation of our model on a community-provided dataset shows
that it generalizes remarkably well and does not depend on any particular type
of image. Finally, our method significantly outperforms state-of-the-art
methods and software applications, providing superior results.
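The residual-learning idea described in the abstract (predicting the artifact layer and subtracting it from the input, rather than regressing the clean image directly) can be sketched as follows. This is an illustrative example, not the paper's architecture: the `ResidualDenoiser` class, its layer sizes, and the L1 reconstruction loss are assumptions, and the adversarial (GAN) loss the paper also uses is omitted for brevity.

```python
# Minimal sketch of residual learning for dust/scratch removal (assumed
# architecture, not the paper's): the network predicts the artifact
# residual, and the clean image is recovered as input minus residual.
import torch
import torch.nn as nn


class ResidualDenoiser(nn.Module):
    """Hypothetical small CNN that predicts the dust/scratch residual."""

    def __init__(self, channels: int = 3, features: int = 32, depth: int = 4):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.body(x)  # estimate of the artifact layer
        return x - residual      # clean image = input minus residual


# One training step against a paired (damaged, clean) sample:
model = ResidualDenoiser()
damaged = torch.rand(1, 3, 64, 64)  # scan with dust/scratch artifacts
clean = torch.rand(1, 3, 64, 64)    # retouched ground truth
loss = nn.functional.l1_loss(model(damaged), clean)
loss.backward()
```

Because the network only has to model the (usually sparse) artifact layer rather than the full image content, the regression target is simpler, which is what lets residual learning speed up training and improve denoising quality.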
Related papers
- INRetouch: Context Aware Implicit Neural Representation for Photography Retouching [54.17599183365242]
We propose a novel retouch transfer approach that learns from professional edits through before-after image pairs.
We develop a context-aware Implicit Neural Representation that learns to apply edits adaptively based on image content and context.
Our approach not only surpasses existing methods in photo retouching but also enhances performance in related image reconstruction tasks.
arXiv Detail & Related papers (2024-12-05T03:31:48Z)
- PromptFix: You Prompt and We Fix the Photo [84.69812824355269]
Diffusion models equipped with language models demonstrate excellent controllability in image generation tasks.
The lack of diverse instruction-following data hampers the development of models.
We propose PromptFix, a framework that enables diffusion models to follow human instructions.
arXiv Detail & Related papers (2024-05-27T03:13:28Z)
- Learning Subject-Aware Cropping by Outpainting Professional Photos [69.0772948657867]
We propose a weakly-supervised approach to learn what makes a high-quality subject-aware crop from professional stock images.
Our insight is to combine a library of stock images with a modern, pre-trained text-to-image diffusion model.
We are able to automatically generate a large dataset of cropped-uncropped training pairs to train a cropping model.
arXiv Detail & Related papers (2023-12-19T11:57:54Z)
- DiffuseRAW: End-to-End Generative RAW Image Processing for Low-Light Images [5.439020425819001]
We develop a new generative ISP that relies on fine-tuning latent diffusion models on RAW images.
We evaluate our approach on popular end-to-end low-light datasets for which we see promising results.
arXiv Detail & Related papers (2023-12-13T03:39:05Z)
- Empowering Visually Impaired Individuals: A Novel Use of Apple Live Photos and Android Motion Photos [3.66237529322911]
We advocate for the use of Apple Live Photos and Android Motion Photos technologies.
Our findings reveal that both Live Photos and Motion Photos outperform single-frame images in common visual assisting tasks.
arXiv Detail & Related papers (2023-09-14T20:46:35Z)
- Take a Prior from Other Tasks for Severe Blur Removal [52.380201909782684]
A cross-level feature learning strategy based on knowledge distillation learns the priors.
A semantic prior embedding layer with multi-level aggregation and semantic attention transformation integrates the priors effectively.
Experiments on natural image deblurring benchmarks and real-world images, such as the GoPro and RealBlur datasets, demonstrate our method's effectiveness.
arXiv Detail & Related papers (2023-02-14T08:30:51Z)
- Perceptual Image Enhancement for Smartphone Real-Time Applications [60.45737626529091]
We propose LPIENet, a lightweight network for perceptual image enhancement.
Our model can deal with noise artifacts, diffraction artifacts, blur, and HDR overexposure.
Our model can process 2K-resolution images in under one second on mid-level commercial smartphones.
arXiv Detail & Related papers (2022-10-24T19:16:33Z)
- Image Inpainting using Partial Convolution [0.3441021278275805]
The aim of this paper is to perform image inpainting using robust deep learning methods that use partial convolution layers.
In various practical applications, images are often deteriorated by noise due to the presence of corrupted, lost, or undesirable information.
arXiv Detail & Related papers (2021-08-19T17:01:27Z)
- Self-Adaptively Learning to Demoire from Focused and Defocused Image Pairs [97.67638106818613]
Moiré artifacts are common in digital photography, resulting from interference between high-frequency scene content and the color filter array of the camera.
Existing deep learning-based demoiréing methods, trained on large-scale datasets, are limited in handling various complex moiré patterns.
We propose a self-adaptive learning method for demoiréing a high-frequency image, with the help of an additional defocused moiré-free blur image.
arXiv Detail & Related papers (2020-11-03T23:09:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.