Simulating analogue film damage to analyse and improve artefact restoration on high-resolution scans
- URL: http://arxiv.org/abs/2302.10004v1
- Date: Mon, 20 Feb 2023 14:24:18 GMT
- Title: Simulating analogue film damage to analyse and improve artefact restoration on high-resolution scans
- Authors: Daniela Ivanova, John Williamson, Paul Henderson
- Abstract summary: Digital scans of analogue photographic film typically contain artefacts such as dust and scratches.
Deep learning models have shown impressive results in general image inpainting and denoising, but film artefact removal is an understudied problem.
There are no publicly available high-quality datasets of real-world analogue film damage for training and evaluation.
We collect a dataset of 4K damaged analogue film scans paired with manually-restored versions produced by a human expert.
We construct a larger synthetic dataset of damaged images with paired clean versions using a statistical model of artefact shape and occurrence learnt from real, heavily-damaged images.
- Score: 10.871587311621974
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Digital scans of analogue photographic film typically contain artefacts such as dust and scratches. Automated removal of these is an important part of preservation and dissemination of photographs of historical and cultural importance.
While state-of-the-art deep learning models have shown impressive results in general image inpainting and denoising, film artefact removal is an understudied problem. It has particularly challenging requirements, due to the complex nature of analogue damage, the high resolution of film scans, and potential ambiguities in the restoration. There are no publicly available high-quality datasets of real-world analogue film damage for training and evaluation, making quantitative studies impossible.
We address the lack of ground-truth data for evaluation by collecting a dataset of 4K damaged analogue film scans paired with manually-restored versions produced by a human expert, allowing quantitative evaluation of restoration performance. We construct a larger synthetic dataset of damaged images with paired clean versions using a statistical model of artefact shape and occurrence learnt from real, heavily-damaged images. We carefully validate the realism of the simulated damage via a human perceptual study, showing that even expert users find our synthetic damage indistinguishable from real. In addition, we demonstrate that training with our synthetically damaged dataset leads to improved artefact segmentation performance when compared to previously proposed synthetic analogue damage.
Finally, we use these datasets to train and analyse the performance of eight state-of-the-art image restoration methods on high-resolution scans. We compare both methods which directly perform the restoration task on scans with artefacts, and methods which require a damage mask to be provided for the inpainting of artefacts.
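The abstract describes learning a statistical model of artefact shape and occurrence from real, heavily-damaged scans and compositing sampled artefacts onto clean images to build paired training data. The snippet below is a minimal, hypothetical sketch of that idea only: the Poisson occurrence model, log-normal size distribution, artefact shapes, and near-white compositing rule are illustrative assumptions, not the authors' actual model.

```python
# Toy damage simulator: sample artefact masks from simple shape/occurrence
# statistics and composite them onto a clean scan. All distributions and
# parameters here are assumptions for demonstration purposes.
import numpy as np

rng = np.random.default_rng(0)

def sample_damage_mask(height, width, mean_artefacts=40):
    """Sample a binary artefact mask containing dust specks and thin scratches."""
    mask = np.zeros((height, width), dtype=bool)
    n_artefacts = rng.poisson(mean_artefacts)           # occurrence model (assumed Poisson)
    for _ in range(n_artefacts):
        cy, cx = rng.integers(height), rng.integers(width)
        if rng.random() < 0.8:                          # dust: small elliptical blob
            ry, rx = rng.lognormal(1.0, 0.5, size=2)    # blob radii (assumed log-normal)
            yy, xx = np.ogrid[:height, :width]
            blob = ((yy - cy) / max(ry, 1)) ** 2 + ((xx - cx) / max(rx, 1)) ** 2 <= 1.0
            mask |= blob
        else:                                           # scratch: thin, roughly vertical line
            length = int(rng.lognormal(4.0, 0.6))
            y0, y1 = max(cy - length // 2, 0), min(cy + length // 2, height)
            mask[y0:y1, cx:cx + 2] = True
    return mask

def apply_damage(clean, mask):
    """Composite artefacts onto a clean scan; dust and scratches appear near-white."""
    damaged = clean.copy()
    damaged[mask] = 0.95 + 0.05 * rng.random(mask.sum())
    return damaged

# Usage: build a paired (damaged, clean, mask) training triple from a clean scan.
clean = rng.random((512, 512)).astype(np.float32)       # stand-in for a clean film scan
mask = sample_damage_mask(*clean.shape)
damaged = apply_damage(clean, mask)
```

Pairs produced this way come with exact artefact masks for free, which is what lets synthetic data of this kind serve both for training artefact segmentation and for quantitative evaluation of restoration against clean targets.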
Related papers
- Utilizing Multi-step Loss for Single Image Reflection Removal [0.9208007322096532]
Distorted images can negatively impact tasks like object detection and image segmentation.
We present a novel approach for image reflection removal using a single image.
arXiv Detail & Related papers (2024-12-11T17:57:25Z)
- FoundIR: Unleashing Million-scale Training Data to Advance Foundation Models for Image Restoration [66.61201445650323]
Existing methods suffer from a generalization bottleneck in real-world scenarios.
We contribute a million-scale dataset with two notable advantages over existing training data.
We propose a robust model, FoundIR, to better address a broader range of restoration tasks in real-world scenarios.
arXiv Detail & Related papers (2024-12-02T12:08:40Z)
- ArtiFade: Learning to Generate High-quality Subject from Blemished Images [10.112125529627157]
ArtiFade exploits fine-tuning of a pre-trained text-to-image model, aiming to remove artifacts.
ArtiFade also ensures the preservation of the original generative capabilities inherent within the diffusion model.
arXiv Detail & Related papers (2024-09-05T17:57:59Z)
- When Synthetic Traces Hide Real Content: Analysis of Stable Diffusion Image Laundering [18.039034362749504]
In recent years, methods for producing highly realistic synthetic images have significantly advanced.
It is possible to pass an image through Stable Diffusion (SD) autoencoders to reproduce a synthetic copy of the image with high realism and almost no visual artifacts.
This process, known as SD image laundering, can transform real images into lookalike synthetic ones and risks complicating forensic analysis for content authenticity verification.
arXiv Detail & Related papers (2024-07-15T14:01:35Z)
- Perceptual Artifacts Localization for Image Synthesis Tasks [59.638307505334076]
We introduce a novel dataset comprising 10,168 generated images, each annotated with per-pixel perceptual artifact labels.
A segmentation model, trained on our proposed dataset, effectively localizes artifacts across a range of tasks.
We propose an innovative zoom-in inpainting pipeline that seamlessly rectifies perceptual artifacts in the generated images.
arXiv Detail & Related papers (2023-10-09T10:22:08Z)
- DR2: Diffusion-based Robust Degradation Remover for Blind Face Restoration [66.01846902242355]
Blind face restoration usually synthesizes degraded low-quality data with a pre-defined degradation model for training.
It is expensive and infeasible to include every type of degradation to cover real-world cases in the training data.
We propose Robust Degradation Remover (DR2) to first transform the degraded image to a coarse but degradation-invariant prediction, then employ an enhancement module to restore the coarse prediction to a high-quality image.
arXiv Detail & Related papers (2023-03-13T06:05:18Z)
- TexPose: Neural Texture Learning for Self-Supervised 6D Object Pose Estimation [55.94900327396771]
We introduce neural texture learning for 6D object pose estimation from synthetic data.
We learn to predict realistic texture of objects from real image collections.
We learn pose estimation from pixel-perfect synthetic data.
arXiv Detail & Related papers (2022-12-25T13:36:32Z)
- A comparison of different atmospheric turbulence simulation methods for image restoration [64.24948495708337]
Atmospheric turbulence deteriorates the quality of images captured by long-range imaging systems.
Various deep learning-based atmospheric turbulence mitigation methods have been proposed in the literature.
We systematically evaluate the effectiveness of various turbulence simulation methods on image restoration.
arXiv Detail & Related papers (2022-04-19T16:21:36Z)
- Learning MRI Artifact Removal With Unpaired Data [74.48301038665929]
Retrospective artifact correction (RAC) improves image quality post-acquisition and enhances image usability.
Recent machine learning driven techniques for RAC are predominantly based on supervised learning.
Here we show that unwanted image artifacts can be disentangled and removed from an image via an RAC neural network learned with unpaired data.
arXiv Detail & Related papers (2021-10-09T16:09:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.