Bringing Old Photos Back to Life
- URL: http://arxiv.org/abs/2004.09484v1
- Date: Mon, 20 Apr 2020 17:59:23 GMT
- Title: Bringing Old Photos Back to Life
- Authors: Ziyu Wan and Bo Zhang and Dongdong Chen and Pan Zhang and Dong Chen and Jing Liao and Fang Wen
- Abstract summary: The degradation in real photos is complex, and the domain gap between synthetic images and real old photos prevents the network from generalizing.
We propose a novel triplet domain translation network by leveraging real photos along with massive synthetic image pairs.
The proposed method outperforms state-of-the-art methods in terms of visual quality for old photo restoration.
- Score: 46.73615925108932
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose to restore old photos that suffer from severe degradation through
a deep learning approach. Unlike conventional restoration tasks that can be
solved through supervised learning, the degradation in real photos is complex,
and the domain gap between synthetic images and real old photos prevents the
network from generalizing. Therefore, we propose a novel triplet domain
translation network that leverages real photos along with massive synthetic
image pairs. Specifically, we train two variational autoencoders (VAEs) to
transform old photos and clean photos into two separate latent spaces, and the
translation between these two latent spaces is learned with synthetic
paired data. This translation generalizes well to real photos because the
domain gap is closed in the compact latent space. In addition, to address multiple
degradations mixed in one old photo, we design a global branch with a partial
nonlocal block targeting structured defects, such as scratches and dust
spots, and a local branch targeting unstructured defects, such as noise
and blurriness. The two branches are fused in the latent space, leading to improved
capability to restore old photos with multiple defects. The proposed method
outperforms state-of-the-art methods in terms of visual quality for old photo
restoration.
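The triplet translation idea in the abstract can be illustrated with a deliberately toy linear sketch: two fixed linear maps stand in for the trained VAE encoders, a least-squares fit on synthetic (degraded, clean) latent pairs stands in for the learned translation network, and a real old photo is then restored by encoding, translating, and decoding. All dimensions, the linear "encoders", and the additive-noise "degradation" below are illustrative assumptions, not the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)
d_img, d_lat = 64, 8  # toy image and latent dimensions (assumed)

# Stand-ins for the two trained VAE encoders/decoders: fixed random linear
# maps. E_old embeds (real or synthetic) old photos; E_clean / D_clean
# handle clean photos.
E_old = rng.normal(size=(d_lat, d_img)) / np.sqrt(d_img)
E_clean = rng.normal(size=(d_lat, d_img)) / np.sqrt(d_img)
D_clean = np.linalg.pinv(E_clean)          # (d_img, d_lat) pseudo-inverse decoder

# Synthetic paired data: clean photos and synthetically degraded versions.
clean = rng.normal(size=(200, d_img))
degraded = clean + 0.1 * rng.normal(size=clean.shape)  # toy "degradation"

# Encode both sides, then fit the latent-to-latent translation T on the
# synthetic pairs only (least squares stands in for the mapping network).
z_old = degraded @ E_old.T                 # (200, d_lat)
z_clean = clean @ E_clean.T                # (200, d_lat)
T, *_ = np.linalg.lstsq(z_old, z_clean, rcond=None)   # (d_lat, d_lat)

# At test time, a real old photo is encoded by E_old, translated with T,
# and decoded by the clean-photo decoder.
real_old = rng.normal(size=(1, d_img))
restored = (real_old @ E_old.T) @ T @ D_clean.T
print(restored.shape)  # prints (1, 64)
```

Because T is fit only in the compact latent space, the sketch mirrors the paper's claim that the synthetic-to-real domain gap matters less there than in pixel space.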
Related papers
- Decoupling Degradation and Content Processing for Adverse Weather Image Restoration [79.59228846484415]
Adverse weather image restoration strives to recover clear images from those affected by various weather types, such as rain, haze, and snow.
Previous techniques can handle multiple weather types within a single network, but they neglect the crucial distinction between degradation removal and content reconstruction, limiting the quality of restored images.
This work introduces a novel adverse weather image restoration method, called DDCNet, which decouples the degradation removal and content reconstruction process at the feature level based on their channel statistics.
arXiv Detail & Related papers (2023-12-08T12:26:38Z)
- All-in-one Multi-degradation Image Restoration Network via Hierarchical Degradation Representation [47.00239809958627]
We propose a novel All-in-one Multi-degradation Image Restoration Network (AMIRNet).
AMIRNet learns a degradation representation for unknown degraded images by progressively constructing a tree structure through clustering.
This tree-structured representation explicitly reflects the consistency and discrepancy of various distortions, providing a specific clue for image restoration.
arXiv Detail & Related papers (2023-08-06T04:51:41Z)
- Pik-Fix: Restoring and Colorizing Old Photos [24.366910102387344]
Restoring and inpainting the visual memories that are present, but often impaired, in old photos remains an intriguing but unsolved research topic.
Deep learning presents a plausible avenue, but the lack of large-scale datasets of old photos makes addressing this restoration task very challenging.
Here we present a novel reference-based end-to-end learning framework that is able to both repair and colorize old and degraded pictures.
arXiv Detail & Related papers (2022-05-04T05:46:43Z)
- ROMNet: Renovate the Old Memories [25.41639794384076]
We present a novel reference-based end-to-end learning framework that can jointly repair and colorize degraded legacy pictures.
We also create, to our knowledge, the first public and real-world old photo dataset with paired ground truth for evaluating old photo restoration models.
arXiv Detail & Related papers (2022-02-05T17:48:15Z)
- The Spatially-Correlative Loss for Various Image Translation Tasks [69.62228639870114]
We propose a novel spatially-correlative loss that is simple, efficient and yet effective for preserving scene structure consistency.
Previous methods attempt this by using pixel-level cycle-consistency or feature-level matching losses.
We show distinct improvement over baseline models in all three modes of unpaired I2I translation: single-modal, multi-modal, and even single-image translation.
arXiv Detail & Related papers (2021-04-02T02:13:30Z)
- Old Photo Restoration via Deep Latent Space Translation [46.73615925108932]
We propose to restore old photos that suffer from severe degradation through a deep learning approach.
The degradation in real photos is complex, and the domain gap between synthetic images and real old photos prevents the network from generalizing.
Specifically, we train two variational autoencoders (VAEs) to transform old photos and clean photos into two separate latent spaces, and the translation between these two latent spaces is learned with synthetic paired data.
arXiv Detail & Related papers (2020-09-14T08:51:53Z)
- Domain Adaptation for Image Dehazing [72.15994735131835]
Most existing methods train a dehazing model on synthetic hazy images, which generalizes poorly to real hazy images due to domain shift.
We propose a domain adaptation paradigm, which consists of an image translation module and two image dehazing modules.
Experimental results on both synthetic and real-world images demonstrate that our model performs favorably against the state-of-the-art dehazing algorithms.
arXiv Detail & Related papers (2020-05-10T13:54:56Z)
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently-used VGG feature-matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global contents consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.