Super-Resolving Cross-Domain Face Miniatures by Peeking at One-Shot
Exemplar
- URL: http://arxiv.org/abs/2103.08863v1
- Date: Tue, 16 Mar 2021 05:47:26 GMT
- Title: Super-Resolving Cross-Domain Face Miniatures by Peeking at One-Shot
Exemplar
- Authors: Peike Li, Xin Yu, Yi Yang
- Abstract summary: We develop a Domain-Aware Pyramid-based Face Super-Resolution network, named DAP-FSR network.
Our DAP-FSR is the first attempt to super-resolve LR faces from a target domain by exploiting only a pair of high-resolution (HR) and LR exemplars in the target domain.
By iteratively updating the latent representations and our decoder, our DAP-FSR will be adapted to the target domain.
- Score: 42.78574493628936
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Conventional face super-resolution methods usually assume testing
low-resolution (LR) images lie in the same domain as the training ones. Due to
different lighting conditions and imaging hardware, domain gaps between
training and testing images inevitably occur in many real-world scenarios.
Neglecting those domain gaps would lead to inferior face super-resolution (FSR)
performance. However, how to transfer a trained FSR model to a target domain
efficiently and effectively has not been investigated. To tackle this problem,
we develop a Domain-Aware Pyramid-based Face Super-Resolution network, named
DAP-FSR network. Our DAP-FSR is the first attempt to super-resolve LR faces
from a target domain by exploiting only a pair of high-resolution (HR) and LR
exemplars in the target domain. To be specific, our DAP-FSR first employs its
encoder to extract the multi-scale latent representations of the input LR face.
Considering only one target domain example is available, we propose to augment
the target domain data by mixing the latent representations of the target
domain face and source domain ones, and then feed the mixed representations to
the decoder of our DAP-FSR. The decoder will generate new face images
resembling the target domain image style. The generated HR faces in turn are
used to optimize our decoder to reduce the domain gap. By iteratively updating
the latent representations and our decoder, our DAP-FSR will be adapted to the
target domain, thus achieving authentic and high-quality upsampled HR faces.
Extensive experiments on three newly constructed benchmarks validate the
effectiveness and superior performance of our DAP-FSR compared to the
state-of-the-art.
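To make the adaptation loop described above concrete, below is a minimal sketch in PyTorch. It assumes generic `encoder` and `decoder` modules, a single latent scale, and a plain L1 objective; the function names, the mixing schedule, the degradation used to build pseudo LR inputs, and the loss terms are illustrative assumptions, not the authors' released DAP-FSR implementation.

```python
import torch
import torch.nn.functional as F

# Sketch of one-shot target-domain adaptation via latent mixing and
# decoder fine-tuning, as summarized in the abstract. All specifics beyond
# the mix-decode-finetune loop are assumptions for illustration.

def mix_latents(z_target, z_source, alpha):
    """Blend target-domain and source-domain latent representations."""
    return alpha * z_target + (1.0 - alpha) * z_source

def adapt_to_target(encoder, decoder, lr_exemplar, hr_exemplar,
                    lr_source_batch, scale=8, steps=200, lr=1e-4):
    """One-shot adaptation: only a single (LR, HR) exemplar pair from the
    target domain is available."""
    # The target exemplar's latent is treated as a free variable so that the
    # latent representations and the decoder are updated iteratively.
    with torch.no_grad():
        z_t = encoder(lr_exemplar)        # target-domain latent, shape (1, C, H, W)
        z_s = encoder(lr_source_batch)    # source-domain latents, shape (B, C, H, W)
    z_t = z_t.clone().requires_grad_(True)
    opt = torch.optim.Adam([{'params': decoder.parameters()},
                            {'params': [z_t]}], lr=lr)

    for _ in range(steps):
        # Augment the scarce target domain by mixing latents across domains.
        alpha = torch.rand(z_s.size(0), 1, 1, 1, device=z_s.device)
        z_mix = mix_latents(z_t.expand_as(z_s), z_s, alpha)

        # Decode mixed latents into pseudo HR faces in the target style, then
        # re-degrade them to form extra (LR, HR) training pairs.
        pseudo_hr = decoder(z_mix).detach()
        pseudo_lr = F.interpolate(pseudo_hr, scale_factor=1.0 / scale,
                                  mode='bicubic', align_corners=False)
        with torch.no_grad():
            z_pseudo = encoder(pseudo_lr)

        # Exemplar reconstruction plus self-training on the generated pairs.
        loss = F.l1_loss(decoder(z_t), hr_exemplar) \
             + F.l1_loss(decoder(z_pseudo), pseudo_hr)

        opt.zero_grad()
        loss.backward()
        opt.step()
    return decoder, z_t
```

The generated pseudo HR faces are detached before being reused, so they act as fixed pseudo-labels while only the decoder (and the exemplar latent) absorb the target-domain style; this mirrors the abstract's description of using the generated faces to optimize the decoder, though the exact objectives in the paper may differ.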
Related papers
- Source-free Domain Adaptive Object Detection in Remote Sensing Images [11.19538606490404]
We propose a source-free object detection (SFOD) setting for RS images.
It aims to perform target domain adaptation using only the source pre-trained model.
Our method does not require access to source domain RS images.
arXiv Detail & Related papers (2024-01-31T15:32:44Z)
- Enhancing Visual Domain Adaptation with Source Preparation [5.287588907230967]
Domain Adaptation techniques fail to consider the characteristics of the source domain itself.
We propose Source Preparation (SP), a method to mitigate source domain biases.
We show that SP enhances UDA across a range of visual domains, with improvements up to 40.64% in mIoU over baseline.
arXiv Detail & Related papers (2023-06-16T18:56:44Z)
- I2F: A Unified Image-to-Feature Approach for Domain Adaptive Semantic Segmentation [55.633859439375044]
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work.
The key idea to tackle this problem is to perform image-level and feature-level adaptation jointly.
This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation.
arXiv Detail & Related papers (2023-01-03T15:19:48Z) - Unsupervised Domain Adaptation for Semantic Segmentation using One-shot
Image-to-Image Translation via Latent Representation Mixing [9.118706387430883]
We propose a new unsupervised domain adaptation method for the semantic segmentation of very high resolution images.
An image-to-image translation paradigm is proposed, based on an encoder-decoder principle where latent content representations are mixed across domains.
Cross-city comparative experiments have shown that the proposed method outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2022-12-07T18:16:17Z) - Real-World Image Super Resolution via Unsupervised Bi-directional Cycle
Domain Transfer Learning based Generative Adversarial Network [14.898170534545727]
We propose the Unsupervised Bi-directional Cycle Domain Transfer Learning-based Generative Adversarial Network (UBCDTLGAN).
First, the UBCDTN is able to produce an approximated real-like LR image through transferring the LR image from an artificially degraded domain to the real-world image domain.
Second, the SESRN has the ability to super-resolve the approximated real-like LR image to a photo-realistic HR image.
arXiv Detail & Related papers (2022-11-19T02:19:21Z) - Dual Adversarial Adaptation for Cross-Device Real-World Image
Super-Resolution [114.26933742226115]
Super-resolution (SR) models trained on images from different devices could exhibit distinct imaging patterns.
We propose an unsupervised domain adaptation mechanism for real-world SR, named Dual ADversarial Adaptation (DADA).
We empirically conduct experiments under six Real to Real adaptation settings among three different cameras, and achieve superior performance compared with existing state-of-the-art approaches.
arXiv Detail & Related papers (2022-05-07T02:55:39Z) - Memory-augmented Deep Unfolding Network for Guided Image
Super-resolution [67.83489239124557]
Guided image super-resolution (GISR) aims to obtain a high-resolution (HR) target image by enhancing the spatial resolution of a low-resolution (LR) target image under the guidance of a HR image.
Previous model-based methods mainly take the entire image as a whole and assume a prior distribution relating the HR target image and the HR guidance image.
We propose a maximum a posteriori (MAP) estimation model for GISR with two types of prior on the HR target image (a generic objective of this form is sketched after this list).
arXiv Detail & Related papers (2022-02-12T15:37:13Z)
- Best-Buddy GANs for Highly Detailed Image Super-Resolution [71.13466303340192]
We consider the single image super-resolution (SISR) problem, where a high-resolution (HR) image is generated based on a low-resolution (LR) input.
Most methods along this line rely on a predefined single-LR-single-HR mapping, which is not flexible enough for the SISR task.
We propose best-buddy GANs (Beby-GAN) for rich-detail SISR. Relaxing the immutable one-to-one constraint, we allow the estimated patches to dynamically seek the best supervision.
arXiv Detail & Related papers (2021-03-29T02:58:27Z)
- Pixel-Level Cycle Association: A New Perspective for Domain Adaptive Semantic Segmentation [169.82760468633236]
We propose to build the pixel-level cycle association between source and target pixel pairs.
Our method can be trained end-to-end in one stage and introduces no additional parameters.
arXiv Detail & Related papers (2020-10-31T00:11:36Z)
- Deep Cyclic Generative Adversarial Residual Convolutional Networks for Real Image Super-Resolution [20.537597542144916]
We consider a deep cyclic network structure to maintain the domain consistency between the LR and HR data distributions.
We propose the Super-Resolution Residual Cyclic Generative Adversarial Network (SRResCycGAN) by training with a generative adversarial network (GAN) framework for the LR to HR domain translation.
arXiv Detail & Related papers (2020-09-07T11:11:18Z)
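For the guided image super-resolution (GISR) entry above, a MAP estimation model with a data-fidelity term and two priors on the HR target is typically written as the energy minimization below; this is the generic textbook form under a Gaussian noise model, not necessarily the exact formulation used in that paper:

\[
\hat{X} \;=\; \arg\min_{X} \;\; \|Y - \mathcal{D}(X)\|_2^2 \;+\; \lambda_1\, \Phi(X) \;+\; \lambda_2\, \Psi(X, G),
\]

where \(Y\) is the LR target image, \(G\) the HR guidance image, \(\mathcal{D}\) the degradation (blurring and downsampling) operator, the first term the negative log-likelihood, and \(\Phi\), \(\Psi\) the negative log-priors on the HR target \(X\): one on the image itself and one coupling it to the guidance.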