MetaF2N: Blind Image Super-Resolution by Learning Efficient Model
Adaptation from Faces
- URL: http://arxiv.org/abs/2309.08113v1
- Date: Fri, 15 Sep 2023 02:45:21 GMT
- Title: MetaF2N: Blind Image Super-Resolution by Learning Efficient Model
Adaptation from Faces
- Authors: Zhicun Yin, Ming Liu, Xiaoming Li, Hui Yang, Longan Xiao, Wangmeng Zuo
- Abstract summary: We propose a method dubbed MetaF2N, which leverages the Faces contained in an image to fine-tune model parameters for adapting to the whole Natural image in a Meta-learning framework.
Considering the gaps between the recovered faces and ground-truths, we deploy a MaskNet for adaptively predicting loss weights at different positions to reduce the impact of low-confidence areas.
- Score: 51.42949911178461
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to their highly structured characteristics, faces are easier to recover
than natural scenes for blind image super-resolution. Therefore, we can extract
the degradation representation of an image from the low-quality and recovered
face pairs. Using the degradation representation, realistic low-quality images
can then be synthesized to fine-tune the super-resolution model for the
real-world low-quality image. However, such a procedure is time-consuming and
laborious, and the gaps between recovered faces and the ground-truths further
increase the optimization uncertainty. To facilitate efficient model adaptation
towards image-specific degradations, we propose a method dubbed MetaF2N, which
leverages the contained Faces to fine-tune model parameters for adapting to the
whole Natural image in a Meta-learning framework. The degradation extraction
and low-quality image synthesis steps are thus circumvented in our MetaF2N, and
it requires only one fine-tuning step to get decent performance. Considering
the gaps between the recovered faces and ground-truths, we further deploy a
MaskNet for adaptively predicting loss weights at different positions to reduce
the impact of low-confidence areas. To evaluate our proposed MetaF2N, we have
collected a real-world low-quality dataset with one or multiple faces in each
image, and our MetaF2N achieves superior performance on both synthetic and
real-world datasets. Source code, pre-trained models, and collected datasets
are available at https://github.com/yinzhicun/MetaF2N.
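The adaptation procedure described in the abstract is compact enough to sketch. Below is a minimal, hypothetical PyTorch sketch of the test-time loop it implies: restore a detected face with a frozen face-restoration model, treat the result as a pseudo ground-truth, weight the per-position loss with a MaskNet to suppress low-confidence areas, take a single gradient step on the meta-learned super-resolution model, and apply the adapted model to the whole image. The module interfaces, the choice of L1 loss, and the learning rate are illustrative assumptions, not the released MetaF2N implementation.

```python
# Hypothetical sketch of MetaF2N-style test-time adaptation (not the official code).
import torch
import torch.nn.functional as F

def adapt_and_super_resolve(sr_model, mask_net, face_restorer, lq_image, lq_face, inner_lr=1e-4):
    """Fine-tune `sr_model` on the face region of `lq_image`, then super-resolve the whole image.

    Assumed interfaces: `sr_model` maps an LQ tensor to an HQ tensor at the target scale,
    `face_restorer` is a frozen face-restoration network, and `mask_net` predicts per-pixel
    loss weights from the concatenated (super-resolved face, restored face) pair.
    """
    pseudo_gt = face_restorer(lq_face).detach()                 # restored face used as pseudo ground-truth
    sr_face = sr_model(lq_face)                                  # current prediction on the face crop
    weights = mask_net(torch.cat([sr_face, pseudo_gt], dim=1))   # down-weight low-confidence positions
    loss = (weights * F.l1_loss(sr_face, pseudo_gt, reduction="none")).mean()

    # One inner-loop gradient step: the meta-learned initialization is trained so that a
    # single step already adapts the model to this image's degradation.
    params = [p for p in sr_model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p -= inner_lr * g

    return sr_model(lq_image)                                    # adapted model applied to the full image
```

During meta-training, an outer loop would evaluate the adapted model on the full natural image against its ground truth, so that the initialization learns to profit from this single face-driven step; that outer loop is omitted from the sketch.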
Related papers
- Lightweight single-image super-resolution network based on dual paths [0.552480439325792]
Single image super-resolution (SISR) algorithms based on deep learning currently fall into two main model families, one based on convolutional neural networks and the other on Transformers.
This paper proposes a new lightweight multi-scale feature fusion network built from two complementary paths, one convolutional and one Transformer-based.
arXiv Detail & Related papers (2024-09-10T15:31:37Z)
- GAURA: Generalizable Approach for Unified Restoration and Rendering of Arbitrary Views [28.47730275628715]
We propose a generalizable neural rendering method that can perform high-fidelity novel view synthesis under several degradations.
Our method, GAURA, is learning-based and does not require any test-time scene-specific optimization.
arXiv Detail & Related papers (2024-07-11T06:44:37Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- Improving Pixel-based MIM by Reducing Wasted Modeling Capability [77.99468514275185]
We propose a new method that explicitly utilizes low-level features from shallow layers to aid pixel reconstruction.
To the best of our knowledge, we are the first to systematically investigate multi-level feature fusion for isotropic architectures.
Our method yields significant performance gains, such as 1.2% on fine-tuning, 2.8% on linear probing, and 2.6% on semantic segmentation.
arXiv Detail & Related papers (2023-08-01T03:44:56Z)
- From Face to Natural Image: Learning Real Degradation for Blind Image Super-Resolution [72.68156760273578]
We design training pairs for super-resolving the real-world low-quality (LQ) images.
We take paired HQ and LQ face images as inputs to explicitly predict degradation-aware and content-independent representations.
We then transfer these real degradation representations from face to natural images to synthesize the degraded LQ natural images.
arXiv Detail & Related papers (2022-10-03T08:09:21Z)
- AdaFace: Quality Adaptive Margin for Face Recognition [56.99208144386127]
We introduce another aspect of adaptiveness in the loss function, namely the image quality.
We propose a new loss function that emphasizes samples of different difficulties based on their image quality.
Our method, AdaFace, improves the face recognition performance over the state-of-the-art (SoTA) on four datasets.
arXiv Detail & Related papers (2022-04-03T01:23:41Z)
- SelFSR: Self-Conditioned Face Super-Resolution in the Wild via Flow Field Degradation Network [12.976199676093442]
We propose a novel domain-adaptive degradation network for face super-resolution in the wild.
Our model achieves state-of-the-art performance on both CelebA and real-world face dataset.
arXiv Detail & Related papers (2021-12-20T17:04:00Z)
- Efficient texture-aware multi-GAN for image inpainting [5.33024001730262]
Recent inpainting methods based on generative adversarial networks (GANs) show remarkable improvements.
We propose a multi-GAN architecture improving both the performance and rendering efficiency.
arXiv Detail & Related papers (2020-09-30T14:58:03Z)
- Perceptually Optimizing Deep Image Compression [53.705543593594285]
Mean squared error (MSE) and $\ell_p$ norms have largely dominated the measurement of loss in neural networks.
We propose a different proxy approach to optimize image analysis networks against quantitative perceptual models.
arXiv Detail & Related papers (2020-07-03T14:33:28Z)
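To make the idea in the last entry concrete, here is a generic, hypothetical sketch of optimizing a network against a perceptual model rather than MSE or an $\ell_p$ norm. It uses a simple differentiable SSIM term with a uniform window as the perceptual stand-in; this is not the proxy scheme proposed in that paper, and the function names and weighting are assumptions.

```python
# Generic example: train against a perceptual term (1 - SSIM) instead of MSE.
# A stand-in illustration, not the proxy approach of the cited paper.
import torch
import torch.nn.functional as F

def ssim(x, y, window_size=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """Differentiable SSIM for images in [0, 1] of shape (N, C, H, W), using a uniform window."""
    pad = window_size // 2
    channels = x.shape[1]
    kernel = torch.ones(channels, 1, window_size, window_size, device=x.device) / window_size ** 2

    mu_x = F.conv2d(x, kernel, padding=pad, groups=channels)
    mu_y = F.conv2d(y, kernel, padding=pad, groups=channels)
    var_x = F.conv2d(x * x, kernel, padding=pad, groups=channels) - mu_x ** 2
    var_y = F.conv2d(y * y, kernel, padding=pad, groups=channels) - mu_y ** 2
    cov_xy = F.conv2d(x * y, kernel, padding=pad, groups=channels) - mu_x * mu_y

    ssim_map = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
               ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return ssim_map.mean()

def perceptual_training_step(model, optimizer, x):
    """One training step where the loss is a perceptual term rather than a pixel-wise MSE."""
    optimizer.zero_grad()
    x_hat = model(x)                          # e.g. a compression autoencoder's reconstruction
    loss = 1.0 - ssim(x_hat.clamp(0, 1), x)   # maximize structural similarity to the input
    loss.backward()
    optimizer.step()
    return loss.item()
```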