Multi-Reference Image Super-Resolution: A Posterior Fusion Approach
- URL: http://arxiv.org/abs/2212.09988v1
- Date: Tue, 20 Dec 2022 04:15:03 GMT
- Title: Multi-Reference Image Super-Resolution: A Posterior Fusion Approach
- Authors: Ke Zhao, Haining Tan, Tsz Fung Yau
- Abstract summary: This paper proposes a 2-step-weighting posterior fusion approach to combine the outputs of RefSR models with multiple references.
Experiments on the CUFED5 dataset demonstrate that the proposed methods can be applied to various state-of-the-art RefSR models to get a consistent improvement in image quality.
- Score: 0.3867363075280544
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Reference-based Super-resolution (RefSR) approaches have recently been
proposed to overcome the ill-posed problem of image super-resolution by
providing additional information from a high-resolution image. Multi-reference
super-resolution extends this approach by allowing more information to be
incorporated. This paper proposes a 2-step-weighting posterior fusion approach
to combine the outputs of RefSR models with multiple references. Extensive
experiments on the CUFED5 dataset demonstrate that the proposed methods can be
applied to various state-of-the-art RefSR models to get a consistent
improvement in image quality.
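The core idea of posterior fusion, combining the outputs a RefSR model produces for each of several references into a single image via normalized per-pixel weights, can be sketched minimally. This is a generic weighted fusion under assumed array shapes, not the paper's exact 2-step-weighting scheme; the function name and the uniform weights are illustrative assumptions.

```python
import numpy as np

def fuse_outputs(outputs, weights):
    """Fuse K super-resolved images with weights normalized over K.

    outputs: array-like of shape (K, H, W) -- K RefSR outputs for one LR input
    weights: array broadcastable to (K, H, W) -- unnormalized fusion weights
    """
    outputs = np.asarray(outputs, dtype=np.float64)
    weights = np.asarray(weights, dtype=np.float64)
    # normalize so weights at each pixel sum to 1 across the K outputs
    weights = weights / weights.sum(axis=0, keepdims=True)
    return (weights * outputs).sum(axis=0)

# toy example: two outputs with uniform weights reduce to a plain average
a = np.full((4, 4), 2.0)
b = np.full((4, 4), 4.0)
fused = fuse_outputs([a, b], np.ones((2, 1, 1)))
```

In practice the weights would be derived from some per-reference quality or similarity signal rather than being uniform.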
Related papers
- AccDiffusion: An Accurate Method for Higher-Resolution Image Generation [63.53163540340026]
We propose AccDiffusion, an accurate method for patch-wise higher-resolution image generation without training.
An in-depth analysis in this paper reveals that an identical text prompt for different patches causes repeated object generation.
Our AccDiffusion, for the first time, proposes to decouple the vanilla image-content-aware prompt into a set of patch-content-aware prompts.
arXiv Detail & Related papers (2024-07-15T14:06:29Z)
- Detail-Enhancing Framework for Reference-Based Image Super-Resolution [8.899312174844725]
We propose a Detail-Enhancing Framework (DEF) for reference-based super-resolution.
Our proposed method achieves superior visual results while maintaining comparable numerical outcomes.
arXiv Detail & Related papers (2024-05-01T10:27:22Z)
- LMR: A Large-Scale Multi-Reference Dataset for Reference-based Super-Resolution [86.81241084950524]
It is widely agreed that reference-based super-resolution (RefSR) achieves superior results by referring to similar high-quality images, compared to single image super-resolution (SISR).
Previous RefSR methods have all focused on single-reference image training, while multiple reference images are often available in testing or practical applications.
We construct a large-scale, multi-reference super-resolution dataset, named LMR. It contains 112,142 groups of 300x300 training images, which is 10x of the existing largest RefSR dataset.
arXiv Detail & Related papers (2023-03-09T01:07:06Z)
- RRSR: Reciprocal Reference-based Image Super-Resolution with Progressive Feature Alignment and Selection [66.08293086254851]
We propose a reciprocal learning framework to reinforce the learning of a RefSR network.
The newly proposed module aligns reference-input images at multi-scale feature spaces and performs reference-aware feature selection.
We empirically show that multiple recent state-of-the-art RefSR models can be consistently improved with our reciprocal learning paradigm.
arXiv Detail & Related papers (2022-11-08T12:39:35Z)
- Reference-based Image Super-Resolution with Deformable Attention Transformer [62.71769634254654]
RefSR aims to exploit auxiliary reference (Ref) images to super-resolve low-resolution (LR) images.
This paper proposes a deformable attention Transformer, namely DATSR, with multiple scales.
Experiments demonstrate that our DATSR achieves state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2022-07-25T07:07:00Z)
- Learning Resolution-Adaptive Representations for Cross-Resolution Person Re-Identification [49.57112924976762]
The cross-resolution person re-identification problem aims to match low-resolution (LR) query identity images against high-resolution (HR) gallery images.
It is a challenging and practical problem, since query images often suffer from resolution degradation due to the varying capture conditions of real-world cameras.
This paper explores an alternative SR-free paradigm to directly compare HR and LR images via a dynamic metric, which is adaptive to the resolution of a query image.
arXiv Detail & Related papers (2022-07-09T03:49:51Z)
- Attention-based Multi-Reference Learning for Image Super-Resolution [29.361342747786164]
This paper proposes a novel Attention-based Multi-Reference Super-resolution network.
It learns to adaptively transfer the most similar texture from multiple reference images to the super-resolution output.
It achieves significantly improved performance over state-of-the-art reference super-resolution approaches.
arXiv Detail & Related papers (2021-08-31T09:12:26Z)
- Variational AutoEncoder for Reference based Image Super-Resolution [27.459299640768773]
We propose a reference-based image super-resolution method for which any arbitrary image can act as a reference.
Even using a random map or the low-resolution image itself as the reference, the proposed RefVAE can transfer knowledge from the reference to the super-resolved images.
arXiv Detail & Related papers (2021-06-08T04:12:38Z)
- MASA-SR: Matching Acceleration and Spatial Adaptation for Reference-Based Image Super-Resolution [74.24676600271253]
We propose the MASA network for RefSR, where two novel modules are designed to address these problems.
The proposed Match & Extraction Module significantly reduces the computational cost by a coarse-to-fine correspondence matching scheme.
The Spatial Adaptation Module learns the difference of distribution between the LR and Ref images, and remaps the distribution of Ref features to that of LR features in a spatially adaptive way.
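The distribution remapping described above can be approximated, in a simplified global (rather than spatially adaptive) form, by matching channel-wise mean and standard deviation. The MASA module itself learns this mapping; the function below is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

def remap_distribution(ref_feat, lr_feat, eps=1e-5):
    """Remap Ref feature statistics (mean/std) to match LR feature statistics.

    ref_feat, lr_feat: arrays of shape (C, H, W)
    Returns ref_feat normalized per channel, then rescaled and shifted
    so its channel-wise statistics match those of lr_feat.
    """
    ref_mu = ref_feat.mean(axis=(1, 2), keepdims=True)
    ref_sigma = ref_feat.std(axis=(1, 2), keepdims=True)
    lr_mu = lr_feat.mean(axis=(1, 2), keepdims=True)
    lr_sigma = lr_feat.std(axis=(1, 2), keepdims=True)
    return (ref_feat - ref_mu) / (ref_sigma + eps) * lr_sigma + lr_mu

# toy example: shift/scale Ref statistics onto LR statistics
rng = np.random.default_rng(0)
ref_feat = rng.normal(5.0, 2.0, size=(3, 8, 8))
lr_feat = rng.normal(0.0, 1.0, size=(3, 8, 8))
remapped = remap_distribution(ref_feat, lr_feat)
```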
arXiv Detail & Related papers (2021-06-04T07:15:32Z)
- Interpretable Deep Multimodal Image Super-Resolution [23.48305854574444]
Multimodal image super-resolution (SR) is the reconstruction of a high resolution image given a low-resolution observation with the aid of another image modality.
We present a multimodal deep network design that integrates coupled sparse priors and allows the effective fusion of information from another modality into the reconstruction process.
arXiv Detail & Related papers (2020-09-07T14:08:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.