LMR: A Large-Scale Multi-Reference Dataset for Reference-based
Super-Resolution
- URL: http://arxiv.org/abs/2303.04970v1
- Date: Thu, 9 Mar 2023 01:07:06 GMT
- Title: LMR: A Large-Scale Multi-Reference Dataset for Reference-based
Super-Resolution
- Authors: Lin Zhang, Xin Li, Dongliang He, Errui Ding, Zhaoxiang Zhang
- Abstract summary: It is widely agreed that reference-based super-resolution (RefSR) achieves superior results by referring to similar high-quality images, compared to single-image super-resolution (SISR).
Previous RefSR methods have all focused on single-reference image training, while multiple reference images are often available in testing or practical applications.
We construct a large-scale, multi-reference super-resolution dataset, named LMR. It contains 112,142 groups of 300x300 training images, which is 10x the size of the largest existing RefSR dataset.
- Score: 86.81241084950524
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is widely agreed that reference-based super-resolution (RefSR) achieves
superior results by referring to similar high-quality images, compared to
single-image super-resolution (SISR). Intuitively, the more references, the
better the performance. However, previous RefSR methods have all focused on
single-reference image training, while multiple reference images are often
available in testing or practical applications. The root cause of such
training-testing mismatch is the absence of publicly available multi-reference
SR training datasets, which greatly hinders research efforts on multi-reference
super-resolution. To this end, we construct a large-scale, multi-reference
super-resolution dataset, named LMR. It contains 112,142 groups of 300x300
training images, which is 10x the size of the largest existing RefSR dataset. The image
size is also much larger. More importantly, each group is equipped with 5
reference images with different similarity levels. Furthermore, we propose a
new baseline method for multi-reference super-resolution: MRefSR, including a
Multi-Reference Attention Module (MAM) for feature fusion of an arbitrary
number of reference images, and a Spatial Aware Filtering Module (SAFM) for the
fused feature selection. The proposed MRefSR achieves significant improvements
over state-of-the-art approaches on both quantitative and qualitative
evaluations. Our code and data will be made available soon.
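The abstract says MAM fuses features from an arbitrary number of reference images via attention. As a rough, hedged illustration of that general idea (not the paper's actual architecture — the function name, the vector-level simplification, and the dot-product similarity are all our own assumptions), a minimal sketch:

```python
import math

def attention_fuse(query, refs):
    """Fuse an arbitrary number of reference feature vectors into one,
    weighting each reference by its softmax-normalized dot-product
    similarity to the query (LR) feature vector."""
    scores = [sum(q * r for q, r in zip(query, ref)) for ref in refs]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Weighted sum over references, per feature dimension.
    fused = [sum(w * ref[i] for w, ref in zip(weights, refs))
             for i in range(len(query))]
    return fused, weights

# Three references; the first is most similar to the query, so it dominates.
query = [1.0, 0.0]
refs = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
fused, weights = attention_fuse(query, refs)
```

Because the weights are computed per reference, the same function handles one reference or many, which is the property the paper highlights for MAM.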
Related papers
- A New Dataset and Framework for Real-World Blurred Images Super-Resolution [9.122275433854062]
We develop a new super-resolution dataset specifically tailored for blurred images, named the Real-world Blur-kept Super-Resolution (ReBlurSR) dataset.
We propose Perceptual-Blur-adaptive Super-Resolution (PBaSR), which comprises two main modules: the Cross Disentanglement Module (CDM) and the Cross Fusion Module (CFM).
By integrating these two modules, PBaSR achieves commendable performance on both general and blur data without any additional inference and deployment cost.
arXiv Detail & Related papers (2024-07-20T14:07:03Z)
- Multi-Reference Image Super-Resolution: A Posterior Fusion Approach [0.3867363075280544]
This paper proposes a 2-step-weighting posterior fusion approach to combine the outputs of RefSR models with multiple references.
Experiments on the CUFED5 dataset demonstrate that the proposed methods can be applied to various state-of-the-art RefSR models to get a consistent improvement in image quality.
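The exact 2-step-weighting scheme is not described in this summary; as a generic, hedged stand-in for the idea of posterior fusion — combining several RefSR outputs for the same image by a confidence-weighted average — a minimal sketch (the function name, flat pixel lists, and scalar confidences are our own simplifications):

```python
def fuse_outputs(outputs, confidences):
    """Confidence-weighted per-pixel average of several SR predictions
    of the same image (pixels flattened to a single list here)."""
    total = sum(confidences)
    weights = [c / total for c in confidences]  # normalize to sum to 1
    return [sum(w * out[i] for w, out in zip(weights, outputs))
            for i in range(len(outputs[0]))]

# Two candidate outputs for the same 4-pixel image, second trusted 3x more.
fused = fuse_outputs([[0.0, 2.0, 4.0, 6.0], [4.0, 6.0, 8.0, 10.0]], [1.0, 3.0])
```

With equal confidences this reduces to a plain mean, so any number of model outputs can be combined without retraining the underlying RefSR models.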
arXiv Detail & Related papers (2022-12-20T04:15:03Z)
- RRSR: Reciprocal Reference-based Image Super-Resolution with Progressive Feature Alignment and Selection [66.08293086254851]
We propose a reciprocal learning framework to reinforce the learning of a RefSR network.
The newly proposed module aligns reference-input images at multi-scale feature spaces and performs reference-aware feature selection.
We empirically show that multiple recent state-of-the-art RefSR models can be consistently improved with our reciprocal learning paradigm.
arXiv Detail & Related papers (2022-11-08T12:39:35Z)
- Reference-based Image Super-Resolution with Deformable Attention Transformer [62.71769634254654]
RefSR aims to exploit auxiliary reference (Ref) images to super-resolve low-resolution (LR) images.
This paper proposes a deformable attention Transformer, namely DATSR, with multiple scales.
Experiments demonstrate that our DATSR achieves state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2022-07-25T07:07:00Z)
- Geometry-Aware Reference Synthesis for Multi-View Image Super-Resolution [16.68091352547819]
Multi-View Image Super-Resolution (MVISR) task aims to increase the resolution of multi-view images captured from the same scene.
One solution is to apply image or video super-resolution (SR) methods to reconstruct HR results from the low-resolution (LR) input view.
We propose the MVSRnet, which uses geometry information to extract sharp details from all LR multi-view images to support the SR of the LR input view.
arXiv Detail & Related papers (2022-07-18T13:46:47Z)
- Attention-based Multi-Reference Learning for Image Super-Resolution [29.361342747786164]
This paper proposes a novel Attention-based Multi-Reference Super-resolution network.
It learns to adaptively transfer the most similar texture from multiple reference images to the super-resolution output.
It achieves significantly improved performance over state-of-the-art reference super-resolution approaches.
arXiv Detail & Related papers (2021-08-31T09:12:26Z)
- Variational AutoEncoder for Reference based Image Super-Resolution [27.459299640768773]
We propose a reference-based image super-resolution method, in which any arbitrary image can act as a reference for super-resolution.
Even using a random map or the low-resolution image itself, the proposed RefVAE can transfer knowledge from the reference to the super-resolved images.
arXiv Detail & Related papers (2021-06-08T04:12:38Z)
- MASA-SR: Matching Acceleration and Spatial Adaptation for Reference-Based Image Super-Resolution [74.24676600271253]
We propose the MASA network for RefSR, where two novel modules are designed to address these problems.
The proposed Match & Extraction Module significantly reduces the computational cost by a coarse-to-fine correspondence matching scheme.
The Spatial Adaptation Module learns the difference of distribution between the LR and Ref images, and remaps the distribution of Ref features to that of LR features in a spatially adaptive way.
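The summary describes remapping the distribution of Ref features to that of LR features. As a hedged, non-spatial stand-in for that idea — the actual module learns the remapping adaptively per location, whereas this sketch matches only global mean and standard deviation, AdaIN-style, with names of our own choosing:

```python
import statistics

def remap_distribution(ref_feats, lr_feats):
    """Shift and scale reference features so their global mean and std
    match the LR features' statistics (a global, AdaIN-style stand-in
    for a learned, spatially adaptive remapping)."""
    ref_mu = statistics.fmean(ref_feats)
    ref_sd = statistics.pstdev(ref_feats) or 1.0  # guard constant features
    lr_mu = statistics.fmean(lr_feats)
    lr_sd = statistics.pstdev(lr_feats)
    return [(x - ref_mu) / ref_sd * lr_sd + lr_mu for x in ref_feats]

remapped = remap_distribution([0.0, 1.0, 2.0, 3.0, 4.0],
                              [10.0, 10.0, 20.0, 20.0])
```

After remapping, the reference features share the LR features' first- and second-order statistics, which makes their subsequent fusion less sensitive to distribution gaps between the Ref and LR domains.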
arXiv Detail & Related papers (2021-06-04T07:15:32Z)
- Deep Burst Super-Resolution [165.90445859851448]
We propose a novel architecture for the burst super-resolution task.
Our network takes multiple noisy RAW images as input, and generates a denoised, super-resolved RGB image as output.
In order to enable training and evaluation on real-world data, we additionally introduce the BurstSR dataset.
arXiv Detail & Related papers (2021-01-26T18:57:21Z)
- DDet: Dual-path Dynamic Enhancement Network for Real-World Image Super-Resolution [69.2432352477966]
Real-image super-resolution (Real-SR) focuses on the relationship between real-world high-resolution (HR) and low-resolution (LR) images.
In this article, we propose a Dual-path Dynamic Enhancement Network (DDet) for Real-SR.
Unlike conventional methods, which stack up massive convolutional blocks for feature representation, we introduce a content-aware framework to study non-inherently aligned image pairs.
arXiv Detail & Related papers (2020-02-25T18:24:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.