ImagePairs: Realistic Super Resolution Dataset via Beam Splitter Camera Rig
- URL: http://arxiv.org/abs/2004.08513v1
- Date: Sat, 18 Apr 2020 03:06:41 GMT
- Title: ImagePairs: Realistic Super Resolution Dataset via Beam Splitter Camera Rig
- Authors: Hamid Reza Vaezi Joze, Ilya Zharkov, Karlton Powell, Carl Ringler,
Luming Liang, Andy Roulston, Moshe Lutz, Vivek Pradeep
- Abstract summary: We propose a new data acquisition technique for gathering a real image dataset.
We use a beam splitter to capture the same scene with a low-resolution camera and a high-resolution camera.
Unlike the current small-scale datasets used for these tasks, our proposed dataset includes 11,421 pairs of low-resolution and high-resolution images.
- Score: 13.925480922578869
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Super resolution is the problem of recovering a high-resolution image from a
single or multiple low-resolution images of the same scene. It is an ill-posed
problem, since high-frequency visual details of the scene are completely lost in
low-resolution images. To overcome this, many machine learning approaches have
been proposed that aim to train a model to recover the lost details in new
scenes. Such approaches include the recent successful efforts to apply deep
learning techniques to the super resolution problem. Data itself plays a
significant role in the machine learning process, especially in deep learning
approaches, which are data hungry. Therefore, the process of gathering data and
its formation can be as vital as the machine learning technique used. Herein, we
propose a new data acquisition technique for gathering a real image dataset
which can be used as input for super resolution, noise cancellation and quality
enhancement techniques. We use a beam splitter to capture the same scene with a
low-resolution camera and a high-resolution camera. Since we also release the
raw images, this large-scale dataset can be used for other tasks such as ISP
generation. Unlike the current small-scale datasets used for these tasks, our
proposed dataset includes 11,421 pairs of low-resolution and high-resolution
images of diverse scenes. To our knowledge, this is the most complete dataset
for super resolution, ISP and image quality enhancement. The benchmarking
results show how the new dataset can be used to significantly improve the
quality of real-world image super resolution.
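Paired LR/HR datasets like the one above are typically used by scoring a model's super-resolved output against the HR ground truth. The sketch below is a minimal, hypothetical illustration (not the paper's method): it stands in for the LR camera with crude decimation, uses nearest-neighbor upsampling as a trivial baseline, and scores it with PSNR. All function names and synthetic data here are assumptions for illustration only.

```python
import numpy as np


def psnr(ref: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)


def upscale_nearest(lr: np.ndarray, scale: int) -> np.ndarray:
    """Naive nearest-neighbor upsampling, standing in for a real SR model."""
    return lr.repeat(scale, axis=0).repeat(scale, axis=1)


# Toy stand-in for one LR/HR pair: a synthetic HR image and its 2x decimation.
rng = np.random.default_rng(0)
hr = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
lr = hr[::2, ::2]               # crude decimation stands in for the LR camera
sr = upscale_nearest(lr, 2)     # baseline "super-resolved" output
print(f"baseline PSNR: {psnr(hr, sr):.2f} dB")
```

A learned SR model would replace `upscale_nearest`, and averaging PSNR over all 11,421 pairs would give the kind of benchmark number the abstract refers to.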
Related papers
- Rethinking Image Super-Resolution from Training Data Perspectives [54.28824316574355]
We investigate the understudied effect of the training data used for image super-resolution (SR).
With this, we propose an automated image evaluation pipeline.
We find that datasets with (i) low compression artifacts, (ii) high within-image diversity as judged by the number of different objects, and (iii) a large number of images from ImageNet or PASS all positively affect SR performance.
arXiv Detail & Related papers (2024-09-01T16:25:04Z)
- Efficient High-Resolution Deep Learning: A Survey [90.76576712433595]
Cameras in modern devices such as smartphones, satellites and medical equipment are capable of capturing very high resolution images and videos.
Such high-resolution data often need to be processed by deep learning models for cancer detection, automated road navigation, weather prediction, surveillance, optimizing agricultural processes and many other applications.
Using high-resolution images and videos as direct inputs for deep learning models creates many challenges due to the resulting high parameter counts, computation cost, inference latency and GPU memory consumption.
Several works in the literature propose better alternatives to deal with the challenges of high-resolution data and improve accuracy and speed while complying with hardware limitations.
arXiv Detail & Related papers (2022-07-26T17:13:53Z)
- Any-resolution Training for High-resolution Image Synthesis [55.19874755679901]
Generative models operate at fixed resolution, even though natural images come in a variety of sizes.
We argue that every pixel matters and create datasets with variable-size images, collected at their native resolutions.
We introduce continuous-scale training, a process that samples patches at random scales to train a new generator with variable output resolutions.
arXiv Detail & Related papers (2022-04-14T17:59:31Z)
- Dual-reference Training Data Acquisition and CNN Construction for Image Super-Resolution [33.388234549922025]
We propose a novel method to capture a large set of realistic LR~HR image pairs using real cameras.
Our innovation is to shoot images displayed on an ultra-high quality screen at different resolutions.
Experimental results show that training a super-resolution CNN on our LR~HR dataset yields superior restoration performance on real-world images compared with training on existing datasets.
arXiv Detail & Related papers (2021-08-05T03:31:50Z)
- A new public Alsat-2B dataset for single-image super-resolution [1.284647943889634]
The paper introduces a novel public remote sensing dataset (Alsat-2B) of low and high spatial resolution images (10 m and 2.5 m, respectively) for the single-image super-resolution task.
The high-resolution images are obtained through pan-sharpening.
The obtained results reveal that the proposed scheme is promising and highlight the challenges in the dataset.
arXiv Detail & Related papers (2021-03-21T10:47:38Z)
- Exploiting Raw Images for Real-Scene Super-Resolution [105.18021110372133]
We study the problem of real-scene single image super-resolution to bridge the gap between synthetic data and real captured images.
We propose a method to generate more realistic training data by mimicking the imaging process of digital cameras.
We also develop a two-branch convolutional neural network to exploit the radiance information originally recorded in raw images.
arXiv Detail & Related papers (2021-02-02T16:10:15Z)
- Deep Burst Super-Resolution [165.90445859851448]
We propose a novel architecture for the burst super-resolution task.
Our network takes multiple noisy RAW images as input, and generates a denoised, super-resolved RGB image as output.
In order to enable training and evaluation on real-world data, we additionally introduce the BurstSR dataset.
arXiv Detail & Related papers (2021-01-26T18:57:21Z)
- Learning When and Where to Zoom with Deep Reinforcement Learning [101.79271767464947]
We propose a reinforcement learning approach to identify when and where to use/acquire high resolution data conditioned on paired, cheap, low resolution images.
We conduct experiments on CIFAR10, CIFAR100, ImageNet and fMoW datasets where we use significantly less high resolution data while maintaining similar accuracy to models which use full high resolution images.
arXiv Detail & Related papers (2020-03-01T07:16:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.