MC-Blur: A Comprehensive Benchmark for Image Deblurring
- URL: http://arxiv.org/abs/2112.00234v3
- Date: Mon, 11 Sep 2023 10:13:21 GMT
- Title: MC-Blur: A Comprehensive Benchmark for Image Deblurring
- Authors: Kaihao Zhang, Tao Wang, Wenhan Luo, Boheng Chen, Wenqi Ren, Bjorn
Stenger, Wei Liu, Hongdong Li, Ming-Hsuan Yang
- Abstract summary: In most real-world images, blur is caused by different factors, e.g., motion and defocus.
We construct a new large-scale multi-cause image deblurring dataset (called MC-Blur).
Based on the MC-Blur dataset, we conduct extensive benchmarking studies to compare SOTA methods in different scenarios.
- Score: 127.6301230023318
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Blur artifacts can seriously degrade the visual quality of images, and
numerous deblurring methods have been proposed for specific scenarios. However,
in most real-world images, blur is caused by different factors, e.g., motion
and defocus. In this paper, we address how different deblurring methods perform
in the case of multiple types of blur. For in-depth performance evaluation, we
construct a new large-scale multi-cause image deblurring dataset (called
MC-Blur), including real-world and synthesized blurry images with mixed blur
factors. The images in the proposed MC-Blur dataset are collected using
different techniques: averaging sharp images captured by a 1000-fps high-speed
camera, convolving Ultra-High-Definition (UHD) sharp images with large-size
kernels, adding defocus blur to images, and capturing real-world blurry images
with various camera models. Based on the MC-Blur dataset, we conduct extensive
benchmarking studies to compare SOTA methods in different scenarios, analyze
their efficiency, and investigate the capacity of the proposed dataset. These
benchmarking results provide a comprehensive overview of the advantages and
limitations of current deblurring methods, and demonstrate the value of our
dataset.
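The two synthetic collection techniques described in the abstract, averaging high-frame-rate sharp frames to approximate motion blur and convolving sharp images with large kernels, can be sketched roughly as follows. This is a minimal NumPy/SciPy illustration, not the authors' actual pipeline; the array sizes, the shifted-frame motion model, and the box kernel are all assumptions for demonstration.

```python
import numpy as np
from scipy.signal import fftconvolve

def motion_blur_by_averaging(frames):
    """Approximate motion blur by averaging consecutive sharp frames,
    mimicking a long exposure (as with the 1000-fps capture)."""
    return np.mean(np.stack(frames, axis=0), axis=0)

def blur_by_kernel(image, kernel):
    """Approximate blur by convolving a sharp image with a blur kernel."""
    kernel = kernel / kernel.sum()  # normalize so overall brightness is preserved
    return fftconvolve(image, kernel, mode="same")

# Toy example: horizontally shifted copies of one frame stand in for motion.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
frames = [np.roll(sharp, shift=s, axis=1) for s in range(8)]
motion_blurred = motion_blur_by_averaging(frames)

# A 9x9 uniform (box) kernel stands in for the paper's large-size kernels.
box_kernel = np.ones((9, 9))
kernel_blurred = blur_by_kernel(sharp, box_kernel)
print(motion_blurred.shape, kernel_blurred.shape)
```

Both operations smooth the image, so the blurred outputs have lower pixel variance than the sharp input; real pipelines would additionally handle color channels, saturation, and noise.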
Related papers
- GS-Blur: A 3D Scene-Based Dataset for Realistic Image Deblurring [50.72230109855628]
We propose GS-Blur, a dataset of synthesized realistic blurry images created using a novel approach.
We first reconstruct 3D scenes from multi-view images using 3D Gaussian Splatting (3DGS), then render blurry images by moving the camera view along the randomly generated motion trajectories.
By adopting various camera trajectories in reconstructing our GS-Blur, our dataset contains realistic and diverse types of blur, offering a large-scale dataset that generalizes well to real-world blur.
arXiv Detail & Related papers (2024-10-31T06:17:16Z)
- LMHaze: Intensity-aware Image Dehazing with a Large-scale Multi-intensity Real Haze Dataset [14.141433473509826]
We present LMHaze, a large-scale, high-quality real-world dataset.
LMHaze comprises paired hazy and haze-free images captured in diverse indoor and outdoor environments.
To better handle images with different haze intensities, we propose a mixture-of-experts model based on Mamba.
arXiv Detail & Related papers (2024-10-21T15:20:02Z)
- A New Dataset and Framework for Real-World Blurred Images Super-Resolution [9.122275433854062]
We develop a new super-resolution dataset specifically tailored for blurred images, named the Real-world Blur-kept Super-Resolution (ReBlurSR) dataset.
We propose Perceptual-Blur-adaptive Super-Resolution (PBaSR), which comprises two main modules: the Cross Disentanglement Module (CDM) and the Cross Fusion Module (CFM).
By integrating these two modules, PBaSR achieves commendable performance on both general and blur data without any additional inference and deployment cost.
arXiv Detail & Related papers (2024-07-20T14:07:03Z)
- DaBiT: Depth and Blur informed Transformer for Joint Refocusing and Super-Resolution [4.332534893042983]
In many real-world scenarios, recorded videos suffer from accidental focus blur.
This paper introduces a framework optimised for focal deblurring (refocusing) and video super-resolution (VSR).
We achieve state-of-the-art results, with an average PSNR over 1.9 dB higher than comparable existing video restoration methods.
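PSNR, the metric behind comparisons like the one above, measures restoration quality in decibels relative to the mean squared error between a restored image and its sharp reference. A minimal sketch (not tied to any specific paper's evaluation code; images are assumed to be floats in [0, max_val]):

```python
import numpy as np

def psnr(reference, restored, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

# Toy check: a uniform error of 0.1 gives MSE = 0.01, i.e. 20 dB.
a = np.zeros((4, 4))
b = np.full((4, 4), 0.1)
print(round(psnr(a, b), 1))  # 20.0
```

A 1.9 dB gap is substantial on this logarithmic scale: it corresponds to roughly a 35% reduction in mean squared error.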
arXiv Detail & Related papers (2024-07-01T12:22:16Z)
- Probabilistic Deep Metric Learning for Hyperspectral Image Classification [91.5747859691553]
This paper proposes a probabilistic deep metric learning framework for hyperspectral image classification.
It aims to predict the category of each pixel for an image captured by hyperspectral sensors.
Our framework can be readily applied to existing hyperspectral image classification methods.
arXiv Detail & Related papers (2022-11-15T17:57:12Z)
- Robustifying the Multi-Scale Representation of Neural Radiance Fields [86.69338893753886]
We present a robust multi-scale neural radiance fields representation approach to overcome two real-world imaging issues.
Our method handles multi-scale imaging effects and camera-pose estimation problems with NeRF-inspired approaches.
We demonstrate, with examples, that for an accurate neural representation of an object from day-to-day acquired multi-view images, it is crucial to have precise camera-pose estimates.
arXiv Detail & Related papers (2022-10-09T11:46:45Z)
- DeepMultiCap: Performance Capture of Multiple Characters Using Sparse Multiview Cameras [63.186486240525554]
DeepMultiCap is a novel method for multi-person performance capture using sparse multi-view cameras.
Our method can capture time-varying surface details without the need for pre-scanned template models.
arXiv Detail & Related papers (2021-05-01T14:32:13Z)
- Deblurring by Realistic Blurring [110.54173799114785]
We propose a new method which combines two GAN models, i.e., a learning-to-Blur GAN (BGAN) and a learning-to-DeBlur GAN (DBGAN).
The first model, BGAN, learns how to blur sharp images with unpaired sharp and blurry image sets, and then guides the second model, DBGAN, to learn how to correctly deblur such images.
As an additional contribution, this paper also introduces a Real-World Blurred Image (RWBI) dataset including diverse blurry images.
arXiv Detail & Related papers (2020-04-04T05:25:15Z)
- MFFW: A new dataset for multi-focus image fusion [24.91107749755963]
This paper constructs a new dataset called MFF in the wild (MFFW).
It contains 19 pairs of multi-focus images collected on the Internet.
Experiments demonstrate that most state-of-the-art methods cannot robustly generate satisfactory fusion images on the MFFW dataset.
arXiv Detail & Related papers (2020-02-12T03:35:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.