Real-MFF: A Large Realistic Multi-focus Image Dataset with Ground Truth
- URL: http://arxiv.org/abs/2003.12779v3
- Date: Fri, 28 Aug 2020 11:25:18 GMT
- Title: Real-MFF: A Large Realistic Multi-focus Image Dataset with Ground Truth
- Authors: Juncheng Zhang, Qingmin Liao, Shaojun Liu, Haoyu Ma, Wenming Yang,
Jing-Hao Xue
- Abstract summary: We introduce a large and realistic multi-focus dataset called Real-MFF.
The dataset contains 710 pairs of source images with corresponding ground truth images.
We evaluate 10 typical multi-focus algorithms on this dataset for the purpose of illustration.
- Score: 58.226535803985804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-focus image fusion, a technique to generate an all-in-focus image from
two or more partially-focused source images, can benefit many computer vision
tasks. However, currently there is no large and realistic dataset to perform
convincing evaluation and comparison of algorithms in multi-focus image fusion.
Moreover, it is difficult to train a deep neural network for multi-focus image
fusion without a suitable dataset. In this letter, we introduce a large and
realistic multi-focus dataset called Real-MFF, which contains 710 pairs of
source images with corresponding ground truth images. The dataset is generated
from light field images, and both the source images and the ground truth images
are realistic. To serve as both a well-established benchmark for existing
multi-focus image fusion algorithms and an appropriate training dataset for
future development of deep-learning-based methods, the dataset contains a
variety of scenes, including buildings, plants, humans, shopping malls, squares
and so on. We also evaluate 10 typical multi-focus algorithms on this dataset
for the purpose of illustration.
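To make the fusion task concrete, here is a minimal classical baseline for a pair of partially-focused images: compute a per-pixel focus measure (local Laplacian energy) for each source and keep whichever pixel is sharper. This is an illustrative sketch only, not one of the ten algorithms evaluated in the paper, and the function names are ours.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_measure(gray: np.ndarray, window: int = 9) -> np.ndarray:
    """Local energy of the Laplacian: high where the image is in focus."""
    lap = laplace(gray.astype(np.float64))
    return uniform_filter(lap ** 2, size=window)

def fuse_pair(img_a: np.ndarray, img_b: np.ndarray, window: int = 9) -> np.ndarray:
    """Per-pixel selection fusion of two grayscale source images."""
    mask = focus_measure(img_a, window) >= focus_measure(img_b, window)
    return np.where(mask, img_a, img_b)
```

For color images the focus measure would be computed on a luminance channel and the same decision mask applied to every channel; practical methods additionally regularize the decision map to avoid seams at region boundaries.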
Related papers
- Contrasting Deepfakes Diffusion via Contrastive Learning and Global-Local Similarities [88.398085358514]
Contrastive Deepfake Embeddings (CoDE) is a novel embedding space specifically designed for deepfake detection.
CoDE is trained via contrastive learning, additionally enforcing global-local similarities (a minimal contrastive-loss sketch follows this entry).
arXiv Detail & Related papers (2024-07-29T18:00:10Z)
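As a rough illustration of the contrastive training mentioned in the CoDE entry above, here is a generic InfoNCE loss over a batch of paired embeddings. This is a standard formulation, not CoDE's actual objective or its global-local terms; all names are illustrative.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                  temperature: float = 0.07) -> torch.Tensor:
    """Generic InfoNCE: matched rows of z_a and z_b are positives, all others negatives."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature          # (N, N) cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)
```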
- Towards Real-World Focus Stacking with Deep Learning [97.34754533628322]
We introduce a new dataset consisting of 94 high-resolution bursts of raw images with focus bracketing.
This dataset is used to train the first deep learning algorithm for focus stacking capable of handling bursts of sufficient length for real-world applications (see the burst-stacking sketch after this entry).
arXiv Detail & Related papers (2023-11-29T17:49:33Z)
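The burst-stacking sketch promised above: focus stacking generalizes pairwise fusion by picking, per pixel, the frame with the highest focus measure across the burst. A minimal classical sketch, not the paper's learned method; names are ours.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter, median_filter

def stack_burst(frames: np.ndarray, window: int = 9) -> np.ndarray:
    """frames: (N, H, W) grayscale burst -> (H, W) all-in-focus composite."""
    sharpness = np.stack([
        uniform_filter(laplace(f.astype(np.float64)) ** 2, size=window)
        for f in frames
    ])                                         # (N, H, W) focus measure per frame
    index = np.argmax(sharpness, axis=0)       # per-pixel best frame
    index = median_filter(index, size=window)  # smooth the decision map
    return np.take_along_axis(frames, index[None], axis=0)[0]
```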
- Bridging the Gap between Multi-focus and Multi-modal: A Focused Integration Framework for Multi-modal Image Fusion [5.417493475406649]
Multi-modal image fusion (MMIF) integrates valuable information from images of different modalities into a single fused image.
This paper proposes an MMIF framework for joint focused integration and modality information extraction.
The proposed algorithm surpasses state-of-the-art methods in both visual perception and quantitative evaluation.
arXiv Detail & Related papers (2023-11-03T12:58:39Z)
- EDIS: Entity-Driven Image Search over Multimodal Web Content [95.40238328527931]
We introduce Entity-Driven Image Search (EDIS), a dataset for cross-modal image search in the news domain.
EDIS consists of 1 million web images from actual search engine results and curated datasets, with each image paired with a textual description.
arXiv Detail & Related papers (2023-05-23T02:59:19Z)
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper pursues the holistic goal of maintaining spatially precise, high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales while preserving high-resolution spatial details (a minimal multi-scale sketch follows this entry).
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z)
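A toy sketch of the multi-scale idea in the entry above: run a full-resolution and a downsampled stream in parallel and merge them so high-resolution detail is preserved. This is a generic block, not the paper's actual architecture; all names are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoScaleBlock(nn.Module):
    """Toy parallel-resolution block: a full-res and a half-res stream, merged."""
    def __init__(self, channels: int):
        super().__init__()
        self.full = nn.Conv2d(channels, channels, 3, padding=1)
        self.half = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        hi = F.relu(self.full(x))                   # full-resolution stream
        lo = F.relu(self.half(F.avg_pool2d(x, 2)))  # half-resolution context
        lo = F.interpolate(lo, size=x.shape[-2:], mode="bilinear",
                           align_corners=False)
        return x + hi + lo                          # residual multi-scale merge
```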
- TransFuse: A Unified Transformer-based Image Fusion Framework using Self-supervised Learning [5.849513679510834]
Image fusion integrates complementary information from multiple source images to improve the richness of a single image.
Two-stage methods avoid the need for large amounts of task-specific training data by training an encoder-decoder network on large natural-image datasets.
We propose a destruction-reconstruction based self-supervised training scheme to encourage the network to learn task-specific features (a minimal destruction-reconstruction sketch follows this entry).
arXiv Detail & Related papers (2022-01-19T07:30:44Z)
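The destruction-reconstruction sketch promised above: corrupt random patches of the input and train the network to restore the original, so it learns task-relevant features without labels. A generic stand-in for the scheme named in the TransFuse entry, not its actual pipeline; names are illustrative.

```python
import torch
import torch.nn.functional as F

def destroy_patches(x: torch.Tensor, patch: int = 16, drop: float = 0.3) -> torch.Tensor:
    """Zero out a random fraction of non-overlapping patches in a (B, C, H, W) batch."""
    b, _, h, w = x.shape
    mask = (torch.rand(b, 1, h // patch, w // patch, device=x.device) > drop).float()
    mask = F.interpolate(mask, size=(h, w), mode="nearest")
    return x * mask

def reconstruction_step(net: torch.nn.Module, x: torch.Tensor,
                        opt: torch.optim.Optimizer) -> float:
    """One self-supervised step: reconstruct the clean image from its destroyed version."""
    opt.zero_grad()
    loss = F.l1_loss(net(destroy_patches(x)), x)
    loss.backward()
    opt.step()
    return loss.item()
```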
- Two-shot Spatially-varying BRDF and Shape Estimation [89.29020624201708]
We propose a novel deep learning architecture with a stage-wise estimation of shape and SVBRDF.
We create a large-scale synthetic training dataset with domain-randomized geometry and realistic materials.
Experiments on both synthetic and real-world datasets show that our network trained on a synthetic dataset can generalize well to real-world images.
arXiv Detail & Related papers (2020-04-01T12:56:13Z)
- MFFW: A new dataset for multi-focus image fusion [24.91107749755963]
This paper constructs a new dataset called MFF in the wild (MFFW).
It contains 19 pairs of multi-focus images collected on the Internet.
Experiments demonstrate that most state-of-the-art methods cannot robustly generate satisfactory fused images on the MFFW dataset.
arXiv Detail & Related papers (2020-02-12T03:35:37Z)
- FFusionCGAN: An end-to-end fusion method for few-focus images using conditional GAN in cytopathological digital slides [0.0]
Multi-focus image fusion techniques combine images taken at different focal depths into a single image in which most objects are in focus.
This paper proposes a novel method for generating fused images from single-focus or few-focus images based on a conditional generative adversarial network (GAN); a minimal conditional-GAN training sketch follows this entry.
By integrating this network into the generative model, the quality of the generated fused images is effectively improved.
arXiv Detail & Related papers (2020-01-03T02:13:47Z)
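The conditional-GAN sketch promised above: a pix2pix-style training step in which the generator maps a source image to a fused output and the discriminator judges (source, image) pairs, assuming a reference fused image is available as the target. This is the generic recipe, not FFusionCGAN's actual networks or losses; all names are illustrative.

```python
import torch
import torch.nn.functional as F

def cgan_step(gen, disc, src, target, g_opt, d_opt, l1_weight: float = 100.0):
    """One conditional-GAN step: disc sees (source, image) pairs; gen fools disc + L1."""
    fake = gen(src)

    # Discriminator: real pairs -> 1, fake pairs -> 0.
    d_opt.zero_grad()
    d_real = disc(torch.cat([src, target], dim=1))
    d_fake = disc(torch.cat([src, fake.detach()], dim=1))
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_loss.backward()
    d_opt.step()

    # Generator: fool the discriminator while staying close to the target in L1.
    g_opt.zero_grad()
    d_fake = disc(torch.cat([src, fake], dim=1))
    g_loss = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
              + l1_weight * F.l1_loss(fake, target))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```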