Towards Real-World Focus Stacking with Deep Learning
- URL: http://arxiv.org/abs/2311.17846v1
- Date: Wed, 29 Nov 2023 17:49:33 GMT
- Title: Towards Real-World Focus Stacking with Deep Learning
- Authors: Alexandre Araujo, Jean Ponce, Julien Mairal
- Abstract summary: We introduce a new dataset consisting of 94 high-resolution bursts of raw images with focus bracketing.
This dataset is used to train the first deep learning algorithm for focus stacking capable of handling bursts of sufficient length for real-world applications.
- Score: 97.34754533628322
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Focus stacking is widely used in micro, macro, and landscape photography to
reconstruct all-in-focus images from multiple frames obtained with focus
bracketing, that is, with shallow depth of field and different focus planes.
Existing deep learning approaches to the underlying multi-focus image fusion
problem have limited applicability to real-world imagery since they are
designed for very short image sequences (two to four images), and are typically
trained on small, low-resolution datasets either acquired by light-field
cameras or generated synthetically. We introduce a new dataset consisting of 94
high-resolution bursts of raw images with focus bracketing, with pseudo ground
truth computed from the data using state-of-the-art commercial software. This
dataset is used to train the first deep learning algorithm for focus stacking
capable of handling bursts of sufficient length for real-world applications.
Qualitative experiments demonstrate that it is on par with existing commercial
solutions in the long-burst, realistic regime while being significantly more
tolerant to noise. The code and dataset are available at
https://github.com/araujoalexandre/FocusStackingDataset.
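The abstract describes fusing a focus-bracketed burst into an all-in-focus image. As a point of reference, a minimal classical baseline (not the paper's learned method) picks, for every pixel, the frame with the highest local sharpness, here measured as the locally averaged absolute Laplacian response:

```python
# Classical focus-stacking sketch (a hand-crafted baseline, NOT the
# deep learning method from the paper): per-pixel selection of the
# sharpest frame in a focus-bracketed burst.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_stack(burst, window=9):
    """burst: sequence of grayscale frames (H, W) of the same scene
    captured with different focus planes. Returns a fused (H, W) image."""
    burst = np.asarray(burst, dtype=np.float64)
    # Per-frame sharpness map: |Laplacian| averaged over a local window.
    sharpness = np.stack(
        [uniform_filter(np.abs(laplace(f)), size=window) for f in burst]
    )
    # For every pixel, copy the value from the sharpest frame.
    best = np.argmax(sharpness, axis=0)
    rows, cols = np.indices(best.shape)
    return burst[best, rows, cols]
```

Such per-pixel selection breaks down on long real-world bursts with noise and misalignment, which is the regime the paper's dataset and network target.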
Related papers
- Deep Phase Coded Image Prior [34.84063452418995]
Phase-coded imaging is a method to tackle tasks such as passive depth estimation and extended depth of field.
Most of the current deep learning-based methods for depth estimation or all-in-focus imaging require a training dataset with high-quality depth maps.
We propose a new method named "Deep Phase Coded Image Prior" (DPCIP) for jointly recovering the depth map and all-in-focus image.
arXiv Detail & Related papers (2024-04-05T05:58:40Z)
- Exposure Bracketing is All You Need for Unifying Image Restoration and Enhancement Tasks [50.822601495422916]
We propose to utilize exposure bracketing photography to unify image restoration and enhancement tasks.
Due to the difficulty in collecting real-world pairs, we suggest a solution that first pre-trains the model with synthetic paired data.
In particular, a temporally modulated recurrent network (TMRNet) and self-supervised adaptation method are proposed.
arXiv Detail & Related papers (2024-01-01T14:14:35Z)
- Learning Depth from Focus in the Wild [16.27391171541217]
We present a convolutional neural network-based method for depth estimation from single focal stacks.
Our method allows depth maps to be inferred in an end-to-end manner even with image alignment.
For the generalization of the proposed network, we develop a simulator to realistically reproduce the features of commercial cameras.
arXiv Detail & Related papers (2022-07-20T05:23:29Z)
- Learning to Deblur using Light Field Generated and Real Defocus Images [4.926805108788465]
Defocus deblurring is a challenging task due to the spatially varying nature of defocus blur.
We propose a novel deep defocus deblurring network that leverages the strength and overcomes the shortcoming of light fields.
arXiv Detail & Related papers (2022-04-01T11:35:51Z)
- Single image deep defocus estimation and its applications [82.93345261434943]
We train a deep neural network to classify image patches into one of 20 levels of blurriness.
The trained model is used to determine the patch blurriness which is then refined by applying an iterative weighted guided filter.
The result is a defocus map that carries the information of the degree of blurriness for each pixel.
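The defocus-map idea above can be illustrated with a hand-crafted stand-in for the patch classifier (not the trained network): score each patch by the local variance of its Laplacian and quantize the score into 20 blurriness levels.

```python
# Hand-crafted defocus-map sketch (a heuristic stand-in, NOT the trained
# classifier from the paper): per-patch blurriness quantized into levels.
import numpy as np
from scipy.ndimage import laplace

def defocus_map(img, patch=8, levels=20):
    """img: grayscale (H, W). Returns a (H//patch, W//patch) map of
    blurriness levels, 0 = sharpest patch, levels-1 = blurriest."""
    h, w = img.shape
    lap = laplace(img.astype(np.float64))
    score = np.zeros((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            block = lap[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            score[i, j] = block.var()  # high Laplacian variance = sharp
    # Invert and quantize: sharp patches get low blurriness levels.
    norm = (score - score.min()) / (np.ptp(score) + 1e-12)
    return (levels - 1) - np.round(norm * (levels - 1)).astype(int)
```

The paper additionally refines such a patch-level map with an iterative weighted guided filter to obtain per-pixel blurriness; this sketch stops at the raw patch scores.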
arXiv Detail & Related papers (2021-07-30T06:18:16Z)
- Deep Autofocus for Synthetic Aperture Sonar [28.306713374371814]
In this letter, we demonstrate the potential of machine learning, specifically deep learning, to address the autofocus problem.
We formulate the problem as a self-supervised, phase error estimation task using a deep network we call Deep Autofocus.
Our results demonstrate Deep Autofocus can produce imagery that is perceptually as good as benchmark iterative techniques but at a substantially lower computational cost.
arXiv Detail & Related papers (2020-10-29T15:31:15Z)
- Defocus Blur Detection via Depth Distillation [64.78779830554731]
We introduce depth information into defocus blur detection (DBD) for the first time.
In detail, we learn the defocus blur from ground truth and the depth distilled from a well-trained depth estimation network.
Our approach outperforms 11 other state-of-the-art methods on two popular datasets.
arXiv Detail & Related papers (2020-07-16T04:58:09Z)
- Real-MFF: A Large Realistic Multi-focus Image Dataset with Ground Truth [58.226535803985804]
We introduce a large and realistic multi-focus dataset called Real-MFF.
The dataset contains 710 pairs of source images with corresponding ground truth images.
We evaluate 10 typical multi-focus algorithms on this dataset for the purpose of illustration.
arXiv Detail & Related papers (2020-03-28T12:33:46Z)
- Single Image Depth Estimation Trained via Depth from Defocus Cues [105.67073923825842]
Estimating depth from a single RGB image is a fundamental task in computer vision.
In this work, we rely on depth-from-defocus cues rather than different views.
We present results that are on par with supervised methods on KITTI and Make3D datasets and outperform unsupervised learning approaches.
arXiv Detail & Related papers (2020-01-14T20:22:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.