Selective Light Field Refocusing for Camera Arrays Using Bokeh Rendering and Superresolution
- URL: http://arxiv.org/abs/2108.03918v1
- Date: Mon, 9 Aug 2021 10:19:21 GMT
- Title: Selective Light Field Refocusing for Camera Arrays Using Bokeh Rendering and Superresolution
- Authors: Yingqian Wang, Jungang Yang, Yulan Guo, Chao Xiao, Wei An
- Abstract summary: We propose a light field refocusing method to improve the imaging quality of camera arrays.
In our method, the unfocused region (bokeh) is rendered by using a depth-based anisotropic filter.
Our method achieves superior visual performance with acceptable computational cost as compared to other state-of-the-art methods.
- Score: 27.944215174962995
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Camera arrays provide spatial and angular information within a single
snapshot. With refocusing methods, focal planes can be altered after exposure.
In this letter, we propose a light field refocusing method to improve the
imaging quality of camera arrays. In our method, the disparity is first
estimated. Then, the unfocused region (bokeh) is rendered by using a
depth-based anisotropic filter. Finally, the refocused image is produced by a
reconstruction-based superresolution approach where the bokeh image is used as
a regularization term. Our method can selectively refocus images, with the
focused region being superresolved and the bokeh aesthetically rendered. Our
method also enables post-adjustment of the depth of field. We conduct
experiments on both
public and self-developed datasets. Our method achieves superior visual
performance with acceptable computational cost as compared to other
state-of-the-art methods. Code is available at
https://github.com/YingqianWang/Selective-LF-Refocusing.
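As a concrete illustration, here is a minimal Python sketch of the overall idea. It is not the authors' implementation: plain shift-and-add refocusing stands in for the reconstruction-based superresolution step, an isotropic disparity-dependent Gaussian stands in for the depth-based anisotropic filter, and the blur law and all function names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def refocus_shift_and_add(views, positions, d_focus):
    """Plain shift-and-add refocusing: translate each sub-aperture view
    by its camera offset scaled by the target disparity, then average.
    views: list of (H, W, 3) arrays; positions: list of (u, v) offsets."""
    acc = np.zeros_like(views[0], dtype=np.float64)
    for view, (u, v) in zip(views, positions):
        acc += shift(view.astype(np.float64),
                     (d_focus * v, d_focus * u, 0),  # (row, col, channel)
                     order=1, mode='nearest')
    return acc / len(views)

def depth_varying_blur(image, sigma_map, n_levels=6):
    """Approximate a spatially varying Gaussian blur by compositing a
    small stack of uniformly blurred layers (an isotropic stand-in for
    the paper's depth-based anisotropic filter)."""
    out = image.astype(np.float64).copy()
    sigmas = np.linspace(0.0, max(sigma_map.max(), 1e-6), n_levels)
    for lo, hi in zip(sigmas[:-1], sigmas[1:]):
        blurred = gaussian_filter(image.astype(np.float64), sigma=(hi, hi, 0))
        mask = ((sigma_map > lo) & (sigma_map <= hi))[..., None]
        out = np.where(mask, blurred, out)
    return out

def selective_refocus(views, positions, disparity, d_focus, dof=0.5, gain=4.0):
    """Compose a sharp refocused region with depth-dependent bokeh.
    disparity: (H, W) map for the center view; dof widens the in-focus
    band; the linear law sigma = gain * |d - d_focus| is an assumption."""
    focused = refocus_shift_and_add(views, positions, d_focus)
    center = views[len(views) // 2].astype(np.float64)  # assumes center view
    sigma_map = np.maximum(np.abs(disparity - d_focus) - dof, 0.0) * gain
    bokeh = depth_varying_blur(center, sigma_map)
    in_focus = (sigma_map == 0.0)[..., None]
    return np.where(in_focus, focused, bokeh)
```

In the paper itself, the focused region is instead obtained by solving a reconstruction-based superresolution problem in which the rendered bokeh image serves as the regularization term; that is what lets the in-focus region gain resolution while the bokeh stays aesthetically rendered.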
Related papers
- Variable Aperture Bokeh Rendering via Customized Focal Plane Guidance [18.390543681127976]
The proposed method has achieved competitive state-of-the-art performance with only 4.4M parameters, which is much lighter than mainstream computational bokeh models.
arXiv Detail & Related papers (2024-10-18T12:04:23Z)
- Towards Real-World Focus Stacking with Deep Learning [97.34754533628322]
We introduce a new dataset consisting of 94 high-resolution bursts of raw images with focus bracketing.
This dataset is used to train the first deep learning algorithm for focus stacking capable of handling bursts of sufficient length for real-world applications.
arXiv Detail & Related papers (2023-11-29T17:49:33Z)
- ExBluRF: Efficient Radiance Fields for Extreme Motion Blurred Images [58.24910105459957]
We present ExBluRF, a novel view synthesis method for extreme motion blurred images.
Our approach consists of two main components: 6-DOF camera trajectory-based motion blur formulation and voxel-based radiance fields.
Compared with existing works, our approach restores much sharper 3D scenes with roughly 10 times less training time and GPU memory consumption.
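As a rough illustration of that formulation, the sketch below models a motion-blurred image as the average of sharp renders at poses sampled along the exposure-time camera trajectory; `render_fn` and `pose_fn` are hypothetical stand-ins for the voxel-based radiance-field renderer and the learned 6-DOF trajectory, not ExBluRF's actual API.

```python
import numpy as np

def motion_blurred_render(render_fn, pose_fn, n_samples=16):
    """Model a motion-blurred image as the average of sharp renders at
    poses sampled along the camera trajectory during the exposure.
    render_fn(pose) -> (H, W, 3) image; pose_fn(t) -> 6-DOF pose, t in [0, 1].
    Both are hypothetical stand-ins for the paper's components."""
    ts = np.linspace(0.0, 1.0, n_samples)
    return np.mean([render_fn(pose_fn(t)) for t in ts], axis=0)
```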
arXiv Detail & Related papers (2023-09-16T11:17:25Z)
- Defocus to focus: Photo-realistic bokeh rendering by fusing defocus and radiance priors [26.38833313692807]
Bokeh rendering mimics aesthetic shallow depth-of-field (DoF) in professional photography.
Existing methods suffer from simple flat background blur and blurred in-focus regions.
We present a Defocus to Focus (D2F) framework to learn realistic bokeh rendering.
arXiv Detail & Related papers (2023-06-07T15:15:13Z)
- Bokeh Rendering Based on Adaptive Depth Calibration Network [13.537088629080122]
Bokeh rendering is a popular technique used in photography to create an aesthetically pleasing effect.
Mobile phones are not able to capture natural shallow depth-of-field photos.
We propose a novel method for bokeh rendering using the Vision Transformer, a recent and powerful deep learning architecture.
arXiv Detail & Related papers (2023-02-21T16:33:51Z)
- Learning Depth from Focus in the Wild [16.27391171541217]
We present a convolutional neural network-based method for depth estimation from single focal stacks.
Our method allows depth maps to be inferred in an end-to-end manner even with image alignment.
For the generalization of the proposed network, we develop a simulator to realistically reproduce the features of commercial cameras.
arXiv Detail & Related papers (2022-07-20T05:23:29Z)
- Single image deep defocus estimation and its applications [82.93345261434943]
We train a deep neural network to classify image patches into one of 20 levels of blurriness.
The trained model is used to determine the patch blurriness which is then refined by applying an iterative weighted guided filter.
The result is a defocus map that carries the information of the degree of blurriness for each pixel.
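A sketch of that patch-classification pipeline is below, under the assumption of a trained patch classifier (`classify_patch` is a hypothetical stand-in for the paper's CNN) and with a single guided-filter pass in place of the paper's iterative weighted refinement.

```python
import cv2  # pip install opencv-contrib-python (for cv2.ximgproc)
import numpy as np

def defocus_map(image_bgr, classify_patch, patch=32, levels=20):
    """Score every patch's blurriness on a discrete scale, then refine
    the coarse map with a guided filter so that it follows image edges.
    classify_patch maps a (patch, patch, 3) array to an int in [0, levels)."""
    h, w = image_bgr.shape[:2]
    coarse = np.zeros((h // patch, w // patch), np.float32)
    for i in range(coarse.shape[0]):
        for j in range(coarse.shape[1]):
            block = image_bgr[i * patch:(i + 1) * patch,
                              j * patch:(j + 1) * patch]
            coarse[i, j] = classify_patch(block) / (levels - 1)
    dense = cv2.resize(coarse, (w, h), interpolation=cv2.INTER_LINEAR)
    guide = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255
    # single guided-filter pass; the paper iterates a weighted refinement
    return cv2.ximgproc.guidedFilter(guide, dense, radius=8, eps=1e-3)
```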
arXiv Detail & Related papers (2021-07-30T06:18:16Z)
- Geometric Scene Refocusing [9.198471344145092]
We study the fine characteristics of images with a shallow depth-of-field in the context of focal stacks.
We identify in-focus pixels, dual-focus pixels, pixels that exhibit bokeh and spatially-varying blur kernels between focal slices.
We present a comprehensive algorithm for post-capture refocusing in a geometrically correct manner.
arXiv Detail & Related papers (2020-12-20T06:33:55Z)
- Defocus Blur Detection via Depth Distillation [64.78779830554731]
We introduce depth information into defocus blur detection (DBD) for the first time.
In detail, we learn the defocus blur from ground truth and the depth distilled from a well-trained depth estimation network.
Our approach outperforms 11 other state-of-the-art methods on two popular datasets.
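The distillation idea can be made concrete with a short sketch. This is an assumption about the general training setup, not the paper's actual losses: a defocus branch is supervised by ground truth, while a depth branch is supervised by predictions from a frozen, well-trained depth-estimation teacher.

```python
import torch
import torch.nn.functional as F

def distillation_loss(pred_blur, pred_depth, gt_blur, teacher_depth, w=0.5):
    """Hypothetical two-term objective: supervise the defocus-blur map
    with ground truth, and supervise a depth branch with depth distilled
    from a frozen teacher network. Loss forms and weight w are assumptions."""
    blur_loss = F.binary_cross_entropy_with_logits(pred_blur, gt_blur)
    depth_loss = F.l1_loss(pred_depth, teacher_depth)
    return blur_loss + w * depth_loss
```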
arXiv Detail & Related papers (2020-07-16T04:58:09Z)
- Rendering Natural Camera Bokeh Effect with Deep Learning [95.86933125733673]
Bokeh is an important artistic effect used to highlight the main object of interest in a photo.
Mobile cameras are unable to produce shallow depth-of-field photos due to a very small aperture diameter of their optics.
We propose to learn a realistic shallow focus technique directly from the photos produced by DSLR cameras.
arXiv Detail & Related papers (2020-06-10T07:28:06Z)
- Learning Light Field Angular Super-Resolution via a Geometry-Aware Network [101.59693839475783]
We propose an end-to-end learning-based approach aiming at angularly super-resolving a sparsely-sampled light field with a large baseline.
Our method improves PSNR over the second-best method by up to 2 dB on average, while running 48× faster.
arXiv Detail & Related papers (2020-02-26T02:36:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.