Defocus to focus: Photo-realistic bokeh rendering by fusing defocus and radiance priors
- URL: http://arxiv.org/abs/2306.04506v1
- Date: Wed, 7 Jun 2023 15:15:13 GMT
- Title: Defocus to focus: Photo-realistic bokeh rendering by fusing defocus and radiance priors
- Authors: Xianrui Luo, Juewen Peng, Ke Xian, Zijin Wu, Zhiguo Cao
- Abstract summary: Bokeh rendering mimics aesthetic shallow depth-of-field (DoF) in professional photography.
Existing methods suffer from simple flat background blur and blurred in-focus regions.
We present a Defocus to Focus (D2F) framework to learn realistic bokeh rendering.
- Score: 26.38833313692807
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We consider the problem of realistic bokeh rendering from a single
all-in-focus image. Bokeh rendering mimics aesthetic shallow depth-of-field
(DoF) in professional photography, but these visual effects generated by
existing methods suffer from simple flat background blur and blurred in-focus
regions, giving rise to unrealistic rendered results. In this work, we argue
that realistic bokeh rendering should (i) model depth relations and distinguish
in-focus regions, (ii) sustain sharp in-focus regions, and (iii) render
physically accurate Circle of Confusion (CoC). To this end, we present a
Defocus to Focus (D2F) framework to learn realistic bokeh rendering by fusing
defocus priors with the all-in-focus image and by implementing radiance priors
in layered fusion. Since no depth map is provided, we introduce defocus
hallucination to integrate depth by learning to focus. The predicted defocus
map implies the blur amount of bokeh and is used to guide weighted layered
rendering. In layered rendering, we fuse images blurred by different kernels
based on the defocus map. To increase the realism of the bokeh, we adopt
radiance virtualization to simulate scene radiance. The scene radiance used in
weighted layered rendering reassigns weights in the soft disk kernel to produce
the CoC. To ensure the sharpness of in-focus regions, we propose to fuse
upsampled bokeh images and original images. We predict the initial fusion mask
from our defocus map and refine the mask with a deep network. We evaluate our
model on a large-scale bokeh dataset. Extensive experiments show that our
approach is capable of rendering visually pleasing bokeh effects in complex
scenes. In particular, our solution receives the runner-up award in the AIM
2020 Rendering Realistic Bokeh Challenge.
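The abstract outlines the pipeline (defocus hallucination, weighted layered rendering with radiance weighting, and mask-based fusion) but gives no implementation detail. For reference, under the standard thin-lens model the CoC diameter of an object at distance S, with focal length f, aperture diameter A, and focus distance S_f, is c = A * (|S - S_f| / S) * (f / (S_f - f)); the predicted defocus map plays the role of a per-pixel c. The following is a minimal, illustrative sketch of defocus-guided layered rendering with radiance weighting, not the D2F implementation: the function names, the disk kernels, the exponential highlight boost, and the soft layer assignment are all assumptions standing in for the paper's learned components.

```python
import numpy as np
from scipy.ndimage import convolve

def disk_kernel(radius):
    """Soft disk kernel approximating a circular CoC of the given radius."""
    r = int(np.ceil(radius))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    k = np.clip(radius + 0.5 - np.sqrt(x ** 2 + y ** 2), 0.0, 1.0)
    return k / k.sum()

def render_bokeh(image, defocus, radii=(0.0, 2.0, 4.0, 8.0), boost=3.0):
    """Illustrative defocus-guided layered rendering with radiance weighting.

    image   -- float32 RGB array in [0, 1], shape (H, W, 3)
    defocus -- predicted per-pixel blur radius, shape (H, W)
    radii   -- blur radius of each rendering layer (0 = in focus)
    boost   -- strength of the exponential highlight emphasis
    """
    # Radiance weighting: emphasize bright pixels so highlights spread
    # into visible CoC disks after blurring (a stand-in for the paper's
    # radiance virtualization).
    lum = image.mean(axis=-1)
    w = np.exp(boost * lum)

    layers = []
    for r in radii:
        if r == 0:
            layers.append(image.astype(np.float32))
            continue
        k = disk_kernel(r)
        num = np.stack([convolve(image[..., c] * w, k) for c in range(3)],
                       axis=-1)
        den = convolve(w, k)[..., None]
        layers.append(num / np.maximum(den, 1e-6))

    # Soft per-pixel assignment of the defocus value to the blur layers.
    radii_arr = np.asarray(radii, dtype=np.float32)
    weights = np.exp(-np.abs(defocus[..., None] - radii_arr))
    weights /= weights.sum(axis=-1, keepdims=True)

    out = sum(weights[..., i:i + 1] * layers[i] for i in range(len(radii)))
    return np.clip(out, 0.0, 1.0)
```

Two details matter in this sketch: normalizing the per-pixel layer weights to sum to one avoids brightness shifts where layers overlap, and dividing the radiance-weighted blur by the blurred weights keeps each layer a weighted average, so boosted highlights spread into bright disks without blowing out flat regions.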
Related papers
- Variable Aperture Bokeh Rendering via Customized Focal Plane Guidance [18.390543681127976]
The proposed method has achieved competitive state-of-the-art performance with only 4.4M parameters, which is much lighter than mainstream computational bokeh models.
arXiv Detail & Related papers (2024-10-18T12:04:23Z)
- Towards Real-World Focus Stacking with Deep Learning [97.34754533628322]
We introduce a new dataset consisting of 94 high-resolution bursts of raw images with focus bracketing.
This dataset is used to train the first deep learning algorithm for focus stacking capable of handling bursts of sufficient length for real-world applications.
arXiv Detail & Related papers (2023-11-29T17:49:33Z)
- Bokeh-Loss GAN: Multi-Stage Adversarial Training for Realistic Edge-Aware Bokeh [3.8811606213997587]
We tackle the problem of monocular bokeh synthesis, where we attempt to render a shallow depth of field image from a single all-in-focus image.
Unlike in DSLR cameras, this effect cannot be captured directly by mobile cameras due to the physical constraints of their apertures.
We propose a network-based approach that is capable of rendering realistic monocular bokeh from single image inputs.
arXiv Detail & Related papers (2022-08-25T20:57:07Z)
- Natural & Adversarial Bokeh Rendering via Circle-of-Confusion Predictive Network [25.319666328268116]
The bokeh effect is a shallow depth-of-field phenomenon that blurs the out-of-focus parts of a photograph.
We study a totally new problem, i.e., natural & adversarial bokeh rendering.
We propose a hybrid alternative by taking the respective advantages of data-driven and physical-aware methods.
arXiv Detail & Related papers (2021-11-25T09:08:45Z)
- Selective Light Field Refocusing for Camera Arrays Using Bokeh Rendering and Superresolution [27.944215174962995]
We propose a light field refocusing method to improve the imaging quality of camera arrays.
In our method, the unfocused region (bokeh) is rendered by using a depth-based anisotropic filter.
Our method achieves superior visual performance with acceptable computational cost as compared to other state-of-the-art methods.
arXiv Detail & Related papers (2021-08-09T10:19:21Z)
- Single image deep defocus estimation and its applications [82.93345261434943]
We train a deep neural network to classify image patches into one of 20 levels of blurriness.
The trained model determines the patch blurriness, which is then refined by applying an iterative weighted guided filter.
The result is a defocus map that carries the degree of blurriness for each pixel (a toy sketch of assembling such a map appears after this list).
arXiv Detail & Related papers (2021-07-30T06:18:16Z)
- Geometric Scene Refocusing [9.198471344145092]
We study the fine characteristics of images with a shallow depth-of-field in the context of focal stacks.
We identify in-focus pixels, dual-focus pixels, pixels that exhibit bokeh and spatially-varying blur kernels between focal slices.
We present a comprehensive algorithm for post-capture refocusing in a geometrically correct manner.
arXiv Detail & Related papers (2020-12-20T06:33:55Z)
- AIM 2020 Challenge on Rendering Realistic Bokeh [95.87775182820518]
This paper reviews the second AIM realistic bokeh effect rendering challenge.
The goal was to learn a realistic shallow focus technique using a large-scale EBB! bokeh dataset.
The participants had to render the bokeh effect from a single frame without any additional data from other cameras or sensors.
arXiv Detail & Related papers (2020-11-10T09:15:38Z)
- Defocus Blur Detection via Depth Distillation [64.78779830554731]
We introduce depth information into DBD for the first time.
In detail, we learn the defocus blur from ground truth and the depth distilled from a well-trained depth estimation network.
Our approach outperforms 11 other state-of-the-art methods on two popular datasets.
arXiv Detail & Related papers (2020-07-16T04:58:09Z)
- Rendering Natural Camera Bokeh Effect with Deep Learning [95.86933125733673]
Bokeh is an important artistic effect used to highlight the main object of interest in a photo.
Mobile cameras are unable to produce shallow depth-of-field photos due to a very small aperture diameter of their optics.
We propose to learn a realistic shallow focus technique directly from the photos produced by DSLR cameras.
arXiv Detail & Related papers (2020-06-10T07:28:06Z)
- Rapid Whole Slide Imaging via Learning-based Two-shot Virtual Autofocusing [57.90239401665367]
Whole slide imaging (WSI) is an emerging technology for digital pathology.
We propose the concept of virtual autofocusing, which does not rely on mechanical adjustment to conduct refocusing.
arXiv Detail & Related papers (2020-03-14T13:40:33Z)
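As a companion to the single-image defocus estimation entry above, here is a toy sketch of expanding per-patch blur-level predictions into a dense defocus map; the box-filter refinement is a crude, illustrative stand-in for that paper's iterative weighted guided filter, and all names are assumptions.

```python
import numpy as np

def patches_to_defocus_map(levels, patch_size, out_shape, iters=3):
    """Toy expansion of per-patch blur-level predictions (e.g. one of 20
    classes) into a dense per-pixel defocus map.

    levels     -- blur class per patch, shape (H // patch_size, W // patch_size)
    patch_size -- side length of the square patches
    out_shape  -- (H, W) of the target defocus map
    iters      -- rounds of 3x3 box averaging, a crude stand-in for the
                  iterative weighted guided filter used in the paper
    """
    h, w = out_shape
    # Nearest-patch upsampling: every pixel inherits its patch's class.
    dense = np.repeat(np.repeat(levels.astype(np.float32), patch_size, axis=0),
                      patch_size, axis=1)[:h, :w]
    for _ in range(iters):
        padded = np.pad(dense, 1, mode="edge")
        # 3x3 box average over the nine shifted views of the padded map.
        dense = sum(padded[i:i + h, j:j + w]
                    for i in range(3) for j in range(3)) / 9.0
    return dense
```

A map produced this way could then drive a layered renderer like the sketch following the main abstract.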
This list is automatically generated from the titles and abstracts of the papers in this site.