Direct Handheld Burst Imaging to Simulated Defocus
- URL: http://arxiv.org/abs/2207.04175v1
- Date: Sat, 9 Jul 2022 01:59:36 GMT
- Title: Direct Handheld Burst Imaging to Simulated Defocus
- Authors: Meng-Lin Wu, Venkata Ravi Kiran Dayana, Hau Hwang
- Abstract summary: A shallow depth-of-field image keeps the subject in focus while blurring the foreground and background.
We present a learning-based method to synthesize the defocus blur in shallow depth-of-field images from handheld bursts.
Our method does not suffer from artifacts due to inaccurate or ambiguous depth estimation, and it is well-suited to portrait photography.
- Score: 1.7403133838762446
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A shallow depth-of-field image keeps the subject in focus while blurring the foreground and background. This effect requires much larger lens apertures than those of smartphone cameras. Conventional methods acquire RGB-D images and blur image regions based on their depth. However, this approach is not suitable for reflective or transparent surfaces, or for finely detailed object silhouettes, where the depth value is inaccurate or ambiguous.
We present a learning-based method to synthesize the defocus blur in shallow depth-of-field images from handheld bursts acquired with a single small-aperture lens. Our deep learning model directly produces the shallow depth-of-field image, avoiding explicit depth-based blurring. The simulated aperture diameter equals the camera translation during burst acquisition. Our method does not suffer from artifacts due to inaccurate or ambiguous depth estimation, and it is well-suited to portrait photography.
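The last two claims have a classical analogue in synthetic aperture imaging that helps build intuition: if the burst frames are registered so that the focal plane coincides across frames and then simply averaged, points off that plane smear over their residual parallax, whose extent is the baseline swept by the camera, hence a simulated aperture equal to the camera translation. The sketch below illustrates this averaging view; it is not the paper's learned model, and the alignment inputs are assumed to come from a separate registration step.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def synthetic_aperture_average(frames, plane_shifts):
    """Average a handheld burst after registering the chosen focal plane.

    frames:       list of HxWx3 float arrays (the burst)
    plane_shifts: per-frame (dy, dx) image-space displacement of the focal
                  plane relative to the reference frame, e.g. obtained from
                  feature tracking (hypothetical alignment step, not from
                  the paper)
    """
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for frame, (dy, dx) in zip(frames, plane_shifts):
        # Cancel the focal plane's apparent motion: points on that plane
        # now coincide across frames and stay sharp, while points off it
        # retain a parallax proportional to the camera baseline and smear
        # into defocus when averaged.
        acc += nd_shift(frame, shift=(-dy, -dx, 0), order=1, mode='nearest')
    return acc / len(frames)
```

Note that no per-pixel depth value is ever queried: a reflection or a semi-transparent surface simply averages over its own parallax trajectory, which is presumably why the learned, depth-free formulation avoids the depth-ambiguity artifacts described above.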
Related papers
- Depth and DOF Cues Make A Better Defocus Blur Detector [27.33757097343283]
Defocus blur detection (DBD) separates in-focus and out-of-focus regions in an image.
Previous approaches often mistake in-focus homogeneous areas for defocus blur regions.
We propose an approach called D-DFFNet, which incorporates depth and DOF cues in an implicit manner.
arXiv Detail & Related papers (2023-06-20T07:03:37Z)
- Bokeh Rendering Based on Adaptive Depth Calibration Network [13.537088629080122]
Bokeh rendering is a popular technique used in photography to create an aesthetically pleasing effect.
Mobile phones are not able to capture natural shallow depth-of-field photos.
We propose a novel method for bokeh rendering using the Vision Transformer, a recent and powerful deep learning architecture.
arXiv Detail & Related papers (2023-02-21T16:33:51Z)
- End-to-end Learning for Joint Depth and Image Reconstruction from Diffracted Rotation [10.896567381206715]
We propose a novel end-to-end learning approach for depth from diffracted rotation.
Our approach requires a significantly less complex model and less training data, yet it is superior to existing methods in the task of monocular depth estimation.
arXiv Detail & Related papers (2022-04-14T16:14:37Z)
- Deep Depth from Focal Stack with Defocus Model for Camera-Setting Invariance [19.460887007137607]
We propose a learning-based depth-from-focus/defocus (DFF) method that takes a focal stack as input for estimating scene depth.
We show that our method is robust against a synthetic-to-real domain gap, and exhibits state-of-the-art performance.
arXiv Detail & Related papers (2022-02-26T04:21:08Z)
- Wild ToFu: Improving Range and Quality of Indirect Time-of-Flight Depth with RGB Fusion in Challenging Environments [56.306567220448684]
We propose a new learning-based end-to-end depth prediction network that takes noisy raw I-ToF signals as well as an RGB image.
We show more than 40% RMSE improvement on the final depth map compared to the baseline approach.
arXiv Detail & Related papers (2021-12-07T15:04:14Z)
- Single image deep defocus estimation and its applications [82.93345261434943]
We train a deep neural network to classify image patches into one of 20 levels of blurriness.
The trained model is used to estimate patch blurriness, which is then refined by applying an iterative weighted guided filter.
The result is a defocus map that carries the degree of blurriness for each pixel (see the sketch after this list).
arXiv Detail & Related papers (2021-07-30T06:18:16Z)
- A learning-based view extrapolation method for axial super-resolution [52.748944517480155]
Axial light field resolution refers to the ability to distinguish features at different depths by refocusing.
We propose a learning-based method to extrapolate novel views from axial volumes of sheared epipolar plane images.
arXiv Detail & Related papers (2021-03-11T07:22:13Z)
- Progressive Depth Learning for Single Image Dehazing [56.71963910162241]
Existing dehazing methods often ignore depth cues and fail in distant areas where heavier haze degrades visibility.
We propose a deep end-to-end model that iteratively estimates image depths and transmission maps.
Our approach benefits from explicitly modeling the relationship between image depth and transmission map, which is especially effective for distant hazy areas (see the sketch after this list).
arXiv Detail & Related papers (2021-02-21T05:24:18Z)
- Defocus Blur Detection via Depth Distillation [64.78779830554731]
We introduce depth information into DBD for the first time.
Specifically, we learn defocus blur from both the ground truth and depth distilled from a well-trained depth estimation network.
Our approach outperforms 11 other state-of-the-art methods on two popular datasets.
arXiv Detail & Related papers (2020-07-16T04:58:09Z)
- Rendering Natural Camera Bokeh Effect with Deep Learning [95.86933125733673]
Bokeh is an important artistic effect used to highlight the main object of interest in a photo.
Mobile cameras cannot produce shallow depth-of-field photos due to the very small aperture diameter of their optics.
We propose to learn a realistic shallow focus technique directly from the photos produced by DSLR cameras.
arXiv Detail & Related papers (2020-06-10T07:28:06Z)
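For the single-image defocus estimation entry above, the described pipeline (patch-wise blurriness classification followed by guided-filter refinement) can be sketched as follows. The patch classifier is a hypothetical stand-in for that paper's trained network, and a plain guided filter from opencv-contrib approximates the iterative weighted variant it describes.

```python
import numpy as np
import cv2  # requires opencv-contrib-python for cv2.ximgproc

def defocus_map(image_gray, classify_patch, patch=32, iters=3):
    """Coarse patch-wise blurriness map refined into a per-pixel defocus map.

    image_gray:     HxW float32 image in [0, 1]
    classify_patch: stand-in for the trained CNN; maps a patch to one of
                    20 blurriness levels (0 = sharp, 19 = most blurred)
    """
    h, w = image_gray.shape
    coarse = np.zeros((h, w), np.float32)
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            level = classify_patch(image_gray[y:y + patch, x:x + patch])
            coarse[y:y + patch, x:x + patch] = level / 19.0
    # Refine the blocky estimate with the image as guide so that defocus
    # boundaries snap to image edges (plain guided filter here; the paper
    # uses an iterative *weighted* variant).
    refined = coarse
    for _ in range(iters):
        refined = cv2.ximgproc.guidedFilter(image_gray, refined, 8, 1e-3)
    return refined
```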
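The "relationship between image depth and transmission map" mentioned in the dehazing entry is, in the standard atmospheric scattering model, an exponential link between the two. The closed-form sketch below shows why depth cues matter most in distant areas (transmission decays exponentially with depth); the paper's iterative network presumably learns a richer version of this coupling, and the constant scattering coefficient used here is an assumption.

```python
import numpy as np

def dehaze_with_depth(image, depth, airlight, beta=1.0, t_min=0.05):
    """Invert the atmospheric scattering model I = J * t + A * (1 - t),
    with transmission t(x) = exp(-beta * d(x)) derived from depth.

    image:    HxWx3 hazy image in [0, 1]
    depth:    HxW depth estimate (any consistent unit; beta absorbs scale)
    airlight: length-3 RGB atmospheric light A
    beta:     scattering coefficient (assumed constant over the scene)
    """
    t = np.exp(-beta * depth)[..., None]   # transmission falls off with depth
    t = np.clip(t, t_min, 1.0)             # floor t to avoid amplifying noise
    J = (image - airlight) / t + airlight  # recover the scene radiance
    return np.clip(J, 0.0, 1.0)
```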