$\text{DC}^2$: Dual-Camera Defocus Control by Learning to Refocus
- URL: http://arxiv.org/abs/2304.03285v1
- Date: Thu, 6 Apr 2023 17:59:58 GMT
- Title: $\text{DC}^2$: Dual-Camera Defocus Control by Learning to Refocus
- Authors: Hadi Alzayer, Abdullah Abuolaim, Leung Chun Chan, Yang Yang, Ying Chen Lou, Jia-Bin Huang, Abhishek Kar
- Abstract summary: We propose a system for defocus control that synthetically varies camera aperture and focus distance and renders arbitrary defocus effects.
Our key insight is to leverage a real-world smartphone camera dataset by using image refocus as a proxy task for learning to control defocus.
We demonstrate creative post-capture defocus control enabled by our method, including tilt-shift and content-based defocus effects.
- Score: 38.24734623691387
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Smartphone cameras today are increasingly approaching the versatility and
quality of professional cameras through a combination of hardware and software
advancements. However, fixed aperture remains a key limitation, preventing
users from controlling the depth of field (DoF) of captured images. At the same
time, many smartphones now have multiple cameras with different fixed apertures
- specifically, an ultra-wide camera with a wider field of view and deeper DoF,
and a higher-resolution primary camera with a shallower DoF. In this work, we
propose $\text{DC}^2$, a system for defocus control that synthetically varies
camera aperture and focus distance and renders arbitrary defocus effects by
fusing information from such a dual-camera system. Our key insight is to
leverage a real-world smartphone camera dataset by using image refocus as a proxy task for
learning to control defocus. Quantitative and qualitative evaluations on
real-world data demonstrate our system's efficacy, where we outperform the
state of the art on defocus deblurring, bokeh rendering, and image refocus.
Finally, we demonstrate creative post-capture defocus control enabled by our
method, including tilt-shift and content-based defocus effects.
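To make the idea of post-capture defocus control concrete, here is a toy 1-D sketch of depth-dependent synthetic blur, where each sample is averaged over a neighborhood whose radius grows with its distance from a chosen focal plane. The function name, `gain` parameter, and box-filter model are illustrative assumptions; the paper's system learns this mapping rather than using a fixed analytic blur.

```python
def refocus_1d(signal, depth, focus, gain=2.0):
    """Toy synthetic refocus on a 1-D signal: box-blur each sample
    with a radius proportional to its defocus |depth - focus|.
    Illustrative stand-in only; not the paper's learned model."""
    out = []
    for i, d in enumerate(depth):
        r = int(round(gain * abs(d - focus)))  # blur radius from defocus
        lo, hi = max(0, i - r), min(len(signal), i + r + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))  # box-filter mean
    return out
```

Moving `focus` away from a region's depth spreads its detail over neighboring samples, mimicking how shifting the focus distance defocuses content off the focal plane.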
Related papers
- Towards Real-World Focus Stacking with Deep Learning [97.34754533628322]
We introduce a new dataset consisting of 94 high-resolution bursts of raw images with focus bracketing.
This dataset is used to train the first deep learning algorithm for focus stacking capable of handling bursts of sufficient length for real-world applications.
arXiv Detail & Related papers (2023-11-29T17:49:33Z)
- Camera-Independent Single Image Depth Estimation from Defocus Blur [6.516967182213821]
We show how several camera-related parameters affect the defocus blur using optical physics equations.
We create a synthetic dataset which can be used to test the camera independent performance of depth from defocus blur models.
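The camera parameters mentioned above enter defocus blur through the standard thin-lens circle-of-confusion relation. The following sketch uses that well-known formula; the function name and millimeter units are my own choices, not from the paper.

```python
def coc_diameter_mm(f_mm, n_stop, focus_dist_mm, obj_dist_mm):
    """Thin-lens circle-of-confusion diameter on the sensor.

    f_mm: focal length; n_stop: f-number (aperture diameter = f/N);
    focus_dist_mm: distance the lens is focused at;
    obj_dist_mm: distance of the object being imaged.
    """
    aperture_mm = f_mm / n_stop
    return (aperture_mm * f_mm * abs(obj_dist_mm - focus_dist_mm)
            / (obj_dist_mm * (focus_dist_mm - f_mm)))
```

The formula makes the dependencies explicit: blur vanishes at the focal plane, grows with defocus distance, and scales inversely with the f-number, which is why the fixed small apertures of phone cameras yield deep DoF.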
arXiv Detail & Related papers (2023-11-21T23:14:42Z)
- Autofocus for Event Cameras [21.972388081563267]
We develop a novel event-based autofocus framework consisting of an event-specific focus measure called event rate (ER) and a robust search strategy called event-based golden search (EGS).
The experiments on this dataset and additional real-world scenarios demonstrated the superiority of our method over state-of-the-art approaches in terms of efficiency and accuracy.
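As context for the "golden search" strategy named above, here is a generic golden-section search that maximizes a unimodal focus measure over a focus range. This is a textbook algorithm offered as an illustrative stand-in; the paper's EGS operates on event streams and may differ in its details.

```python
import math

def golden_max(focus_measure, lo, hi, tol=1e-3):
    """Golden-section search for the focus position maximizing a
    unimodal focus measure over [lo, hi]. Generic sketch, not the
    paper's event-based variant."""
    inv_phi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    a, b = lo, hi
    c = b - inv_phi * (b - a)  # left interior probe
    d = a + inv_phi * (b - a)  # right interior probe
    while b - a > tol:
        if focus_measure(c) > focus_measure(d):
            b, d = d, c                  # maximum lies in [a, d]
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d                  # maximum lies in [c, b]
            d = a + inv_phi * (b - a)
    return (a + b) / 2
```

Golden-section search needs no derivatives and shrinks the bracket by a constant factor per probe, which suits noisy focus measures evaluated on live sensor data.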
arXiv Detail & Related papers (2022-03-23T10:46:33Z)
- Defocus Map Estimation and Deblurring from a Single Dual-Pixel Image [54.10957300181677]
We present a method that takes as input a single dual-pixel image and simultaneously estimates the image's defocus map and removes the defocus blur.
Our approach improves upon prior works for both defocus map estimation and blur removal, despite being entirely unsupervised.
arXiv Detail & Related papers (2021-10-12T00:09:07Z)
- An End-to-End Autofocus Camera for Iris on the Move [48.14011526385088]
In this paper, we introduce a novel rapid autofocus camera for active refocusing of the iris area of moving objects using a focus-tunable lens.
Our end-to-end computational algorithm can predict the best focus position from one single blurred image and generate a lens diopter control signal automatically.
The results demonstrate the advantages of our proposed camera for biometric perception in static and dynamic scenes.
arXiv Detail & Related papers (2021-06-29T03:00:39Z)
- Rendering Natural Camera Bokeh Effect with Deep Learning [95.86933125733673]
Bokeh is an important artistic effect used to highlight the main object of interest on the photo.
Mobile cameras are unable to produce shallow depth-of-field photos due to the very small aperture diameter of their optics.
We propose to learn a realistic shallow focus technique directly from the photos produced by DSLR cameras.
arXiv Detail & Related papers (2020-06-10T07:28:06Z)
- Defocus Deblurring Using Dual-Pixel Data [41.201653787083735]
Defocus blur arises in images that are captured with a shallow depth of field due to the use of a wide aperture.
We propose an effective defocus deblurring method that exploits data available on dual-pixel (DP) sensors found on most modern cameras.
arXiv Detail & Related papers (2020-05-01T10:38:00Z)
- Rapid Whole Slide Imaging via Learning-based Two-shot Virtual Autofocusing [57.90239401665367]
Whole slide imaging (WSI) is an emerging technology for digital pathology.
We propose the concept of virtual autofocusing, which does not rely on mechanical adjustment to refocus.
arXiv Detail & Related papers (2020-03-14T13:40:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.