Dual-Camera All-in-Focus Neural Radiance Fields
- URL: http://arxiv.org/abs/2504.16636v1
- Date: Wed, 23 Apr 2025 11:55:02 GMT
- Title: Dual-Camera All-in-Focus Neural Radiance Fields
- Authors: Xianrui Luo, Zijin Wu, Juewen Peng, Huiqiang Sun, Zhiguo Cao, Guosheng Lin
- Abstract summary: We present the first framework capable of synthesizing the all-in-focus neural radiance field (NeRF) from inputs without manual refocusing. We introduce the dual-camera from smartphones, where the ultra-wide camera has a wider depth-of-field (DoF) and the main camera possesses a higher resolution. The dual camera pair saves the high-fidelity details from the main camera and uses the ultra-wide camera's deep DoF as reference for all-in-focus restoration.
- Score: 54.19848043744996
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present the first framework capable of synthesizing an all-in-focus neural radiance field (NeRF) from inputs captured without manual refocusing. Without refocusing, the camera automatically locks focus on a fixed object across all views, and current NeRF methods, which typically rely on a single camera, fail due to the consistent defocus blur and the lack of a sharp reference. To restore the all-in-focus NeRF, we introduce the dual-camera setup of smartphones, where the ultra-wide camera has a wider depth-of-field (DoF) and the main camera has a higher resolution. The camera pair preserves the high-fidelity details from the main camera and uses the ultra-wide camera's deep DoF as a reference for all-in-focus restoration. To this end, we first apply spatial warping and color matching to align the two cameras, followed by a defocus-aware fusion module with learnable defocus parameters that predicts a defocus map and fuses the aligned pair. We also build a multi-view dataset of image pairs from the main and ultra-wide cameras of a smartphone. Extensive experiments on this dataset verify that our solution, termed DC-NeRF, produces high-quality all-in-focus novel views and compares favorably against strong baselines both quantitatively and qualitatively. We further show DoF applications of DC-NeRF with adjustable blur intensity and focal plane, including refocusing and split diopter.
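To make the alignment-and-fusion step concrete, here is a minimal sketch of how a defocus-aware blend between the two cameras could look. It is an illustration under assumed interfaces, not the paper's implementation: the network `DefocusFusionNet`, its layer widths, and the global color matching below are hypothetical stand-ins for DC-NeRF's defocus-aware fusion module and color matching, and the spatial warping that aligns the ultra-wide view to the main view is assumed to have happened upstream.

```python
import torch
import torch.nn as nn

class DefocusFusionNet(nn.Module):
    """Hypothetical defocus-aware fusion: predict a per-pixel defocus map
    from the concatenated pair and blend in the wide-DoF ultra-wide
    reference where the main camera is out of focus."""
    def __init__(self, ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),  # defocus map in [0, 1]
        )

    def forward(self, main_img, uw_aligned):
        x = torch.cat([main_img, uw_aligned], dim=1)   # (B, 6, H, W)
        defocus = self.encoder(x)                      # (B, 1, H, W)
        # Keep main-camera detail where it is sharp; fall back to the
        # ultra-wide reference where the predicted defocus is large.
        fused = (1.0 - defocus) * main_img + defocus * uw_aligned
        return fused, defocus

def match_color(uw_aligned, main_img, eps=1e-6):
    """Crude global color matching: map the ultra-wide image's per-channel
    statistics onto the main camera's (an assumption, not the paper's
    exact color-matching procedure)."""
    uw_mean = uw_aligned.mean(dim=(2, 3), keepdim=True)
    uw_std = uw_aligned.std(dim=(2, 3), keepdim=True)
    m_mean = main_img.mean(dim=(2, 3), keepdim=True)
    m_std = main_img.std(dim=(2, 3), keepdim=True)
    return (uw_aligned - uw_mean) / (uw_std + eps) * m_std + m_mean
```

In the full system, the fused all-in-focus views would then serve as sharp supervision for training the radiance field.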
Related papers
- fNeRF: High Quality Radiance Fields from Practical Cameras [13.168695239732703]
We propose a modification to the ray casting that leverages the optics of lenses to enhance scene reconstruction in the presence of defocus blur.
We show that the proposed model matches the defocus blur behavior of practical cameras more closely than pinhole models.
arXiv Detail & Related papers (2024-06-15T13:33:06Z)
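For intuition on what "leveraging the optics of lenses" in ray casting means in the fNeRF entry above, the sketch below replaces a single pinhole ray with a bundle of thin-lens rays that start on the aperture and converge at the in-focus point, so that out-of-focus geometry is averaged into defocus blur. The function name and sampling scheme are assumptions for illustration, not fNeRF's actual formulation.

```python
import numpy as np

def thin_lens_rays(origin, direction, focus_dist, aperture_radius, n_samples=8, rng=None):
    """Replace one pinhole ray with n_samples thin-lens rays.
    `direction` is assumed to be a unit vector; the aperture is the disk of
    radius `aperture_radius` around `origin`, perpendicular to `direction`."""
    rng = np.random.default_rng() if rng is None else rng
    # Point at the focus distance that the pixel's chief ray looks at;
    # every lens ray must pass through it to stay in focus there.
    focus_point = origin + focus_dist * direction
    # Uniformly sample offsets on the aperture disk (polar form).
    r = aperture_radius * np.sqrt(rng.uniform(size=n_samples))
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n_samples)
    # Two axes spanning the aperture plane, orthogonal to the view direction.
    up = np.array([0.0, 1.0, 0.0]) if abs(direction[1]) < 0.9 else np.array([1.0, 0.0, 0.0])
    u = np.cross(direction, up)
    u /= np.linalg.norm(u)
    v = np.cross(direction, u)
    offsets = r[:, None] * (np.cos(theta)[:, None] * u + np.sin(theta)[:, None] * v)
    origins = origin + offsets                            # rays start across the aperture
    dirs = focus_point - origins
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # all rays converge at the focus point
    return origins, dirs
```

Averaging the radiance gathered along these rays reproduces the blur of points away from the focal plane, which a pinhole model cannot.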
- Towards Real-World Focus Stacking with Deep Learning [97.34754533628322]
We introduce a new dataset consisting of 94 high-resolution bursts of raw images with focus bracketing.
This dataset is used to train the first deep learning algorithm for focus stacking capable of handling bursts of sufficient length for real-world applications.
arXiv Detail & Related papers (2023-11-29T17:49:33Z)
- Camera-Independent Single Image Depth Estimation from Defocus Blur [6.516967182213821]
We show how several camera-related parameters affect the defocus blur using optical physics equations.
We create a synthetic dataset which can be used to test the camera independent performance of depth from defocus blur models.
arXiv Detail & Related papers (2023-11-21T23:14:42Z)
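The "optical physics equations" relating camera parameters to defocus blur in the entry above boil down to the thin-lens circle of confusion; a compact version is sketched below. This is the standard geometric-optics approximation, not that paper's specific model, and the function name and unit conventions are assumptions for illustration.

```python
def circle_of_confusion(obj_dist, focus_dist, focal_len, f_number):
    """Thin-lens circle-of-confusion diameter on the sensor.
    obj_dist, focus_dist and focal_len share the same unit (e.g. mm);
    f_number is the dimensionless f-stop, so the aperture diameter is focal_len / f_number."""
    aperture = focal_len / f_number
    return aperture * abs(obj_dist - focus_dist) / obj_dist * focal_len / (focus_dist - focal_len)
```

Shorter focal lengths and narrower apertures shrink this blur circle, which is also the physical reason a smartphone's ultra-wide module has the deeper depth of field that DC-NeRF exploits.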
- $\text{DC}^2$: Dual-Camera Defocus Control by Learning to Refocus [38.24734623691387]
We propose a system for defocus control that synthetically varies camera aperture, focus distance and arbitrary defocus effects.
Our key insight is to leverage real-world smartphone camera dataset by using image refocus as a proxy task for learning to control defocus.
We demonstrate creative post-capture defocus control enabled by our method, including tilt-shift and content-based defocus effects.
arXiv Detail & Related papers (2023-04-06T17:59:58Z)
- Improving Fast Auto-Focus with Event Polarity [5.376511424333543]
This paper presents a new high-speed and accurate event-based focusing algorithm.
Experiments on the public event-based autofocus dataset (EAD) show the robustness of the model.
Precise focus within less than one depth of focus is achieved in 0.004 seconds on our self-built high-speed focusing platform.
arXiv Detail & Related papers (2023-03-15T13:36:13Z)
- Learning Dual-Pixel Alignment for Defocus Deblurring [73.80328094662976]
We propose a Dual-Pixel Alignment Network (DPANet) for defocus deblurring.
It is notably superior to state-of-the-art deblurring methods in reducing defocus blur while recovering visually plausible sharp structures and textures.
arXiv Detail & Related papers (2022-04-26T07:02:58Z)
- Defocus Map Estimation and Deblurring from a Single Dual-Pixel Image [54.10957300181677]
We present a method that takes a single dual-pixel image as input and simultaneously estimates the image's defocus map and a deblurred version of the image.
Our approach improves upon prior works for both defocus map estimation and blur removal, despite being entirely unsupervised.
arXiv Detail & Related papers (2021-10-12T00:09:07Z)
- An End-to-End Autofocus Camera for Iris on the Move [48.14011526385088]
In this paper, we introduce a novel rapid autofocus camera for active refocusing of the iris area of moving objects using a focus-tunable lens.
Our end-to-end computational algorithm can predict the best focus position from one single blurred image and generate a lens diopter control signal automatically.
The results demonstrate the advantages of our proposed camera for biometric perception in static and dynamic scenes.
arXiv Detail & Related papers (2021-06-29T03:00:39Z)
- Defocus Deblurring Using Dual-Pixel Data [41.201653787083735]
Defocus blur arises in images that are captured with a shallow depth of field due to the use of a wide aperture.
We propose an effective defocus deblurring method that exploits data available on dual-pixel (DP) sensors found on most modern cameras.
arXiv Detail & Related papers (2020-05-01T10:38:00Z)