DOF-GS: Adjustable Depth-of-Field 3D Gaussian Splatting for Refocusing, Defocus Rendering and Blur Removal
- URL: http://arxiv.org/abs/2405.17351v1
- Date: Mon, 27 May 2024 16:54:49 GMT
- Title: DOF-GS: Adjustable Depth-of-Field 3D Gaussian Splatting for Refocusing, Defocus Rendering and Blur Removal
- Authors: Yujie Wang, Praneeth Chakravarthula, Baoquan Chen
- Abstract summary: 3D Gaussian Splatting techniques have recently advanced 3D scene reconstruction and novel view synthesis, achieving high-quality real-time rendering.
These approaches are inherently limited by the underlying pinhole camera assumption in modeling the images and hence only work for All-in-Focus (AiF) sharp image inputs.
This severely affects their applicability in real-world scenarios where images often exhibit defocus blur due to the limited depth-of-field (DOF) of imaging devices.
We introduce DOF-GS, which allows for rendering adjustable DOF effects, removing defocus blur, and refocusing 3D scenes, all from multi-view images degraded by defocus blur.
- Score: 42.427021878005405
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D Gaussian Splatting-based techniques have recently advanced 3D scene reconstruction and novel view synthesis, achieving high-quality real-time rendering. However, these approaches are inherently limited by the underlying pinhole camera assumption in modeling the images and hence only work for All-in-Focus (AiF) sharp image inputs. This severely affects their applicability in real-world scenarios where images often exhibit defocus blur due to the limited depth-of-field (DOF) of imaging devices. Additionally, existing 3D Gaussian Splatting (3DGS) methods also do not support rendering of DOF effects. To address these challenges, we introduce DOF-GS that allows for rendering adjustable DOF effects, removing defocus blur as well as refocusing of 3D scenes, all from multi-view images degraded by defocus blur. To this end, we re-imagine the traditional Gaussian Splatting pipeline by employing a finite aperture camera model coupled with explicit, differentiable defocus rendering guided by the Circle-of-Confusion (CoC). The proposed framework provides for dynamic adjustment of DOF effects by changing the aperture and focal distance of the underlying camera model on-demand. It also enables rendering varying DOF effects of 3D scenes post-optimization, and generating AiF images from defocused training images. Furthermore, we devise a joint optimization strategy to further enhance details in the reconstructed scenes by jointly optimizing rendered defocused and AiF images. Our experimental results indicate that DOF-GS produces high-quality sharp all-in-focus renderings conditioned on inputs compromised by defocus blur, with the training process incurring only a modest increase in GPU memory consumption. We further demonstrate the applications of the proposed method for adjustable defocus rendering and refocusing of the 3D scene from input images degraded by defocus blur.
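For context, the Circle-of-Confusion that guides the defocus rendering follows the standard thin-lens relation. Below is a minimal Python sketch of that relation; the function name and the numeric example are illustrative, not taken from the paper's code.

```python
import numpy as np

def coc_diameter(depth, focus_dist, focal_len, aperture):
    """Thin-lens Circle-of-Confusion diameter (same units as focal_len).

    depth:      scene depth of a point from the lens
    focus_dist: distance at which the camera is focused
    focal_len:  lens focal length
    aperture:   aperture diameter (focal_len / f-number)
    """
    return aperture * focal_len * np.abs(depth - focus_dist) / (
        depth * (focus_dist - focal_len))

# Points on the focal plane render sharp; the CoC grows away from it.
depths = np.array([0.5, 1.0, 2.0, 4.0])  # metres
print(coc_diameter(depths, focus_dist=1.0, focal_len=0.05, aperture=0.025))
```

Varying `focus_dist` and `aperture` in such a model corresponds to the on-demand focal-distance and aperture adjustment the abstract describes.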
Related papers
- EVER: Exact Volumetric Ellipsoid Rendering for Real-time View Synthesis [72.53316783628803]
We present Exact Volumetric Ellipsoid Rendering (EVER), a method for real-time differentiable emission-only volume rendering.
Unlike recent rasterization-based approaches such as 3D Gaussian Splatting (3DGS), our primitive-based representation allows for exact volume rendering.
We show that our method is more accurate, with fewer blending issues, than 3DGS and follow-up work on view-consistent rendering.
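For intuition, exact volume rendering of a constant-density ellipsoid reduces to an analytic ray-primitive intersection: the alpha contribution follows from the exact chord length rather than a 2D splat. A hedged sketch under that reading (not code from the paper):

```python
import numpy as np

def ray_ellipsoid_segment(o, d, center, R, scales):
    """Length of the ray segment inside an ellipsoid (0 if missed).

    o, d:    ray origin and unit direction (world space)
    center:  ellipsoid center; R: 3x3 rotation; scales: per-axis radii
    """
    # Map the ray into the space where the ellipsoid is a unit sphere.
    M = R.T / scales[:, None]            # world -> unit-sphere coords
    op, dp = M @ (o - center), M @ d
    a, b, c = dp @ dp, 2 * op @ dp, op @ op - 1.0
    disc = b * b - 4 * a * c
    if disc <= 0:
        return 0.0
    t0, t1 = (-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a)
    # t parameterizes world distance along d, so this is a world length.
    return max(t1, 0.0) - max(t0, 0.0)

length = ray_ellipsoid_segment(np.zeros(3), np.array([0, 0, 1.0]),
                               np.array([0, 0, 3.0]), np.eye(3),
                               np.array([1.0, 1.0, 0.5]))
alpha = 1.0 - np.exp(-2.0 * length)      # exact alpha for density sigma = 2.0
print(length, alpha)
```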
arXiv Detail & Related papers (2024-10-02T17:59:09Z) - Depth Estimation Based on 3D Gaussian Splatting Siamese Defocus [14.354405484663285]
We propose a self-supervised framework based on 3D Gaussian Splatting and Siamese networks for depth estimation from defocus blur.
The proposed framework has been validated on both artificially synthesized and real blurred datasets.
arXiv Detail & Related papers (2024-09-18T21:36:37Z) - Dynamic Neural Radiance Field From Defocused Monocular Video [15.789775912053507]
We propose D2RF, the first dynamic NeRF method designed to restore sharp novel views from defocused monocular videos.
We introduce layered Depth-of-Field (DoF) volume rendering to model the defocus blur and reconstruct a sharp NeRF supervised by defocused views.
Our method outperforms existing approaches in synthesizing all-in-focus novel views from defocus blur while maintaining spatial-temporal consistency in the scene.
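Layered DoF rendering can be pictured as slicing the scene into depth layers, blurring each by a kernel that grows with its distance from the focal plane, and compositing back-to-front. The toy sketch below illustrates only the idea; the layer edges, the linear blur model, and all names are assumptions, not D2RF's actual formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def layered_dof(rgb, depth, edges, focus, strength):
    """Blur each depth layer by its distance from the focal plane and
    composite back-to-front with the 'over' operator.

    rgb: HxWx3 all-in-focus image; depth: HxW depth map
    edges: layer boundaries near-to-far; focus: in-focus depth
    """
    out = np.zeros_like(rgb)
    layers = list(zip(edges[:-1], edges[1:]))
    for near, far in reversed(layers):                    # far layers first
        mask = ((depth >= near) & (depth < far)).astype(rgb.dtype)
        sigma = strength * abs(0.5 * (near + far) - focus)  # CoC-like radius
        color = gaussian_filter(rgb * mask[..., None], (sigma, sigma, 0))
        alpha = gaussian_filter(mask, sigma)              # soft layer coverage
        out = color + (1.0 - alpha[..., None]) * out      # 'over' compositing
    return out
```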
arXiv Detail & Related papers (2024-07-08T03:46:56Z) - fNeRF: High Quality Radiance Fields from Practical Cameras [13.168695239732703]
We propose a modification to ray casting that leverages lens optics to enhance scene reconstruction in the presence of defocus blur.
We show that the proposed model matches the defocus blur behavior of practical cameras more closely than pinhole models.
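The classic way to make ray casting lens-aware is to trace many rays per pixel, each originating on the aperture disk and converging at the pixel's point on the focal plane; averaging their radiance reproduces defocus. A schematic sketch of that construction (camera looks down +z; names illustrative, not the paper's implementation):

```python
import numpy as np

def thin_lens_rays(pixel_dir, aperture_radius, focus_dist, n=16, rng=None):
    """Replace one pinhole ray with n rays through a thin lens.

    pixel_dir: unit direction of the pinhole ray (camera space, +z forward)
    Returns (origins, directions); averaging their radiance gives defocus.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # The point all lens rays converge on: where the pinhole ray
    # meets the focal plane at z = focus_dist.
    focus_pt = pixel_dir * (focus_dist / pixel_dir[2])
    r = aperture_radius * np.sqrt(rng.random(n))   # uniform on a disk
    phi = 2 * np.pi * rng.random(n)
    origins = np.stack([r * np.cos(phi), r * np.sin(phi), np.zeros(n)], 1)
    dirs = focus_pt[None] - origins
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return origins, dirs
```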
arXiv Detail & Related papers (2024-06-15T13:33:06Z) - BAGS: Blur Agnostic Gaussian Splatting through Multi-Scale Kernel Modeling [32.493592776662005]
We analyze the robustness of Gaussian-Splatting-based methods against various types of image blur.
We propose Blur Agnostic Gaussian Splatting (BAGS) to address this issue.
BAGS introduces additional 2D modeling capacities such that a 3D-consistent and high quality scene can be reconstructed despite image-wise blur.
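One common realization of such 2D modeling capacity is a learnable per-view blur kernel convolved with the sharp render before the photometric loss, so the kernel absorbs image-wise blur while the 3D scene stays sharp. A minimal PyTorch sketch in that spirit (not BAGS's actual multi-scale architecture):

```python
import torch
import torch.nn.functional as F

class PerViewBlur(torch.nn.Module):
    """One learnable k x k kernel per training view; softmax keeps it a PSF."""
    def __init__(self, num_views, k=9):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(num_views, k * k))
        self.k = k

    def forward(self, render, view_idx):
        # render: 1x3xHxW sharp image from the 3D model
        kernel = F.softmax(self.logits[view_idx], -1).view(1, 1, self.k, self.k)
        kernel = kernel.repeat(3, 1, 1, 1)          # same PSF per channel
        return F.conv2d(render, kernel, padding=self.k // 2, groups=3)

# loss = F.l1_loss(blur(render, i), blurry_photo)  # gradients reach the scene
```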
arXiv Detail & Related papers (2024-03-07T22:21:08Z) - VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction [59.40711222096875]
We present VastGaussian, the first method for high-quality reconstruction and real-time rendering on large scenes based on 3D Gaussian Splatting.
Our approach outperforms existing NeRF-based methods and achieves state-of-the-art results on multiple large scene datasets.
arXiv Detail & Related papers (2024-02-27T11:40:50Z) - ExBluRF: Efficient Radiance Fields for Extreme Motion Blurred Images [58.24910105459957]
We present ExBluRF, a novel view synthesis method for extreme motion blurred images.
Our approach consists of two main components: 6-DOF camera trajectory-based motion blur formulation and voxel-based radiance fields.
Compared with the existing works, our approach restores much sharper 3D scenes with the order of 10 times less training time and GPU memory consumption.
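The trajectory-based blur formulation amounts to averaging sharp renders at camera poses sampled along the exposure's 6-DOF path, then comparing the synthetic blur against the captured photo. A schematic sketch, where render_fn and the poses are stand-ins and the linear pose interpolation is a simplification (a real implementation interpolates on SE(3)):

```python
import torch

def motion_blurred_render(render_fn, pose_a, pose_b, n=8):
    """Average sharp renders along a linearized exposure trajectory.

    render_fn: pose -> HxWx3 image (the sharp scene model)
    pose_a/b:  start/end camera poses as 4x4 matrices
    """
    frames = []
    for t in torch.linspace(0.0, 1.0, n):
        pose_t = (1 - t) * pose_a + t * pose_b   # crude; see note above
        frames.append(render_fn(pose_t))
    return torch.stack(frames).mean(0)           # the synthetic blurry image

# loss = (motion_blurred_render(f, Ta, Tb) - blurry_photo).abs().mean()
```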
arXiv Detail & Related papers (2023-09-16T11:17:25Z) - NeRFocus: Neural Radiance Field for 3D Synthetic Defocus [3.7767536855440462]
This paper proposes a novel thin-lens-imaging-based NeRF framework that can directly render various 3D defocus effects.
NeRFocus can achieve various 3D defocus effects with adjustable camera pose, focus distance, and aperture size.
arXiv Detail & Related papers (2022-03-10T06:59:10Z) - Single image deep defocus estimation and its applications [82.93345261434943]
We train a deep neural network to classify image patches into one of 20 levels of blurriness.
The trained model is used to determine the patch blurriness which is then refined by applying an iterative weighted guided filter.
The result is a defocus map that carries the information of the degree of blurriness for each pixel.
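That pipeline can be approximated as: slide a window over the image, let a classifier assign each patch one of 20 blurriness levels, and average overlapping predictions into a per-pixel map. In the sketch below, classify_patch is a crude Laplacian-variance stand-in for the trained network, and its scale factor is invented for illustration.

```python
import numpy as np

def classify_patch(p):
    """Stand-in for the trained CNN: sharper patches have higher
    Laplacian variance (the factor 100 is an arbitrary toy scale)."""
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4 * p[1:-1, 1:-1])
    return int(np.clip(19 - 100 * lap.var(), 0, 19))

def defocus_map(gray, patch=32, stride=16):
    """Per-pixel blurriness in [0, 1] from patch-level classification."""
    h, w = gray.shape
    out, hits = np.zeros((h, w)), np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            level = classify_patch(gray[y:y + patch, x:x + patch])
            out[y:y + patch, x:x + patch] += level / 19.0
            hits[y:y + patch, x:x + patch] += 1
    # The paper refines this map with an iterative weighted guided
    # filter; plain averaging of overlaps is used here for brevity.
    return out / np.maximum(hits, 1)
```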
arXiv Detail & Related papers (2021-07-30T06:18:16Z) - Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution [51.274657266928315]
We propose a PSF-aware plug-and-play deep network that takes the aberrant image and a PSF map as input and produces the latent high-quality image by incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
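The PSF-aware interface can be sketched as a network that consumes the aberrant image concatenated channel-wise with a per-pixel PSF descriptor map. The tiny residual CNN below illustrates only that input contract; the architecture and the 8-channel PSF encoding are assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

class PSFAwareDeconv(nn.Module):
    """Toy network: concatenate the aberrant RGB image with a per-pixel
    PSF descriptor map and regress the latent sharp image."""
    def __init__(self, psf_channels=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + psf_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, image, psf_map):
        # Residual prediction: the output corrects the aberrant input.
        return image + self.net(torch.cat([image, psf_map], dim=1))

x = torch.rand(1, 3, 64, 64)    # aberrant image
psf = torch.rand(1, 8, 64, 64)  # per-pixel PSF descriptor (assumed encoding)
print(PSFAwareDeconv()(x, psf).shape)
```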
arXiv Detail & Related papers (2021-04-07T12:00:38Z)