NeRFocus: Neural Radiance Field for 3D Synthetic Defocus
- URL: http://arxiv.org/abs/2203.05189v1
- Date: Thu, 10 Mar 2022 06:59:10 GMT
- Title: NeRFocus: Neural Radiance Field for 3D Synthetic Defocus
- Authors: Yinhuai Wang, Shuzhou Yang, Yujie Hu and Jian Zhang
- Abstract summary: This paper proposes a novel thin-lens-imaging-based NeRF framework that can directly render various 3D defocus effects.
NeRFocus can achieve various 3D defocus effects with adjustable camera pose, focus distance, and aperture size.
- Score: 3.7767536855440462
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural radiance fields (NeRF) bring a new wave of 3D interactive
experiences. However, as an important part of immersive experiences, defocus
effects have not been fully explored within NeRF. Some recent NeRF-based
methods generate 3D defocus effects in a post-process fashion using multiplane
technology, but these are either time-consuming or memory-consuming. This paper
proposes a novel thin-lens-imaging-based NeRF framework that can directly
render various 3D defocus effects, dubbed NeRFocus.
Unlike a pinhole, a thin lens refracts the rays emitted by a scene point, so
the point's image on the sensor plane is spread into a circle of confusion
(CoC). A direct solution that samples enough rays to approximate this process
is computationally expensive. Instead, we invert the thin-lens imaging to
explicitly model the beam path for each point on the sensor plane, generalize
this paradigm to the beam path of each pixel, and then use frustum-based
volume rendering to render each pixel's beam path. We further
design an efficient probabilistic training (p-training) strategy to simplify
the training process vastly. Extensive experiments demonstrate that our
NeRFocus can achieve various 3D defocus effects with adjustable camera pose,
focus distance, and aperture size. Existing NeRF can be regarded as a special
case of NeRFocus in which the aperture size is set to zero, rendering large
depth-of-field images. Despite these added capabilities, NeRFocus does not
sacrifice NeRF's original performance (e.g., training and inference time,
parameter consumption, rendering quality), which implies its great potential
for broader application and further improvement.
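As a rough illustration of the circle of confusion the abstract builds on, the standard thin-lens geometry gives the CoC diameter in closed form. This is a minimal sketch of textbook optics, not the paper's code; the function name and parameter naming are ours:

```python
def coc_diameter(depth, focus_dist, aperture, focal_len):
    """Circle-of-confusion diameter on the sensor for a scene point at
    `depth`, under the standard thin-lens model (all lengths in the same
    unit). `aperture` is the lens aperture diameter, `focus_dist` the
    in-focus object distance, `focal_len` the lens focal length."""
    # A point exactly at the focus distance images to a single point (CoC = 0);
    # the blur circle grows with aperture and with distance from the focal plane.
    return (aperture * abs(depth - focus_dist) / depth
            * focal_len / (focus_dist - focal_len))
```

Setting `aperture` to zero recovers the pinhole model, matching the abstract's remark that vanilla NeRF is the zero-aperture special case.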
Related papers
- FBINeRF: Feature-Based Integrated Recurrent Network for Pinhole and Fisheye Neural Radiance Fields [13.014637091971842]
We propose adaptive GRUs with a flexible bundle-adjustment method suited to radial distortions.
We show high-fidelity results for both pinhole-camera and fisheye-camera NeRFs.
arXiv Detail & Related papers (2024-08-03T23:11:20Z) - fNeRF: High Quality Radiance Fields from Practical Cameras [13.168695239732703]
We propose a modification to ray casting that leverages the optics of lenses to enhance scene reconstruction in the presence of defocus blur.
We show that the proposed model matches the defocus blur behavior of practical cameras more closely than pinhole models.
arXiv Detail & Related papers (2024-06-15T13:33:06Z) - DOF-GS: Adjustable Depth-of-Field 3D Gaussian Splatting for Refocusing, Defocus Rendering and Blur Removal [42.427021878005405]
3D Gaussian Splatting techniques have recently advanced 3D scene reconstruction and novel view synthesis, achieving high-quality real-time rendering.
These approaches are inherently limited by the underlying pinhole camera assumption in modeling the images and hence only work for All-in-Focus (AiF) sharp image inputs.
This severely affects their applicability in real-world scenarios where images often exhibit defocus blur due to the limited depth-of-field (DOF) of imaging devices.
We introduce DOF-GS, which allows rendering adjustable DOF effects, removing defocus blur, and refocusing 3D scenes.
arXiv Detail & Related papers (2024-05-27T16:54:49Z) - Neural Radiance Fields with Torch Units [19.927273454898295]
Learning-based 3D reconstruction methods are widely used in industrial applications.
In this paper, we propose a novel inference pattern that encourages a single camera ray to carry more contextual information.
In summary, like a torchlight, a ray in our method renders a patch of the image; we therefore call the proposed method Torch-NeRF.
arXiv Detail & Related papers (2024-04-03T10:08:55Z) - PERF: Panoramic Neural Radiance Field from a Single Panorama [109.31072618058043]
PERF is a novel view synthesis framework that trains a panoramic neural radiance field from a single panorama.
We propose a novel collaborative RGBD inpainting method and a progressive inpainting-and-erasing method to lift up a 360-degree 2D scene to a 3D scene.
Our PERF can be widely used for real-world applications, such as panorama-to-3D, text-to-3D, and 3D scene stylization applications.
arXiv Detail & Related papers (2023-10-25T17:59:01Z) - Multi-Space Neural Radiance Fields [74.46513422075438]
Existing Neural Radiance Fields (NeRF) methods struggle in scenes containing reflective objects.
We propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces.
Our approach significantly outperforms the existing single-space NeRF methods for rendering high-quality scenes.
arXiv Detail & Related papers (2023-05-07T13:11:07Z) - NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion [107.67277084886929]
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views.
arXiv Detail & Related papers (2023-02-20T17:12:00Z) - AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training [100.33713282611448]
We conduct the first pilot study on training NeRF with high-resolution data.
We propose the corresponding solutions, including marrying the multilayer perceptron with convolutional layers.
Our approach is nearly free, introducing no obvious training/testing costs.
arXiv Detail & Related papers (2022-11-17T17:22:28Z) - PDRF: Progressively Deblurring Radiance Field for Fast and Robust Scene Reconstruction from Blurry Images [75.87721926918874]
We present Progressively Deblurring Radiance Field (PDRF), a novel approach to efficiently reconstruct high-quality radiance fields from blurry images.
We show that PDRF is 15x faster than previous state-of-the-art scene reconstruction methods.
arXiv Detail & Related papers (2022-08-17T03:42:29Z) - AR-NeRF: Unsupervised Learning of Depth and Defocus Effects from Natural
Images with Aperture Rendering Neural Radiance Fields [23.92262483956057]
Fully unsupervised 3D representation learning has gained attention owing to its advantages in data collection.
We propose an aperture rendering NeRF (AR-NeRF) which can utilize viewpoint and defocus cues in a unified manner.
We demonstrate the utility of AR-NeRF for unsupervised learning of the depth and defocus effects.
arXiv Detail & Related papers (2022-06-13T12:41:59Z) - Single image deep defocus estimation and its applications [82.93345261434943]
We train a deep neural network to classify image patches into one of 20 levels of blurriness.
The trained model is used to determine patch blurriness, which is then refined by applying an iterative weighted guided filter.
The result is a defocus map that carries the information of the degree of blurriness for each pixel.
arXiv Detail & Related papers (2021-07-30T06:18:16Z)
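The patch-blurriness-to-defocus-map pipeline described above can be caricatured with a much simpler proxy. Everything here is our illustrative substitution: Laplacian variance stands in for the paper's trained 20-level classifier, and the guided-filter refinement is omitted entirely.

```python
import numpy as np

def defocus_map(gray, patch=16):
    """Toy per-patch blurriness map for a 2D grayscale array.
    Low Laplacian variance within a patch is taken as a sign of blur."""
    h, w = gray.shape
    lap = np.zeros_like(gray)
    # 5-point discrete Laplacian on the interior (borders left at zero).
    lap[1:-1, 1:-1] = (gray[:-2, 1:-1] + gray[2:, 1:-1] +
                       gray[1:-1, :-2] + gray[1:-1, 2:] -
                       4.0 * gray[1:-1, 1:-1])
    rows, cols = h // patch, w // patch
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            block = lap[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            out[i, j] = block.var()  # low variance -> blurrier patch
    return out
```

A textureless (fully defocused) region scores zero, while a high-frequency region such as a checkerboard scores high, which is the ordering the paper's classifier produces at much finer granularity.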
This list is automatically generated from the titles and abstracts of the papers in this site.