ConsistentNeRF: Enhancing Neural Radiance Fields with 3D Consistency for
Sparse View Synthesis
- URL: http://arxiv.org/abs/2305.11031v1
- Date: Thu, 18 May 2023 15:18:01 GMT
- Title: ConsistentNeRF: Enhancing Neural Radiance Fields with 3D Consistency for
Sparse View Synthesis
- Authors: Shoukang Hu and Kaichen Zhou and Kaiyu Li and Longhui Yu and Lanqing
Hong and Tianyang Hu and Zhenguo Li and Gim Hee Lee and Ziwei Liu
- Abstract summary: We propose ConsistentNeRF, a method that leverages depth information to regularize both multi-view and single-view 3D consistency among pixels.
Our approach can considerably enhance model performance in sparse view conditions, achieving improvements of up to 94% in PSNR, 76% in SSIM, and 31% in LPIPS.
- Score: 99.06490355990354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Fields (NeRF) has demonstrated remarkable 3D reconstruction
capabilities with dense view images. However, its performance significantly
deteriorates under sparse view settings. We observe that learning the 3D
consistency of pixels among different views is crucial for improving
reconstruction quality in such cases. In this paper, we propose ConsistentNeRF,
a method that leverages depth information to regularize both multi-view and
single-view 3D consistency among pixels. Specifically, ConsistentNeRF employs
depth-derived geometry information and a depth-invariant loss to concentrate on
pixels that exhibit 3D correspondence and maintain consistent depth
relationships. Extensive experiments on recent representative works reveal that
our approach can considerably enhance model performance in sparse view
conditions, achieving improvements of up to 94% in PSNR, 76% in SSIM, and 31%
in LPIPS compared to the vanilla baselines across various benchmarks, including
DTU, NeRF Synthetic, and LLFF.
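The abstract gives no implementation details, but the depth-invariant loss it mentions can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration only: the MiDaS-style median/scale normalisation, the L1 comparison, and the `mask` of 3D-consistent pixels (e.g. obtained by reprojecting depths between views) are not taken from the authors' code.

```python
import torch

def normalize_depth(d, eps=1e-6):
    # Remove global scale and shift (median/mean-absolute-deviation
    # normalisation), so only relative depth relationships are compared.
    t = d.median()
    s = (d - t).abs().mean()
    return (d - t) / (s + eps)

def depth_invariant_loss(pred_depth, ref_depth, mask):
    """L1 loss between normalised depths, restricted to pixels whose 3D
    correspondence across views is considered reliable (mask == True).

    pred_depth, ref_depth: (N,) depths for the sampled rays
    mask: (N,) bool tensor flagging pixels with a 3D correspondence
    """
    if mask.sum() == 0:
        return pred_depth.new_zeros(())
    p = normalize_depth(pred_depth[mask])
    r = normalize_depth(ref_depth[mask])
    return (p - r).abs().mean()
```

In a training loop, a term of this kind would be added with some weight to the usual photometric rendering loss on the sparse input views; how the reference depths and the consistency mask are actually produced is specific to the paper's pipeline.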
Related papers
- Towards Degradation-Robust Reconstruction in Generalizable NeRF [58.33351079982745]
Generalizable Radiance Field (GNeRF) across scenes has been proven to be an effective way to avoid per-scene optimization.
There has been limited research on the robustness of GNeRFs to different types of degradation present in the source images.
arXiv Detail & Related papers (2024-11-18T16:13:47Z) - PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting [54.7468067660037]
PF3plat sets a new state-of-the-art across all benchmarks, supported by comprehensive ablation studies validating our design choices.
Our framework capitalizes on fast speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS.
arXiv Detail & Related papers (2024-10-29T15:28:15Z) - NeRSP: Neural 3D Reconstruction for Reflective Objects with Sparse Polarized Images [62.752710734332894]
NeRSP is a Neural 3D reconstruction technique for Reflective surfaces with Sparse Polarized images.
We derive photometric and geometric cues from the polarimetric image formation model and multiview azimuth consistency.
We achieve the state-of-the-art surface reconstruction results with only 6 views as input.
arXiv Detail & Related papers (2024-06-11T09:53:18Z) - GANeRF: Leveraging Discriminators to Optimize Neural Radiance Fields [12.92658687936068]
We take advantage of generative adversarial networks (GANs) to produce realistic images and use them to enhance realism in 3D scene reconstruction with NeRFs.
We learn the patch distribution of a scene using an adversarial discriminator, which provides feedback to the radiance field reconstruction.
Rendering artifacts are repaired directly in the underlying 3D representation by imposing multi-view path rendering constraints.
arXiv Detail & Related papers (2023-06-09T17:12:35Z) - NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from
3D-aware Diffusion [107.67277084886929]
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views.
arXiv Detail & Related papers (2023-02-20T17:12:00Z) - High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization [51.878078860524795]
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views.
Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
arXiv Detail & Related papers (2022-11-28T18:59:52Z) - Deblurred Neural Radiance Field with Physical Scene Priors [6.128295038453101]
This paper proposes a DP-NeRF framework for blurred images, which is constrained with two physical priors.
We present extensive experimental results for synthetic and real scenes with two types of blur: camera motion blur and defocus blur.
arXiv Detail & Related papers (2022-11-22T06:40:53Z) - Learning Neural Radiance Fields from Multi-View Geometry [1.1011268090482573]
We present a framework, called MVG-NeRF, that combines Multi-View Geometry algorithms and Neural Radiance Fields (NeRF) for image-based 3D reconstruction.
NeRF has revolutionized the field of implicit 3D representations, mainly due to a differentiable rendering formulation that enables high-quality and geometry-aware novel view synthesis.
arXiv Detail & Related papers (2022-10-24T08:53:35Z) - 360FusionNeRF: Panoramic Neural Radiance Fields with Joint Guidance [6.528382036284374]
We present a method to synthesize novel views from a single $360^\circ$ panorama image based on the neural radiance field (NeRF).
We propose 360FusionNeRF, a semi-supervised learning framework where we introduce geometric supervision and semantic consistency to guide the training process.
arXiv Detail & Related papers (2022-09-28T17:30:53Z)