RaFE: Generative Radiance Fields Restoration
- URL: http://arxiv.org/abs/2404.03654v2
- Date: Sun, 7 Apr 2024 07:20:31 GMT
- Title: RaFE: Generative Radiance Fields Restoration
- Authors: Zhongkai Wu, Ziyu Wan, Jing Zhang, Jing Liao, Dong Xu
- Abstract summary: NeRF (Neural Radiance Fields) has demonstrated tremendous potential in novel view synthesis and 3D reconstruction.
Previous methods for NeRF restoration are tailored to specific degradation types, ignoring the generality of restoration.
We propose a generic radiance fields restoration pipeline, named RaFE, which applies to various types of degradations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: NeRF (Neural Radiance Fields) has demonstrated tremendous potential in novel view synthesis and 3D reconstruction, but its performance is sensitive to input image quality: it struggles to achieve high-fidelity rendering when provided with low-quality, sparse input viewpoints. Previous methods for NeRF restoration are tailored to specific degradation types, ignoring the generality of restoration. To overcome this limitation, we propose a generic radiance fields restoration pipeline, named RaFE, which applies to various types of degradations, such as low resolution, blurriness, noise, compression artifacts, or their combinations. Our approach leverages off-the-shelf 2D restoration methods to recover the multi-view images individually. Instead of reconstructing a blurred NeRF by averaging the resulting inconsistencies, we introduce a novel approach that uses Generative Adversarial Networks (GANs) for NeRF generation to better accommodate the geometric and appearance inconsistencies present in the restored multi-view images. Specifically, we adopt a two-level tri-plane architecture: the coarse level remains fixed to represent the low-quality NeRF, while a fine-level residual tri-plane, added to the coarse level, is modeled as a distribution with a GAN to capture potential variations in restoration. We validate RaFE on both synthetic and real cases across various restoration tasks, demonstrating superior performance in both quantitative and qualitative evaluations and surpassing other 3D restoration methods specific to a single task. Please see our project website: https://zkaiwu.github.io/RaFE-Project/.
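The two-level tri-plane design in the abstract can be sketched as follows. This is a minimal, hypothetical illustration (not the authors' code): each tri-plane stores three 2D feature grids, a 3D point is featurized by projecting onto the three planes and summing the lookups, and the restored field is the frozen coarse tri-plane plus a sampled fine-level residual. The GAN generator is stood in for by a single randomly sampled residual; names like `sample_plane` and `query_triplane` are assumptions for exposition.

```python
import numpy as np

def sample_plane(plane, u, v):
    # Nearest-neighbor lookup on a feature plane of shape (R, R, C),
    # with plane coordinates u, v in [-1, 1].
    R = plane.shape[0]
    i = np.clip(int((u * 0.5 + 0.5) * (R - 1)), 0, R - 1)
    j = np.clip(int((v * 0.5 + 0.5) * (R - 1)), 0, R - 1)
    return plane[i, j]

def query_triplane(planes, xyz):
    # Project xyz in [-1, 1]^3 onto the XY, XZ, YZ planes and sum features.
    x, y, z = xyz
    return (sample_plane(planes["xy"], x, y)
            + sample_plane(planes["xz"], x, z)
            + sample_plane(planes["yz"], y, z))

def query_two_level(coarse, fine_residual, xyz):
    # Frozen coarse tri-plane (the low-quality NeRF) plus a generated
    # fine-level residual tri-plane, mirroring the two-level architecture.
    return query_triplane(coarse, xyz) + query_triplane(fine_residual, xyz)

rng = np.random.default_rng(0)
R, C = 32, 8
coarse = {k: rng.normal(size=(R, R, C)) for k in ("xy", "xz", "yz")}
# Stand-in for one GAN sample: a small residual tri-plane.
residual = {k: 0.1 * rng.normal(size=(R, R, C)) for k in ("xy", "xz", "yz")}

feat = query_two_level(coarse, residual, (0.1, -0.3, 0.7))
print(feat.shape)  # (8,) — a per-point feature to be decoded by an MLP
```

In the paper's pipeline, the per-point feature would then be decoded into density and color; modeling only the residual as a GAN distribution lets the coarse geometry anchor the restoration while the generator absorbs the per-view inconsistencies.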
Related papers
- Towards Degradation-Robust Reconstruction in Generalizable NeRF [58.33351079982745]
Generalizable Radiance Field (GNeRF) across scenes has been proven to be an effective way to avoid per-scene optimization.
There has been limited research on the robustness of GNeRFs to different types of degradation present in the source images.
arXiv Detail & Related papers (2024-11-18T16:13:47Z)
- Drantal-NeRF: Diffusion-Based Restoration for Anti-aliasing Neural Radiance Field [10.225323718645022]
Aliasing artifacts in renderings produced by Neural Radiance Field (NeRF) are a long-standing but complex issue.
We present a diffusion-based restoration method for anti-aliasing Neural Radiance Field (Drantal-NeRF).
arXiv Detail & Related papers (2024-07-10T08:32:13Z)
- GTR: Improving Large 3D Reconstruction Models through Geometry and Texture Refinement [51.97726804507328]
We propose a novel approach for 3D mesh reconstruction from multi-view images.
Our method takes inspiration from large reconstruction models that use a transformer-based triplane generator and a Neural Radiance Field (NeRF) model trained on multi-view images.
arXiv Detail & Related papers (2024-06-09T05:19:24Z)
- RustNeRF: Robust Neural Radiance Field with Low-Quality Images [29.289408956815727]
We present RustNeRF for real-world, high-quality Neural Radiance Fields (NeRF).
To improve NeRF's robustness under real-world inputs, we train a 3D-aware preprocessing network that incorporates real-world degradation modeling.
We propose a novel implicit multi-view guidance to address information loss during image degradation and restoration.
arXiv Detail & Related papers (2024-01-06T16:54:02Z)
- ReconFusion: 3D Reconstruction with Diffusion Priors [104.73604630145847]
We present ReconFusion to reconstruct real-world scenes using only a few photos.
Our approach leverages a diffusion prior for novel view synthesis, trained on synthetic and multiview datasets.
Our method synthesizes realistic geometry and texture in underconstrained regions while preserving the appearance of observed regions.
arXiv Detail & Related papers (2023-12-05T18:59:58Z)
- GANeRF: Leveraging Discriminators to Optimize Neural Radiance Fields [12.92658687936068]
We take advantage of generative adversarial networks (GANs) to produce realistic images and use them to enhance realism in 3D scene reconstruction with NeRFs.
We learn the patch distribution of a scene using an adversarial discriminator, which provides feedback to the radiance field reconstruction.
Rendering artifacts are repaired directly in the underlying 3D representation by imposing multi-view patch rendering constraints.
arXiv Detail & Related papers (2023-06-09T17:12:35Z)
- DehazeNeRF: Multiple Image Haze Removal and 3D Shape Reconstruction using Neural Radiance Fields [56.30120727729177]
We introduce DehazeNeRF as a framework that robustly operates in hazy conditions.
We demonstrate successful multi-view haze removal, novel view synthesis, and 3D shape reconstruction where existing approaches fail.
arXiv Detail & Related papers (2023-03-20T18:03:32Z)
- Learning Neural Radiance Fields from Multi-View Geometry [1.1011268090482573]
We present a framework, called MVG-NeRF, that combines Multi-View Geometry algorithms and Neural Radiance Fields (NeRF) for image-based 3D reconstruction.
NeRF has revolutionized the field of implicit 3D representations, mainly due to a differentiable rendering formulation that enables high-quality and geometry-aware novel view synthesis.
arXiv Detail & Related papers (2022-10-24T08:53:35Z)
- Implicit Subspace Prior Learning for Dual-Blind Face Restoration [66.67059961379923]
A novel implicit subspace prior learning (ISPL) framework is proposed as a generic solution to dual-blind face restoration.
Experimental results demonstrate significant perception-distortion improvement of ISPL against existing state-of-the-art methods.
arXiv Detail & Related papers (2020-10-12T08:04:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.