DeclutterNeRF: Generative-Free 3D Scene Recovery for Occlusion Removal
- URL: http://arxiv.org/abs/2504.04679v1
- Date: Mon, 07 Apr 2025 02:22:08 GMT
- Title: DeclutterNeRF: Generative-Free 3D Scene Recovery for Occlusion Removal
- Authors: Wanzhou Liu, Zhexiao Xiong, Xinyu Li, Nathan Jacobs
- Abstract summary: We introduce DeclutterNeRF, an occlusion removal method free from generative priors. DeclutterNeRF significantly outperforms state-of-the-art methods on our proposed DeclutterSet.
- Score: 12.381139489267495
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent novel view synthesis (NVS) techniques, including Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS), have greatly advanced 3D scene reconstruction with high-quality rendering and realistic detail recovery. Effectively removing occlusions while preserving scene details can further enhance the robustness and applicability of these techniques. However, existing approaches for object and occlusion removal predominantly rely on generative priors, which, despite filling the resulting holes, introduce new artifacts and blurriness. Moreover, existing benchmark datasets for evaluating occlusion removal methods lack realistic complexity and viewpoint variations. To address these issues, we introduce DeclutterSet, a novel dataset featuring diverse scenes with pronounced occlusions distributed across foreground, midground, and background, exhibiting substantial relative motion across viewpoints. We further introduce DeclutterNeRF, an occlusion removal method free from generative priors. DeclutterNeRF introduces joint multi-view optimization of learnable camera parameters, occlusion annealing regularization, and employs an explainable stochastic structural similarity loss, ensuring high-quality, artifact-free reconstructions from incomplete images. Experiments demonstrate that DeclutterNeRF significantly outperforms state-of-the-art methods on our proposed DeclutterSet, establishing a strong baseline for future research.
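The abstract does not spell out the stochastic structural similarity loss, but the general idea of scoring randomly sampled patch pairs can be sketched as follows. This is a minimal illustration only: the window-free SSIM formula, the function names, and the patch-sampling scheme are assumptions, not the paper's implementation.

```python
import numpy as np

def ssim(p, q, c1=0.01**2, c2=0.03**2):
    """Simplified SSIM over a whole patch (global statistics, no Gaussian window)."""
    mu_p, mu_q = p.mean(), q.mean()
    var_p, var_q = p.var(), q.var()
    cov = ((p - mu_p) * (q - mu_q)).mean()
    return ((2 * mu_p * mu_q + c1) * (2 * cov + c2)) / (
        (mu_p**2 + mu_q**2 + c1) * (var_p + var_q + c2)
    )

def stochastic_ssim_loss(rendered, target, patch=16, n_samples=8, rng=None):
    """Average (1 - SSIM) over randomly sampled patch pairs from two images."""
    rng = rng or np.random.default_rng(0)
    h, w = rendered.shape[:2]
    losses = []
    for _ in range(n_samples):
        y = rng.integers(0, h - patch + 1)  # random top-left corner
        x = rng.integers(0, w - patch + 1)
        losses.append(1.0 - ssim(rendered[y:y+patch, x:x+patch],
                                 target[y:y+patch, x:x+patch]))
    return float(np.mean(losses))
```

Sampling patches stochastically rather than evaluating SSIM densely keeps the per-step cost low; identical images give a loss of zero, and any structural mismatch raises it.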
Related papers
- Diffusion-Guided Gaussian Splatting for Large-Scale Unconstrained 3D Reconstruction and Novel View Synthesis [22.767866875051013]
We propose GS-Diff, a novel 3DGS framework guided by a multi-view diffusion model to address limitations of current methods. By generating pseudo-observations conditioned on multi-view inputs, our method transforms under-constrained 3D reconstruction problems into well-posed ones. Experiments on four benchmarks demonstrate that GS-Diff consistently outperforms state-of-the-art baselines by significant margins.
arXiv Detail & Related papers (2025-04-02T17:59:46Z) - Decompositional Neural Scene Reconstruction with Generative Diffusion Prior [64.71091831762214]
Decompositional reconstruction of 3D scenes, with complete shapes and detailed texture, is intriguing for downstream applications. Recent approaches incorporate semantic or geometric regularization, but they suffer significant degradation in underconstrained areas. We propose DP-Recon, which employs diffusion priors in the form of Score Distillation Sampling (SDS) to optimize the neural representation of each individual object under novel views.
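For context, Score Distillation Sampling (as introduced in DreamFusion) optimizes the parameters \(\theta\) of a differentiable renderer \(x = g(\theta)\) by backpropagating the residual of a frozen diffusion denoiser:

```latex
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}(\theta)
  = \mathbb{E}_{t,\epsilon}\!\left[ w(t)\,
    \big(\hat\epsilon_\phi(x_t;\, y, t) - \epsilon\big)\,
    \frac{\partial x}{\partial \theta} \right]
```

Here \(x_t\) is the rendered image noised to timestep \(t\), \(\hat\epsilon_\phi\) is the diffusion model's noise prediction conditioned on a prompt \(y\), \(\epsilon\) is the injected noise, and \(w(t)\) is a timestep weighting. How DP-Recon applies this per object is specific to that paper and not detailed in the summary above.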
arXiv Detail & Related papers (2025-03-19T02:11:31Z) - PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting [54.7468067660037]
PF3plat sets a new state-of-the-art across all benchmarks, supported by comprehensive ablation studies validating our design choices.
Our framework capitalizes on fast speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS.
arXiv Detail & Related papers (2024-10-29T15:28:15Z) - Drantal-NeRF: Diffusion-Based Restoration for Anti-aliasing Neural Radiance Field [10.225323718645022]
Aliasing artifacts in renderings produced by Neural Radiance Fields (NeRF) are a long-standing but complex issue.
We present a diffusion-based restoration method for anti-aliasing Neural Radiance Fields (Drantal-NeRF).
arXiv Detail & Related papers (2024-07-10T08:32:13Z) - From Chaos to Clarity: 3DGS in the Dark [28.232432162734437]
Noise in unprocessed raw images compromises the accuracy of 3D scene representation.
3D Gaussian Splatting (3DGS) is particularly susceptible to this noise.
We introduce a novel self-supervised learning framework designed to reconstruct HDR 3DGS from noisy raw images.
arXiv Detail & Related papers (2024-06-12T15:00:16Z) - RaFE: Generative Radiance Fields Restoration [38.602849644666165]
NeRF (Neural Radiance Fields) has demonstrated tremendous potential in novel view synthesis and 3D reconstruction.
Previous methods for NeRF restoration are tailored to specific degradation types, ignoring the generality of restoration.
We propose a generic radiance fields restoration pipeline, named RaFE, which applies to various types of degradations.
arXiv Detail & Related papers (2024-04-04T17:59:50Z) - Robust 3D Shape Reconstruction in Zero-Shot from a Single Image in the Wild [22.82439286651921]
We propose a unified regression model that integrates segmentation and reconstruction, specifically designed for 3D shape reconstruction. We also introduce a scalable data synthesis pipeline that simulates a wide range of variations in objects, occluders, and backgrounds. Training on our synthetic data enables the proposed model to achieve state-of-the-art zero-shot results on real-world images.
arXiv Detail & Related papers (2024-03-21T16:40:10Z) - Clean-NeRF: Reformulating NeRF to account for View-Dependent Observations [67.54358911994967]
This paper proposes Clean-NeRF for accurate 3D reconstruction and novel view rendering in complex scenes.
Clean-NeRF can be implemented as a plug-in that can immediately benefit existing NeRF-based methods without additional input.
arXiv Detail & Related papers (2023-03-26T12:24:31Z) - NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion [107.67277084886929]
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views.
arXiv Detail & Related papers (2023-02-20T17:12:00Z) - CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
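The occupancy-grid-aided ray sampling described above can be illustrated with a small sketch: march a ray through a boolean voxel grid at coarse uniform depths, then concentrate the final samples in occupied cells. The interface below (grid layout, uniform fallback, jitter scheme) is a hypothetical simplification for illustration, not CLONeR's actual implementation.

```python
import numpy as np

def occupancy_guided_samples(origin, direction, grid, voxel_size,
                             near, far, n_coarse=64, n_keep=32, rng=None):
    """Propose ray sample depths concentrated in occupied voxels of a 3D grid.

    grid: boolean occupancy array in voxel coordinates, with the scene
    origin at voxel (0, 0, 0). Hypothetical interface, not CLONeR's API.
    """
    rng = rng or np.random.default_rng(0)
    t = np.linspace(near, far, n_coarse)           # coarse uniform depths
    pts = origin + t[:, None] * direction          # 3D point for each depth
    idx = np.floor(pts / voxel_size).astype(int)   # voxel index per point
    in_bounds = np.all((idx >= 0) & (idx < np.array(grid.shape)), axis=1)
    occupied = np.zeros(n_coarse, dtype=bool)
    occupied[in_bounds] = grid[tuple(idx[in_bounds].T)]
    candidates = t[occupied] if occupied.any() else t  # fall back to uniform
    # jittered resampling of the kept depths
    picks = rng.choice(candidates, size=n_keep, replace=True)
    jitter = (rng.random(n_keep) - 0.5) * (far - near) / n_coarse
    return np.sort(np.clip(picks + jitter, near, far))
```

Concentrating samples where the grid reports occupancy spends the rendering budget near surfaces instead of in empty space, which is the benefit the CLONeR summary attributes to its differentiable OGM.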
arXiv Detail & Related papers (2022-09-02T17:44:50Z) - Enhancement of Novel View Synthesis Using Omnidirectional Image Completion [61.78187618370681]
We present a method for synthesizing novel views from a single 360-degree RGB-D image based on the neural radiance field (NeRF).
Experiments demonstrated that the proposed method can synthesize plausible novel views while preserving the features of the scene for both artificial and real-world data.
arXiv Detail & Related papers (2022-03-18T13:49:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.