SPARF: Large-Scale Learning of 3D Sparse Radiance Fields from Few Input
Images
- URL: http://arxiv.org/abs/2212.09100v3
- Date: Mon, 21 Aug 2023 12:53:09 GMT
- Title: SPARF: Large-Scale Learning of 3D Sparse Radiance Fields from Few Input
Images
- Authors: Abdullah Hamdi, Bernard Ghanem, Matthias Nießner
- Abstract summary: We present SPARF, a large-scale ShapeNet-based synthetic dataset for novel view synthesis.
We propose a novel pipeline (SuRFNet) that learns to generate sparse voxel radiance fields from only a few views.
SuRFNet takes partial SRFs built from one or a few images and uses a specialized SRF loss to learn to generate high-quality sparse voxel radiance fields.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in Neural Radiance Fields (NeRFs) treat the problem of novel
view synthesis as Sparse Radiance Field (SRF) optimization using sparse voxels
for efficient and fast rendering (Plenoxels, Instant-NGP). To facilitate machine
learning on SRFs and their adoption as a 3D representation, we present SPARF,
a large-scale ShapeNet-based synthetic dataset for novel view synthesis
consisting of ~17 million images rendered from nearly 40,000 shapes at
high resolution (400 × 400 pixels). The dataset is orders of magnitude larger
than existing synthetic datasets for novel view synthesis and includes more
than one million 3D-optimized radiance fields with multiple voxel resolutions.
Furthermore, we propose a novel pipeline (SuRFNet) that learns to generate
sparse voxel radiance fields from only a few views, using the densely collected
SPARF dataset and 3D sparse convolutions. SuRFNet takes partial SRFs built from
one or a few images and, with a specialized SRF loss, learns to generate
high-quality sparse voxel radiance fields that can be rendered from novel views.
Our approach achieves state-of-the-art results on unconstrained few-view novel
view synthesis on ShapeNet compared to recent baselines. The SPARF dataset,
code, and models are publicly available on the project website:
https://abdullahamdi.com/sparf/ .
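Below is a minimal, illustrative sketch (not the authors' SuRFNet implementation) of the representation the abstract refers to: a sparse voxel radiance field that stores density and color only at occupied voxels and is rendered by marching rays through the grid with standard NeRF-style alpha compositing. The class and function names are hypothetical, and the learned parts of the pipeline (3D sparse convolutions, the specialized SRF loss, view-dependent color, trilinear interpolation) are deliberately omitted.

```python
# Conceptual sketch of a sparse voxel radiance field (SRF) and its rendering.
# Not the SPARF/SuRFNet code: names and details here are illustrative only.
import numpy as np

class SparseVoxelRadianceField:
    def __init__(self, voxel_size=0.05):
        self.voxel_size = voxel_size
        # Only occupied voxels are stored (integer coords -> (density, rgb)),
        # which is what makes the representation sparse.
        self.voxels = {}

    def set_voxel(self, ijk, density, rgb):
        self.voxels[tuple(ijk)] = (float(density), np.asarray(rgb, dtype=np.float32))

    def query(self, xyz):
        """Nearest-voxel lookup; empty space returns zero density."""
        ijk = tuple(np.floor(np.asarray(xyz) / self.voxel_size).astype(int))
        return self.voxels.get(ijk, (0.0, np.zeros(3, dtype=np.float32)))

def render_ray(srf, origin, direction, near=0.0, far=1.0, n_samples=128):
    """Composite color along one ray: C = sum_i T_i * (1 - exp(-sigma_i * dt)) * c_i."""
    ts = np.linspace(near, far, n_samples)
    dt = (far - near) / n_samples
    color = np.zeros(3, dtype=np.float32)
    transmittance = 1.0
    for t in ts:
        density, rgb = srf.query(origin + t * np.asarray(direction))
        alpha = 1.0 - np.exp(-density * dt)
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:  # early termination once the ray is saturated
            break
    return color

# Toy usage: a single red voxel in front of the camera.
srf = SparseVoxelRadianceField(voxel_size=0.05)
srf.set_voxel((0, 0, 10), density=50.0, rgb=(1.0, 0.0, 0.0))
print(render_ray(srf, origin=np.zeros(3), direction=np.array([0.0, 0.0, 1.0])))
```

Because only occupied voxels are stored, memory and rendering cost scale with the occupied surface rather than with the full volume, which is the property that makes SRFs fast to optimize and render.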
Related papers
- Mesh2NeRF: Direct Mesh Supervision for Neural Radiance Field Representation and Generation [51.346733271166926]
Mesh2NeRF is an approach to derive ground-truth radiance fields from textured meshes for 3D generation tasks.
We validate the effectiveness of Mesh2NeRF across various tasks.
arXiv Detail & Related papers (2024-03-28T11:22:53Z)
- HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces [71.1071688018433]
Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render.
We propose a method, HybridNeRF, that leverages the strengths of both representations by rendering most objects as surfaces.
We improve error rates by 15-30% while achieving real-time frame rates (at least 36 FPS) at virtual-reality resolutions (2K × 2K).
arXiv Detail & Related papers (2023-12-05T22:04:49Z)
- Multi-Space Neural Radiance Fields [74.46513422075438]
Existing Neural Radiance Fields (NeRF) methods degrade in scenes containing reflective objects.
We propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces.
Our approach significantly outperforms the existing single-space NeRF methods for rendering high-quality scenes.
arXiv Detail & Related papers (2023-05-07T13:11:07Z)
- Learning Neural Duplex Radiance Fields for Real-Time View Synthesis [33.54507228895688]
We propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations.
We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
arXiv Detail & Related papers (2023-04-20T17:59:52Z)
- Cascaded and Generalizable Neural Radiance Fields for Fast View Synthesis [35.035125537722514]
We present CG-NeRF, a cascaded and generalizable neural radiance field method for view synthesis.
We first train CG-NeRF on multiple 3D scenes of the DTU dataset.
We show that CG-NeRF outperforms state-of-the-art generalizable neural rendering methods on various synthetic and real datasets.
arXiv Detail & Related papers (2022-08-09T12:23:48Z)
- RTMV: A Ray-Traced Multi-View Synthetic Dataset for Novel View Synthesis [104.53930611219654]
We present a large-scale synthetic dataset for novel view synthesis consisting of 300k images rendered from nearly 2000 complex scenes.
The dataset is orders of magnitude larger than existing synthetic datasets for novel view synthesis.
Using 4 distinct sources of high-quality 3D meshes, the scenes of our dataset exhibit challenging variations in camera views, lighting, shape, materials, and textures.
arXiv Detail & Related papers (2022-05-14T13:15:32Z)
- Learning Neural Light Fields with Ray-Space Embedding Networks [51.88457861982689]
We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset.
arXiv Detail & Related papers (2021-12-02T18:59:51Z)