NeRFs: The Search for the Best 3D Representation
- URL: http://arxiv.org/abs/2308.02751v2
- Date: Fri, 18 Aug 2023 18:57:39 GMT
- Title: NeRFs: The Search for the Best 3D Representation
- Authors: Ravi Ramamoorthi
- Abstract summary: We briefly review the three decades-long quest to find the best 3D representation for view synthesis and related problems.
We then describe new developments in terms of NeRF representations and make some observations and insights regarding the future of 3D representations.
- Score: 27.339452004523082
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Neural Radiance Fields or NeRFs have become the representation of choice for
problems in view synthesis or image-based rendering, as well as in many other
applications across computer graphics and vision, and beyond. At their core,
NeRFs describe a new representation of 3D scenes or 3D geometry. Instead of
meshes, disparity maps, multiplane images or even voxel grids, they represent
the scene as a continuous volume, with volumetric parameters like
view-dependent radiance and volume density obtained by querying a neural
network. The NeRF representation has now been widely used, with thousands of
papers extending or building on it every year, multiple authors and websites
providing overviews and surveys, and numerous industrial applications and
startup companies. In this article, we briefly review the NeRF representation,
and describe the three decades-long quest to find the best 3D representation
for view synthesis and related problems, culminating in the NeRF papers. We
then describe new developments in terms of NeRF representations and make some
observations and insights regarding the future of 3D representations.
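To make the representation described above concrete, here is a minimal sketch of the NeRF pattern, assuming PyTorch: a small MLP is queried at sampled points for volume density and view-dependent radiance, and colors are composited along each ray by numerical quadrature of the volume rendering integral. The network size, positional-encoding depth, and ray-sampling scheme below are illustrative choices, not the original paper's exact architecture.

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    """Map coordinates to sin/cos features so the MLP can fit high frequencies."""
    feats = [x]
    for k in range(num_freqs):
        feats += [torch.sin(2.0 ** k * x), torch.cos(2.0 ** k * x)]
    return torch.cat(feats, dim=-1)

class TinyNeRF(nn.Module):
    """Query (position, view direction) -> (volume density, view-dependent RGB)."""
    def __init__(self, num_freqs=6, hidden=128):
        super().__init__()
        in_dim = 2 * 3 * (1 + 2 * num_freqs)  # encoded position + encoded direction
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),              # 1 density + 3 color channels
        )

    def forward(self, xyz, view_dir):
        h = self.mlp(torch.cat([positional_encoding(xyz),
                                positional_encoding(view_dir)], dim=-1))
        return torch.relu(h[..., :1]), torch.sigmoid(h[..., 1:])  # sigma, rgb

def render_ray(model, origin, direction, near=0.0, far=4.0, n_samples=64):
    """Numerical quadrature of the volume rendering integral along one ray."""
    t = torch.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction            # sample points on the ray
    sigma, rgb = model(pts, direction.expand_as(pts))
    alpha = 1.0 - torch.exp(-sigma[:, 0] * (t[1] - t[0]))  # per-segment opacity
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10]), dim=0)[:-1]  # transmittance
    weights = trans * alpha                          # contribution of each sample
    return (weights[:, None] * rgb).sum(dim=0)       # composited pixel color

model = TinyNeRF()
color = render_ray(model, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))
```

Training then reduces to minimizing the photometric error between such rendered colors and the pixels of the input images.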
Related papers
- Denoising Diffusion via Image-Based Rendering [54.20828696348574]
We introduce the first diffusion model able to perform fast, detailed reconstruction and generation of real-world 3D scenes.
First, we introduce a new neural scene representation, IB-planes, that can efficiently and accurately represent large 3D scenes.
Second, we propose a denoising-diffusion framework to learn a prior over this novel 3D scene representation, using only 2D images.
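As a loose illustration of what learning such a prior involves (this is generic DDPM machinery, not the paper's actual method), the sketch below runs one denoising-diffusion training step on a hypothetical tensor of planar scene features; the stand-in denoiser omits timestep conditioning for brevity.

```python
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)               # standard DDPM noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

denoiser = nn.Sequential(                           # stand-in denoiser (no timestep input)
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 32, 3, padding=1),
)

def diffusion_loss(planes):
    """planes: (B, 32, H, W) hypothetical feature planes encoding a scene."""
    t = torch.randint(0, T, (planes.shape[0],))
    a = alphas_cumprod[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(planes)
    noisy = a.sqrt() * planes + (1.0 - a).sqrt() * noise   # forward noising
    return nn.functional.mse_loss(denoiser(noisy), noise)  # predict the added noise

loss = diffusion_loss(torch.randn(4, 32, 64, 64))
loss.backward()
```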
arXiv Detail & Related papers (2024-02-05T19:00:45Z)
- BeyondPixels: A Comprehensive Review of the Evolution of Neural Radiance Fields [1.1531932979578041]
NeRF, short for Neural Radiance Fields, is a recent innovation that uses AI algorithms to create 3D objects from 2D images.
This survey reviews recent advances in NeRF and categorizes them according to their architectural designs.
arXiv Detail & Related papers (2023-06-05T16:10:21Z)
- Neural Radiance Fields: Past, Present, and Future [0.0]
The NeRF paper by Mildenhall et al. led to a boom in Computer Graphics, Robotics, and Computer Vision, and the prospect of high-resolution, low-storage Augmented Reality and Virtual Reality 3D models has gained traction among researchers, with more than 1000 preprints related to NeRFs published.
This survey provides the history of rendering, Implicit Learning, and NeRFs, the progression of research on NeRFs, and the potential applications and implications of NeRFs in today's world.
arXiv Detail & Related papers (2023-04-20T02:17:08Z)
- 3D Concept Learning and Reasoning from Multi-View Images [96.3088005719963]
We introduce a new large-scale benchmark for 3D multi-view visual question answering (3DMV-VQA).
This dataset consists of approximately 5k scenes and 600k images, paired with 50k questions.
We propose a novel 3D concept learning and reasoning framework that seamlessly combines neural fields, 2D pre-trained vision-language models, and neural reasoning operators.
arXiv Detail & Related papers (2023-03-20T17:59:49Z)
- NeRFMeshing: Distilling Neural Radiance Fields into Geometrically-Accurate 3D Meshes [56.31855837632735]
We propose a compact and flexible architecture that enables easy 3D surface reconstruction from any NeRF-driven approach.
Our final 3D mesh is physically accurate and can be rendered in real time on an array of devices.
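As context, the generic recipe for getting a mesh out of a NeRF (sketched below, assuming scikit-image; NeRFMeshing's actual pipeline is more sophisticated) is to sample the trained density field on a regular grid and extract a level set with marching cubes. The density_fn here is a purely illustrative stand-in for a trained NeRF's density query.

```python
import numpy as np
from skimage.measure import marching_cubes

def density_fn(pts):
    """Illustrative stand-in density: a sphere of radius 0.5 (replace with a NeRF query)."""
    return np.maximum(0.0, 0.5 - np.linalg.norm(pts, axis=-1))

n = 64
axis = np.linspace(-1.0, 1.0, n)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)  # (n, n, n, 3)
density = density_fn(grid.reshape(-1, 3)).reshape(n, n, n)

# Extract the level set where density crosses a chosen threshold.
verts, faces, normals, _ = marching_cubes(density, level=0.1)
verts = verts / (n - 1) * 2.0 - 1.0   # map voxel indices back to world coordinates
print(verts.shape, faces.shape)
```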
arXiv Detail & Related papers (2023-03-16T16:06:03Z)
- NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion [107.67277084886929]
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views.
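A heavily simplified sketch of that test-time loop, with stand-in modules in place of the real conditional NeRF and diffusion model, might look like the following; NerfDiff's actual conditioning, sampling, and losses are considerably more involved.

```python
import torch
import torch.nn as nn

nerf = nn.Linear(3, 3)                     # stand-in for the conditional NeRF

def cdm_refine(img):
    """Stand-in for the 3D-aware CDM that denoises/improves a rendered view."""
    return (img + 0.05 * torch.randn_like(img)).clamp(0.0, 1.0)

def render_view(pose):
    """Stand-in renderer: pretend the NeRF produces a 32x32 image for a pose."""
    rays = pose + torch.randn(32 * 32, 3)  # fake ray parameterization
    return nerf(rays).sigmoid().view(32, 32, 3)

optimizer = torch.optim.Adam(nerf.parameters(), lr=1e-3)
virtual_poses = [torch.randn(3) for _ in range(8)]  # cameras around the object

for step in range(100):
    pose = virtual_poses[step % len(virtual_poses)]
    rendered = render_view(pose)
    with torch.no_grad():
        target = cdm_refine(rendered)      # CDM refines the virtual view
    loss = nn.functional.mse_loss(rendered, target)  # finetune NeRF toward it
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```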
arXiv Detail & Related papers (2023-02-20T17:12:00Z)
- NeRF: Neural Radiance Field in 3D Vision, A Comprehensive Review [19.67372661944804]
Neural Radiance Field (NeRF) has recently become a significant development in the field of Computer Vision.
NeRF models have found diverse applications in robotics, urban mapping, autonomous navigation, virtual reality/augmented reality, and more.
arXiv Detail & Related papers (2022-10-01T21:35:11Z)
- Vision Transformer for NeRF-Based View Synthesis from a Single Input Image [49.956005709863355]
We propose to leverage both the global and local features to form an expressive 3D representation.
To synthesize a novel view, we train a multilayer perceptron (MLP) network conditioned on the learned 3D representation to perform volume rendering.
Our method can render novel views from only a single input image and generalize across multiple object categories using a single model.
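As a rough sketch of the conditioning pattern this summary describes, the MLP below consumes a 3D sample point together with a learned scene feature; a single global latent is used here for brevity, whereas the paper combines global and local features extracted from the input image. Names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class ConditionedField(nn.Module):
    """NeRF-style MLP conditioned on a learned scene feature vector."""
    def __init__(self, feat_dim=256, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # density + rgb, composited by volume rendering
        )

    def forward(self, xyz, z):
        # z: (1, feat_dim) scene feature from an image encoder, shared per point
        h = self.mlp(torch.cat([xyz, z.expand(xyz.shape[0], -1)], dim=-1))
        return torch.relu(h[..., :1]), torch.sigmoid(h[..., 1:])

field = ConditionedField()
sigma, rgb = field(torch.randn(1024, 3), torch.randn(1, 256))
```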
arXiv Detail & Related papers (2022-07-12T17:52:04Z)
- DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both the 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)
- Baking Neural Radiance Fields for Real-Time View Synthesis [41.07052395570522]
We present a method to train a NeRF, then precompute and store (i.e. "bake") it as a novel representation called a Sparse Neural Radiance Grid (SNeRG).
The resulting scene representation retains NeRF's ability to render fine geometric details and view-dependent appearance, is compact, and can be rendered in real-time.
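The baking idea can be sketched, under simplifying assumptions, as evaluating the trained field once on a voxel grid, keeping only occupied cells, and answering later queries by lookup instead of network evaluation; SNeRG's actual sparse data structure and deferred view-dependent shading are more elaborate, and toy_nerf below is a stand-in for a trained model.

```python
import torch

def bake(model_fn, n=128, tau=0.01):
    """Evaluate a trained field on an n^3 grid and keep only occupied voxels."""
    axis = torch.linspace(-1, 1, n)
    grid = torch.stack(torch.meshgrid(axis, axis, axis, indexing="ij"), dim=-1)
    sigma, rgb = model_fn(grid.view(-1, 3))          # one-time network queries
    sigma, rgb = sigma.view(n, n, n), rgb.view(n, n, n, 3)
    occupied = sigma > tau                           # sparsify: drop empty space
    return {"indices": occupied.nonzero(),
            "sigma": sigma[occupied], "rgb": rgb[occupied]}

def toy_nerf(pts):
    """Stand-in for a trained NeRF restricted to its diffuse color component."""
    d = pts.norm(dim=-1, keepdim=True)
    return torch.relu(0.5 - d), pts.sigmoid()

baked = bake(toy_nerf)
print(baked["indices"].shape[0], "occupied voxels out of", 128 ** 3)
```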
arXiv Detail & Related papers (2021-03-26T17:59:52Z)
- pixelNeRF: Neural Radiance Fields from One or Few Images [20.607712035278315]
pixelNeRF is a learning framework that predicts a continuous neural scene representation conditioned on one or few input images.
We conduct experiments on ShapeNet benchmarks for single image novel view synthesis tasks with held-out objects.
In all cases, pixelNeRF outperforms current state-of-the-art baselines for novel view synthesis and single image 3D reconstruction.
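The core conditioning step can be sketched as projecting each 3D sample point into the input image and bilinearly sampling CNN features at that location, which are then fed to the NeRF MLP alongside the point itself. The pinhole camera with unit focal length and the random feature map below are simplified stand-ins.

```python
import torch
import torch.nn.functional as F

def sample_image_features(feat_map, pts_cam):
    """feat_map: (1, C, H, W) CNN features of the input image.
    pts_cam: (N, 3) points in camera coordinates (z > 0)."""
    uv = pts_cam[:, :2] / pts_cam[:, 2:3]   # pinhole projection, focal length 1
    grid = uv.view(1, -1, 1, 2)             # treated as normalized [-1, 1] coords
    feats = F.grid_sample(feat_map, grid, align_corners=False)
    return feats[0, :, :, 0].t()            # (N, C) per-point features

feat_map = torch.randn(1, 64, 32, 32)               # stand-in encoder output
pts = torch.randn(1000, 3).abs() + 0.5              # points in front of the camera
local_feats = sample_image_features(feat_map, pts)  # fed to the NeRF MLP per point
print(local_feats.shape)  # torch.Size([1000, 64])
```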
arXiv Detail & Related papers (2020-12-03T18:59:54Z)