Neural Radiance Fields for the Real World: A Survey
- URL: http://arxiv.org/abs/2501.13104v1
- Date: Wed, 22 Jan 2025 18:59:10 GMT
- Title: Neural Radiance Fields for the Real World: A Survey
- Authors: Wenhui Xiao, Remi Chierchia, Rodrigo Santa Cruz, Xuesong Li, David Ahmedt-Aristizabal, Olivier Salvado, Clinton Fookes, Leo Lebrat
- Abstract summary: Neural Radiance Fields (NeRFs) have remodeled 3D scene representation since their release.
NeRFs can effectively reconstruct complex 3D scenes from 2D images.
This survey compiles key theoretical advancements and alternative representations.
- Score: 19.916224575959394
- License:
- Abstract: Neural Radiance Fields (NeRFs) have remodeled 3D scene representation since their release. NeRFs can effectively reconstruct complex 3D scenes from 2D images, advancing different fields and applications such as scene understanding, 3D content generation, and robotics. Despite significant research progress, a thorough review of recent innovations, applications, and challenges is lacking. This survey compiles key theoretical advancements and alternative representations and investigates emerging challenges. It further explores applications on reconstruction, highlights NeRFs' impact on computer vision and robotics, and reviews essential datasets and toolkits. By identifying gaps in the literature, this survey discusses open challenges and offers directions for future research.
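The rendering model underlying the NeRF family the survey covers reduces to alpha-compositing sampled densities and colors along each camera ray (the discrete volume-rendering quadrature of Mildenhall et al.). The sketch below is a minimal pure-Python illustration of that quadrature, not an implementation from any of the surveyed papers; the network that would predict `sigmas` and `colors` is omitted.

```python
import math

def render_ray(sigmas, colors, deltas):
    """Discrete volume rendering along one ray.

    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    where T_i = exp(-sum_{j<i} sigma_j * delta_j) is the transmittance
    remaining before sample i.

    sigmas: list of N densities at samples along the ray
    colors: list of N RGB triples at those samples
    deltas: list of N distances between adjacent samples
    """
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light not yet absorbed
    for sigma, c, delta in zip(sigmas, colors, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)   # opacity of this segment
        weight = transmittance * alpha
        color = [acc + weight * ch for acc, ch in zip(color, c)]
        transmittance *= 1.0 - alpha             # light left for later samples
    return color
```

A fully opaque first sample returns its own color, while empty space (all densities zero) renders black, matching the expected limiting behavior of the quadrature.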
Related papers
- 3D Representation Methods: A Survey [0.0]
3D representation has experienced significant advancements, driven by the increasing demand for high-fidelity 3D models in various applications.
This review examines the development and current state of 3D representation methods, highlighting their research trajectories, innovations, strengths, and weaknesses.
arXiv Detail & Related papers (2024-10-09T02:01:05Z) - NeRF in Robotics: A Survey [95.11502610414803]
The recent emergence of neural implicit representations has introduced radical innovation to the computer vision and robotics fields.
NeRF has sparked a trend because of its huge representational advantages, such as simplified mathematical models, compact environment storage, and continuous scene representations.
arXiv Detail & Related papers (2024-05-02T14:38:18Z) - Recent Trends in 3D Reconstruction of General Non-Rigid Scenes [104.07781871008186]
Reconstructing models of the real world, including 3D geometry, appearance, and motion of real scenes, is essential for computer graphics and computer vision.
It enables the synthesizing of photorealistic novel views, useful for the movie industry and AR/VR applications.
This state-of-the-art report (STAR) offers the reader a comprehensive summary of state-of-the-art techniques with monocular and multi-view inputs.
arXiv Detail & Related papers (2024-03-22T09:46:11Z) - Advances in 3D Generation: A Survey [54.95024616672868]
The field of 3D content generation is developing rapidly, enabling the creation of increasingly high-quality and diverse 3D models.
Specifically, we introduce the 3D representations that serve as the backbone for 3D generation.
We provide a comprehensive overview of the rapidly growing literature on generation methods, categorized by the type of algorithmic paradigms.
arXiv Detail & Related papers (2024-01-31T13:06:48Z) - NeRFs: The Search for the Best 3D Representation [27.339452004523082]
We briefly review the three decades-long quest to find the best 3D representation for view synthesis and related problems.
We then describe new developments in terms of NeRF representations and make some observations and insights regarding the future of 3D representations.
arXiv Detail & Related papers (2023-08-05T00:10:32Z) - BeyondPixels: A Comprehensive Review of the Evolution of Neural Radiance Fields [1.1531932979578041]
NeRF, short for Neural Radiance Fields, is a recent innovation that uses AI algorithms to create 3D objects from 2D images.
This survey reviews recent advances in NeRF and categorizes them according to their architectural designs.
arXiv Detail & Related papers (2023-06-05T16:10:21Z) - Neural Radiance Fields: Past, Present, and Future [0.0]
The NeRF paper by Mildenhall et al. sparked a boom in Computer Graphics, Robotics, and Computer Vision, and the prospect of high-resolution, low-storage Augmented Reality and Virtual Reality-based 3D models has gained traction among researchers, with more than 1000 NeRF-related preprints published.
This survey provides the history of rendering, Implicit Learning, and NeRFs, the progression of research on NeRFs, and the potential applications and implications of NeRFs in today's world.
arXiv Detail & Related papers (2023-04-20T02:17:08Z) - NeRFMeshing: Distilling Neural Radiance Fields into Geometrically-Accurate 3D Meshes [56.31855837632735]
We propose a compact and flexible architecture that enables easy 3D surface reconstruction from any NeRF-driven approach.
Our final 3D mesh is physically accurate and can be rendered in real time on an array of devices.
arXiv Detail & Related papers (2023-03-16T16:06:03Z) - Deep Generative Models on 3D Representations: A Survey [81.73385191402419]
Generative models aim to learn the distribution of observed data by generating new instances.
Recently, researchers have started to shift focus from 2D to 3D space, although representing 3D data poses significantly greater challenges.
arXiv Detail & Related papers (2022-10-27T17:59:50Z) - NeRF: Neural Radiance Field in 3D Vision, A Comprehensive Review [19.67372661944804]
Neural Radiance Field (NeRF) has recently become a significant development in the field of Computer Vision.
NeRF models have found diverse applications in robotics, urban mapping, autonomous navigation, virtual reality/augmented reality, and more.
arXiv Detail & Related papers (2022-10-01T21:35:11Z) - NeRS: Neural Reflectance Surfaces for Sparse-view 3D Reconstruction in the Wild [80.09093712055682]
We introduce a surface analog of implicit models called Neural Reflectance Surfaces (NeRS).
NeRS learns a neural shape representation of a closed surface that is diffeomorphic to a sphere, guaranteeing water-tight reconstructions.
We demonstrate that surface-based neural reconstructions enable learning from such data, outperforming volumetric neural rendering-based reconstructions.
arXiv Detail & Related papers (2021-10-14T17:59:58Z)
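The watertightness guarantee NeRS derives from a sphere-diffeomorphic shape representation can be illustrated with a toy radial model (a hypothetical sketch, not the NeRS architecture): displacing each point of the unit sphere outward by a strictly positive radius function yields a closed surface by construction, since the map from sphere to surface never tears or self-intersects radially.

```python
import math

def surface_point(theta, phi, radius_fn):
    # Map spherical coordinates to a point on a closed surface:
    # x = r(theta, phi) * n(theta, phi), where n is the unit-sphere point.
    n = (math.sin(theta) * math.cos(phi),
         math.sin(theta) * math.sin(phi),
         math.cos(theta))
    r = radius_fn(theta, phi)  # must stay strictly positive for a valid surface
    return tuple(r * c for c in n)

# A bumpy star-shaped surface: the radius never reaches zero,
# so the resulting surface is closed (watertight).
bumpy = lambda th, ph: 1.0 + 0.3 * math.sin(3 * th) * math.cos(3 * ph)
```

With a constant radius the surface is a sphere of that radius; in NeRS the radial displacement would instead come from a learned network, but the closedness argument is the same.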
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.