Survey on Fundamental Deep Learning 3D Reconstruction Techniques
- URL: http://arxiv.org/abs/2407.08137v1
- Date: Thu, 11 Jul 2024 02:30:05 GMT
- Title: Survey on Fundamental Deep Learning 3D Reconstruction Techniques
- Authors: Yonge Bai, LikHang Wong, TszYin Twan
- Abstract summary: This survey aims to investigate fundamental deep learning (DL) based 3D reconstruction techniques that produce photo-realistic 3D models and scenes.
We dissect the underlying algorithms, evaluate their strengths and tradeoffs, and project future research trajectories in this rapidly evolving field.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This survey aims to investigate fundamental deep learning (DL)-based 3D reconstruction techniques that produce photo-realistic 3D models and scenes, highlighting Neural Radiance Fields (NeRFs), Latent Diffusion Models (LDMs), and 3D Gaussian Splatting. We dissect the underlying algorithms, evaluate their strengths and tradeoffs, and project future research trajectories in this rapidly evolving field. We provide a comprehensive overview of the fundamentals of DL-driven 3D scene reconstruction, offering insights into their potential applications and limitations.
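To make the NeRF side of the survey concrete, the sketch below shows the volume-rendering weights at the heart of NeRF-style methods: each sample along a camera ray contributes its color in proportion to its opacity and the transmittance accumulated in front of it. This is a minimal illustrative implementation, not code from the paper; the toy densities, colors, and uniform ray spacing are assumptions.

```python
import numpy as np

def nerf_ray_weights(sigmas, deltas):
    """Volume-rendering weights w_i = T_i * (1 - exp(-sigma_i * delta_i)).

    sigmas: per-sample densities along one ray, shape (N,)
    deltas: spacing between adjacent samples, shape (N,)
    T_i = prod_{j<i} exp(-sigma_j * delta_j) is the transmittance, i.e.
    the probability the ray reaches sample i without being absorbed.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)        # per-segment opacity
    trans = np.cumprod(1.0 - alphas + 1e-10)       # transmittance after each sample
    trans = np.concatenate([[1.0], trans[:-1]])    # shift: nothing blocks sample 0
    return alphas * trans

# The rendered pixel color is the weighted sum C = sum_i w_i * c_i:
sigmas = np.array([0.1, 0.5, 2.0, 0.3])            # assumed toy densities
deltas = np.full(4, 0.25)                          # uniform sample spacing
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], dtype=float)
pixel = nerf_ray_weights(sigmas, deltas) @ colors
```

Training a NeRF amounts to fitting the densities and colors behind these weights so that rendered pixels match the input photographs.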
Related papers
- 3D Representation Methods: A Survey
3D representation has experienced significant advancements, driven by the increasing demand for high-fidelity 3D models in various applications.
This review examines the development and current state of 3D representation methods, highlighting their research trajectories, innovations, strengths, and weaknesses.
arXiv Detail & Related papers (2024-10-09T02:01:05Z)
- Diffusion Models in 3D Vision: A Survey
We review the state-of-the-art approaches that leverage diffusion models for 3D visual tasks.
These approaches include 3D object generation, shape completion, point cloud reconstruction, and scene understanding.
We discuss potential solutions, including improving computational efficiency, enhancing multimodal fusion, and exploring the use of large-scale pretraining.
arXiv Detail & Related papers (2024-10-07T04:12:23Z)
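Since the entry above surveys diffusion models applied to 3D tasks, a generic DDPM reverse step is sketched below for orientation. It is not any specific surveyed method: in 3D work the state x_t might be a point cloud, an SDF grid, or a latent code, and the noise estimate comes from a task-specific denoiser, so both are assumed inputs here.

```python
import numpy as np

def ddpm_reverse_step(x_t, t, eps_pred, alphas, alpha_bars, rng):
    """One DDPM denoising step: x_t -> x_{t-1}.

    x_t: current noisy sample (any shape: point cloud, voxel grid, latent)
    eps_pred: the denoiser's noise estimate eps_theta(x_t, t), assumed given
    alphas, alpha_bars: noise schedule, with alpha_bar_t = prod_{s<=t} alpha_s
    """
    a_t, ab_t = alphas[t], alpha_bars[t]
    mean = (x_t - (1.0 - a_t) / np.sqrt(1.0 - ab_t) * eps_pred) / np.sqrt(a_t)
    if t == 0:
        return mean                                 # final step adds no noise
    sigma_t = np.sqrt(1.0 - a_t)                    # one common variance choice
    return mean + sigma_t * rng.standard_normal(x_t.shape)

# Linear beta schedule, as in the original DDPM formulation:
betas = np.linspace(1e-4, 0.02, 1000)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)
rng = np.random.default_rng(0)
x = rng.standard_normal(16)                         # toy 1-D "sample"
x = ddpm_reverse_step(x, 999, np.zeros(16), alphas, alpha_bars, rng)
```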
- Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments demonstrates our method's consistent superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z)
- Gaussian Splatting: 3D Reconstruction and Novel View Synthesis, a Review
This review paper focuses on state-of-the-art techniques for 3D reconstruction, including the generation of novel, unseen views.
An overview of recent developments in the Gaussian Splatting method is provided, covering input types, model structures, output representations, and training strategies.
arXiv Detail & Related papers (2024-05-06T12:32:38Z)
- 3D Face Reconstruction Using A Spectral-Based Graph Convolution Encoder
We propose an innovative approach that integrates existing 2D features with 3D features to guide the model learning process.
Our model is trained using 2D-3D data pairs from a combination of datasets and achieves state-of-the-art performance on the NoW benchmark.
arXiv Detail & Related papers (2024-03-08T11:09:46Z)
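The face-reconstruction entry above relies on a spectral-based graph convolution encoder. Its exact architecture is not given here, so the sketch below shows the generic Chebyshev spectral graph convolution (ChebNet-style) that such encoders typically build on; the adjacency, features, and filter weights are illustrative assumptions.

```python
import numpy as np

def chebyshev_graph_conv(X, A, W):
    """Spectral graph convolution via Chebyshev polynomials of the Laplacian.

    X: node features (n, f_in); A: adjacency (n, n); W: filters (K, f_in, f_out)
    Filters act on the rescaled Laplacian L~ = 2L/l_max - I, which avoids an
    explicit eigendecomposition of L while staying spectral in spirit.
    """
    n = A.shape[0]
    deg = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-10)))
    L = np.eye(n) - d_inv_sqrt @ A @ d_inv_sqrt     # normalized Laplacian
    l_max = np.linalg.eigvalsh(L).max()
    L_hat = 2.0 * L / l_max - np.eye(n)
    T_prev, T_curr = X, L_hat @ X                   # T_0(L~)X and T_1(L~)X
    out = T_prev @ W[0] + T_curr @ W[1]
    for k in range(2, W.shape[0]):
        T_prev, T_curr = T_curr, 2.0 * L_hat @ T_curr - T_prev  # recurrence
        out += T_curr @ W[k]
    return out

A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))      # 4 mesh vertices, 8 input features
W = rng.standard_normal((3, 8, 16))  # order-3 filter bank
H = chebyshev_graph_conv(X, A, W)    # -> (4, 16) vertex embeddings
```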
- A Survey on 3D Gaussian Splatting
3D Gaussian splatting (GS) has emerged as a transformative technique in the realm of explicit radiance fields and computer graphics.
We provide the first systematic overview of the recent developments and critical contributions in the domain of 3D GS.
By enabling unprecedented rendering speed, 3D GS opens up a plethora of applications, ranging from virtual reality to interactive media and beyond.
arXiv Detail & Related papers (2024-01-08T13:42:59Z)
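Both Gaussian Splatting entries center on the same rendering primitive: depth-sorted, projected 3D Gaussians alpha-composited per pixel. The sketch below shows that compositing step for a single pixel; the projection of 3D covariances to 2D and the tile-based rasterization that give 3D GS its speed are omitted, and all inputs are assumed.

```python
import numpy as np

def splat_pixel(pixel, means2d, inv_covs2d, opacities, colors):
    """Alpha-composite depth-sorted 2D Gaussians at one pixel (3D GS style).

    means2d:    (N, 2) projected Gaussian centers, sorted front-to-back
    inv_covs2d: (N, 2, 2) inverse 2D covariances of the projected Gaussians
    opacities:  (N,) per-Gaussian opacity in [0, 1]
    colors:     (N, 3) per-Gaussian RGB (view-dependent color pre-evaluated)
    """
    color = np.zeros(3)
    transmittance = 1.0
    for mu, inv_cov, o, c in zip(means2d, inv_covs2d, opacities, colors):
        d = pixel - mu
        alpha = o * np.exp(-0.5 * d @ inv_cov @ d)  # Gaussian falloff * opacity
        color += transmittance * alpha * c
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:                    # early termination
            break
    return color

# One pixel, two Gaussians, front-to-back:
px = np.array([5.0, 5.0])
means = np.array([[5.0, 5.0], [6.0, 5.0]])
inv_covs = np.stack([np.eye(2) * 0.5] * 2)
splat_pixel(px, means, inv_covs, np.array([0.8, 0.9]),
            np.array([[1.0, 0, 0], [0, 0, 1.0]]))
```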
- Robust Geometry-Preserving Depth Estimation Using Differentiable Rendering
We propose a learning framework that trains models to predict geometry-preserving depth without requiring extra data or annotations.
Comprehensive experiments underscore our framework's superior generalization capabilities.
Our innovative loss functions empower the model to autonomously recover domain-specific scale-and-shift coefficients.
arXiv Detail & Related papers (2023-09-18T12:36:39Z)
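The entry above says the model learns to recover domain-specific scale-and-shift coefficients. For orientation, the sketch below shows the standard closed-form least-squares alignment that defines what "scale-and-shift" means for affine-invariant depth; the paper recovers such coefficients with learned losses rather than this closed form.

```python
import numpy as np

def align_scale_shift(pred, target):
    """Closed-form least squares for scale s and shift t with s*pred + t ≈ target.

    This is the usual alignment used when evaluating affine-invariant depth
    predictions against metric ground truth.
    """
    A = np.stack([pred, np.ones_like(pred)], axis=1)   # (N, 2) design matrix
    (s, t), *_ = np.linalg.lstsq(A, target, rcond=None)
    return s, t

pred = np.array([1.0, 2.0, 3.0, 4.0])
target = 0.5 * pred + 2.0
s, t = align_scale_shift(pred, target)   # -> s ≈ 0.5, t ≈ 2.0
```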
- A Review of Deep Learning-Powered Mesh Reconstruction Methods
Deep learning has enabled high-quality 3D shape reconstruction from various sources.
To be used in common 3D applications, reconstructed shapes need to be represented as polygonal meshes.
arXiv Detail & Related papers (2023-03-06T04:14:04Z)
- Uncertainty Guided Policy for Active Robotic 3D Reconstruction using Neural Radiance Fields
This paper introduces a ray-based volumetric uncertainty estimator, which computes the entropy of the weight distribution of the color samples along each ray of the object's implicit neural representation.
We show that it is possible to infer the uncertainty of the underlying 3D geometry given a novel view with the proposed estimator.
We present a next-best-view selection policy guided by the ray-based volumetric uncertainty in neural radiance fields-based representations.
arXiv Detail & Related papers (2022-09-17T21:28:57Z)
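The uncertainty estimator above reduces to a short computation: take the volume-rendering weights along a ray (as in the NeRF sketch earlier), normalize them into a distribution, and measure its entropy. The sketch below is a minimal rendition under that reading, not the authors' code.

```python
import numpy as np

def ray_entropy(weights, eps=1e-10):
    """Entropy of the volume-rendering weight distribution along one ray.

    A peaked distribution (low entropy) means the model is confident about
    where the surface lies along the ray; a flat one (high entropy) flags
    geometric uncertainty, which a next-best-view policy can exploit.
    """
    p = weights / (weights.sum() + eps)        # normalize to a distribution
    return -np.sum(p * np.log(p + eps))

confident = np.array([0.01, 0.02, 0.90, 0.05, 0.02])
uncertain = np.full(5, 0.2)
ray_entropy(confident) < ray_entropy(uncertain)   # -> True
```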
- Learning Geometry-Guided Depth via Projective Modeling for Monocular 3D Object Detection
We learn geometry-guided depth estimation with projective modeling to advance monocular 3D object detection.
Specifically, a principled geometry formula with projective modeling of 2D and 3D depth predictions in the monocular 3D object detection network is devised.
Our method remarkably improves the detection performance of the state-of-the-art monocular method by 2.80% on the moderate test setting, without using extra data.
arXiv Detail & Related papers (2021-07-29T12:30:39Z)
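The projective modeling in the entry above rests on the pinhole relation h_2d = f * H_3d / z: an object's projected height shrinks inversely with its depth. The toy function below inverts that relation; the variable names and numbers are illustrative, not the paper's notation.

```python
def depth_from_projection(focal_px, height_3d_m, height_2d_px):
    """Pinhole relation behind projective depth modeling: h_2d = f * H_3d / z.

    Inverting it recovers object depth z from the object's physical height
    and its projected height in the image.
    """
    return focal_px * height_3d_m / height_2d_px

# A 1.6 m tall car imaged at 80 px by a camera with f = 720 px sits at 14.4 m:
depth_from_projection(720.0, 1.6, 80.0)   # -> 14.4
```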
- Active 3D Shape Reconstruction from Vision and Touch
Humans build 3D understandings of the world through active object exploration, jointly using their senses of vision and touch.
In 3D shape reconstruction, most recent progress has relied on static datasets of limited sensory data such as RGB images, depth maps or haptic readings.
We introduce a system composed of: 1) a haptic simulator leveraging high-spatial-resolution vision-based tactile sensors for active touching of 3D objects; 2) a mesh-based 3D shape reconstruction model that relies on tactile or visuotactile signals; and 3) a set of data-driven solutions with either tactile or visuotactile priors to guide the shape exploration.
arXiv Detail & Related papers (2021-07-20T15:56:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.