Gaussian Splatting: 3D Reconstruction and Novel View Synthesis, a Review
- URL: http://arxiv.org/abs/2405.03417v1
- Date: Mon, 6 May 2024 12:32:38 GMT
- Title: Gaussian Splatting: 3D Reconstruction and Novel View Synthesis, a Review
- Authors: Anurag Dalal, Daniel Hagen, Kjell G. Robbersmyr, Kristian Muri Knausgård
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image-based 3D reconstruction is a challenging task that involves inferring the 3D shape of an object or scene from a set of input images. Learning-based methods have gained attention for their ability to directly estimate 3D shapes. This review paper focuses on state-of-the-art techniques for 3D reconstruction, including the generation of novel, unseen views. An overview of recent developments in the Gaussian Splatting method is provided, covering input types, model structures, output representations, and training strategies. Unresolved challenges and future directions are also discussed. Given the rapid progress in this domain and the numerous opportunities for enhancing 3D reconstruction methods, a comprehensive examination of algorithms appears essential. Consequently, this study offers a thorough overview of the latest advancements in Gaussian Splatting.
Related papers
- Frequency-based View Selection in Gaussian Splatting Reconstruction
We investigate the problem of active view selection to perform 3D Gaussian Splatting reconstructions with as few input images as possible.
By ranking the potential views in the frequency domain, we are able to effectively estimate the potential information gain of new viewpoints.
Our method achieves state-of-the-art results in view selection, demonstrating its potential for efficient image-based 3D reconstruction.
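The abstract does not give the paper's exact scoring rule, but the core idea of ranking views by frequency content can be sketched as follows. Here a candidate view's high-frequency energy (spectral energy outside a small low-pass disk) serves as a rough proxy for the new detail it would contribute; the function names and the `lowpass_radius` parameter are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def frequency_score(view, lowpass_radius=8):
    """Score a candidate view by its relative high-frequency energy.

    `view` is a 2D grayscale rendering of the candidate viewpoint.
    The fraction of spectral energy outside a small low-pass disk is
    used as a crude proxy for the detail (information gain) the view
    could contribute to the reconstruction.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(view))  # DC moved to image center
    magnitude = np.abs(spectrum)
    h, w = view.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    high_freq = (yy - cy) ** 2 + (xx - cx) ** 2 > lowpass_radius ** 2
    return magnitude[high_freq].sum() / magnitude.sum()

def rank_views(candidate_views):
    """Return candidate indices sorted by descending frequency score."""
    scores = [frequency_score(v) for v in candidate_views]
    return sorted(range(len(scores)), key=lambda i: -scores[i])
```

A textured (high-frequency) candidate view would rank above a flat one under this heuristic, mirroring the intuition that views showing unobserved detail are worth capturing first.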
arXiv Detail & Related papers (2024-09-24T21:44:26Z)
- Bootstrap 3D Reconstructed Scenes from 3D Gaussian Splatting
We present a bootstrapping method to enhance the rendering of novel views using trained 3D-GS.
Our results indicate that bootstrapping effectively reduces artifacts, as well as clear enhancements on the evaluation metrics.
arXiv Detail & Related papers (2024-04-29T12:57:05Z)
- GeoGS3D: Single-view 3D Reconstruction via Geometric-aware Diffusion Model and Gaussian Splatting
We introduce GeoGS3D, a framework for reconstructing detailed 3D objects from single-view images.
We propose a novel metric, Gaussian Divergence Significance (GDS), to prune unnecessary operations during optimization.
Experiments demonstrate that GeoGS3D generates images with high consistency across views and reconstructs high-quality 3D objects.
arXiv Detail & Related papers (2024-03-15T12:24:36Z)
- IT3D: Improved Text-to-3D Generation with Explicit View Synthesis
This study presents a novel strategy that leverages explicitly synthesized multi-view images to address these issues.
Our approach uses image-to-image pipelines, powered by latent diffusion models (LDMs), to generate posed, high-quality images.
For the incorporated discriminator, the synthesized multi-view images are considered real data, while the renderings of the optimized 3D models function as fake data.
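The real/fake labeling described above can be sketched as a standard binary cross-entropy discriminator objective. This is a minimal illustration of the labeling scheme only, assuming raw discriminator logits; it is not IT3D's actual loss or training code.

```python
import numpy as np

def bce_with_logits(logits, target, eps=1e-10):
    """Numerically plain binary cross-entropy on raw logits."""
    p = 1.0 / (1.0 + np.exp(-logits))  # sigmoid
    return -np.mean(target * np.log(p + eps) + (1 - target) * np.log(1 - p + eps))

def discriminator_loss(synth_logits, render_logits):
    """Synthesized multi-view images are labeled real (1);
    renderings of the optimized 3D model are labeled fake (0)."""
    real_term = bce_with_logits(synth_logits, np.ones_like(synth_logits))
    fake_term = bce_with_logits(render_logits, np.zeros_like(render_logits))
    return real_term + fake_term
```

The discriminator is rewarded for scoring the synthesized views high and the 3D renderings low; the generator (the 3D model being optimized) is then pushed in the opposite direction.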
arXiv Detail & Related papers (2023-08-22T14:39:17Z)
- CVRecon: Rethinking 3D Geometric Feature Learning For Neural Reconstruction
We propose an end-to-end 3D neural reconstruction framework CVRecon.
We exploit the rich geometric embedding in the cost volumes to facilitate 3D geometric feature learning.
arXiv Detail & Related papers (2023-04-28T05:30:19Z)
- 3 Dimensional Dense Reconstruction: A Review of Algorithms and Dataset
3D dense reconstruction refers to the process of obtaining the complete shape and texture features of 3D objects from 2D planar images.
This work systematically introduces classical methods of 3D dense reconstruction based on geometric and optical models.
It also introduces datasets for deep learning and discusses the performance, advantages, and disadvantages of deep-learning methods on these datasets.
arXiv Detail & Related papers (2023-04-19T01:56:55Z)
- A Review of Deep Learning-Powered Mesh Reconstruction Methods
Deep learning has enabled high-quality 3D shape reconstruction from various sources.
To be used in common 3D applications, reconstructed shapes need to be represented as polygonal meshes.
arXiv Detail & Related papers (2023-03-06T04:14:04Z)
- High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views.
Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
arXiv Detail & Related papers (2022-11-28T18:59:52Z)
- Uncertainty Guided Policy for Active Robotic 3D Reconstruction using Neural Radiance Fields
This paper introduces a ray-based volumetric uncertainty estimator, which computes the entropy of the weight distribution of the color samples along each ray of the object's implicit neural representation.
We show that it is possible to infer the uncertainty of the underlying 3D geometry given a novel view with the proposed estimator.
We present a next-best-view selection policy guided by the ray-based volumetric uncertainty in neural radiance fields-based representations.
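The entropy computation described in this entry follows directly from the standard NeRF volume-rendering weights. The sketch below, a minimal illustration rather than the paper's implementation, treats the normalized per-sample rendering weights along a ray as a probability distribution and returns its entropy: a peaked distribution (a confident surface hit) gives low entropy, a spread-out one gives high uncertainty.

```python
import numpy as np

def ray_entropy(sigmas, deltas, eps=1e-10):
    """Entropy of the sample-weight distribution along one ray.

    `sigmas` are volume densities at the samples along the ray and
    `deltas` the distances between consecutive samples, following the
    usual NeRF quadrature: w_i = T_i * (1 - exp(-sigma_i * delta_i)).
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))   # transmittance T_i
    weights = trans * alphas                                         # rendering weights
    p = weights / (weights.sum() + eps)                              # normalize to a distribution
    return -np.sum(p * np.log(p + eps))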
arXiv Detail & Related papers (2022-09-17T21:28:57Z)
- Black-Box Test-Time Shape REFINEment for Single View 3D Reconstruction
We propose REFINE, a postprocessing mesh refinement step that can be easily integrated into the pipeline of any black-box method in the literature.
At test time, REFINE optimizes a network per mesh instance to encourage consistency between the mesh and the given object view.
arXiv Detail & Related papers (2021-08-23T03:28:47Z)
- Model-based 3D Hand Reconstruction via Self-Supervised Learning
Reconstructing a 3D hand from a single-view RGB image is challenging due to various hand configurations and depth ambiguity.
We propose S2HAND, a self-supervised 3D hand reconstruction network that can jointly estimate pose, shape, texture, and the camera viewpoint.
For the first time, we demonstrate the feasibility of training an accurate 3D hand reconstruction network without relying on manual annotations.
arXiv Detail & Related papers (2021-03-22T10:12:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.