Neural Mesh Fusion: Unsupervised 3D Planar Surface Understanding
- URL: http://arxiv.org/abs/2402.16739v1
- Date: Mon, 26 Feb 2024 17:02:30 GMT
- Title: Neural Mesh Fusion: Unsupervised 3D Planar Surface Understanding
- Authors: Farhad G. Zanjani, Hong Cai, Yinhao Zhu, Leyla Mirvakhabova, Fatih Porikli
- Abstract summary: Neural Mesh Fusion (NMF) is an efficient approach for the joint optimization of a polygon mesh from multi-view image observations.
NMF is significantly more computationally efficient than implicit neural rendering-based scene reconstruction approaches.
- Score: 45.33209870505472
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents Neural Mesh Fusion (NMF), an efficient approach for the joint
optimization of a polygon mesh from multi-view image observations and
unsupervised 3D planar-surface parsing of the scene. In contrast to implicit
neural representations, NMF learns to deform a surface triangle mesh and
to generate an embedding for unsupervised 3D planar segmentation through
gradient-based optimization directly on the surface mesh. The conducted
experiments show that NMF obtains results competitive with
state-of-the-art multi-view planar reconstruction, while not requiring any
ground-truth 3D or planar supervision. Moreover, NMF is significantly more
computationally efficient than implicit neural rendering-based scene
reconstruction approaches.
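The phrase "gradient-based optimization directly on the surface mesh" can be illustrated with a deliberately simplified sketch. Here NMF's photometric rendering loss is replaced by a toy planarity loss, and plain gradient descent moves the mesh vertices; all names and numbers are our own illustration, not the paper's code.

```python
import numpy as np

# Toy sketch of gradient-based optimization directly on mesh vertices.
# NMF's rendering-based loss is replaced by a simple planarity loss
# (squared distance of each vertex to the plane z = 0).
rng = np.random.default_rng(0)

# Vertices that should lie on the plane z = 0, perturbed by noise in z.
verts = rng.uniform(-1.0, 1.0, size=(100, 3))
verts[:, 2] = 0.05 * rng.standard_normal(100)

lr = 0.1
for _ in range(200):
    grad = np.zeros_like(verts)
    grad[:, 2] = 2.0 * verts[:, 2]   # gradient of sum(z^2) w.r.t. z
    verts -= lr * grad               # gradient step on the vertex positions

residual = float(np.abs(verts[:, 2]).max())  # remaining distance to the plane
```

In NMF the loss gradient would come from differentiable rendering against the multi-view images rather than from a fixed target plane; the update rule on the vertex positions is the shared idea.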
Related papers
- MRIo3DS-Net: A Mutually Reinforcing Images to 3D Surface RNN-like framework for model-adaptation indoor 3D reconstruction [6.763241608372683]
This paper proposes an end-to-end, RNN-like framework of mutually reinforcing images and 3D surfaces for model-adaptive indoor 3D reconstruction.
arXiv Detail & Related papers (2024-07-16T06:46:57Z)
- Reconstructing and Simulating Dynamic 3D Objects with Mesh-adsorbed Gaussian Splatting [26.382349137191547]
This paper introduces the Mesh-adsorbed Gaussian Splatting (MaGS) method to resolve the dilemma between high rendering accuracy and realistic deformation.
MaGS constrains 3D Gaussians to hover on the mesh surface, creating a mutually adsorbed mesh-Gaussian 3D representation.
By jointly optimizing the meshes, the 3D Gaussians, and the RDF, MaGS achieves both high rendering accuracy and realistic deformation.
arXiv Detail & Related papers (2024-06-03T17:59:51Z)
- NEAT: Distilling 3D Wireframes from Neural Attraction Fields [52.90572335390092]
This paper studies the problem of distilling structured 3D wireframes, consisting of line segments and junctions, from neural attraction fields.
NEAT jointly optimizes the neural fields and the wireframe from scratch, without relying on cross-view matching heuristics.
arXiv Detail & Related papers (2023-07-14T07:25:47Z)
- Self-supervised Pre-training with Masked Shape Prediction for 3D Scene Understanding [106.0876425365599]
Masked Shape Prediction (MSP) is a new framework to conduct masked signal modeling in 3D scenes.
MSP uses the essential 3D semantic cue, i.e., geometric shape, as the prediction target for masked points.
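A minimal sketch of the masked-shape-prediction idea, using an assumed toy setup rather than the paper's pipeline: mask a subset of points, and use a geometric shape cue, here the best-fit plane normal of the visible points, as the kind of target a network would be trained to predict for the masked region.

```python
import numpy as np

# Illustrative masked signal modeling on points: hide some points and make
# the prediction target a geometric shape cue (a best-fit plane normal).
rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, size=(256, 3))
pts[:, 2] = 0.0                      # toy scene: every point on the z = 0 plane

mask = rng.random(256) < 0.3         # ~30% of points are masked out
visible, masked = pts[~mask], pts[mask]   # masked points are what a model must explain

def shape_target(neighborhood):
    """Shape cue as a prediction target: unit normal of the best-fit plane (SVD)."""
    centered = neighborhood - neighborhood.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]                       # direction of least variance = plane normal
    return n / np.linalg.norm(n)

target = shape_target(visible)       # for this planar toy scene: +/-(0, 0, 1)
```

In MSP the target is computed per masked point from its local geometry and predicted by a learned encoder; the SVD plane fit here stands in for that geometric shape cue.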
arXiv Detail & Related papers (2023-05-08T20:09:19Z)
- Learning Neural Radiance Fields from Multi-View Geometry [1.1011268090482573]
We present a framework, called MVG-NeRF, that combines Multi-View Geometry algorithms and Neural Radiance Fields (NeRF) for image-based 3D reconstruction.
NeRF has revolutionized the field of implicit 3D representations, mainly due to a differentiable rendering formulation that enables high-quality and geometry-aware novel view synthesis.
arXiv Detail & Related papers (2022-10-24T08:53:35Z)
- NeurMiPs: Neural Mixture of Planar Experts for View Synthesis [49.25559264261876]
NeurMiPs is a novel planar-based scene representation for modeling geometry and appearance.
We render novel views by calculating ray-plane intersections and compositing the output colors and densities at the intersected points into the image.
Experiments demonstrate superior performance and speed of our proposed method, compared to other 3D representations in novel view synthesis.
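The rendering step described above, ray-plane intersection followed by front-to-back compositing, can be sketched as follows; the plane placement, colors, and opacities are invented for illustration.

```python
import numpy as np

# Sketch of planar-expert rendering: intersect a ray with each plane,
# sort hits by depth, then alpha-composite front to back.
def ray_plane_t(origin, direction, p0, normal):
    """Ray parameter t where origin + t * direction meets plane (p0, normal)."""
    return ((p0 - origin) @ normal) / (direction @ normal)

origin = np.zeros(3)
direction = np.array([0.0, 0.0, 1.0])

# Two camera-facing planes at z = 1 and z = 2: (point, normal, rgb, alpha).
planes = [
    (np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]),
     np.array([1.0, 0.0, 0.0]), 0.6),
    (np.array([0.0, 0.0, 2.0]), np.array([0.0, 0.0, 1.0]),
     np.array([0.0, 1.0, 0.0]), 1.0),
]

hits = sorted(planes, key=lambda p: ray_plane_t(origin, direction, p[0], p[1]))
color, transmittance = np.zeros(3), 1.0
for p0, n, rgb, alpha in hits:      # front-to-back alpha compositing
    color += transmittance * alpha * rgb
    transmittance *= (1.0 - alpha)
```

The semi-transparent near plane contributes 60% of its red, and the opaque far plane fills the remaining transmittance with green, matching the "composite colors and densities at intersected points" description.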
arXiv Detail & Related papers (2022-04-28T17:59:41Z)
- Geometry-Contrastive Transformer for Generalized 3D Pose Transfer [95.56457218144983]
The intuition of this work is to perceive the geometric inconsistency between the given meshes using the powerful self-attention mechanism.
We propose a novel geometry-contrastive Transformer with an efficient, 3D-structured ability to perceive global geometric inconsistencies.
We present a latent isometric regularization module together with a novel semi-synthesized dataset for the cross-dataset 3D pose transfer task.
arXiv Detail & Related papers (2021-12-14T13:14:24Z)
- Implicit Neural Deformation for Multi-View Face Reconstruction [43.88676778013593]
We present a new method for 3D face reconstruction from multi-view RGB images.
Unlike previous methods which are built upon 3D morphable models, our method leverages an implicit representation to encode rich geometric features.
Our experimental results on several benchmark datasets demonstrate that our approach outperforms alternative baselines and state-of-the-art methods in face reconstruction.
arXiv Detail & Related papers (2021-12-05T07:02:53Z)
- 3DVNet: Multi-View Depth Prediction and Volumetric Refinement [68.68537312256144]
3DVNet is a novel multi-view stereo (MVS) depth-prediction method.
Our key idea is the use of a 3D scene-modeling network that iteratively updates a set of coarse depth predictions.
We show that our method exceeds state-of-the-art accuracy in both depth prediction and 3D reconstruction metrics.
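The iterative-update idea can be caricatured with a toy loop that, in place of the learned 3D scene-modeling network, pulls each view's coarse depth prediction toward a cross-view consensus; the numbers and the consensus rule are purely illustrative assumptions.

```python
import numpy as np

# Toy sketch of iteratively refining a set of coarse depth predictions.
# The learned scene-modeling network is replaced by a simple consensus step.
rng = np.random.default_rng(2)
true_depth = 2.0
views = true_depth + 0.3 * rng.standard_normal(5)   # coarse depth per view

depths = views.copy()
for _ in range(50):
    consensus = depths.mean()              # shared scene estimate
    depths += 0.2 * (consensus - depths)   # nudge each view toward it

spread = float(depths.max() - depths.min())  # cross-view disagreement left
```

In 3DVNet the update is produced by a 3D network reasoning over the fused scene volume rather than a plain average, but the structure, coarse predictions repeatedly corrected by a scene-level model, is the same.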
arXiv Detail & Related papers (2021-12-01T00:52:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.