Better Patch Stitching for Parametric Surface Reconstruction
- URL: http://arxiv.org/abs/2010.07021v1
- Date: Wed, 14 Oct 2020 12:37:57 GMT
- Title: Better Patch Stitching for Parametric Surface Reconstruction
- Authors: Zhantao Deng, Jan Bednařík, Mathieu Salzmann, Pascal Fua
- Abstract summary: We introduce an approach that explicitly encourages global consistency of the local mappings.
The first term exploits the surface normals and requires that they remain locally consistent when estimated within and across the individual mappings.
The second term further encourages better spatial configuration of the mappings by minimizing a novel stitching error.
- Score: 100.55842629739574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, parametric mappings have emerged as highly effective surface
representations, yielding low reconstruction error. In particular, the latest
works represent the target shape as an atlas of multiple mappings, which can
closely encode object parts. Atlas representations, however, suffer from one
major drawback: The individual mappings are not guaranteed to be consistent,
which results in holes in the reconstructed shape or in jagged surface areas.
We introduce an approach that explicitly encourages global consistency of the
local mappings. To this end, we introduce two novel loss terms. The first term
exploits the surface normals and requires that they remain locally consistent
when estimated within and across the individual mappings. The second term
further encourages better spatial configuration of the mappings by minimizing
a novel stitching error. We show on standard benchmarks that the use of the
normal consistency requirement outperforms the baselines quantitatively, while
enforcing better stitching leads to much better visual quality of the
reconstructed objects compared to the state of the art.
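The two loss terms lend themselves to a compact implementation. The PyTorch-style sketch below illustrates one plausible reading of the abstract: analytic per-patch normals taken from the Jacobian of each mapping, a normal-agreement penalty over k-nearest neighbours gathered within and across patches, and a cross-patch proximity term standing in for the paper's stitching error. The function names, tensor layouts, and the k-NN formulation are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def patch_normals(mapping, uv):
    """Map 2D samples uv (M, 2) through one learned patch `mapping` to 3D and
    estimate normals from the cross product of the Jacobian columns.
    The analytic-normal derivation is an assumption for illustration."""
    uv = uv.detach().requires_grad_(True)
    xyz = mapping(uv)                                   # (M, 3)
    cols = [torch.autograd.grad(xyz[:, k].sum(), uv, create_graph=True)[0]
            for k in range(3)]                          # each (M, 2)
    J = torch.stack(cols, dim=1)                        # (M, 3, 2)
    n = torch.cross(J[..., 0], J[..., 1], dim=-1)       # (M, 3)
    return xyz, F.normalize(n, dim=-1)

def consistency_losses(points, normals, k=8):
    """Hypothetical versions of the two terms. `points`/`normals` have shape
    (P, M, 3) for P patches with M samples each. Term 1: normals of nearby
    points, gathered within and across patches, should agree. Term 2: nearby
    points belonging to different patches should coincide (a simple proxy
    for the stitching error)."""
    P, M, _ = points.shape
    pts = points.reshape(P * M, 3)
    nrm = normals.reshape(P * M, 3)
    knn = torch.cdist(pts, pts).topk(k + 1, largest=False).indices[:, 1:]
    nn_pts, nn_nrm = pts[knn], nrm[knn]                 # (PM, k, 3)
    # Term 1: penalize disagreement of neighbouring normals (sign-invariant).
    normal_loss = (1.0 - (nrm.unsqueeze(1) * nn_nrm).sum(-1).abs()).mean()
    # Term 2: penalize gaps between neighbouring points from different patches.
    pid = torch.arange(P, device=pts.device).repeat_interleave(M)
    cross = pid[knn] != pid.unsqueeze(1)                # (PM, k) cross-patch mask
    gaps = (pts.unsqueeze(1) - nn_pts).norm(dim=-1)     # (PM, k)
    stitch_loss = (gaps * cross).sum() / cross.sum().clamp(min=1)
    return normal_loss, stitch_loss
```

In a training loop these two terms would be added to the main reconstruction loss (e.g., Chamfer distance) with weighting coefficients; the paper's exact weights and neighbourhood definition may differ.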
Related papers
- RaNeuS: Ray-adaptive Neural Surface Reconstruction [87.20343320266215]
We leverage a differentiable radiance field, e.g., NeRF, to reconstruct detailed 3D surfaces in addition to producing novel view renderings.
Whereas existing methods formulate and optimize the projection from the SDF to the radiance field with a globally constant Eikonal regularization, we improve on this with a ray-wise weighting factor.
Our proposed RaNeuS is extensively evaluated on both synthetic and real datasets.
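For context, the standard Eikonal regularizer applies one global coefficient to every sample, whereas the summary above replaces it with a ray-wise factor. A minimal sketch under assumed names and shapes follows; how the per-ray weight is computed (e.g., from rendering error) is not specified here and is not necessarily RaNeuS' exact rule.

```python
import torch

def ray_adaptive_eikonal(sdf_grad, ray_weight):
    """Eikonal term with a per-ray weighting factor. `sdf_grad` holds SDF
    gradients at sampled points, shape (R, S, 3) for R rays with S samples;
    `ray_weight` is a per-ray factor of shape (R,). Names, shapes, and the
    weight's provenance are assumptions for illustration."""
    eikonal = (sdf_grad.norm(dim=-1) - 1.0) ** 2       # (R, S) unit-gradient penalty
    return (ray_weight.unsqueeze(-1) * eikonal).mean()
```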
arXiv Detail & Related papers (2024-06-14T07:54:25Z) - NeuSurf: On-Surface Priors for Neural Surface Reconstruction from Sparse
Input Views [41.03837477483364]
We propose a novel sparse view reconstruction framework that leverages on-surface priors to achieve highly faithful surface reconstruction.
Specifically, we design several constraints on global geometry alignment and local geometry refinement for jointly optimizing coarse shapes and fine details.
The experimental results with DTU and BlendedMVS datasets in two prevalent sparse settings demonstrate significant improvements over the state-of-the-art methods.
arXiv Detail & Related papers (2023-12-21T16:04:45Z) - PRS: Sharp Feature Priors for Resolution-Free Surface Remeshing [30.28380889862059]
We present a data-driven approach for automatic feature detection and remeshing.
Our algorithm improves over the state of the art by 26% in normals F-score and 42% in perceptual $\text{RMSE}_\text{v}$.
arXiv Detail & Related papers (2023-11-30T12:15:45Z) - Indoor Scene Reconstruction with Fine-Grained Details Using Hybrid Representation and Normal Prior Enhancement [50.56517624931987]
The reconstruction of indoor scenes from multi-view RGB images is challenging due to the coexistence of flat and texture-less regions.
Recent methods leverage neural radiance fields aided by predicted surface normal priors to recover the scene geometry.
This work aims to reconstruct high-fidelity surfaces with fine-grained details by addressing the above limitations.
arXiv Detail & Related papers (2023-09-14T12:05:29Z) - Critical Regularizations for Neural Surface Reconstruction in the Wild [26.460011241432092]
We present RegSDF, which shows that proper point cloud supervision and geometry regularization are sufficient to produce high-quality and robust reconstruction results.
RegSDF is able to reconstruct surfaces with fine details even for open scenes with complex topologies and unstructured camera trajectories.
arXiv Detail & Related papers (2022-06-07T08:11:22Z) - Improving neural implicit surfaces geometry with patch warping [12.106051690920266]
We argue that this comes from the difficulty of learning and rendering high-frequency textures with neural networks.
We propose adding a direct photo-consistency term across the different views to the standard neural rendering optimization.
We evaluate our approach, dubbed NeuralWarp, on the standard DTU and EPFL benchmarks and show that it outperforms state-of-the-art unsupervised implicit surface reconstructions by over 20% on both datasets.
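A photo-consistency term of this kind compares colours observed from different views at the reconstructed surface. The sketch below is a per-point simplification (NeuralWarp warps whole patches); the camera conventions, function names, and per-point formulation are assumptions rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def photo_consistency(points, K_ref, T_ref, K_src, T_src, img_ref, img_src):
    """Project reconstructed surface points (N, 3) into a reference and a
    source view and compare the sampled colours. K_* are 3x3 intrinsics,
    T_* are 3x4 world-to-camera extrinsics, img_* are (3, H, W) tensors."""
    def project(pts, K, T):
        cam = (T[:, :3] @ pts.T + T[:, 3:]).T           # world -> camera, (N, 3)
        pix = (K @ cam.T).T                             # camera -> homogeneous pixels
        return pix[:, :2] / pix[:, 2:].clamp(min=1e-6)  # (N, 2) pixel coords

    def sample(img, pix):
        H, W = img.shape[-2:]
        # normalise pixel coordinates to [-1, 1] for grid_sample
        grid = torch.stack([2 * pix[:, 0] / (W - 1) - 1,
                            2 * pix[:, 1] / (H - 1) - 1], dim=-1)
        out = F.grid_sample(img[None], grid[None, :, None, :], align_corners=True)
        return out[0, :, :, 0].T                        # (N, 3) sampled colours

    c_ref = sample(img_ref, project(points, K_ref, T_ref))
    c_src = sample(img_src, project(points, K_src, T_src))
    return (c_ref - c_src).abs().mean()                 # L1 photo-consistency
```

In practice one would also mask points that are occluded or that project outside the source image; those details are omitted here.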
arXiv Detail & Related papers (2021-12-17T17:43:50Z) - CAMERAS: Enhanced Resolution And Sanity preserving Class Activation
Mapping for image saliency [61.40511574314069]
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
arXiv Detail & Related papers (2021-06-20T08:20:56Z) - Sign-Agnostic CONet: Learning Implicit Surface Reconstructions by
Sign-Agnostic Optimization of Convolutional Occupancy Networks [39.65056638604885]
We learn implicit surface reconstruction by sign-agnostic optimization of convolutional occupancy networks.
We show that this goal can be achieved by a simple yet effective design.
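A common sign-agnostic objective fits the magnitude of the implicit function to an unsigned distance computed from the raw point cloud, letting the sign emerge during optimization. The generic sketch below illustrates that idea only; it is not necessarily the exact objective, nor does it cover the coupling with convolutional occupancy networks used in the paper.

```python
import torch

def sign_agnostic_loss(implicit_fn, queries, unsigned_dist):
    """Fit |f(x)| at query points (N, 3) to an unsigned distance (N,) so that
    no inside/outside labels are needed; the sign of f is left free to emerge
    during optimization. Names and shapes are assumptions for illustration."""
    pred = implicit_fn(queries).squeeze(-1)             # (N,) implicit values
    return (pred.abs() - unsigned_dist).abs().mean()
```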
arXiv Detail & Related papers (2021-05-08T03:35:32Z) - 3D Dense Geometry-Guided Facial Expression Synthesis by Adversarial
Learning [54.24887282693925]
We propose a novel framework to exploit 3D dense (depth and surface normals) information for expression manipulation.
We use an off-the-shelf state-of-the-art 3D reconstruction model to estimate the depth and create a large-scale RGB-Depth dataset.
Our experiments demonstrate that the proposed method outperforms the competitive baseline and existing arts by a large margin.
arXiv Detail & Related papers (2020-09-30T17:12:35Z) - Point2Mesh: A Self-Prior for Deformable Meshes [83.31236364265403]
We introduce Point2Mesh, a technique for reconstructing a surface mesh from an input point cloud.
The self-prior encapsulates reoccurring geometric repetitions from a single shape within the weights of a deep neural network.
We show that Point2Mesh converges to a desirable solution, unlike a prescribed smoothness prior, which often becomes trapped in undesirable local minima.
arXiv Detail & Related papers (2020-05-22T10:01:04Z)