Enhancing Surface Neural Implicits with Curvature-Guided Sampling and
Uncertainty-Augmented Representations
- URL: http://arxiv.org/abs/2306.02099v3
- Date: Tue, 12 Dec 2023 11:41:39 GMT
- Title: Enhancing Surface Neural Implicits with Curvature-Guided Sampling and
Uncertainty-Augmented Representations
- Authors: Lu Sang and Abhishek Saroha and Maolin Gao and Daniel Cremers
- Abstract summary: We introduce an uncertainty-augmented surface implicit representation together with a sampling technique that accounts for the geometric characteristics of the input.
We demonstrate that our method leads to state-of-the-art reconstructions on both synthetic and real-world data.
- Score: 40.885487788615855
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Neural implicits have become popular for representing surfaces because they
offer an adaptive resolution and support arbitrary topologies. While previous
works rely on ground-truth point clouds, they often ignore the effect of input
quality and sampling methods during the reconstruction process. In this paper,
we introduce an uncertainty-augmented surface implicit representation together
with a sampling technique that accounts for the geometric characteristics of
the input. To this end, we introduce a strategy that
efficiently computes differentiable geometric features, namely, mean
curvatures, to augment the sampling phase during the training period. The
uncertainty augmentation offers insights into the occupancy and reliability of
the output signed distance value, thereby expanding representation capabilities
into open surfaces. Finally, we demonstrate that our method leads to
state-of-the-art reconstructions on both synthetic and real-world data.
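The curvature-guided sampling idea can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the brute-force k-NN, and the use of PCA "surface variation" as a cheap stand-in for mean curvature are all illustrative assumptions. The sketch estimates a per-point curvature proxy from the local neighbourhood covariance and then draws training samples with probability proportional to it, so curved regions are sampled more densely than flat ones.

```python
import numpy as np

def curvature_weighted_sample(points, k=16, n_samples=32, rng=None):
    """Sample points with probability proportional to a local curvature proxy.

    For each point, the 'surface variation' ev_0 / (ev_0 + ev_1 + ev_2) of its
    k-nearest-neighbour covariance is used as a cheap stand-in for mean
    curvature: it is high in bumpy regions and near zero on flat patches.
    """
    rng = np.random.default_rng(rng)
    n = len(points)
    # Pairwise squared distances -> k nearest neighbours (brute force;
    # fine for small clouds, a KD-tree would be used in practice).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    knn = np.argsort(d2, axis=1)[:, :k]
    curv = np.empty(n)
    for i in range(n):
        cov = np.cov(points[knn[i]].T)          # 3x3 neighbourhood covariance
        ev = np.sort(np.linalg.eigvalsh(cov))   # ascending eigenvalues
        curv[i] = ev[0] / max(ev.sum(), 1e-12)  # surface variation in [0, 1/3]
    w = curv + 1e-6                # keep flat regions reachable
    p = w / w.sum()
    idx = rng.choice(n, size=n_samples, replace=False, p=p)
    return points[idx], curv
```

Note that this proxy is not differentiable through the sampling step; the paper's contribution is precisely an efficient, differentiable mean-curvature computation used during training, which the sketch above only approximates.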
Related papers
- FouriScale: A Frequency Perspective on Training-Free High-Resolution Image Synthesis [48.9652334528436]
We introduce an innovative, training-free approach FouriScale from the perspective of frequency domain analysis.
We replace the original convolutional layers in pre-trained diffusion models by incorporating a dilation technique along with a low-pass operation.
Our method balances the structural integrity and fidelity of generated images, enabling arbitrary-size, high-resolution, and high-quality generation.
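The two ingredients named in the summary, kernel dilation and a low-pass operation, can be sketched in one dimension. This is only a loose illustration of the idea, not FouriScale's actual modification of pre-trained diffusion UNets; the function names and the rfft-based filter are assumptions for demonstration.

```python
import numpy as np

def dilate_kernel(kernel, rate):
    """Insert (rate - 1) zeros between kernel taps (1-D illustration)."""
    out = np.zeros((len(kernel) - 1) * rate + 1)
    out[::rate] = kernel
    return out

def lowpass(signal, keep_frac):
    """Zero the highest-frequency rFFT bins, keeping a fraction of the spectrum."""
    spec = np.fft.rfft(signal)
    cut = int(len(spec) * keep_frac)
    spec[cut:] = 0.0
    return np.fft.irfft(spec, n=len(signal))

def dilated_lowpass_conv(signal, kernel, rate=2, keep_frac=0.5):
    # Low-pass the input first, then convolve with the dilated kernel,
    # so the enlarged receptive field sees no aliased high frequencies.
    return np.convolve(lowpass(signal, keep_frac),
                       dilate_kernel(kernel, rate), mode="same")
```

The intuition: dilation widens the receptive field to match the larger resolution, while the low-pass step removes high-frequency content that the dilated kernel would otherwise alias.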
arXiv Detail & Related papers (2024-03-19T17:59:33Z)
- DNS SLAM: Dense Neural Semantic-Informed SLAM [92.39687553022605]
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details.
Our experimental results achieve state-of-the-art performance on both synthetic data and real-world data tracking.
arXiv Detail & Related papers (2023-11-30T21:34:44Z)
- UMat: Uncertainty-Aware Single Image High Resolution Material Capture [2.416160525187799]
We propose a learning-based method to recover normals, specularity, and roughness from a single diffuse image of a material.
Our method is the first one to deal with the problem of modeling uncertainty in material digitization.
arXiv Detail & Related papers (2023-05-25T17:59:04Z)
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- Leveraging Equivariant Features for Absolute Pose Regression [9.30597356471664]
We show that a translation and rotation equivariant Convolutional Neural Network directly induces representations of camera motions into the feature space.
We then show that this geometric property allows for implicitly augmenting the training data under a whole group of image plane-preserving transformations.
arXiv Detail & Related papers (2022-04-05T12:44:20Z)
- Neural BRDF Representation and Importance Sampling [79.84316447473873]
We present a compact neural network-based representation of reflectance BRDF data.
We encode BRDFs as lightweight networks, and propose a training scheme with adaptive angular sampling.
We evaluate encoding results on isotropic and anisotropic BRDFs from multiple real-world datasets.
arXiv Detail & Related papers (2021-02-11T12:00:24Z)
- Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z)
- Monocular Real-Time Volumetric Performance Capture [28.481131687883256]
We present the first approach to volumetric performance capture and novel-view rendering at real-time speed from monocular video.
Our system reconstructs a fully textured 3D human from each frame by leveraging a Pixel-Aligned Implicit Function (PIFu).
We also introduce an Online Hard Example Mining (OHEM) technique that effectively suppresses failure modes due to the rare occurrence of challenging examples.
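Online Hard Example Mining in this setting can be sketched in a few lines: compute per-example losses for the batch, then backpropagate only through the hardest fraction, so rare challenging examples dominate the gradient. The function name and keep ratio below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def ohem_loss(per_example_losses, keep_ratio=0.25):
    """Average only the hardest (largest-loss) examples in the batch.

    Easy examples contribute nothing, so rare failure modes receive a
    proportionally larger share of the training signal.
    """
    losses = np.asarray(per_example_losses, dtype=float)
    k = max(1, int(len(losses) * keep_ratio))  # number of examples to keep
    hardest = np.sort(losses)[-k:]             # top-k largest losses
    return hardest.mean()
```

In a real training loop the same selection is applied to the differentiable loss tensor (e.g. in PyTorch via `topk`) so that gradients flow only through the selected examples.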
arXiv Detail & Related papers (2020-07-28T04:45:13Z)
- Deep Manifold Prior [37.725563645899584]
We present a prior for manifold structured data, such as surfaces of 3D shapes, where deep neural networks are adopted to reconstruct a target shape using gradient descent.
We show that surfaces generated this way are smooth, with limiting behavior characterized by Gaussian processes, and we mathematically derive such properties for fully-connected as well as convolutional networks.
arXiv Detail & Related papers (2020-04-08T20:47:56Z)
- Deep Non-Line-of-Sight Reconstruction [18.38481917675749]
In this paper, we employ convolutional feed-forward networks for solving the reconstruction problem efficiently.
We devise a tailored autoencoder architecture, trained end-to-end, that maps transient images directly to a depth map representation.
We demonstrate that our feed-forward network, even though it is trained solely on synthetic data, generalizes to measured data from SPAD sensors and is able to obtain results that are competitive with model-based reconstruction methods.
arXiv Detail & Related papers (2020-01-24T16:05:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.