DeepSurfels: Learning Online Appearance Fusion
- URL: http://arxiv.org/abs/2012.14240v1
- Date: Mon, 28 Dec 2020 14:13:33 GMT
- Title: DeepSurfels: Learning Online Appearance Fusion
- Authors: Marko Mihajlovic, Silvan Weder, Marc Pollefeys, Martin R. Oswald
- Abstract summary: DeepSurfels is a novel hybrid scene representation for geometry and appearance information.
In contrast to established representations, DeepSurfels better represents high-frequency textures.
We present an end-to-end trainable online appearance fusion pipeline.
- Score: 77.59420353185355
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present DeepSurfels, a novel hybrid scene representation for geometry and
appearance information. DeepSurfels combines explicit and neural building
blocks to jointly encode geometry and appearance information. In contrast to
established representations, DeepSurfels better represents high-frequency
textures, is well-suited for online updates of appearance information, and can
be easily combined with machine learning methods. We further present an
end-to-end trainable online appearance fusion pipeline that fuses information
provided by RGB images into the proposed scene representation and is trained
using self-supervision imposed by the reprojection error with respect to the
input images. Our method compares favorably to classical texture mapping
approaches as well as recently proposed learning-based techniques. Moreover, we
demonstrate lower runtime, improved generalization capabilities, and better
scalability to larger scenes compared to existing methods.
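To make the self-supervision idea concrete, below is a minimal PyTorch sketch of a reprojection-style photometric loss for appearance fusion: per-surfel features are updated from a new RGB observation, re-rendered, and compared against the very image that was fused in. All names here (AppearanceFusion, fuse, render, the feature dimension) are illustrative assumptions for this sketch, not the authors' actual architecture or API; the real pipeline renders the fused surfel appearance back into the input views.

```python
# Hypothetical sketch of reprojection-based self-supervision for
# online appearance fusion (names are illustrative, not the paper's API).
import torch
import torch.nn as nn

class AppearanceFusion(nn.Module):
    """Toy fusion network: merges a new per-pixel RGB observation into
    per-surfel feature vectors (stand-in for a learned fusion module)."""
    def __init__(self, feat_dim: int = 8):
        super().__init__()
        self.update = nn.Sequential(
            nn.Linear(feat_dim + 3, 32), nn.ReLU(), nn.Linear(32, feat_dim))
        self.decode = nn.Linear(feat_dim, 3)  # feature -> RGB

    def fuse(self, surfel_feats, observed_rgb):
        # surfel_feats: (N, F) features of surfels hit by the current frame
        # observed_rgb: (N, 3) colors sampled from the input image
        return surfel_feats + self.update(
            torch.cat([surfel_feats, observed_rgb], dim=-1))

    def render(self, surfel_feats):
        return torch.sigmoid(self.decode(surfel_feats))  # (N, 3) predicted RGB

def reprojection_loss(model, surfel_feats, observed_rgb):
    """Self-supervision: re-render the fused appearance and penalize the
    photometric (L1) discrepancy w.r.t. the input image."""
    fused = model.fuse(surfel_feats, observed_rgb)
    rendered = model.render(fused)
    return (rendered - observed_rgb).abs().mean(), fused

# usage
model = AppearanceFusion()
feats = torch.zeros(1024, 8)   # surfels visible in the current frame
rgb = torch.rand(1024, 3)      # colors sampled from the input RGB image
loss, feats = reprojection_loss(model, feats, rgb)
loss.backward()
```

Because the loss requires only the input images themselves, no ground-truth textured models are needed for training, which is what makes the pipeline end-to-end trainable in an online setting.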
Related papers
- Fine-Grained Multi-View Hand Reconstruction Using Inverse Rendering [11.228453237603834]
We present a novel fine-grained multi-view hand mesh reconstruction method that leverages inverse rendering to restore hand poses and intricate details.
We also introduce a novel Hand Albedo and Mesh (HAM) optimization module to refine both the hand mesh and textures.
Our proposed approach outperforms the state-of-the-art methods on both reconstruction accuracy and rendering quality.
arXiv Detail & Related papers (2024-07-08T07:28:24Z)
- HR Human: Modeling Human Avatars with Triangular Mesh and High-Resolution Textures from Videos [52.23323966700072]
We present a framework for acquiring human avatars that are attached with high-resolution physically-based material textures and mesh from monocular video.
Our method introduces a novel information fusion strategy to combine the information from the monocular video and synthesize virtual multi-view images.
Experiments show that our approach outperforms previous representations in terms of fidelity, and the explicit triangular-mesh output supports deployment in common graphics pipelines.
arXiv Detail & Related papers (2024-05-18T11:49:09Z)
- DNS SLAM: Dense Neural Semantic-Informed SLAM [92.39687553022605]
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details.
Our method achieves state-of-the-art tracking performance on both synthetic and real-world data.
arXiv Detail & Related papers (2023-11-30T21:34:44Z)
- Indoor Scene Reconstruction with Fine-Grained Details Using Hybrid Representation and Normal Prior Enhancement [50.56517624931987]
The reconstruction of indoor scenes from multi-view RGB images is challenging due to the coexistence of flat and texture-less regions.
Recent methods leverage neural radiance fields aided by predicted surface normal priors to recover the scene geometry.
This work aims to reconstruct high-fidelity surfaces with fine-grained details by addressing the limitations of these approaches.
arXiv Detail & Related papers (2023-09-14T12:05:29Z)
- HelixSurf: A Robust and Efficient Neural Implicit Surface Learning of Indoor Scenes with Iterative Intertwined Regularization [41.2417324078429]
We propose a method termed Helix-shaped neural implicit Surface learning or HelixSurf.
HelixSurf uses the intermediate prediction from one strategy as the guidance to regularize the learning of the other one.
Experiments on surface reconstruction of indoor scenes show that our method compares favorably with existing methods.
arXiv Detail & Related papers (2023-02-28T06:20:07Z)
- Efficient Textured Mesh Recovery from Multiple Views with Differentiable Rendering [8.264851594332677]
We propose an efficient coarse-to-fine approach to recover the textured mesh from multi-view images.
We optimize the shape geometry by minimizing the difference between the depth rendered from the mesh and the depth predicted by a learning-based multi-view stereo algorithm.
In contrast to the implicit neural representation on shape and color, we introduce a physically based inverse rendering scheme to jointly estimate the lighting and reflectance of the objects.
arXiv Detail & Related papers (2022-05-25T03:33:55Z)
- Pan-sharpening via High-pass Modification Convolutional Neural Network [39.295436779920465]
We propose a novel pan-sharpening convolutional neural network based on a high-pass modification block.
The proposed block is designed to learn high-pass information, thereby enhancing the spatial detail in each band of the multi-spectral images (a minimal, hypothetical sketch of such a block appears after this list).
Experiments demonstrate the superior performance of the proposed method compared to the state-of-the-art pan-sharpening methods.
arXiv Detail & Related papers (2021-05-24T23:39:04Z)
- Neural RGB-D Surface Reconstruction [15.438678277705424]
Methods that learn a neural radiance field have shown impressive image synthesis results, but the underlying geometry representation is only a coarse approximation of the real geometry.
We demonstrate how depth measurements can be incorporated into the radiance field formulation to produce more detailed and complete reconstruction results.
arXiv Detail & Related papers (2021-04-09T18:00:01Z)
- NeuralFusion: Online Depth Fusion in Latent Space [77.59420353185355]
We present a novel online depth map fusion approach that learns depth map aggregation in a latent feature space.
Our approach is real-time capable, handles high noise levels, and is particularly able to deal with gross outliers common in photometric stereo-based depth maps.
arXiv Detail & Related papers (2020-11-30T13:50:59Z)
- RoutedFusion: Learning Real-time Depth Map Fusion [73.0378509030908]
We present a novel real-time capable machine learning-based method for depth map fusion.
We propose a neural network that predicts non-linear updates to better account for typical fusion errors.
Our network is composed of a 2D depth routing network and a 3D depth fusion network which efficiently handle sensor-specific noise and outliers.
arXiv Detail & Related papers (2020-01-13T16:46:41Z)
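As referenced in the pan-sharpening entry above, a high-pass modification block can be sketched in a few lines. This is a hedged reconstruction under stated assumptions: the class name HighPassBlock, the fixed box-blur used to isolate high frequencies, and the layer sizes are illustrative choices, not the paper's actual design.

```python
# Hypothetical sketch of a high-pass modification block for pan-sharpening
# (illustrative reconstruction, not the paper's actual architecture).
import torch
import torch.nn as nn

class HighPassBlock(nn.Module):
    """Extracts high-frequency content from the panchromatic band and
    injects it into each band of the upsampled multi-spectral image."""
    def __init__(self, ms_bands: int = 4):
        super().__init__()
        # fixed low-pass (box blur); high-pass = input - low-pass
        self.blur = nn.AvgPool2d(kernel_size=5, stride=1, padding=2)
        # learned mapping from the PAN high-pass residual to per-band detail
        self.inject = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, ms_bands, 3, padding=1))

    def forward(self, pan, ms_up):
        # pan:   (B, 1, H, W) panchromatic image
        # ms_up: (B, C, H, W) multi-spectral image upsampled to PAN resolution
        high_pass = pan - self.blur(pan)        # high-frequency detail
        return ms_up + self.inject(high_pass)   # detail-enhanced bands

# usage
block = HighPassBlock(ms_bands=4)
pan = torch.rand(1, 1, 128, 128)
ms_up = torch.rand(1, 4, 128, 128)
sharpened = block(pan, ms_up)   # (1, 4, 128, 128)
```

Formulating the block as a residual on the upsampled multi-spectral input means the network only has to learn the missing high-frequency detail, which matches the high-pass role described in the abstract.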