Learning Neural Implicit through Volume Rendering with Attentive Depth
Fusion Priors
- URL: http://arxiv.org/abs/2310.11598v2
- Date: Mon, 8 Jan 2024 03:14:56 GMT
- Title: Learning Neural Implicit through Volume Rendering with Attentive Depth
Fusion Priors
- Authors: Pengchong Hu, Zhizhong Han
- Abstract summary: We learn neural implicit representations from multi-view RGBD images through volume rendering with an attentive depth fusion prior.
Our attention mechanism works with either a one-time fused TSDF that represents a whole scene or an incrementally fused TSDF that represents a partial scene.
Our evaluations on widely used benchmarks including synthetic and real-world scans show our superiority over the latest neural implicit methods.
- Score: 32.63878457242185
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning neural implicit representations has achieved remarkable performance
in 3D reconstruction from multi-view images. Current methods use volume
rendering to render implicit representations into either RGB or depth images
that are supervised by multi-view ground truth. However, rendering one view at
a time suffers from incomplete depth at holes and from unawareness of occluded
structures in the depth supervision, which severely affects the accuracy of
geometry inference via volume rendering. To resolve this issue, we propose to
learn neural implicit representations from multi-view RGBD images through
volume rendering with an attentive depth fusion prior. Our prior allows neural
networks to perceive coarse 3D structures from the Truncated Signed Distance
Function (TSDF) fused from all depth images available for rendering. The TSDF
provides access to depth that is missing at holes in a single depth image and
to occluded parts that are invisible from the current view. By introducing a novel
attention mechanism, we allow neural networks to directly use the depth fusion
prior with the inferred occupancy as the learned implicit function. Our
attention mechanism works with either a one-time fused TSDF that represents a
whole scene or an incrementally fused TSDF that represents a partial scene in
the context of Simultaneous Localization and Mapping (SLAM). Our evaluations on
widely used benchmarks including synthetic and real-world scans show our
superiority over the latest neural implicit methods. Project page:
https://machineperceptionlab.github.io/Attentive_DF_Prior/
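The abstract above describes the mechanism in enough detail to sketch it: sample points along a camera ray, query both the fused TSDF and the implicit network at each sample, let an attention-style module decide how much to trust the fused prior versus the network's prediction, and composite the blended values into a rendered depth for supervision. The Python sketch below is a simplified illustration under assumptions of our own (a dense TSDF grid with nearest-neighbor lookup, a crude TSDF-to-occupancy conversion, and placeholder modules named ImplicitNet and AttentiveBlend); it is not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of volume rendering with a fused-TSDF
# prior. The TSDF lookup, the TSDF-to-occupancy mapping, and all sizes are our
# own simplifications.
import torch
import torch.nn as nn

class ImplicitNet(nn.Module):
    """Toy implicit function: 3D point -> (feature, occupancy in [0, 1])."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                                  nn.Linear(128, feat_dim), nn.ReLU())
        self.occ_head = nn.Sequential(nn.Linear(feat_dim, 1), nn.Sigmoid())

    def forward(self, pts):
        feat = self.body(pts)
        return feat, self.occ_head(feat)

class AttentiveBlend(nn.Module):
    """Predicts, per sample, how much to trust the fused-TSDF prior."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim + 2, 64), nn.ReLU(),
                                 nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, prior_occ, pred_occ, feat):
        w = self.net(torch.cat([feat, prior_occ, pred_occ], dim=-1))  # (S, 1)
        return w * prior_occ + (1.0 - w) * pred_occ

def query_tsdf(tsdf, pts, bbox_min, voxel):
    """Nearest-neighbor lookup of a dense (D, H, W) TSDF volume at points (S, 3)."""
    idx = ((pts - bbox_min) / voxel).round().long()
    size = torch.tensor(tsdf.shape) - 1
    idx = torch.minimum(idx.clamp(min=0), size)
    return tsdf[idx[:, 0], idx[:, 1], idx[:, 2]].unsqueeze(-1)

def render_ray_depth(o, d, net, blend, tsdf, bbox_min, voxel,
                     near=0.1, far=5.0, n=64):
    """Expected depth of one ray, composited from blended occupancies."""
    t = torch.linspace(near, far, n)
    pts = o + t.unsqueeze(-1) * d                       # (n, 3) samples on the ray
    feat, pred_occ = net(pts)                           # network prediction
    prior = query_tsdf(tsdf, pts, bbox_min, voxel)      # signed-distance prior
    prior_occ = torch.sigmoid(-prior / 0.05)            # crude TSDF -> occupancy
    occ = blend(prior_occ, pred_occ, feat).squeeze(-1).clamp(0, 1)
    # Standard occupancy compositing: weight_i = occ_i * prod_{j<i}(1 - occ_j).
    trans = torch.cumprod(torch.cat([torch.ones(1), 1 - occ[:-1]]), dim=0)
    return ((occ * trans) * t).sum()                    # rendered depth for supervision
```

In the paper, the rendered depth and color are supervised by the multi-view RGBD ground truth, and the same pipeline applies whether the TSDF was fused once for the whole scene or incrementally during SLAM; in this sketch only the grid passed to query_tsdf would change.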
Related papers
- Pixel-Aligned Multi-View Generation with Depth Guided Decoder [86.1813201212539]
We propose a novel method for pixel-level image-to-multi-view generation.
Unlike prior work, we incorporate attention layers across multi-view images in the VAE decoder of a latent video diffusion model.
Our model enables better pixel alignment across multi-view images.
arXiv Detail & Related papers (2024-08-26T04:56:41Z)
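The entry above mentions attention layers applied across multi-view images inside the VAE decoder of a latent video diffusion model. A minimal sketch of what attention across the view axis can look like, using a generic self-attention block of our own rather than the paper's actual decoder:

```python
# Minimal sketch (assumption, not the paper's decoder): self-attention applied
# across the view axis so corresponding pixels of V views exchange information.
import torch
import torch.nn as nn

class CrossViewAttention(nn.Module):
    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):
        # x: (B, V, C, H, W) feature maps for V views of the same scene.
        b, v, c, h, w = x.shape
        tokens = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, v, c)  # views as tokens
        attended, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attended)                       # residual + norm
        return tokens.reshape(b, h, w, v, c).permute(0, 3, 4, 1, 2)

x = torch.randn(1, 4, 64, 32, 32)          # 4 views, 64-channel feature maps
print(CrossViewAttention(64)(x).shape)     # torch.Size([1, 4, 64, 32, 32])
```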
- A Two-Stage Masked Autoencoder Based Network for Indoor Depth Completion [10.519644854849098]
We propose a two-stage Transformer-based network for indoor depth completion.
Our proposed network achieves the state-of-the-art performance on the Matterport3D dataset.
In addition, to validate the importance of the depth completion task, we apply our methods to indoor 3D reconstruction.
arXiv Detail & Related papers (2024-06-14T07:42:27Z)
- Pyramid Deep Fusion Network for Two-Hand Reconstruction from RGB-D Images [11.100398985633754]
We propose an end-to-end framework for recovering dense meshes for both hands.
Our framework employs ResNet50 and PointNet++ to derive features from the RGB image and the point cloud, respectively.
We also introduce a novel pyramid deep fusion network (PDFNet) to aggregate features at different scales.
arXiv Detail & Related papers (2023-07-12T09:33:21Z)
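The entry above describes two feature branches (ResNet50 for RGB, PointNet++ for the point cloud) whose outputs are aggregated at several scales. A rough sketch of such per-scale fusion, with made-up shapes and a simple concatenate-and-project operator standing in for the paper's PDFNet:

```python
# Rough sketch (not PDFNet itself): fuse image and point-cloud features scale
# by scale. Shapes and the fusion operator (concat + 1x1 conv) are placeholders.
import torch
import torch.nn as nn

class PerScaleFusion(nn.Module):
    def __init__(self, img_ch, pt_ch, out_ch):
        super().__init__()
        self.proj = nn.Conv2d(img_ch + pt_ch, out_ch, kernel_size=1)

    def forward(self, img_feat, pt_feat):
        # img_feat: (B, C_img, H, W); pt_feat: (B, C_pt) global point-cloud vector
        b, _, h, w = img_feat.shape
        pt_map = pt_feat[:, :, None, None].expand(-1, -1, h, w)  # broadcast to grid
        return self.proj(torch.cat([img_feat, pt_map], dim=1))

# Pretend multi-scale image features (e.g. from a ResNet) and one point feature.
img_feats = [torch.randn(1, c, s, s) for c, s in [(256, 64), (512, 32), (1024, 16)]]
pt_feat = torch.randn(1, 128)                    # e.g. a PointNet++ global vector
fusers = [PerScaleFusion(c, 128, 256) for c in (256, 512, 1024)]
fused = [f(x, pt_feat) for f, x in zip(fusers, img_feats)]
print([t.shape for t in fused])   # three (1, 256, H, W) maps at different scales
```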
- Multi-Plane Neural Radiance Fields for Novel View Synthesis [5.478764356647437]
Novel view synthesis is a long-standing problem that revolves around rendering frames of scenes from novel camera viewpoints.
In this work, we examine the performance, generalization, and efficiency of single-view multi-plane neural radiance fields.
We propose a new multiplane NeRF architecture that accepts multiple views to improve the synthesis results and expand the viewing range.
arXiv Detail & Related papers (2023-03-03T06:32:55Z)
- Vision Transformer for NeRF-Based View Synthesis from a Single Input Image [49.956005709863355]
We propose to leverage both the global and local features to form an expressive 3D representation.
To synthesize a novel view, we train a multilayer perceptron (MLP) network conditioned on the learned 3D representation to perform volume rendering.
Our method can render novel views from only a single input image and generalize across multiple object categories using a single model.
arXiv Detail & Related papers (2022-07-12T17:52:04Z)
- 3DVNet: Multi-View Depth Prediction and Volumetric Refinement [68.68537312256144]
3DVNet is a novel multi-view stereo (MVS) depth-prediction method.
Our key idea is the use of a 3D scene-modeling network that iteratively updates a set of coarse depth predictions.
We show that our method exceeds state-of-the-art accuracy in both depth prediction and 3D reconstruction metrics.
arXiv Detail & Related papers (2021-12-01T00:52:42Z)
- Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering [60.02806355570514]
Inferring representations of 3D scenes from 2D observations is a fundamental problem of computer graphics, computer vision, and artificial intelligence.
We propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field.
Rendering a ray from an LFN requires only a *single* network evaluation, as opposed to hundreds of evaluations per ray for ray-marching or volumetric rendering.
arXiv Detail & Related papers (2021-06-04T17:54:49Z)
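The distinguishing point of the Light Field Networks entry is that a ray's color comes from one network evaluation rather than from compositing many samples along the ray. A toy illustration under assumptions of our own (an untrained MLP over 6D Plücker ray coordinates standing in for a trained LFN):

```python
# Toy illustration (untrained, simplified): a light-field-style network maps a
# ray directly to a color, while a volumetric renderer would evaluate a field
# at many 3D samples per ray and composite them.
import torch
import torch.nn as nn

def plucker(origin, direction):
    """6D Plücker ray coordinates: (direction, origin x direction)."""
    d = direction / direction.norm(dim=-1, keepdim=True)
    return torch.cat([d, torch.cross(origin, d, dim=-1)], dim=-1)

lfn = nn.Sequential(nn.Linear(6, 256), nn.ReLU(),
                    nn.Linear(256, 3), nn.Sigmoid())   # ray -> RGB in one call

o = torch.tensor([[0.0, 0.0, -2.0]])                   # ray origin
d = torch.tensor([[0.0, 0.0, 1.0]])                    # ray direction
color = lfn(plucker(o, d))                             # single network evaluation
print(color.shape)                                     # torch.Size([1, 3])

# Ray marching, by contrast, would loop over samples o + t * d for many values
# of t, query a field at each, and alpha-composite the results.
```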
- VR3Dense: Voxel Representation Learning for 3D Object Detection and Monocular Dense Depth Reconstruction [0.951828574518325]
We introduce a method for jointly training 3D object detection and monocular dense depth reconstruction neural networks.
During inference it takes a LiDAR point cloud and a single RGB image as inputs and produces object pose predictions as well as a densely reconstructed depth map.
While our object detection is trained in a supervised manner, the depth prediction network is trained with both self-supervised and supervised loss functions.
arXiv Detail & Related papers (2021-04-13T04:25:54Z)
- NeuralFusion: Online Depth Fusion in Latent Space [77.59420353185355]
We present a novel online depth map fusion approach that learns depth map aggregation in a latent feature space.
Our approach is real-time capable, handles high noise levels, and is particularly good at handling the gross outliers common in photometric stereo-based depth maps.
arXiv Detail & Related papers (2020-11-30T13:50:59Z)
- Depth Completion Using a View-constrained Deep Prior [73.21559000917554]
Recent work has shown that the structure of convolutional neural networks (CNNs) induces a strong prior that favors natural images.
This prior, known as a deep image prior (DIP), is an effective regularizer in inverse problems such as image denoising and inpainting.
We extend the concept of the DIP to depth images. Given color images and noisy, incomplete target depth maps, we reconstruct a restored depth map by using the CNN structure itself as a prior.
arXiv Detail & Related papers (2020-01-21T21:56:01Z)
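The last entry builds on the deep image prior: a CNN is fit to a noisy, incomplete depth map, and the network structure itself regularizes the reconstruction so that holes are filled plausibly. A condensed sketch of that fitting loop under assumptions of our own (a tiny convolutional network, a random mask of valid pixels, and none of the paper's view constraints):

```python
# Condensed sketch of a deep-image-prior-style fit to incomplete depth (our own
# toy setup; the paper additionally uses color images and multi-view
# constraints). Only pixels with valid depth contribute to the loss, and the
# CNN's inductive bias fills the holes.
import torch
import torch.nn as nn

net = nn.Sequential(                       # tiny fixed-resolution network
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1))

H = W = 64
z = torch.randn(1, 32, H, W)               # fixed random input, as in DIP
target = torch.rand(1, 1, H, W)            # noisy / incomplete observed depth
mask = (torch.rand(1, 1, H, W) > 0.3).float()   # 1 where depth is valid

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):                    # early stopping acts as the regularizer
    pred = net(z)
    loss = ((pred - target) ** 2 * mask).mean()
    opt.zero_grad(); loss.backward(); opt.step()

completed = net(z).detach()                # depth restored also at masked holes
```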
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.