Hi-Map: Hierarchical Factorized Radiance Field for High-Fidelity
Monocular Dense Mapping
- URL: http://arxiv.org/abs/2401.03203v1
- Date: Sat, 6 Jan 2024 12:32:25 GMT
- Title: Hi-Map: Hierarchical Factorized Radiance Field for High-Fidelity
Monocular Dense Mapping
- Authors: Tongyan Hua, Haotian Bai, Zidong Cao, Ming Liu, Dacheng Tao and Lin
Wang
- Abstract summary: We introduce Hi-Map, a novel monocular dense mapping approach based on Neural Radiance Field (NeRF)
Hi-Map is exceptional in its capacity to achieve efficient and high-fidelity mapping using only posed RGB inputs.
- Score: 51.739466714312805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce Hi-Map, a novel monocular dense mapping approach
based on Neural Radiance Field (NeRF). Hi-Map is exceptional in its capacity to
achieve efficient and high-fidelity mapping using only posed RGB inputs. Our
method eliminates the need for external depth priors, such as those derived from a depth
estimation model. Our key idea is to represent the scene as a hierarchical
feature grid that encodes the radiance and then factorizes it into feature
planes and vectors. As such, the scene representation becomes simpler and more
generalizable for fast and smooth convergence on new observations. This allows
for efficient computation while alleviating noise patterns by reducing the
complexity of the scene representation. Buttressed by the hierarchical
factorized representation, we leverage the Signed Distance Field (SDF) as a rendering proxy for inferring the volume density, demonstrating high mapping
fidelity. Moreover, we introduce a dual-path encoding strategy to strengthen
the photometric cues and further boost the mapping quality, especially for the
distant and textureless regions. Extensive experiments demonstrate our method's
superiority in geometric and textural accuracy over the state-of-the-art
NeRF-based monocular mapping methods.
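As a concrete illustration of the abstract's key idea, the following is a minimal PyTorch sketch of a hierarchical, plane-vector factorized feature grid with an SDF head whose output is converted to volume density for rendering. It is not the authors' implementation: the level resolutions, channel counts, decoder heads, and the VolSDF-style Laplace-CDF density conversion are assumptions made only for illustration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FactorizedLevel(nn.Module):
        """One level of a plane-vector (VM) factorized 3D feature grid."""
        PLANE_AXES = [(0, 1), (0, 2), (1, 2)]   # xy, xz, yz planes
        LINE_AXES = [2, 1, 0]                   # complementary axes: z, y, x

        def __init__(self, res, channels=8):
            super().__init__()
            self.planes = nn.ParameterList(
                [nn.Parameter(0.1 * torch.randn(1, channels, res, res)) for _ in range(3)])
            self.lines = nn.ParameterList(
                [nn.Parameter(0.1 * torch.randn(1, channels, res, 1)) for _ in range(3)])

        def forward(self, xyz):                  # xyz: (N, 3) in [-1, 1]
            feat = 0.0
            for i, ((a, b), c) in enumerate(zip(self.PLANE_AXES, self.LINE_AXES)):
                p_coord = xyz[:, [a, b]].view(1, -1, 1, 2)
                l_coord = torch.stack(
                    [torch.zeros_like(xyz[:, c]), xyz[:, c]], dim=-1).view(1, -1, 1, 2)
                p_feat = F.grid_sample(self.planes[i], p_coord, align_corners=True)[0, :, :, 0]
                l_feat = F.grid_sample(self.lines[i], l_coord, align_corners=True)[0, :, :, 0]
                feat = feat + (p_feat * l_feat).t()   # (N, C): product of plane and line factors
            return feat

    class HierarchicalFactorizedField(nn.Module):
        """Coarse-to-fine stack of factorized levels with SDF and colour heads."""
        def __init__(self, resolutions=(32, 64, 128), channels=8):
            super().__init__()
            self.levels = nn.ModuleList([FactorizedLevel(r, channels) for r in resolutions])
            dim = channels * len(resolutions)
            self.sdf_head = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))
            self.rgb_head = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 3), nn.Sigmoid())
            self.beta = nn.Parameter(torch.tensor(0.1))   # SDF-to-density sharpness

        def forward(self, xyz):
            feat = torch.cat([level(xyz) for level in self.levels], dim=-1)
            sdf = self.sdf_head(feat).squeeze(-1)
            rgb = self.rgb_head(feat)
            # SDF as a rendering proxy: convert to density with a Laplace CDF (VolSDF-style assumption).
            beta = self.beta.abs() + 1e-4
            s = -sdf
            cdf = torch.where(s <= 0, 0.5 * torch.exp(s / beta), 1.0 - 0.5 * torch.exp(-s / beta))
            return sdf, rgb, cdf / beta   # density = CDF / beta

    # Example query for a batch of 3D points in the normalized scene cube:
    # sdf, rgb, density = HierarchicalFactorizedField()(torch.rand(1024, 3) * 2 - 1)

Because each level stores only 2D planes and 1D vectors instead of a full 3D volume, the number of optimizable parameters stays small, which is what makes the representation simple and fast to converge on new observations.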
Related papers
- PINGS: Gaussian Splatting Meets Distance Fields within a Point-Based Implicit Neural Map [30.06864329412246]
We propose a novel map representation that unifies a continuous signed distance field and a Gaussian splatting radiance field within an elastic and compact point-based implicit neural map.
We devise a LiDAR-visual SLAM system called PINGS using the proposed map representation and evaluate it on several challenging large-scale datasets.
arXiv Detail & Related papers (2025-02-09T03:06:19Z)
- Ternary-Type Opacity and Hybrid Odometry for RGB NeRF-SLAM [58.736472371951955]
We introduce a ternary-type opacity (TT) model, which categorizes points on a ray intersecting a surface into three regions: before, on, and behind the surface.
This enables a more accurate rendering of depth, subsequently improving the performance of image warping techniques.
Our integrated approach of TT and HO achieves state-of-the-art performance on synthetic and real-world datasets.
arXiv Detail & Related papers (2023-12-20T18:03:17Z)
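To make the three-region idea of the ternary-type opacity entry above concrete, here is a short PyTorch sketch of how ray samples could be labelled as before, on, or behind an estimated surface and turned into a crude opacity prior. The band width, the source of the per-ray surface estimate, and the supervision targets are illustrative assumptions, not the TT model's actual formulation.

    import torch

    def ternary_regions(t_samples, t_surface, band=0.02):
        """Label ray samples relative to an estimated surface.

        t_samples : (R, S) sample distances along each ray
        t_surface : (R,)   estimated surface distance per ray
        band      : half-width of the "on surface" interval (scene dependent)
        Returns labels per sample: 0 = before, 1 = on, 2 = behind the surface.
        """
        d = t_samples - t_surface.unsqueeze(-1)
        labels = torch.full_like(t_samples, 2, dtype=torch.long)
        labels[d < -band] = 0          # in front of (before) the surface
        labels[d.abs() <= band] = 1    # on the surface
        return labels

    def ternary_opacity_prior(labels):
        """Crude opacity prior from the labels: transparent before the surface,
        opaque on and behind it (the paper's actual targets and losses may differ)."""
        return (labels > 0).float()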
- FMapping: Factorized Efficient Neural Field Mapping for Real-Time Dense RGB SLAM [3.6985351289638957]
We introduce FMapping, an efficient neural field mapping framework that facilitates the continuous estimation of a colorized point cloud map in real-time dense RGB SLAM.
We propose an effective factorization scheme for scene representation and introduce a sliding window strategy to reduce the uncertainty for scene reconstruction.
arXiv Detail & Related papers (2023-06-01T11:51:46Z)
- Fast Monocular Scene Reconstruction with Global-Sparse Local-Dense Grids [84.90863397388776]
We propose to directly use a signed distance function (SDF) in sparse voxel block grids for fast and accurate scene reconstruction without MLPs.
Our globally sparse and locally dense data structure exploits surfaces' spatial sparsity, enables cache-friendly queries, and allows direct extensions to multi-modal data.
Experiments show that our approach is 10x faster in training and 100x faster in rendering while achieving comparable accuracy to state-of-the-art neural implicit methods.
arXiv Detail & Related papers (2023-05-22T16:50:19Z)
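For intuition about the globally sparse, locally dense structure described above, here is a small NumPy sketch of an SDF store built as a hash map from block coordinates to dense 8x8x8 voxel blocks. The block and voxel sizes, the free-space initialization, and the nearest-voxel lookup (instead of trilinear interpolation) are simplifications for illustration, not the paper's published data structure.

    import numpy as np

    BLOCK = 8          # voxels per block side
    VOXEL = 0.05       # metres per voxel

    class SparseSDFGrid:
        """Globally sparse, locally dense SDF storage: a hash map of dense 8^3 blocks."""
        def __init__(self):
            self.blocks = {}   # (bx, by, bz) -> (BLOCK, BLOCK, BLOCK) float32 SDF values

        def allocate(self, points):
            """Allocate blocks around observed surface points (world coordinates, shape (N, 3))."""
            keys = np.unique(np.floor(points / (BLOCK * VOXEL)).astype(np.int64), axis=0)
            for k in map(tuple, keys):
                self.blocks.setdefault(k, np.ones((BLOCK, BLOCK, BLOCK), np.float32))  # init as free space

        def query(self, p):
            """Nearest-voxel SDF lookup for one point; returns None outside allocated space."""
            v = np.floor(p / VOXEL).astype(np.int64)
            blk = self.blocks.get(tuple(v // BLOCK))
            return None if blk is None else blk[tuple(v % BLOCK)]

Only blocks near observed surfaces are ever allocated, which is why such a structure exploits spatial sparsity while keeping each local query a cache-friendly dense array access.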
- Multiscale Representation for Real-Time Anti-Aliasing Neural Rendering [84.37776381343662]
Mip-NeRF proposes a multiscale representation as a conical frustum to encode scale information.
We propose mip voxel grids (Mip-VoG), an explicit multiscale representation for real-time anti-aliasing rendering.
Our approach is the first to offer multiscale training and real-time anti-aliasing rendering simultaneously.
arXiv Detail & Related papers (2023-04-20T04:05:22Z)
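As a rough illustration of an explicit multiscale (mip) voxel representation like the one in the entry above, the sketch below keeps one feature grid per level and blends the two levels selected by each sample's footprint, in the spirit of classical mipmapping. The level-selection heuristic, resolutions, and channel count are assumptions and not the Mip-VoG construction itself.

    import torch
    import torch.nn.functional as F

    class MipVoxelGrid(torch.nn.Module):
        """Explicit multiscale voxel grids queried by sample footprint (mipmap-style sketch)."""
        def __init__(self, base_res=16, n_levels=3, channels=4):
            super().__init__()
            # Level 0 is the finest grid; each coarser level halves the resolution.
            self.grids = torch.nn.ParameterList(
                [torch.nn.Parameter(0.1 * torch.randn(
                    1, channels,
                    base_res * 2 ** (n_levels - 1 - l),
                    base_res * 2 ** (n_levels - 1 - l),
                    base_res * 2 ** (n_levels - 1 - l)))
                 for l in range(n_levels)])
            self.finest_voxel = 2.0 / (base_res * 2 ** (n_levels - 1))  # scene spans [-1, 1]

        def forward(self, xyz, footprint):
            """xyz: (N, 3) in [-1, 1]; footprint: (N,) world-space radius of each sample."""
            n_levels = len(self.grids)
            # Heuristic: pick the mip level whose voxel size matches the sample footprint.
            level = torch.log2((footprint / self.finest_voxel).clamp(min=1.0)).clamp(max=n_levels - 1)
            lo = level.floor().long()
            hi = (lo + 1).clamp(max=n_levels - 1)
            w = (level - lo.float()).unsqueeze(-1)
            grid = xyz.view(1, -1, 1, 1, 3)
            feats = torch.stack(                                   # (L, N, C)
                [F.grid_sample(g, grid, align_corners=True)[0, :, :, 0, 0].t() for g in self.grids])
            idx = torch.arange(xyz.shape[0], device=xyz.device)
            return (1 - w) * feats[lo, idx] + w * feats[hi, idx]   # blend adjacent levels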
- Learning Continuous Depth Representation via Geometric Spatial Aggregator [47.1698365486215]
We propose a novel continuous depth representation for depth map super-resolution (DSR).
The heart of this representation is our proposed Geometric Spatial Aggregator (GSA), which exploits a distance field modulated by arbitrarily upsampled target gridding.
We also present a transformer-style backbone named GeoDSR, which possesses a principled way to construct the functional mapping between local coordinates and the high-resolution output.
arXiv Detail & Related papers (2022-12-07T07:48:23Z)
- DARF: Depth-Aware Generalizable Neural Radiance Field [51.29437249009986]
We propose the Depth-Aware Generalizable Neural Radiance Field (DARF) with a Depth-Aware Dynamic Sampling (DADS) strategy.
Our framework infers unseen scenes at both the pixel level and the geometry level with only a few input images.
Compared with state-of-the-art generalizable NeRF methods, DARF reduces samples by 50%, while improving rendering quality and depth estimation.
arXiv Detail & Related papers (2022-12-05T14:00:59Z)
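A minimal sketch of the depth-aware sampling idea in the DARF entry above: rather than spending all samples uniformly along a ray, most are packed into a window around a per-ray depth estimate, so fewer samples are needed overall. The window size, the coarse/fine split, and the uniform in-window placement are assumptions for illustration, not the exact DADS strategy.

    import torch

    def depth_aware_samples(t_near, t_far, depth_guess, n_coarse=16, n_fine=16, margin=0.1):
        """Depth-aware dynamic sampling sketch.

        t_near, t_far : (R,) near/far bounds per ray
        depth_guess   : (R,) predicted depth per ray
        Returns sorted sample distances of shape (R, n_coarse + n_fine)."""
        steps = torch.linspace(0, 1, n_coarse, device=depth_guess.device)
        coarse = t_near[:, None] + (t_far - t_near)[:, None] * steps            # whole-ray coverage
        lo = torch.maximum(depth_guess - margin, t_near)[:, None]
        hi = torch.minimum(depth_guess + margin, t_far)[:, None]
        fine = lo + (hi - lo) * torch.linspace(0, 1, n_fine, device=depth_guess.device)  # near the surface
        t, _ = torch.sort(torch.cat([coarse, fine], dim=-1), dim=-1)
        return t

Concentrating the fine samples around the depth estimate is what allows the total sample count to drop without starving the surface region, in line with the sample reduction reported above.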
- CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency [61.40511574314069]
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
arXiv Detail & Related papers (2021-06-20T08:20:56Z)
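The following is a hedged PyTorch sketch of the general multi-resolution backpropagation-saliency recipe that CAMERAS builds on: compute gradient-times-input maps at several input scales and fuse them at the original resolution. It assumes a classifier that returns logits of shape (1, num_classes) and accepts variable input sizes; the scale schedule and the fusion rule are illustrative and not the exact CAMERAS algorithm.

    import torch
    import torch.nn.functional as F

    def multiscale_saliency(model, image, target_class, scales=(1.0, 1.5, 2.0)):
        """Accumulate |gradient * input| saliency maps computed at several input scales.

        image : (1, C, H, W) tensor; returns an (H, W) map normalised to [0, 1]."""
        _, _, H, W = image.shape
        acc = torch.zeros(H, W, device=image.device)
        for s in scales:
            x = F.interpolate(image, scale_factor=s, mode='bilinear', align_corners=False)
            x = x.clone().requires_grad_(True)
            score = model(x)[0, target_class]                     # class score at this scale
            grad, = torch.autograd.grad(score, x)
            sal = (grad * x).abs().sum(dim=1, keepdim=True)       # (1, 1, h, w)
            sal = F.interpolate(sal, size=(H, W), mode='bilinear', align_corners=False)
            acc = acc + sal[0, 0]
        return acc / acc.max().clamp(min=1e-8)

Because every map is produced by plain backpropagation through the unmodified model, no external priors or auxiliary networks are required, which is the property the entry above highlights.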
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.