Efficient Implicit Neural Reconstruction Using LiDAR
- URL: http://arxiv.org/abs/2302.14363v1
- Date: Tue, 28 Feb 2023 07:31:48 GMT
- Title: Efficient Implicit Neural Reconstruction Using LiDAR
- Authors: Dongyu Yan, Xiaoyang Lyu, Jieqi Shi and Yi Lin
- Abstract summary: We propose a new method that uses sparse LiDAR point clouds and rough odometry to reconstruct fine-grained implicit occupancy field efficiently within a few minutes.
As far as we know, our method is the first to reconstruct implicit scene representation from LiDAR-only input.
- Score: 6.516471975863534
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling scene geometry using implicit neural representation has revealed its
advantages in accuracy, flexibility, and low memory usage. Previous approaches
have demonstrated impressive results using color or depth images but still have
difficulty handling poor light conditions and large-scale scenes. Methods
taking global point cloud as input require accurate registration and ground
truth coordinate labels, which limits their application scenarios. In this
paper, we propose a new method that uses sparse LiDAR point clouds and rough
odometry to reconstruct fine-grained implicit occupancy field efficiently
within a few minutes. We introduce a new loss function that supervises directly
in 3D space without 2D rendering, avoiding information loss. We also manage to
refine poses of input frames in an end-to-end manner, creating consistent
geometry without global point cloud registration. As far as we know, our method
is the first to reconstruct implicit scene representation from LiDAR-only
input. Experiments on synthetic and real-world datasets, including indoor and
outdoor scenes, prove that our method is effective, efficient, and accurate,
obtaining comparable results with existing methods using dense input.
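The abstract's key idea is supervising the occupancy field directly in 3D along LiDAR rays rather than through a 2D rendering loss. The paper's actual loss is not reproduced here; the following is a minimal pure-Python sketch of that general idea, assuming the usual free-space convention: sample points between the sensor and the measured return are labeled free (0) and the return point itself occupied (1), then a binary cross-entropy is applied per sample. The toy field and all function names are illustrative, not the authors' implementation.

```python
import math

def sample_ray_supervision(origin, endpoint, n_free=4, eps=0.05):
    """For one LiDAR ray, generate (point, occupancy_label) pairs:
    points strictly before the measured return are free space (label 0);
    the return point itself is occupied (label 1)."""
    samples = []
    for i in range(1, n_free + 1):
        t = i / (n_free + 1) * (1.0 - eps)  # stop short of the surface
        p = tuple(o + t * (e - o) for o, e in zip(origin, endpoint))
        samples.append((p, 0.0))
    samples.append((tuple(endpoint), 1.0))
    return samples

def bce(pred, label, tiny=1e-7):
    """Binary cross-entropy for a single prediction in (0, 1)."""
    pred = min(max(pred, tiny), 1.0 - tiny)
    return -(label * math.log(pred) + (1 - label) * math.log(1 - pred))

def ray_loss(occupancy_fn, origin, endpoint):
    """Mean BCE over the samples of one ray: supervision applied
    directly in 3D space, with no 2D rendering step."""
    samples = sample_ray_supervision(origin, endpoint)
    return sum(bce(occupancy_fn(p), lbl) for p, lbl in samples) / len(samples)

# Toy occupancy field standing in for the network: a soft sphere of
# radius 1 centered at (2, 0, 0).
def toy_field(p):
    d = math.dist(p, (2.0, 0.0, 0.0))
    return 1.0 / (1.0 + math.exp(8.0 * (d - 1.0)))

# A ray from the origin that returns at the sphere surface (1, 0, 0).
loss = ray_loss(toy_field, (0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```

In a real pipeline `toy_field` would be a trainable MLP and the loss would be summed over all rays in a LiDAR frame; the pose refinement mentioned in the abstract would enter by making `origin` and `endpoint` functions of learnable per-frame transforms.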
Related papers
- NeuraLoc: Visual Localization in Neural Implicit Map with Dual Complementary Features [50.212836834889146]
We propose a novel and efficient visual localization approach based on a neural implicit map with complementary features.
Specifically, to enforce geometric constraints and reduce storage requirements, we implicitly learn a 3D keypoint descriptor field.
To further address the semantic ambiguity of descriptors, we introduce additional semantic contextual feature fields.
arXiv Detail & Related papers (2025-03-08T08:04:27Z) - Real-time Neural Rendering of LiDAR Point Clouds [0.2621434923709917]
A naive projection of the point cloud to the output view using 1x1 pixels is fast and retains the available detail, but also results in unintelligible renderings as background points leak in between the foreground pixels.
A deep convolutional model in the form of a U-Net is used to transform these projections into a realistic result.
We also describe a method to generate synthetic training data to deal with imperfectly-aligned ground truth images.
arXiv Detail & Related papers (2025-02-17T10:01:13Z) - Uni-SLAM: Uncertainty-Aware Neural Implicit SLAM for Real-Time Dense Indoor Scene Reconstruction [11.714682609560278]
We propose Uni-SLAM, a decoupled 3D spatial representation based on hash grids for indoor reconstruction.
Experiments on synthetic and real-world datasets demonstrate that our system achieves state-of-the-art tracking and mapping accuracy.
arXiv Detail & Related papers (2024-11-29T20:16:58Z) - No Pose, No Problem: Surprisingly Simple 3D Gaussian Splats from Sparse Unposed Images [100.80376573969045]
NoPoSplat is a feed-forward model capable of reconstructing 3D scenes parameterized by 3D Gaussians from multi-view images.
Our model achieves real-time 3D Gaussian reconstruction during inference.
This work makes significant advances in pose-free generalizable 3D reconstruction and demonstrates its applicability to real-world scenarios.
arXiv Detail & Related papers (2024-10-31T17:58:22Z) - Inverse Neural Rendering for Explainable Multi-Object Tracking [35.072142773300655]
We recast 3D multi-object tracking from RGB cameras as an Inverse Rendering (IR) problem.
We optimize an image loss over generative latent spaces that inherently disentangle shape and appearance properties.
We validate the generalization and scaling capabilities of our method by learning the generative prior exclusively from synthetic data.
arXiv Detail & Related papers (2024-04-18T17:37:53Z) - Unsupervised Occupancy Learning from Sparse Point Cloud [8.732260277121547]
Implicit Neural Representations have gained prominence as a powerful framework for capturing complex data modalities.
In this paper, we propose a method to infer occupancy fields instead of Neural Signed Distance Functions.
We highlight its capacity to improve implicit shape inference with respect to baselines and the state-of-the-art using synthetic and real data.
arXiv Detail & Related papers (2024-04-03T14:05:39Z) - DNS SLAM: Dense Neural Semantic-Informed SLAM [92.39687553022605]
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details.
Our experimental results achieve state-of-the-art performance on both synthetic data and real-world data tracking.
arXiv Detail & Related papers (2023-11-30T21:34:44Z) - DELFlow: Dense Efficient Learning of Scene Flow for Large-Scale Point Clouds [42.64433313672884]
We regularize raw points to a dense format by storing 3D coordinates in 2D grids.
Unlike the sampling operation commonly used in existing works, the dense 2D representation preserves most points.
We also present a novel warping projection technique to alleviate the information loss problem.
arXiv Detail & Related papers (2023-08-08T16:37:24Z) - Quadric Representations for LiDAR Odometry, Mapping and Localization [93.24140840537912]
Current LiDAR odometry, mapping and localization methods leverage point-wise representations of 3D scenes.
We propose a novel method of describing scenes using quadric surfaces, which are far more compact representations of 3D objects.
Our method maintains low latency and memory utility while achieving competitive, and even superior, accuracy.
arXiv Detail & Related papers (2023-04-27T13:52:01Z) - RCP: Recurrent Closest Point for Scene Flow Estimation on 3D Point Clouds [44.034836961967144]
3D motion estimation, including scene flow and point cloud registration, has drawn increasing interest.
Recent methods employ deep neural networks to construct the cost volume for estimating accurate 3D flow.
We decompose the problem into two interlaced stages, where the 3D flows are optimized point-wisely at the first stage and then globally regularized in a recurrent network at the second stage.
arXiv Detail & Related papers (2022-05-23T04:04:30Z) - Revisiting Point Cloud Simplification: A Learnable Feature Preserving Approach [57.67932970472768]
Mesh and Point Cloud simplification methods aim to reduce the complexity of 3D models while retaining visual quality and relevant salient features.
We propose a fast point cloud simplification method by learning to sample salient points.
The proposed method relies on a graph neural network architecture trained to select an arbitrary, user-defined, number of points from the input space and to re-arrange their positions so as to minimize the visual perception error.
arXiv Detail & Related papers (2021-09-30T10:23:55Z) - Lifting 2D Object Locations to 3D by Discounting LiDAR Outliers across Objects and Views [70.1586005070678]
We present a system for automatically converting 2D mask object predictions and raw LiDAR point clouds into full 3D bounding boxes of objects.
Our method significantly outperforms previous work despite the fact that those methods use significantly more complex pipelines, 3D models and additional human-annotated external sources of prior information.
arXiv Detail & Related papers (2021-09-16T13:01:13Z) - RandomRooms: Unsupervised Pre-training from Synthetic Shapes and Randomized Layouts for 3D Object Detection [138.2892824662943]
A promising solution is to make better use of the synthetic dataset, which consists of CAD object models, to boost the learning on real datasets.
Recent work on 3D pre-training exhibits failure when transferring features learned on synthetic objects to other real-world applications.
In this work, we put forward a new method called RandomRooms to accomplish this objective.
arXiv Detail & Related papers (2021-08-17T17:56:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed summaries) and is not responsible for any consequences of its use.