LISNeRF Mapping: LiDAR-based Implicit Mapping via Semantic Neural Fields for Large-Scale 3D Scenes
- URL: http://arxiv.org/abs/2311.02313v2
- Date: Wed, 20 Mar 2024 04:45:17 GMT
- Title: LISNeRF Mapping: LiDAR-based Implicit Mapping via Semantic Neural Fields for Large-Scale 3D Scenes
- Authors: Jianyuan Zhang, Zhiliu Yang, Meng Zhang
- Abstract summary: Large-scale semantic mapping is crucial for outdoor autonomous agents to fulfill high-level tasks such as planning and navigation.
This paper proposes a novel method for large-scale 3D semantic reconstruction through implicit representations from posed LiDAR measurements alone.
- Score: 2.822816116516042
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large-scale semantic mapping is crucial for outdoor autonomous agents to fulfill high-level tasks such as planning and navigation. This paper proposes a novel method for large-scale 3D semantic reconstruction through implicit representations from posed LiDAR measurements alone. We first leverage an octree-based, hierarchical structure to store implicit features; these features are then decoded into semantic information and signed distance values by shallow Multilayer Perceptrons (MLPs). We adopt off-the-shelf algorithms to predict the semantic labels and instance IDs of point clouds. We then jointly optimize the feature embeddings and MLP parameters with a self-supervision paradigm for point cloud geometry and a pseudo-supervision paradigm for semantic and panoptic labels. Subsequently, categories and geometric structures for novel points are regressed, and marching cubes is exploited to subdivide and visualize the scenes in the inference stage. For scenarios with memory constraints, a map stitching strategy is also developed to merge sub-maps into a complete map. Experiments on two real-world datasets, SemanticKITTI and SemanticPOSS, demonstrate the superior segmentation efficiency and mapping effectiveness of our framework compared with current state-of-the-art 3D LiDAR mapping methods.
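To make the decoding and joint-optimization step concrete, the following is a minimal PyTorch sketch, not the authors' released code: `ImplicitSemanticDecoder`, `joint_loss`, and all dimensions are hypothetical, and a random tensor stands in for features interpolated from the paper's hierarchical feature octree.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitSemanticDecoder(nn.Module):
    """Shallow MLP heads that decode a per-point implicit feature into a
    signed distance value and semantic logits (a sketch under assumed
    dimensions, not the paper's reference implementation)."""

    def __init__(self, feat_dim: int = 8, hidden_dim: int = 64, num_classes: int = 20):
        super().__init__()
        # Geometry head: feature -> signed distance value (self-supervised).
        self.sdf_head = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )
        # Semantic head: feature -> class logits (pseudo-supervised by an
        # off-the-shelf LiDAR segmentation network).
        self.sem_head = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, feats: torch.Tensor):
        return self.sdf_head(feats).squeeze(-1), self.sem_head(feats)

def joint_loss(pred_sdf, pred_logits, target_sdf, pseudo_labels, w_sem: float = 0.5):
    """Joint objective: SDF regression on points sampled along LiDAR rays
    plus cross-entropy against pseudo semantic labels. The weighting is an
    illustrative choice, not a value from the paper."""
    geo = F.l1_loss(pred_sdf, target_sdf)
    sem = F.cross_entropy(pred_logits, pseudo_labels)
    return geo + w_sem * sem

# Usage: in the full system, `feats` would be trainable embeddings gathered
# from the octree and optimized jointly with the MLP parameters.
decoder = ImplicitSemanticDecoder()
feats = torch.randn(1024, 8)  # stand-in for interpolated octree features
sdf, logits = decoder(feats)
loss = joint_loss(sdf, logits,
                  torch.zeros(1024),                       # dummy SDF targets
                  torch.zeros(1024, dtype=torch.long))     # dummy pseudo labels
loss.backward()  # gradients flow into both MLP heads
```

At inference, the same decoder would be queried on a dense grid of points and the predicted signed distances passed to marching cubes to extract and visualize the mesh, with per-vertex categories taken from the semantic head.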
Related papers
- Open-Vocabulary Octree-Graph for 3D Scene Understanding [54.11828083068082]
Octree-Graph is a novel scene representation for open-vocabulary 3D scene understanding.
An adaptive octree structure is developed that stores semantics and represents an object's occupancy adaptively according to its shape.
arXiv Detail & Related papers (2024-11-25T10:14:10Z)
- Representing 3D sparse map points and lines for camera relocalization [1.2974519529978974]
We show how a lightweight neural network can learn to represent both 3D point and line features.
In tests, our method secures a significant lead, achieving the largest improvement over state-of-the-art learning-based methods.
arXiv Detail & Related papers (2024-02-28T03:07:05Z)
- A Data-efficient Framework for Robotics Large-scale LiDAR Scene Parsing [10.497309421830671]
Existing state-of-the-art 3D point cloud understanding methods perform well only in a fully supervised manner.
This work presents a general and simple framework for tackling point cloud understanding when labels are limited.
arXiv Detail & Related papers (2023-12-03T02:38:51Z)
- Volumetric Semantically Consistent 3D Panoptic Mapping [77.13446499924977]
We introduce an online 2D-to-3D semantic instance mapping algorithm aimed at generating semantic 3D maps suitable for autonomous agents in unstructured environments.
It introduces novel ways of integrating semantic prediction confidence during mapping, producing semantically and instance-consistent 3D regions.
The proposed method achieves accuracy superior to the state of the art on public large-scale datasets, improving on a number of widely used metrics.
arXiv Detail & Related papers (2023-09-26T08:03:10Z)
- Neural Semantic Surface Maps [52.61017226479506]
We present an automated technique for computing a map between two genus-zero shapes, which matches semantically corresponding regions to one another.
Our approach can generate semantic surface-to-surface maps, eliminating the need for manual annotations or any 3D training data.
arXiv Detail & Related papers (2023-09-09T16:21:56Z)
- Deep Semantic Graph Matching for Large-scale Outdoor Point Clouds Registration [22.308070598885532]
We treat the point cloud registration problem as a semantic instance matching and registration task.
We propose a deep semantic graph matching method (DeepSGM) for large-scale outdoor point cloud registration.
Experiments conducted on the KITTI Odometry dataset demonstrate that the proposed method improves registration performance.
arXiv Detail & Related papers (2023-08-10T03:07:28Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our method performs favorably against current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- Improving Lidar-Based Semantic Segmentation of Top-View Grid Maps by Learning Features in Complementary Representations [3.0413873719021995]
We introduce a novel way to predict semantic information from sparse, single-shot LiDAR measurements in the context of autonomous driving.
The approach is aimed specifically at improving the semantic segmentation of top-view grid maps.
For each representation, a tailored deep learning architecture is developed to effectively extract semantic information.
arXiv Detail & Related papers (2022-03-02T14:49:51Z)
- Box2Seg: Learning Semantics of 3D Point Clouds with Box-Level Supervision [65.19589997822155]
We introduce a neural architecture, termed Box2Seg, to learn point-level semantics of 3D point clouds with bounding box-level supervision.
We show that the proposed network can be trained with cheap, or even off-the-shelf, bounding-box-level annotations and subcloud-level tags.
arXiv Detail & Related papers (2022-01-09T09:07:48Z)
- Campus3D: A Photogrammetry Point Cloud Benchmark for Hierarchical Understanding of Outdoor Scene [76.4183572058063]
We present a richly-annotated 3D point cloud dataset for multiple outdoor scene understanding tasks.
The dataset is annotated point-wise with both hierarchical and instance-based labels.
We formulate a hierarchical learning problem for 3D point cloud segmentation and propose a measure for evaluating consistency across hierarchies.
arXiv Detail & Related papers (2020-08-11T19:10:32Z)