LuSNAR: A Lunar Segmentation, Navigation and Reconstruction Dataset based on Multi-sensor for Autonomous Exploration
- URL: http://arxiv.org/abs/2407.06512v3
- Date: Thu, 26 Sep 2024 02:11:38 GMT
- Title: LuSNAR: A Lunar Segmentation, Navigation and Reconstruction Dataset based on Multi-sensor for Autonomous Exploration
- Authors: Jiayi Liu, Qianyu Zhang, Xue Wan, Shengyang Zhang, Yaolin Tian, Haodong Han, Yutao Zhao, Baichuan Liu, Zeyuan Zhao, Xubo Luo
- Abstract summary: Environmental perception and navigation algorithms are the foundation for lunar rovers to achieve autonomous exploration.
Most of the existing lunar datasets are targeted at a single task.
We propose a multi-task, multi-scene, and multi-label lunar benchmark dataset LuSNAR.
- Score: 2.3011380360879237
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As lunar exploration missions grow more complex, lunar rovers need a higher level of autonomy, and environmental perception and navigation algorithms are the foundation of autonomous exploration. Developing and verifying these algorithms requires highly reliable data. Most existing lunar datasets target a single task and lack diverse scenes and high-precision ground-truth labels. To address this, we propose LuSNAR, a multi-task, multi-scene, and multi-label lunar benchmark dataset for comprehensive evaluation of autonomous perception and navigation systems, including high-resolution stereo image pairs, panoramic semantic labels, dense depth maps, LiDAR point clouds, and the rover's position. To provide richer scene data, we built 9 lunar simulation scenes in Unreal Engine, each categorized by topographic relief and object density. To verify the usability of the dataset, we evaluated and analyzed semantic segmentation, 3D reconstruction, and autonomous navigation algorithms. The experimental results show that the proposed dataset can be used for ground verification of tasks such as autonomous environment perception and navigation, and it provides a lunar benchmark for assessing algorithm performance. We make LuSNAR publicly available at: https://github.com/zqyu9/LuSNAR-dataset.
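For orientation, a multi-modal dataset like this is typically consumed frame by frame, pairing each stereo image pair with its semantic, depth, LiDAR, and pose ground truth. The Python sketch below shows one plausible way to load such a frame; the directory names, file formats, and pose layout are illustrative assumptions rather than the published LuSNAR structure, so consult the GitHub repository for the actual layout.

```python
from pathlib import Path

import numpy as np
from PIL import Image


def load_frame(scene_dir: str, frame_id: int) -> dict:
    """Load one frame from a LuSNAR-style scene folder.

    All folder and file names below are hypothetical placeholders.
    """
    scene = Path(scene_dir)
    fid = f"{frame_id:06d}"

    left = np.array(Image.open(scene / "image_left" / f"{fid}.png"))
    right = np.array(Image.open(scene / "image_right" / f"{fid}.png"))
    semantic = np.array(Image.open(scene / "semantic" / f"{fid}.png"))  # per-pixel class IDs
    depth = np.load(scene / "depth" / f"{fid}.npy")                     # dense depth map, metres
    lidar = np.load(scene / "lidar" / f"{fid}.npy")                     # N x 3 point cloud
    poses = np.loadtxt(scene / "groundtruth_pose.txt")                  # one row per frame
    return {"left": left, "right": right, "semantic": semantic,
            "depth": depth, "lidar": lidar, "pose": poses[frame_id]}


# Example usage (paths are placeholders):
# frame = load_frame("LuSNAR/Scene_1", 0)
# print(frame["left"].shape, frame["lidar"].shape)
```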
Related papers
- MARs: Multi-view Attention Regularizations for Patch-based Feature Recognition of Space Terrain [4.87717454493713]
Current approaches rely on template matching with pre-gathered patch-based features.
We introduce Multi-view Attention Regularizations (MARs) to constrain the channel and spatial attention across multiple feature views.
We demonstrate terrain-feature recognition performance improvements of upwards of 85%.
arXiv Detail & Related papers (2024-10-07T16:41:45Z)
- Icy Moon Surface Simulation and Stereo Depth Estimation for Sampling Autonomy [4.97538153735235]
We propose a framework for versatile stereo dataset generation that spans the spectrum of bulk photometric properties.
We also focus on a stereo-based visual perception system and evaluate both traditional and deep learning-based algorithms for depth estimation from stereo matching; a minimal classical example is sketched after this entry.
Our framework can accommodate a wide range of hypotheses about the visual appearance of icy moon terrain.
arXiv Detail & Related papers (2024-01-23T00:06:19Z)
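The traditional branch of such a stereo evaluation usually amounts to block matching followed by disparity-to-depth conversion. The sketch below is a generic OpenCV example of that idea, not the authors' pipeline; the image paths and calibration values are placeholders.

```python
import cv2
import numpy as np

# Rectified stereo pair (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; numDisparities must be a multiple of 16.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point x16

# Disparity to metric depth: depth = f * B / d, with focal length f (pixels)
# and baseline B (metres). The calibration values here are placeholders.
fx, baseline = 800.0, 0.2
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = fx * baseline / disparity[valid]
```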
- SUPS: A Simulated Underground Parking Scenario Dataset for Autonomous Driving [41.221988979184665]
SUPS is a simulated dataset for underground automatic parking.
It supports multiple tasks with multiple sensors and multiple semantic labels aligned with successive images.
We also evaluate the state-of-the-art SLAM algorithms and perception models on our dataset.
arXiv Detail & Related papers (2023-02-25T02:59:12Z)
- Navya3DSeg -- Navya 3D Semantic Segmentation Dataset & split generation for autonomous vehicles [63.20765930558542]
3D semantic data are useful for core perception tasks such as obstacle detection and ego-vehicle localization.
We propose a new dataset, Navya 3D Segmentation (Navya3DSeg), with a diverse label space corresponding to a large-scale, production-grade operational domain.
It contains 23 labeled sequences and 25 supplementary sequences without labels, designed to explore self-supervised and semi-supervised semantic segmentation benchmarks on point clouds.
arXiv Detail & Related papers (2023-02-16T13:41:19Z)
- Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting [64.7364925689825]
Argoverse 2 (AV2) is a collection of three datasets for perception and forecasting research in the self-driving domain.
The Lidar dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose.
The Motion Forecasting dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene.
arXiv Detail & Related papers (2023-01-02T00:36:22Z)
- AstroVision: Towards Autonomous Feature Detection and Description for Missions to Small Bodies Using Deep Learning [14.35670544436183]
This paper introduces AstroVision, a large-scale dataset comprising 115,970 densely annotated, real images of 16 different small bodies captured during past and ongoing missions.
We leverage AstroVision to develop a set of standardized benchmarks and conduct an exhaustive evaluation of both handcrafted and data-driven feature detection and description methods.
Next, we employ AstroVision for end-to-end training of a state-of-the-art, deep feature detection and description network and demonstrate improved performance on multiple benchmarks.
arXiv Detail & Related papers (2022-08-03T13:18:44Z)
- A Spacecraft Dataset for Detection, Segmentation and Parts Recognition [42.27081423489484]
In this paper, we release a dataset for spacecraft detection, instance segmentation and part recognition.
The main contribution of this work is the development of the dataset using images of space stations and satellites.
We also provide evaluations with state-of-the-art methods in object detection and instance segmentation as a benchmark for the dataset.
arXiv Detail & Related papers (2021-06-15T14:36:56Z)
- SOON: Scenario Oriented Object Navigation with Graph-based Exploration [102.74649829684617]
The ability to navigate like a human towards a language-guided target from anywhere in a 3D embodied environment is one of the 'holy grail' goals of intelligent robots.
Most visual navigation benchmarks focus on navigating toward a target from a fixed starting point, guided by an elaborate set of instructions that describes the route step by step.
This approach deviates from real-world problems in which a human only describes what the object and its surroundings look like and asks the robot to start navigating from anywhere.
arXiv Detail & Related papers (2021-03-31T15:01:04Z)
- Campus3D: A Photogrammetry Point Cloud Benchmark for Hierarchical Understanding of Outdoor Scene [76.4183572058063]
We present a richly-annotated 3D point cloud dataset for multiple outdoor scene understanding tasks.
The dataset has been annotated point-wise with both hierarchical and instance-based labels.
We formulate a hierarchical learning problem for 3D point cloud segmentation and propose a measure that evaluates consistency across the label hierarchies.
arXiv Detail & Related papers (2020-08-11T19:10:32Z)
- LayoutMP3D: Layout Annotation of Matterport3D [59.11106101006007]
We consider the Matterport3D dataset with its originally provided depth-map ground truths and further release our layout ground-truth annotations for a subset of Matterport3D.
Our dataset provides both the layout and depth information, which enables the opportunity to explore the environment by integrating both cues.
arXiv Detail & Related papers (2020-03-30T14:40:56Z)
- Deep Learning based Pedestrian Inertial Navigation: Methods, Dataset and On-Device Inference [49.88536971774444]
Inertial measurement units (IMUs) are small, cheap, energy efficient, and widely employed in smart devices and mobile robots.
Exploiting inertial data to support accurate and reliable pedestrian navigation is a key component of emerging Internet-of-Things applications and services.
We present and release the Oxford Inertial Odometry dataset (OxIOD), a first-of-its-kind public dataset for deep learning based inertial navigation research.
arXiv Detail & Related papers (2020-01-13T04:41:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.