LuSeg: Efficient Negative and Positive Obstacles Segmentation via Contrast-Driven Multi-Modal Feature Fusion on the Lunar
- URL: http://arxiv.org/abs/2503.11409v1
- Date: Fri, 14 Mar 2025 13:51:52 GMT
- Title: LuSeg: Efficient Negative and Positive Obstacles Segmentation via Contrast-Driven Multi-Modal Feature Fusion on the Lunar
- Authors: Shuaifeng Jiao, Zhiwen Zeng, Zhuoqun Su, Xieyuanli Chen, Zongtan Zhou, Huimin Lu
- Abstract summary: We have developed a lunar surface simulation system called the Lunar Exploration Simulator System (LESS). We also propose a novel two-stage segmentation network called LuSeg. Through contrastive learning, LuSeg enforces semantic consistency between the RGB encoder from Stage I and the depth encoder from Stage II.
- Score: 8.215362367428565
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As lunar exploration missions grow increasingly complex, ensuring safe and autonomous rover-based surface exploration has become one of the key challenges in lunar exploration tasks. In this work, we have developed a lunar surface simulation system called the Lunar Exploration Simulator System (LESS) and the LunarSeg dataset, which provides RGB-D data for lunar obstacle segmentation that includes both positive and negative obstacles. Additionally, we propose a novel two-stage segmentation network called LuSeg. Through contrastive learning, it enforces semantic consistency between the RGB encoder from Stage I and the depth encoder from Stage II. Experimental results on our proposed LunarSeg dataset and the public real-world NPO road obstacle dataset demonstrate that LuSeg achieves state-of-the-art segmentation performance for both positive and negative obstacles while maintaining a high inference speed of approximately 57 Hz. We have released the implementation of our LESS system, the LunarSeg dataset, and the code of LuSeg at: https://github.com/nubot-nudt/LuSeg.
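The exact LuSeg loss lives in the authors' repository; as a rough illustration of the idea described above, here is a minimal sketch assuming a standard InfoNCE-style contrastive objective between pooled RGB (Stage I) and depth (Stage II) embeddings. All names and the temperature value are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch (assumption): a contrastive consistency loss between an
# RGB encoder (Stage I) and a depth encoder (Stage II), in the spirit of
# LuSeg's two-stage training. Names and hyperparameters are made up here.
import torch
import torch.nn.functional as F

def contrastive_consistency_loss(rgb_feat, depth_feat, temperature=0.07):
    """InfoNCE-style loss pulling matched RGB/depth embeddings together.

    rgb_feat, depth_feat: (B, C) pooled embeddings of the same scenes.
    """
    rgb = F.normalize(rgb_feat, dim=1)
    dep = F.normalize(depth_feat, dim=1)
    logits = rgb @ dep.t() / temperature        # (B, B) similarity matrix
    targets = torch.arange(rgb.size(0), device=rgb.device)
    # Symmetric loss: each RGB embedding should match its own depth embedding
    # and vice versa; all other pairs in the batch act as negatives.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```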
Related papers
- EarthMapper: Visual Autoregressive Models for Controllable Bidirectional Satellite-Map Translation [50.433911327489554]
We introduce EarthMapper, a novel framework for controllable satellite-map translation.
We also contribute CNSatMap, a large-scale dataset comprising 302,132 precisely aligned satellite-map pairs across 38 Chinese cities.
Experiments on CNSatMap and the New York dataset demonstrate EarthMapper's superior performance.
arXiv Detail & Related papers (2025-04-28T02:41:12Z) - JiSAM: Alleviate Labeling Burden and Corner Case Problems in Autonomous Driving via Minimal Real-World Data [49.2298619289506]
We propose a plug-and-play method called JiSAM, shorthand for Jittering augmentation, domain-aware backbone, and memory-based Sectorized AlignMent. In extensive experiments on the widely used AD dataset NuScenes, we demonstrate that, with a SOTA 3D object detector, JiSAM can exploit simulation data together with labels on only 2.5% of the available real data to achieve performance comparable to models trained on all real data.
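As a rough illustration of the "Jittering augmentation" component named above, here is a minimal sketch, assuming it means perturbing simulated LiDAR points with clipped Gaussian noise so they better match real-sensor variation; the parameters and this interpretation are assumptions, not the paper's settings.

```python
# Minimal sketch (assumption): jittering simulated LiDAR points with clipped
# Gaussian noise. Sigma and clip values are illustrative placeholders.
import numpy as np

def jitter_point_cloud(points, sigma=0.01, clip=0.05, rng=None):
    """points: (N, 3) array of x, y, z coordinates from simulation."""
    if rng is None:
        rng = np.random.default_rng()
    noise = np.clip(sigma * rng.standard_normal(points.shape), -clip, clip)
    return points + noise

cloud = np.random.default_rng(0).uniform(-50, 50, size=(1024, 3))
augmented = jitter_point_cloud(cloud)
```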
arXiv Detail & Related papers (2025-03-11T13:35:39Z) - MoonMetaSync: Lunar Image Registration Analysis [1.5371340850225041]
This paper compares scale-invariant (SIFT) and scale-variant (ORB) feature detection methods, alongside our novel feature detector, IntFeat, specifically applied to lunar imagery.
We evaluate these methods using low-resolution (128x128) and high-resolution (1024x1024) lunar image patches, providing insights into their performance across scales in challenging extraterrestrial environments.
IntFeat combines high-level features from SIFT and low-level features from ORB into a single vector space for robust lunar image registration.
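The summary does not specify how IntFeat builds its joint vector space; the sketch below shows one plausible reading using OpenCV, pooling SIFT and ORB descriptors separately and concatenating them into a single image-level signature. It is an illustration, not the IntFeat algorithm.

```python
# Hedged sketch: one plausible reading of combining SIFT and ORB features
# into a single vector space. Not the actual IntFeat method.
import cv2
import numpy as np

def fused_image_signature(gray):
    """gray: 2D uint8 lunar image patch (e.g. 128x128 or 1024x1024)."""
    _, sift_desc = cv2.SIFT_create().detectAndCompute(gray, None)  # (Ns, 128)
    _, orb_desc = cv2.ORB_create().detectAndCompute(gray, None)    # (No, 32)
    if sift_desc is None or orb_desc is None:
        return None  # too few features to describe this patch
    # Pool each descriptor type, then concatenate into one 160-d vector.
    fused = np.concatenate([sift_desc.mean(axis=0),
                            orb_desc.astype(np.float32).mean(axis=0)])
    return fused / (np.linalg.norm(fused) + 1e-8)
```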
arXiv Detail & Related papers (2024-10-14T22:05:48Z) - Underwater Camouflaged Object Tracking Meets Vision-Language SAM2 [60.47622353256502]
We propose the first large-scale multi-modal underwater camouflaged object tracking dataset, namely UW-COT220.
Based on the proposed dataset, this work first evaluates current advanced visual object tracking methods, including SAM- and SAM2-based trackers, in challenging underwater environments.
Our findings highlight the improvements of SAM2 over SAM, demonstrating its enhanced ability to handle the complexities of underwater camouflaged objects.
arXiv Detail & Related papers (2024-09-25T13:10:03Z) - Synthetic Lunar Terrain: A Multimodal Open Dataset for Training and Evaluating Neuromorphic Vision Algorithms [18.85150427551313]
Synthetic Lunar Terrain (SLT) is an open dataset collected from an analogue test site for lunar missions.
It includes several side-by-side captures from event-based and conventional RGB cameras.
The event-stream recorded from the neuromorphic vision sensor of the event-based camera is of particular interest.
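For readers unfamiliar with event streams, here is a minimal sketch, assuming events arrive as (x, y, polarity) tuples, of accumulating a time window of events into a signed 2D frame; the field layout and resolution are assumptions, not the SLT format.

```python
# Sketch (assumption): accumulating an event stream from a neuromorphic
# camera into a 2D frame for visualization or CNN input.
import numpy as np

def events_to_frame(xs, ys, polarities, height=480, width=640):
    """Sum signed events per pixel; xs, ys are integer pixel coordinates."""
    frame = np.zeros((height, width), dtype=np.float32)
    np.add.at(frame, (ys, xs), np.where(polarities > 0, 1.0, -1.0))
    return frame
```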
arXiv Detail & Related papers (2024-08-30T02:14:33Z) - LuSNAR: A Lunar Segmentation, Navigation and Reconstruction Dataset based on Multi-sensor for Autonomous Exploration [2.3011380360879237]
Environmental perception and navigation algorithms are the foundation for lunar rovers.
Most of the existing lunar datasets are targeted at a single task.
We propose a multi-task, multi-scene, and multi-label lunar benchmark dataset LuSNAR.
arXiv Detail & Related papers (2024-07-09T02:47:58Z) - On the Generation of a Synthetic Event-Based Vision Dataset for Navigation and Landing [69.34740063574921]
This paper presents a methodology for generating event-based vision datasets from optimal landing trajectories.
We construct sequences of photorealistic images of the lunar surface with the Planet and Asteroid Natural Scene Generation Utility.
We demonstrate that the pipeline can generate realistic event-based representations of surface features by constructing a dataset of 500 trajectories.
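The paper's pipeline uses a dedicated renderer and event-camera model; as a toy illustration of turning rendered frame sequences into events, here is a sketch assuming a simple per-pixel log-intensity contrast threshold. The threshold and interface are illustrative, not the paper's simulator.

```python
# Toy event simulator (assumption): emit an event wherever the log-intensity
# change between consecutive rendered frames exceeds a contrast threshold.
import numpy as np

def frames_to_events(prev_frame, next_frame, threshold=0.15):
    """Return (ys, xs, polarity) for pixels whose log-intensity changed."""
    log_prev = np.log(prev_frame.astype(np.float32) + 1.0)
    log_next = np.log(next_frame.astype(np.float32) + 1.0)
    diff = log_next - log_prev
    ys, xs = np.nonzero(np.abs(diff) >= threshold)
    polarity = np.sign(diff[ys, xs]).astype(np.int8)
    return ys, xs, polarity
```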
arXiv Detail & Related papers (2023-08-01T09:14:20Z) - AstroSLAM: Autonomous Monocular Navigation in the Vicinity of a Celestial Small Body -- Theory and Experiments [13.14201332737947]
We propose a vision-based solution for autonomous online navigation around an unknown target small celestial body.
AstroSLAM is predicated on the formulation of the SLAM problem as an incrementally growing factor graph, facilitated by the use of the GTSAM library and the iSAM2 engine.
We incorporate orbital motion constraints into the factor graph by devising a novel relative dynamics factor, which links the relative pose of the spacecraft to the problem of predicting trajectories stemming from the motion of the spacecraft in the vicinity of the small body.
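The relative dynamics factor itself is the paper's contribution and is not reproduced here; the sketch below only shows the incremental GTSAM/iSAM2 factor-graph pattern AstroSLAM builds on, with a standard BetweenFactorPose3 as a stand-in for the dynamics factor. Keys and noise values are placeholders.

```python
# Skeleton of an incrementally growing factor graph with GTSAM + iSAM2.
# The BetweenFactorPose3 below is a placeholder; AstroSLAM would insert its
# novel relative dynamics factor at this point instead.
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X

isam = gtsam.ISAM2()
graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.1))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.2))

# Anchor the first spacecraft pose at the origin.
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), prior_noise))
initial.insert(X(0), gtsam.Pose3())

# Each new frame: add a relative-motion factor and update incrementally.
delta = gtsam.Pose3(gtsam.Rot3(), np.array([1.0, 0.0, 0.0]))
graph.add(gtsam.BetweenFactorPose3(X(0), X(1), delta, odom_noise))
initial.insert(X(1), delta)

isam.update(graph, initial)
estimate = isam.calculateEstimate()
```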
arXiv Detail & Related papers (2022-12-01T08:24:21Z) - LiDARCap: Long-range Marker-less 3D Human Motion Capture with LiDAR Point Clouds [58.402752909624716]
Existing motion capture datasets are largely short-range and cannot yet meet the needs of long-range applications.
We propose LiDARHuman26M, a new human motion capture dataset captured by LiDAR at a much longer range to overcome this limitation.
Our dataset also includes the ground truth human motions acquired by the IMU system and the synchronous RGB images.
arXiv Detail & Related papers (2022-03-28T12:52:45Z) - Batch Exploration with Examples for Scalable Robotic Reinforcement Learning [63.552788688544254]
Batch Exploration with Examples (BEE) explores relevant regions of the state space, guided by a modest number of human-provided images of important states.
BEE is able to tackle challenging vision-based manipulation tasks both in simulation and on a real Franka robot.
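As a loose illustration of exploration guided by a few human-provided images, here is a toy sketch assuming a learned relevance score used as an exploration bonus, trained with the human images as positives and replay states as negatives; BEE's actual method uses an ensemble over latent states, so treat every name here as hypothetical.

```python
# Toy sketch (assumption): a relevance model over human-marked states whose
# score serves as an exploration bonus. Not BEE's actual architecture.
import torch
import torch.nn as nn

class RelevanceModel(nn.Module):
    """Scores how much an observation resembles the human-marked states."""
    def __init__(self, obs_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs):
        return torch.sigmoid(self.net(obs))  # relevance in [0, 1]

def exploration_bonus(model, obs):
    # Higher bonus near states the model considers relevant, steering
    # batch data collection toward them.
    with torch.no_grad():
        return model(obs).squeeze(-1)
```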
arXiv Detail & Related papers (2020-10-22T17:49:25Z) - Accurate RGB-D Salient Object Detection via Collaborative Learning [101.82654054191443]
RGB-D saliency detection shows impressive ability in some challenging scenarios.
We propose a novel collaborative learning framework where edge, depth and saliency are leveraged in a more efficient way.
arXiv Detail & Related papers (2020-07-23T04:33:36Z) - Unsupervised Distribution Learning for Lunar Surface Anomaly Detection [0.0]
We show that modern data-driven machine learning techniques can be successfully applied to lunar surface remote sensing data.
In particular we train an unsupervised distribution learning neural network model to find the Apollo 15 landing module.
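The summary does not name the model; a common unsupervised pattern for this task is sketched below, assuming an autoencoder trained on nominal terrain patches whose reconstruction error flags unusual content such as a landing module. Architecture and sizes are illustrative.

```python
# Sketch (assumption): reconstruction-error anomaly detection on 64x64
# grayscale terrain patches. Layer sizes are illustrative placeholders.
import torch
import torch.nn as nn

class PatchAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score(model, patch):
    """patch: (1, 1, 64, 64) tensor in [0, 1]; unseen content scores high."""
    with torch.no_grad():
        return torch.mean((model(patch) - patch) ** 2).item()
```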
arXiv Detail & Related papers (2020-01-14T05:38:37Z)