SuperEx: Enhancing Indoor Mapping and Exploration using Non-Line-of-Sight Perception
- URL: http://arxiv.org/abs/2510.10506v1
- Date: Sun, 12 Oct 2025 08:52:20 GMT
- Title: SuperEx: Enhancing Indoor Mapping and Exploration using Non-Line-of-Sight Perception
- Authors: Kush Garg, Akshat Dave
- Abstract summary: In this work, we bring non-line-of-sight (NLOS) sensing to robotic exploration. We leverage single-photon LiDARs, which capture time-of-flight histograms that encode the presence of hidden objects. We introduce SuperEx, a framework that integrates NLOS sensing directly into the mapping-exploration loop.
- Score: 4.882290104560518
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Efficient exploration and mapping in unknown indoor environments is a fundamental challenge, with high stakes in time-critical settings. In current systems, robot perception remains confined to line-of-sight; occluded regions remain unknown until physically traversed, leading to inefficient exploration when layouts deviate from prior assumptions. In this work, we bring non-line-of-sight (NLOS) sensing to robotic exploration. We leverage single-photon LiDARs, which capture time-of-flight histograms that encode the presence of hidden objects - allowing robots to look around blind corners. Recent single-photon LiDARs have become practical and portable, enabling deployment beyond controlled lab settings. Prior NLOS works target 3D reconstruction in static, lab-based scenarios, and initial efforts toward NLOS-aided navigation consider simplified geometries. We introduce SuperEx, a framework that integrates NLOS sensing directly into the mapping-exploration loop. SuperEx augments global map prediction with beyond-line-of-sight cues by (i) carving empty NLOS regions from timing histograms and (ii) reconstructing occupied structure via a two-step physics-based and data-driven approach that leverages structural regularities. Evaluations on complex simulated maps and the real-world KTH Floorplan dataset show a 12% gain in mapping accuracy under < 30% coverage and improved exploration efficiency compared to line-of-sight baselines, opening a path to reliable mapping beyond direct visibility.
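As a concrete illustration of step (i), the sketch below shows how empty-region carving from single-photon timing histograms could work. This is a minimal reading of the idea, not the authors' implementation: it assumes a confocal setup in which each relay-wall point is both illuminated and observed and histograms are time-zeroed at the wall (direct-bounce offset already removed); the function name, bin width, and noise threshold are all illustrative.

```python
import numpy as np

C = 3e8            # speed of light, m/s
BIN_WIDTH = 1e-10  # histogram bin width in seconds (illustrative)

def carve_empty_nlos(voxels, wall_points, histograms, noise_floor=2):
    """Carve hidden-volume voxels that the transients prove empty.

    voxels:      (V, 3) candidate voxel centres in the hidden region
    wall_points: (W, 3) scanned relay-wall points
    histograms:  (W, T) photon counts per time bin, zeroed at the wall
    Returns a boolean mask, True where a voxel can be declared empty.
    """
    empty = np.zeros(len(voxels), dtype=bool)
    for w, hist in zip(wall_points, histograms):
        # Round-trip path wall -> voxel -> wall, and its time bin.
        dist = 2.0 * np.linalg.norm(voxels - w, axis=1)
        bins = (dist / C / BIN_WIDTH).astype(int)
        valid = bins < hist.shape[0]
        # If this wall point records no photons at the voxel's
        # three-bounce arrival time, nothing reflective can occupy
        # the voxel (assuming it is visible from the wall point).
        empty[valid] |= hist[bins[valid]] <= noise_floor
    return empty
```

Carving can only ever declare emptiness; the abstract's step (ii), reconstructing occupied structure, is the harder inverse problem, which SuperEx tackles with a combined physics-based and data-driven stage.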
Related papers
- WildOS: Open-Vocabulary Object Search in the Wild [12.098091049832965]
This work presents WildOS, a unified system for long-range, open-vocabulary object search. We use a foundation-model-based vision module, ExploRFM, to score frontier nodes of the graph. We also introduce a particle-filter-based method for coarse localization of the open-vocabulary target query.
arXiv Detail & Related papers (2026-02-22T19:14:00Z) - Boosting Zero-Shot VLN via Abstract Obstacle Map-Based Waypoint Prediction with TopoGraph-and-VisitInfo-Aware Prompting [18.325003967982827]
Vision-language navigation (VLN) has emerged as a key task for embodied agents with broad practical applications. We propose a zero-shot framework that integrates a simplified yet effective waypoint predictor with a multimodal large language model (MLLM). Experiments on R2R-CE and RxR-CE show that our method achieves state-of-the-art zero-shot performance, with success rates of 41% and 36%, respectively.
arXiv Detail & Related papers (2025-09-24T19:21:39Z) - Sight Over Site: Perception-Aware Reinforcement Learning for Efficient Robotic Inspection [57.37596278863949]
In this work, we revisit inspection from a perception-aware perspective. We propose an end-to-end reinforcement learning framework that explicitly incorporates target visibility as the primary objective. We show that our method outperforms existing classical and learning-based navigation approaches.
arXiv Detail & Related papers (2025-09-22T15:14:02Z) - Move to Understand a 3D Scene: Bridging Visual Grounding and Exploration for Efficient and Versatile Embodied Navigation [54.04601077224252]
Embodied scene understanding requires not only comprehending visual-spatial information but also determining where to explore next in the 3D physical world. 3D vision-language learning enables embodied agents to effectively explore and understand their environment. The model's versatility enables navigation using diverse input modalities, including categories, language descriptions, and reference images.
arXiv Detail & Related papers (2025-07-05T14:15:52Z) - NOVA: Navigation via Object-Centric Visual Autonomy for High-Speed Target Tracking in Unstructured GPS-Denied Environments [56.35569661650558]
We introduce NOVA, a fully onboard, object-centric framework that enables robust target tracking and collision-aware navigation. Rather than constructing a global map, NOVA formulates perception, estimation, and control entirely in the target's reference frame. We validate NOVA across challenging real-world scenarios, including urban mazes, forest trails, and repeated transitions through buildings with intermittent GPS loss.
arXiv Detail & Related papers (2025-06-23T14:28:30Z) - FrontierNet: Learning Visual Cues to Explore [54.8265603996238]
This work leverages 2D visual cues for efficient autonomous exploration, addressing the limitations of extracting goal poses from a 3D map. We propose a visual-only frontier-based exploration system, with FrontierNet as its core component. Our approach provides an alternative to existing 3D-dependent goal-extraction approaches, achieving a 15% improvement in early-stage exploration efficiency.
arXiv Detail & Related papers (2025-01-08T16:25:32Z) - Enhancing Autonomous Navigation by Imaging Hidden Objects using Single-Photon LiDAR [12.183773707869069]
We present a novel approach that leverages Non-Line-of-Sight (NLOS) sensing using single-photon LiDAR to improve visibility and enhance autonomous navigation. Our method enables mobile robots to "see around corners" by utilizing multi-bounce light information.
arXiv Detail & Related papers (2024-10-04T16:03:13Z) - OccNeRF: Advancing 3D Occupancy Prediction in LiDAR-Free Environments [77.0399450848749]
We propose OccNeRF, a method for training occupancy networks without 3D supervision.
We parameterize the reconstructed occupancy fields and reorganize the sampling strategy to align with the cameras' infinite perceptive range.
For semantic occupancy prediction, we design several strategies to polish the prompts and filter the outputs of a pretrained open-vocabulary 2D segmentation model.
arXiv Detail & Related papers (2023-12-14T18:58:52Z) - Occupancy Anticipation for Efficient Exploration and Navigation [97.17517060585875]
We propose occupancy anticipation, where the agent uses its egocentric RGB-D observations to infer the occupancy state beyond the visible regions.
By exploiting context in both the egocentric views and top-down maps, our model successfully anticipates a broader map of the environment (a minimal sketch of this idea follows the list).
Our approach is the winning entry in the 2020 Habitat PointNav Challenge.
arXiv Detail & Related papers (2020-08-21T03:16:51Z)
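For context on the anticipation idea in the last entry, here is a minimal sketch of an occupancy-anticipation network. It is not the paper's architecture: the layer sizes, channel semantics, and the `OccupancyAnticipator` name are illustrative, and the only assumption carried over from the abstract is that a model maps a partially observed egocentric top-down map to occupancy estimates beyond the visible region.

```python
import torch
import torch.nn as nn

class OccupancyAnticipator(nn.Module):
    """Encoder-decoder sketch: input channels encode observed-occupied
    and observed-free cells; the output is an occupancy logit per cell,
    including cells never directly observed."""
    def __init__(self, in_ch=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, ego_map):
        # Per-cell occupancy logits over the same spatial extent.
        return self.decoder(self.encoder(ego_map))

# Usage: anticipate a 64x64 local map from sparse observations.
model = OccupancyAnticipator()
ego_map = torch.zeros(1, 2, 64, 64)    # mostly unobserved local map
probs = torch.sigmoid(model(ego_map))  # anticipated occupancy in [0, 1]
```

Trained against ground-truth maps, such a model supplies the "anticipated" occupancy that exploration planners can treat as a prior; SuperEx plays an analogous role, but grounds the prediction in physical NLOS measurements rather than learned context alone.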