Real-Time Glass Detection and Reprojection using Sensor Fusion Onboard Aerial Robots
- URL: http://arxiv.org/abs/2510.06518v1
- Date: Tue, 07 Oct 2025 23:31:45 GMT
- Title: Real-Time Glass Detection and Reprojection using Sensor Fusion Onboard Aerial Robots
- Authors: Malakhi Hopkins, Varun Murali, Vijay Kumar, Camillo J Taylor
- Abstract summary: Transparent obstacles present significant challenges to reliable navigation and mapping. We propose a novel and computationally efficient framework for detecting and mapping transparent obstacles onboard a sub-300g quadrotor. Our method fuses data from a Time-of-Flight (ToF) camera and an ultrasonic sensor with a custom, lightweight 2D convolution model. The entire pipeline operates in real-time, utilizing only a small fraction of a CPU core on an embedded processor.
- Score: 9.34137124377529
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous aerial robots are increasingly being deployed in real-world scenarios, where transparent obstacles present significant challenges to reliable navigation and mapping. These materials pose a unique problem for traditional perception systems because they lack discernible features and can cause conventional depth sensors to fail, leading to inaccurate maps and potential collisions. To ensure safe navigation, robots must be able to accurately detect and map these transparent obstacles. Existing methods often rely on large, expensive sensors or algorithms that impose high computational burdens, making them unsuitable for low Size, Weight, and Power (SWaP) robots. In this work, we propose a novel and computationally efficient framework for detecting and mapping transparent obstacles onboard a sub-300g quadrotor. Our method fuses data from a Time-of-Flight (ToF) camera and an ultrasonic sensor with a custom, lightweight 2D convolution model. This specialized approach accurately detects specular reflections and propagates their depth into corresponding empty regions of the depth map, effectively rendering transparent obstacles visible. The entire pipeline operates in real-time, utilizing only a small fraction of a CPU core on an embedded processor. We validate our system through a series of experiments in both controlled and real-world environments, demonstrating the utility of our method through experiments where the robot maps indoor environments containing glass. Our work is, to our knowledge, the first of its kind to demonstrate a real-time, onboard transparent obstacle mapping system on a low-SWaP quadrotor using only the CPU.
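The core mechanism described in the abstract (detect specular reflections, then propagate their depth into the empty regions of the ToF depth map) can be illustrated with a minimal sketch. This is not the paper's implementation; the function and parameter names, and the use of a single scalar ultrasonic range, are assumptions for illustration.

```python
import numpy as np

def propagate_glass_depth(tof_depth, specular_mask, ultrasonic_range,
                          invalid_value=0.0):
    """Illustrative sketch of the reprojection step: wherever the
    detection model flags a specular reflection, fill invalid (empty)
    ToF pixels with the fused ultrasonic range so the glass surface
    becomes visible to the mapper. All names are hypothetical."""
    filled = tof_depth.copy()
    # Pixels where the ToF camera returned no valid measurement.
    empty = tof_depth == invalid_value
    # Fill only empty pixels that the conv model associates with glass.
    filled[np.logical_and(empty, specular_mask)] = ultrasonic_range
    return filled

# Toy 3x3 depth map in meters; 0.0 marks missing ToF returns.
depth = np.array([[2.0, 0.0, 2.25],
                  [0.0, 0.0, 2.0],
                  [1.75, 2.0, 2.5]], dtype=np.float32)
mask = np.array([[0, 1, 0],
                 [1, 1, 0],
                 [0, 0, 0]], dtype=bool)
out = propagate_glass_depth(depth, mask, ultrasonic_range=1.5)
```

Valid depth readings pass through untouched; only flagged, empty pixels inherit the ultrasonic range, which is what makes the transparent surface appear in the resulting map.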
Related papers
- Sight Over Site: Perception-Aware Reinforcement Learning for Efficient Robotic Inspection [57.37596278863949]
In this work, we revisit inspection from a perception-aware perspective. We propose an end-to-end reinforcement learning framework that explicitly incorporates target visibility as the primary objective. We show that our method outperforms existing classical and learning-based navigation approaches.
arXiv Detail & Related papers (2025-09-22T15:14:02Z)
- Real-Time Navigation for Autonomous Aerial Vehicles Using Video [11.414350041043326]
We introduce a novel Markov Decision Process (MDP) framework to reduce the workload of Computer Vision (CV) algorithms. We apply our proposed framework to both feature-based and neural-network-based object-detection tasks. These holistic tests show significant benefits in energy consumption and speed with only a limited loss in accuracy.
arXiv Detail & Related papers (2025-04-01T01:14:42Z)
- Enhancing Autonomous Navigation by Imaging Hidden Objects using Single-Photon LiDAR [12.183773707869069]
We present a novel approach that leverages Non-Line-of-Sight (NLOS) sensing using single-photon LiDAR to improve visibility and enhance autonomous navigation. Our method enables mobile robots to "see around corners" by utilizing multi-bounce light information.
arXiv Detail & Related papers (2024-10-04T16:03:13Z)
- Object Depth and Size Estimation using Stereo-vision and Integration with SLAM [2.122581579741322]
We propose a highly accurate stereo-vision approach to complement LiDAR in autonomous robots.
The system employs advanced stereo vision-based object detection to detect both tangible and non-tangible objects.
The depth and size information is then integrated into the SLAM process to enhance the robot's navigation capabilities in complex environments.
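The depth and size estimation this summary refers to rests on the classical pinhole stereo relations; the paper's own pipeline is not published here, so the following is a generic sketch of those relations with assumed calibration values.

```python
def stereo_depth_m(disparity_px, focal_px, baseline_m):
    # Classical pinhole stereo relation: Z = f * B / d.
    return focal_px * baseline_m / disparity_px

def object_width_m(width_px, depth_m, focal_px):
    # Back-project a pixel extent to metric size: W = w_px * Z / f.
    return width_px * depth_m / focal_px

# Hypothetical calibration: 672 px focal length, 12.5 cm baseline.
z = stereo_depth_m(disparity_px=42, focal_px=672, baseline_m=0.125)  # 2.0 m
w = object_width_m(width_px=84, depth_m=z, focal_px=672)             # 0.25 m
```

Per-object depth and width computed this way are exactly the quantities a SLAM back end can consume as landmark observations.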
arXiv Detail & Related papers (2024-09-11T21:12:48Z)
- Floor extraction and door detection for visually impaired guidance [78.94595951597344]
Finding obstacle-free paths in unknown environments is a major navigation challenge for visually impaired people and autonomous robots.
New devices based on computer vision systems can help impaired people to overcome the difficulties of navigating in unknown environments in safe conditions.
This work proposes a combination of sensors and algorithms that can lead to a navigation system for visually impaired people.
arXiv Detail & Related papers (2024-01-30T14:38:43Z)
- Efficient Real-time Smoke Filtration with 3D LiDAR for Search and Rescue with Autonomous Heterogeneous Robotic Systems [56.838297900091426]
Smoke and dust affect the performance of any mobile robotic platform due to their reliance on onboard perception systems.
This paper proposes a novel modular computation filtration pipeline based on intensity and spatial information.
arXiv Detail & Related papers (2023-08-14T16:48:57Z)
- DOTIE -- Detecting Objects through Temporal Isolation of Events using a Spiking Architecture [5.340730281227837]
Vision-based autonomous navigation systems rely on fast and accurate object detection algorithms to avoid obstacles.
We propose a novel technique that utilizes the temporal information inherently present in the events to efficiently detect moving objects.
We show that by utilizing our architecture, autonomous navigation systems can have minimal latency and energy overheads for performing object detection.
arXiv Detail & Related papers (2022-10-03T14:43:11Z)
- Drone Detection and Tracking in Real-Time by Fusion of Different Sensing Modalities [66.4525391417921]
We design and evaluate a multi-sensor drone detection system.
Our solution integrates a fish-eye camera as well to monitor a wider part of the sky and steer the other cameras towards objects of interest.
The thermal camera is shown to be a feasible solution, performing as well as the video camera even though the unit employed here has a lower resolution.
arXiv Detail & Related papers (2022-07-05T10:00:58Z)
- Learning High-Speed Flight in the Wild [101.33104268902208]
We propose an end-to-end approach that can autonomously fly quadrotors through complex natural and man-made environments at high speeds.
The key principle is to directly map noisy sensory observations to collision-free trajectories in a receding-horizon fashion.
By simulating realistic sensor noise, our approach achieves zero-shot transfer from simulation to challenging real-world environments.
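The sensor-noise simulation mentioned above is a standard sim-to-real technique; a minimal sketch is shown below. The noise model and its parameters are illustrative assumptions, not the paper's actual values.

```python
import numpy as np

def corrupt_depth(depth, rng, sigma_m=0.05, dropout_p=0.1):
    """Inject Gaussian range noise plus random pixel dropout (zeroed
    returns), a common way to mimic real depth-sensor artifacts when
    training in simulation. Parameters are hypothetical."""
    noisy = depth + rng.normal(0.0, sigma_m, size=depth.shape)
    # Randomly drop returns, as real sensors do on dark or glossy surfaces.
    noisy[rng.random(depth.shape) < dropout_p] = 0.0
    return noisy

rng = np.random.default_rng(42)
clean = np.full((4, 4), 3.0)      # idealized simulated depth, meters
noisy = corrupt_depth(clean, rng)  # what the policy trains on
```

Training on the corrupted observations rather than the clean ones is what lets a simulation-trained policy tolerate real sensor imperfections at deployment time.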
arXiv Detail & Related papers (2021-10-11T09:43:11Z)
- Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions [49.79068659889639]
Ingenuity, which recently landed on Mars, marks the beginning of a new era of exploration unhindered by traversability constraints.
We present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking.
We also present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix.
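The idea of reading scale-drift risk off a principal component analysis of the translation information matrix can be sketched as follows. This is a plausible reconstruction under assumptions, not the paper's algorithm: the eigendecomposition, the 1/(1+λ) risk mapping, and all names are hypothetical.

```python
import numpy as np

def scale_drift_risk(translation_info):
    """Sketch: eigendecompose a 3x3 relative-translation information
    matrix. A small smallest eigenvalue means translation (and hence
    scale) is weakly observable along that eigenvector; map it to a
    risk score in (0, 1]. The mapping is an illustrative choice."""
    eigvals, eigvecs = np.linalg.eigh(translation_info)  # ascending order
    weakest = eigvals[0]
    return 1.0 / (1.0 + weakest), eigvecs[:, 0]

# Well-constrained translation: all directions equally informative.
risk_ok, _ = scale_drift_risk(np.eye(3) * 4.0)
# Degenerate case: no information along the x axis -> high risk.
risk_bad, weak_axis = scale_drift_risk(np.diag([0.0, 4.0, 4.0]))
```

Besides the scalar risk, the weakest eigenvector identifies the direction along which the odometry is least constrained, which could be used to trigger recovery behaviors.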
arXiv Detail & Related papers (2021-09-12T12:52:20Z)
- High-Speed Robot Navigation using Predicted Occupancy Maps [0.0]
We study algorithmic approaches that allow the robot to predict spaces extending beyond the sensor horizon for robust planning at high speeds.
We accomplish this using a generative neural network trained from real-world data without requiring human annotated labels.
We extend our existing control algorithms to support leveraging the predicted spaces to improve collision-free planning and navigation at high speeds.
arXiv Detail & Related papers (2020-12-22T16:25:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.