Event Camera and LiDAR based Human Tracking for Adverse Lighting
Conditions in Subterranean Environments
- URL: http://arxiv.org/abs/2304.08908v1
- Date: Tue, 18 Apr 2023 11:27:41 GMT
- Title: Event Camera and LiDAR based Human Tracking for Adverse Lighting
Conditions in Subterranean Environments
- Authors: Mario A.V. Saucedo, Akash Patel, Rucha Sawlekar, Akshit Saradagi,
Christoforos Kanellakis, Ali-Akbar Agha-Mohammadi and George Nikolakopoulos
- Abstract summary: In this article, we propose a novel LiDAR and event camera fusion modality for subterranean environments.
In the proposed approach, information from the event camera and the LiDAR is fused to localize a human or an object of interest in the robot's local frame.
The efficacy of the proposed scheme has been experimentally validated in a real SubT environment with a Pioneer 3AT mobile robot.
- Score: 2.9988822560180437
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this article, we propose a novel LiDAR and event camera fusion modality
for subterranean (SubT) environments for fast and precise object and human
detection in a wide variety of adverse lighting conditions, such as low or no
light, high-contrast zones and in the presence of blinding light sources. In
the proposed approach, information from the event camera and the LiDAR is fused to
localize a human or an object of interest in the robot's local frame. The local
detection is then transformed into the inertial frame and used to set
references for a Nonlinear Model Predictive Controller (NMPC) for reactive
tracking of humans or objects in SubT environments. The proposed novel fusion
uses intensity filtering and K-means clustering on the LiDAR point cloud and
frequency filtering and connectivity clustering on the events induced in an
event camera by the returning LiDAR beams. The centroids of the clusters in the
event camera and LiDAR streams are then paired to localize reflective markers
present on safety vests and signs in SubT environments. The efficacy of the
proposed scheme has been experimentally validated in a real SubT environment (a
mine) with a Pioneer 3AT mobile robot. The experimental results show real-time
performance for human detection and the NMPC-based controller allows for
reactive tracking of a human or object of interest, even in complete darkness.
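The LiDAR branch of the proposed fusion, intensity filtering followed by K-means clustering to isolate retro-reflective markers and compute their centroids, can be sketched as below. This is a minimal illustration on synthetic data: the function names, the intensity threshold, and the marker positions are hypothetical, and the event-camera branch (frequency filtering and connectivity clustering) and the subsequent centroid pairing step are not shown.

```python
import numpy as np

def filter_by_intensity(points, intensities, threshold):
    """Keep only high-intensity returns, e.g. from retro-reflective markers."""
    return points[intensities >= threshold]

def kmeans(points, k, iters=50, seed=0):
    """Minimal K-means returning the k cluster centroids."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned points
        for j in range(k):
            members = points[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids

# Synthetic scene: two reflective markers amid low-intensity clutter.
rng = np.random.default_rng(1)
marker_a = rng.normal([2.0, 0.5, 0.0], 0.05, size=(40, 3))
marker_b = rng.normal([4.0, -1.0, 0.2], 0.05, size=(40, 3))
clutter = rng.uniform(-5, 5, size=(200, 3))
points = np.vstack([marker_a, marker_b, clutter])
intensities = np.concatenate([np.full(80, 0.9), rng.uniform(0.0, 0.3, 200)])

bright = filter_by_intensity(points, intensities, threshold=0.5)
centroids = kmeans(bright, k=2)
print(np.round(centroids[np.argsort(centroids[:, 0])], 1))
```

In the full method, these LiDAR cluster centroids would be paired with cluster centroids extracted from the event stream to localize the markers in the robot's local frame.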
Related papers
- A New Adversarial Perspective for LiDAR-based 3D Object Detection [15.429996348453967]
We introduce a real-world dataset (ROLiD) comprising LiDAR-scanned point clouds of two random objects: water mist and smoke.
We propose a point cloud sequence generation method using a motion and content decomposition generative adversarial network named PCS-GAN.
Experiments demonstrate that adversarial perturbations based on random objects effectively deceive vehicle detection and reduce the recognition rate of 3D object detection models.
arXiv Detail & Related papers (2024-12-17T15:36:55Z)
- Efficient Real-time Smoke Filtration with 3D LiDAR for Search and Rescue with Autonomous Heterogeneous Robotic Systems [56.838297900091426]
Smoke and dust affect the performance of any mobile robotic platform due to its reliance on onboard perception systems.
This paper proposes a novel modular computation filtration pipeline based on intensity and spatial information.
arXiv Detail & Related papers (2023-08-14T16:48:57Z)
- On the Generation of a Synthetic Event-Based Vision Dataset for Navigation and Landing [69.34740063574921]
This paper presents a methodology for generating event-based vision datasets from optimal landing trajectories.
We construct sequences of photorealistic images of the lunar surface with the Planet and Asteroid Natural Scene Generation Utility.
We demonstrate that the pipeline can generate realistic event-based representations of surface features by constructing a dataset of 500 trajectories.
arXiv Detail & Related papers (2023-08-01T09:14:20Z)
- FEDORA: Flying Event Dataset fOr Reactive behAvior [9.470870778715689]
Event-based sensors have emerged as low latency and low energy alternatives to standard frame-based cameras for capturing high-speed motion.
We present Flying Event dataset fOr Reactive behAviour (FEDORA) - a fully synthetic dataset for perception tasks.
arXiv Detail & Related papers (2023-05-22T22:59:05Z)
- Event-based Simultaneous Localization and Mapping: A Comprehensive Survey [52.73728442921428]
A review of event-based vSLAM algorithms that exploit the benefits of asynchronous and irregular event streams for localization and mapping tasks.
The paper categorizes event-based vSLAM methods into four main categories: feature-based, direct, motion-compensation, and deep-learning methods.
arXiv Detail & Related papers (2023-04-19T16:21:14Z)
- RaLiBEV: Radar and LiDAR BEV Fusion Learning for Anchor Box Free Object Detection Systems [13.046347364043594]
In autonomous driving, LiDAR and radar are crucial for environmental perception.
Recent state-of-the-art works reveal that the fusion of radar and LiDAR can lead to robust detection in adverse weather.
We propose a bird's-eye view fusion learning-based anchor box-free object detection system.
arXiv Detail & Related papers (2022-11-11T10:24:42Z)
- LiDAR-guided object search and detection in Subterranean Environments [12.265807098187297]
This work utilizes the complementary nature of vision and depth sensors to leverage multi-modal information to aid object detection at longer distances.
The proposed work has been thoroughly verified using an ANYmal quadruped robot in underground settings and on datasets collected during the DARPA Subterranean Challenge finals.
arXiv Detail & Related papers (2022-10-26T19:38:19Z)
- Event Guided Depth Sensing [50.997474285910734]
We present an efficient bio-inspired event-camera-driven depth estimation algorithm.
In our approach, we illuminate areas of interest densely, depending on the scene activity detected by the event camera.
We show the feasibility of our approach in simulated autonomous driving sequences and real indoor environments.
arXiv Detail & Related papers (2021-10-20T11:41:11Z)
- Cycle and Semantic Consistent Adversarial Domain Adaptation for Reducing Simulation-to-Real Domain Shift in LiDAR Bird's Eye View [110.83289076967895]
We present a BEV domain adaptation method based on CycleGAN that uses prior semantic classification in order to preserve the information of small objects of interest during the domain adaptation process.
The quality of the generated BEVs has been evaluated using a state-of-the-art 3D object detection framework on the KITTI 3D Object Detection Benchmark.
arXiv Detail & Related papers (2021-04-22T12:47:37Z)
- LIBRE: The Multiple 3D LiDAR Dataset [54.25307983677663]
We present LIBRE: LiDAR Benchmarking and Reference, a first-of-its-kind dataset featuring 10 different LiDAR sensors.
LIBRE provides the research community with a means for a fair comparison of currently available LiDARs.
It will also facilitate the improvement of existing self-driving vehicles and robotics-related software.
arXiv Detail & Related papers (2020-03-13T06:17:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.