LiDAR-guided object search and detection in Subterranean Environments
- URL: http://arxiv.org/abs/2210.14997v1
- Date: Wed, 26 Oct 2022 19:38:19 GMT
- Title: LiDAR-guided object search and detection in Subterranean Environments
- Authors: Manthan Patel, Gabriel Waibel, Shehryar Khattak, Marco Hutter
- Abstract summary: This work utilizes the complementary nature of vision and depth sensors to leverage multi-modal information to aid object detection at longer distances.
The proposed work has been thoroughly verified using an ANYmal quadruped robot in underground settings and on datasets collected during the DARPA Subterranean Challenge finals.
- Score: 12.265807098187297
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Detecting objects of interest, such as human survivors, safety equipment, and
structure access points, is critical to any search-and-rescue operation. Robots
deployed for such time-sensitive efforts rely on their onboard sensors to
perform their designated tasks. However, as disaster response operations are
predominantly conducted under perceptually degraded conditions, commonly
utilized sensors such as visual cameras and LiDARs suffer from performance
degradation. In response, this work presents a method that utilizes
the complementary nature of vision and depth sensors to leverage multi-modal
information to aid object detection at longer distances. In particular, depth
and intensity values from sparse LiDAR returns are used to generate proposals
for objects present in the environment. These proposals are then utilized by a
Pan-Tilt-Zoom (PTZ) camera system to perform a directed search by adjusting its
pose and zoom level to perform object detection and classification in
difficult environments. The proposed work has been thoroughly verified using an
ANYmal quadruped robot in underground settings and on datasets collected during
the DARPA Subterranean Challenge finals.
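The pipeline described in the abstract, generating object proposals from sparse LiDAR depth and intensity returns, then steering a PTZ camera toward each proposal, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the intensity threshold, greedy clustering, and zoom heuristic are assumptions chosen for clarity, and all function names are hypothetical.

```python
import numpy as np

def lidar_proposals(points, intensities, intensity_thresh=0.8,
                    cluster_radius=0.5, min_points=5):
    """Group high-intensity LiDAR returns into coarse object proposals.

    points:      (N, 3) array of x, y, z returns in the sensor frame.
    intensities: (N,) reflectance values normalized to [0, 1].
    Returns a list of cluster centroids for candidate objects.
    """
    candidates = points[intensities >= intensity_thresh]
    proposals, unassigned = [], list(range(len(candidates)))
    while unassigned:
        cluster = [unassigned.pop(0)]
        # Greedy region growing: absorb any point within
        # cluster_radius of an existing cluster member.
        changed = True
        while changed:
            changed = False
            for i in unassigned[:]:
                dists = np.linalg.norm(candidates[cluster] - candidates[i], axis=1)
                if dists.min() <= cluster_radius:
                    cluster.append(i)
                    unassigned.remove(i)
                    changed = True
        if len(cluster) >= min_points:
            proposals.append(candidates[cluster].mean(axis=0))
    return proposals

def ptz_command(centroid, max_zoom=10.0, full_zoom_range=30.0):
    """Convert a proposal centroid into pan/tilt angles (rad) and a zoom level.

    Zoom grows linearly with distance so far-away proposals fill more
    of the image before the detector runs (an assumed heuristic).
    """
    x, y, z = centroid
    pan = np.arctan2(y, x)
    tilt = np.arctan2(z, np.hypot(x, y))
    dist = np.linalg.norm(centroid)
    zoom = min(max_zoom, 1.0 + (max_zoom - 1.0) * dist / full_zoom_range)
    return pan, tilt, zoom
```

For example, six bright returns clustered around 10 m straight ahead of the sensor yield one proposal near (10, 0, 0), and `ptz_command` points the camera at roughly zero pan/tilt with a moderate zoom; each returned command would then trigger a directed detection pass on the zoomed image.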
Related papers
- Object Depth and Size Estimation using Stereo-vision and Integration with SLAM [2.122581579741322]
We propose a highly accurate stereo-vision approach to complement LiDAR in autonomous robots.
The system employs advanced stereo vision-based object detection to detect both tangible and non-tangible objects.
The depth and size information is then integrated into the SLAM process to enhance the robot's navigation capabilities in complex environments.
arXiv Detail & Related papers (2024-09-11T21:12:48Z)
- Performance Assessment of Feature Detection Methods for 2-D FS Sonar Imagery [11.23455335391121]
Key challenges include non-uniform lighting and poor visibility in turbid environments.
High-frequency forward-look sonar cameras address these issues.
We evaluate a number of feature detectors using real sonar images from five different sonar devices.
arXiv Detail & Related papers (2024-09-11T04:35:07Z)
- Object Detectors in the Open Environment: Challenges, Solutions, and Outlook [95.3317059617271]
The dynamic and intricate nature of the open environment poses novel and formidable challenges to object detectors.
This paper aims to conduct a comprehensive review and analysis of object detectors in open environments.
We propose a framework that includes four quadrants (i.e., out-of-domain, out-of-category, robust learning, and incremental learning) based on the dimensions of the data / target changes.
arXiv Detail & Related papers (2024-03-24T19:32:39Z)
- Efficient Real-time Smoke Filtration with 3D LiDAR for Search and Rescue with Autonomous Heterogeneous Robotic Systems [56.838297900091426]
Smoke and dust affect the performance of any mobile robotic platform due to their reliance on onboard perception systems.
This paper proposes a novel modular computation filtration pipeline based on intensity and spatial information.
arXiv Detail & Related papers (2023-08-14T16:48:57Z)
- A Systematic Study on Object Recognition Using Millimeter-wave Radar [1.3192560874022086]
Millimeter-wave (MMW) radars are essential in smart environments.
However, MMW radars are expensive and hard to obtain for community-purpose smart environment applications.
These challenges need to be investigated for tasks like recognizing objects and activities.
arXiv Detail & Related papers (2023-05-03T12:42:44Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work aims to carry out a study on the current scenario of camera and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Comparative study of 3D object detection frameworks based on LiDAR data and sensor fusion techniques [0.0]
The perception system plays a significant role in providing an accurate interpretation of a vehicle's environment in real-time.
Deep learning techniques transform the huge amount of data from the sensors into semantic information.
3D object detection methods, by utilizing the additional pose data from sensors such as LiDARs and stereo cameras, provide information on the size and location of the object.
arXiv Detail & Related papers (2022-02-05T09:34:58Z)
- Domain and Modality Gaps for LiDAR-based Person Detection on Mobile Robots [91.01747068273666]
This paper studies existing LiDAR-based person detectors with a particular focus on mobile robot scenarios.
Experiments revolve around the domain gap between driving and mobile robot scenarios, as well as the modality gap between 3D and 2D LiDAR sensors.
Results provide practical insights into LiDAR-based person detection and facilitate informed decisions for relevant mobile robot designs and applications.
arXiv Detail & Related papers (2021-06-21T16:35:49Z)
- Robust Object Detection via Instance-Level Temporal Cycle Confusion [89.1027433760578]
We study the effectiveness of auxiliary self-supervised tasks to improve the out-of-distribution generalization of object detectors.
Inspired by the principle of maximum entropy, we introduce a novel self-supervised task, instance-level temporal cycle confusion (CycConf).
For each object, the task is to find the most different object proposals in the adjacent frame in a video and then cycle back to itself for self-supervision.
arXiv Detail & Related papers (2021-04-16T21:35:08Z)
- Channel Boosting Feature Ensemble for Radar-based Object Detection [6.810856082577402]
Radar-based object detection is explored as a counterpart sensor modality that can be deployed and used in adverse weather conditions.
The proposed method's efficacy is extensively evaluated using the COCO evaluation metric.
arXiv Detail & Related papers (2021-01-10T12:20:58Z)
- Perceiving Traffic from Aerial Images [86.994032967469]
We propose an object detection method called Butterfly Detector that is tailored to detect objects in aerial images.
We evaluate our Butterfly Detector on two publicly available UAV datasets (UAVDT and VisDrone 2019) and show that it outperforms previous state-of-the-art methods while remaining real-time.
arXiv Detail & Related papers (2020-09-16T11:37:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.