Domain and Modality Gaps for LiDAR-based Person Detection on Mobile Robots
- URL: http://arxiv.org/abs/2106.11239v1
- Date: Mon, 21 Jun 2021 16:35:49 GMT
- Title: Domain and Modality Gaps for LiDAR-based Person Detection on Mobile Robots
- Authors: Dan Jia and Alexander Hermans and Bastian Leibe
- Abstract summary: This paper studies existing LiDAR-based person detectors with a particular focus on mobile robot scenarios.
Experiments revolve around the domain gap between driving and mobile robot scenarios, as well as the modality gap between 3D and 2D LiDAR sensors.
Results provide practical insights into LiDAR-based person detection and facilitate informed decisions for relevant mobile robot designs and applications.
- Score: 91.01747068273666
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Person detection is a crucial task for mobile robots navigating in
human-populated environments and LiDAR sensors are promising for this task,
given their accurate depth measurements and large field of view. This paper
studies existing LiDAR-based person detectors with a particular focus on mobile
robot scenarios (e.g. service robot or social robot), where persons are
observed more frequently and in much closer ranges, compared to the driving
scenarios. We conduct a series of experiments, using the recently released
JackRabbot dataset and the state-of-the-art detectors based on 3D or 2D LiDAR
sensors (CenterPoint and DR-SPAAM respectively). These experiments revolve
around the domain gap between driving and mobile robot scenarios, as well as
the modality gap between 3D and 2D LiDAR sensors. For the domain gap, we aim to
understand if detectors pretrained on driving datasets can achieve good
performance on the mobile robot scenarios, for which there are currently no
trained models readily available. For the modality gap, we compare detectors
that use 3D or 2D LiDAR, from various aspects, including performance, runtime,
localization accuracy, robustness to range and crowdedness. The results from
our experiments provide practical insights into LiDAR-based person detection
and facilitate informed decisions for relevant mobile robot designs and
applications.
Related papers
- Tiny Robotics Dataset and Benchmark for Continual Object Detection [6.4036245876073234]
This work introduces a novel benchmark to evaluate the continual learning capabilities of object detection systems in tiny robotic platforms.
Our contributions include: (i) Tiny Robotics Object Detection (TiROD), a comprehensive dataset collected using a small mobile robot, designed to test the adaptability of object detectors across various domains and classes; (ii) an evaluation of state-of-the-art real-time object detectors combined with different continual learning strategies on this dataset; and (iii) we publish the data and the code to replicate the results to foster continuous advancements in this field.
arXiv Detail & Related papers (2024-09-24T16:21:27Z) - UADA3D: Unsupervised Adversarial Domain Adaptation for 3D Object Detection with Sparse LiDAR and Large Domain Gaps [2.79552147676281]
We introduce Unsupervised Adversarial Domain Adaptation for 3D Object Detection (UADA3D)
We demonstrate its efficacy in various adaptation scenarios, showing significant improvements in both self-driving car and mobile robot domains.
Our code is open-source and will be available soon.
arXiv Detail & Related papers (2024-03-26T12:08:14Z) - Multimodal Anomaly Detection based on Deep Auto-Encoder for Object Slip Perception of Mobile Manipulation Robots [22.63980025871784]
The proposed framework integrates heterogeneous data streams collected from various robot sensors, including RGB and depth cameras, a microphone, and a force-torque sensor.
The integrated data is used to train a deep autoencoder to construct latent representations of the multisensory data that indicate the normal status.
Anomalies can then be identified by error scores, measured as the difference between the latent values of the input data and the latent values of its reconstruction.
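The reconstruction-error idea above can be sketched with a toy example. The linear (PCA-based) autoencoder, the feature dimensions, and the synthetic data below are illustrative stand-ins only, not the paper's deep architecture or sensor setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "multisensory" feature vectors: normal samples cluster near zero.
normal = rng.normal(0.0, 0.1, size=(200, 8))

# A linear autoencoder fitted via SVD/PCA stands in for the deep
# auto-encoder: encode to a 2-D latent, decode back, and score each
# sample by its reconstruction error.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]                      # encoder/decoder weights

def anomaly_score(x):
    z = (x - mean) @ components.T        # latent representation
    x_hat = z @ components + mean        # reconstruction
    return float(np.sum((x - x_hat) ** 2))

normal_score = anomaly_score(rng.normal(0.0, 0.1, size=8))
slip_score = anomaly_score(np.full(8, 2.0))   # out-of-distribution sample
# An out-of-distribution sample reconstructs poorly, so its score is larger.
```

In practice a threshold on the score separates normal operation from slip events; the threshold itself would be tuned on held-out normal data.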
arXiv Detail & Related papers (2024-03-06T09:15:53Z) - Care3D: An Active 3D Object Detection Dataset of Real Robotic-Care Environments [52.425280825457385]
This paper introduces an annotated dataset of real environments.
The captured environments represent areas which are already in use in the field of robotic health care research.
We also provide ground truth data within one room, for assessing SLAM algorithms running directly on a health care robot.
arXiv Detail & Related papers (2023-10-09T10:35:37Z) - Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z) - Comparative study of 3D object detection frameworks based on LiDAR data and sensor fusion techniques [0.0]
The perception system plays a significant role in providing an accurate interpretation of a vehicle's environment in real-time.
Deep learning techniques transform the huge amount of data from the sensors into semantic information.
3D object detection methods, by utilizing additional pose data from sensors such as LiDAR and stereo cameras, provide information on the size and location of objects.
arXiv Detail & Related papers (2022-02-05T09:34:58Z) - Cross-Modal Analysis of Human Detection for Robotics: An Industrial Case Study [7.844709223688293]
We conduct a systematic cross-modal analysis of sensor-algorithm combinations typically used in robotics.
We compare the performance of state-of-the-art person detectors for 2D range data, 3D lidar, and RGB-D data.
We extend a strong image-based RGB-D detector to provide cross-modal supervision for lidar detectors in the form of weak 3D bounding box labels.
arXiv Detail & Related papers (2021-08-03T13:33:37Z) - Achieving Real-Time LiDAR 3D Object Detection on a Mobile Device [53.323878851563414]
We propose a compiler-aware unified framework incorporating network enhancement and pruning search with the reinforcement learning techniques.
Specifically, a generator Recurrent Neural Network (RNN) is employed to provide the unified scheme for both network enhancement and pruning search automatically.
The proposed framework achieves real-time 3D object detection on mobile devices with competitive detection performance.
arXiv Detail & Related papers (2020-12-26T19:41:15Z) - Self-Supervised Person Detection in 2D Range Data using a Calibrated Camera [83.31666463259849]
We propose a method to automatically generate training labels (called pseudo-labels) for 2D LiDAR-based person detectors.
We show that self-supervised detectors, trained or fine-tuned with pseudo-labels, outperform detectors trained using manual annotations.
Our method is an effective way to improve person detectors during deployment without any additional labeling effort.
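The pseudo-labeling idea can be illustrated with a minimal sketch: a camera detection is kept as a training label for the 2D LiDAR detector only if a laser return actually supports it. The nearest-beam matching, the `tol` threshold, and all function names below are hypothetical simplifications, not the paper's calibration-based method:

```python
import math

def to_polar(x, y):
    """Convert a Cartesian position in the laser frame to (range, angle)."""
    return math.hypot(x, y), math.atan2(y, x)

def pseudo_labels(camera_detections, scan_angles, scan_ranges, tol=0.2):
    """camera_detections: (x, y) person positions projected into the
    laser frame. Keep each detection as a pseudo-label only if the scan
    beam closest in angle returns a similar range, i.e. the laser
    actually hit something there; otherwise discard it."""
    labels = []
    for x, y in camera_detections:
        r, phi = to_polar(x, y)
        # Index of the scan beam closest in angle to the detection.
        i = min(range(len(scan_angles)), key=lambda k: abs(scan_angles[k] - phi))
        if abs(scan_ranges[i] - r) < tol:
            labels.append((x, y))
    return labels

angles = [-0.1, 0.0, 0.1]        # toy 3-beam scan
ranges = [2.0, 1.5, 3.0]
print(pseudo_labels([(1.5, 0.0), (5.0, 0.5)], angles, ranges))
# → [(1.5, 0.0)]  (the second detection has no supporting laser return)
```

A real pipeline would use the camera-to-LiDAR extrinsic calibration for the projection and accumulate such labels over deployment data to fine-tune the detector without manual annotation.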
arXiv Detail & Related papers (2020-12-16T12:10:04Z) - LIBRE: The Multiple 3D LiDAR Dataset [54.25307983677663]
We present LIBRE: LiDAR Benchmarking and Reference, a first-of-its-kind dataset featuring 10 different LiDAR sensors.
LIBRE provides the research community with a means for fair comparison of currently available LiDARs.
It will also facilitate the improvement of existing self-driving vehicles and robotics-related software.
arXiv Detail & Related papers (2020-03-13T06:17:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.