Perception Entropy: A Metric for Multiple Sensors Configuration
Evaluation and Design
- URL: http://arxiv.org/abs/2104.06615v1
- Date: Wed, 14 Apr 2021 03:52:57 GMT
- Title: Perception Entropy: A Metric for Multiple Sensors Configuration
Evaluation and Design
- Authors: Tao Ma, Zhizheng Liu, Yikang Li
- Abstract summary: A well-designed sensor configuration significantly improves the performance upper bound of the perception system.
We propose a novel method based on conditional entropy in Bayesian theory to evaluate the sensor configurations containing both cameras and LiDARs.
To the best of our knowledge, this is the first method to tackle the multi-sensor configuration problem for autonomous vehicles.
- Score: 17.979248163548288
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sensor configuration, including the sensor selections and their installation
locations, serves a crucial role in autonomous driving. A well-designed sensor
configuration significantly improves the performance upper bound of the
perception system. However, as multi-sensor setups become the mainstream,
existing methods, which mainly focus on single-sensor configuration problems,
are hardly applicable in practice. To tackle these issues,
we propose a novel method based on conditional entropy in Bayesian theory to
evaluate the sensor configurations containing both cameras and LiDARs.
Correspondingly, an evaluation metric, perception entropy, is introduced to
measure the difference between two configurations, which considers both the
perception algorithm performance and the selections of the sensors. To the best
of our knowledge, this is the first method to tackle the multi-sensor
configuration problem for autonomous vehicles. The simulation results,
extensive comparisons, and analysis all demonstrate the superior performance of
our proposed approach.
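The conditional-entropy idea behind the metric can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the per-target detection confidences and the Bernoulli-posterior model are assumptions made for the example. A configuration is scored by the average uncertainty its perception output leaves about the targets; lower entropy indicates a better configuration.

```python
import numpy as np

def bernoulli_entropy(p):
    """Entropy H(p) of a Bernoulli posterior, in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def perception_entropy(detection_probs):
    """Average conditional entropy over sampled targets.

    detection_probs: per-target detection confidences produced by the
    perception algorithm under a given sensor configuration
    (hypothetical interface; the paper derives such posteriors from
    camera and LiDAR detection models).
    """
    return float(np.mean(bernoulli_entropy(np.asarray(detection_probs))))

# Compare two hypothetical configurations: lower entropy is better.
config_a = [0.95, 0.90, 0.85, 0.97]  # confident detections
config_b = [0.60, 0.55, 0.70, 0.50]  # uncertain detections
assert perception_entropy(config_a) < perception_entropy(config_b)
```

Because the score is an expectation over targets, two configurations with different sensor selections and mounting poses can be compared on the same simulated scene set.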
Related papers
- Condition-Aware Multimodal Fusion for Robust Semantic Perception of Driving Scenes [56.52618054240197]
We propose a novel, condition-aware multimodal fusion approach for robust semantic perception of driving scenes.
Our method, CAFuser, uses an RGB camera input to classify environmental conditions and generate a Condition Token that guides the fusion of multiple sensor modalities.
We set the new state of the art with CAFuser on the MUSES dataset with 59.7 PQ for multimodal panoptic segmentation and 78.2 mIoU for semantic segmentation, ranking first on the public benchmarks.
arXiv Detail & Related papers (2024-10-14T17:56:20Z)
- Data-Based Design of Multi-Model Inferential Sensors [0.0]
The nonlinear character of industrial processes is usually the main limitation to designing simple linear inferential sensors.
We propose two novel approaches for the design of multi-model inferential sensors.
The results show substantial improvements over the state-of-the-art design techniques for single-/multi-model inferential sensors.
arXiv Detail & Related papers (2023-08-05T12:55:15Z)
- Data-Induced Interactions of Sparse Sensors [3.050919759387984]
We take a thermodynamic view to compute the full landscape of sensor interactions induced by the training data.
Mapping out these data-induced sensor interactions allows combining them with external selection criteria and anticipating sensor replacement impacts.
arXiv Detail & Related papers (2023-07-21T18:13:37Z)
- A Real-time Human Pose Estimation Approach for Optimal Sensor Placement in Sensor-based Human Activity Recognition [63.26015736148707]
This paper introduces a novel methodology to resolve the issue of optimal sensor placement for Human Activity Recognition.
The derived skeleton data provides a unique strategy for identifying the optimal sensor location.
Our findings indicate that the vision-based method for sensor placement offers comparable results to the conventional deep learning approach.
arXiv Detail & Related papers (2023-07-06T10:38:14Z)
- Bandit Quickest Changepoint Detection [55.855465482260165]
Continuous monitoring of every sensor can be expensive due to resource constraints.
We derive an information-theoretic lower bound on the detection delay for a general class of finitely parameterized probability distributions.
We propose a computationally efficient online sensing scheme, which seamlessly balances the need for exploration of different sensing options with exploitation of querying informative actions.
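Such a sensing scheme can be sketched with a simple stand-in: an epsilon-greedy rule that queries one sensor per step and tracks a CUSUM statistic for each. This is an illustrative simplification, not the paper's algorithm; the Gaussian pre/post-change model and the epsilon-greedy rule are assumptions made for the example.

```python
import random

def cusum_update(stat, x, mu0=0.0, mu1=1.0, var=1.0):
    """One CUSUM step: log-likelihood ratio of post- vs pre-change Gaussian."""
    llr = ((mu1 - mu0) / var) * (x - (mu0 + mu1) / 2.0)
    return max(0.0, stat + llr)

def bandit_sensing(streams, threshold=8.0, eps=0.2, seed=0):
    """Query one sensor per step: with probability eps explore a random
    sensor, otherwise exploit the sensor with the largest CUSUM statistic.
    Declares a change when any statistic crosses the threshold.
    Returns (detection_time, sensor_index) or None if no change is found.
    """
    rng = random.Random(seed)
    stats = [0.0] * len(streams)
    for t in range(len(streams[0])):
        if rng.random() < eps:
            k = rng.randrange(len(streams))   # explore a random sensor
        else:
            k = max(range(len(streams)), key=lambda i: stats[i])  # exploit
        stats[k] = cusum_update(stats[k], streams[k][t])
        if stats[k] >= threshold:
            return t, k
    return None
```

Exploration keeps every sensor's statistic reasonably current, while exploitation concentrates queries on the sensor that currently looks most informative about a change.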
arXiv Detail & Related papers (2021-07-22T07:25:35Z)
- Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups [68.8204255655161]
We present a method to calibrate the parameters of any pair of sensors involving LiDARs, monocular or stereo cameras.
The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
arXiv Detail & Related papers (2021-01-12T12:02:26Z)
- Investigating the Effect of Sensor Modalities in Multi-Sensor Detection-Prediction Models [8.354898936252516]
We focus on the contribution of sensor modalities towards the model performance.
In addition, we investigate the use of sensor dropout to mitigate the above-mentioned issues.
arXiv Detail & Related papers (2021-01-09T03:21:36Z)
- Ant Colony Inspired Machine Learning Algorithm for Identifying and Emulating Virtual Sensors [0.0]
It should be possible to emulate the output of certain sensors based on other sensors.
In order to identify the subset of sensors whose readings can be emulated, the sensors must be grouped into clusters.
This paper proposes an end-to-end algorithmic solution, to realise virtual sensors in such systems.
arXiv Detail & Related papers (2020-11-02T09:06:14Z)
- Deep Soft Procrustes for Markerless Volumetric Sensor Alignment [81.13055566952221]
In this work, we improve markerless data-driven correspondence estimation to achieve more robust multi-sensor spatial alignment.
We incorporate geometric constraints in an end-to-end manner into a typical segmentation based model and bridge the intermediate dense classification task with the targeted pose estimation one.
Our model is experimentally shown to achieve similar results with marker-based methods and outperform the markerless ones, while also being robust to the pose variations of the calibration structure.
arXiv Detail & Related papers (2020-03-23T10:51:32Z)
- Redesigning SLAM for Arbitrary Multi-Camera Systems [51.81798192085111]
Adding more cameras to SLAM systems improves robustness and accuracy but complicates the design of the visual front-end significantly.
In this work, we aim at an adaptive SLAM system that works for arbitrary multi-camera setups.
We adapt a state-of-the-art visual-inertial odometry with these modifications, and experimental results show that the modified pipeline can adapt to a wide range of camera setups.
arXiv Detail & Related papers (2020-03-04T11:44:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.