Detecting Safety Problems of Multi-Sensor Fusion in Autonomous Driving
- URL: http://arxiv.org/abs/2109.06404v1
- Date: Tue, 14 Sep 2021 02:35:34 GMT
- Title: Detecting Safety Problems of Multi-Sensor Fusion in Autonomous Driving
- Authors: Ziyuan Zhong, Zhisheng Hu, Shengjian Guo, Xinyang Zhang, Zhenyu Zhong,
Baishakhi Ray
- Abstract summary: Multi-sensor fusion (MSF) is used to fuse the sensor inputs and produce a more reliable understanding of the surroundings.
Popular MSF methods in an industry-grade Advanced Driver-Assistance System (ADAS) can mislead car control and result in serious safety hazards.
We develop a novel evolutionary-based domain-specific search framework, FusionFuzz, for the efficient detection of fusion errors.
- Score: 18.39664775350204
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous driving (AD) systems have been thriving in recent years. In
general, they receive sensor data, compute driving decisions, and output
control signals to the vehicles. To smooth out the uncertainties brought by
sensor inputs, AD systems usually leverage multi-sensor fusion (MSF) to fuse
the sensor inputs and produce a more reliable understanding of the
surroundings. However, MSF cannot completely eliminate the uncertainties, since
it does not know which sensor provides the most accurate data. As a result,
critical consequences can occur unexpectedly. In this work, we
observed that popular MSF methods in an industry-grade Advanced
Driver-Assistance System (ADAS) can mislead car control and result in serious
safety hazards. Misbehavior can happen regardless of the fusion method used,
even when at least one sensor provides accurate data. To attribute safety
hazards to an MSF method, we formally define fusion errors and propose a way
to distinguish safety violations causally induced by such errors. Further, we
develop a novel evolutionary-based domain-specific search framework,
FusionFuzz, for the efficient detection of fusion errors. We evaluate our
framework on two widely used MSF methods.
Experimental results show that FusionFuzz identifies more than 150 fusion
errors. Finally, we provide several suggestions to improve the MSF methods
under study.
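To make the search concrete, here is a minimal sketch of an evolutionary, domain-specific search for fusion errors in the spirit of FusionFuzz. It is not the authors' implementation: the two-parameter scenario encoding (fog and rain intensities), the `run_pipeline` simulator stub, and the fixed-weight fusion are hypothetical stand-ins. The fitness function mirrors the paper's definition of a fusion error: the fused output is worse than the best of the individual sensor outputs.

```python
import random

def run_pipeline(scenario):
    """Return (fused_error, per_sensor_errors) for one simulated scenario.

    Hypothetical stub: each sensor's error grows with one weather
    parameter (fog hurts the camera, rain hurts the lidar), and the
    "MSF" here is a naive fixed-weight average.
    """
    fog, rain = scenario
    camera_err = 0.2 + 2.0 * fog + random.gauss(0, 0.05)
    lidar_err = 0.2 + 2.0 * rain + random.gauss(0, 0.05)
    fused_err = 0.5 * (camera_err + lidar_err)
    return fused_err, [camera_err, lidar_err]

def fitness(scenario):
    """Margin by which fusion underperforms the best single sensor.

    A positive value corresponds to a fusion error: the fused output
    is worse than at least one of its inputs.
    """
    fused_err, sensor_errs = run_pipeline(scenario)
    return fused_err - min(sensor_errs)

def mutate(scenario, sigma=0.1):
    """Perturb scenario parameters, clamped to [0, 1]."""
    return tuple(min(1.0, max(0.0, p + random.gauss(0, sigma)))
                 for p in scenario)

def evolutionary_search(pop_size=20, generations=30, elite=5):
    """Keep the scenarios that most expose fusion errors and mutate them."""
    population = [(random.random(), random.random()) for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(population, key=fitness, reverse=True)[:elite]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - elite)]
    return [s for s in population if fitness(s) > 0]

if __name__ == "__main__":
    candidates = evolutionary_search()
    print(f"found {len(candidates)} candidate fusion-error scenarios")
```

A real instantiation would replace `run_pipeline` with the full ADAS stack in a driving simulator and would additionally check, per the paper's causal analysis, that high-fitness scenarios actually lead to safety violations induced by the fusion error.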
Related papers
Joint Attention-Guided Feature Fusion Network for Saliency Detection of Surface Defects [69.39099029406248]
We propose a joint attention-guided feature fusion network (JAFFNet) for saliency detection of surface defects based on the encoder-decoder network.
JAFFNet mainly incorporates a joint attention-guided feature fusion (JAFF) module into decoding stages to adaptively fuse low-level and high-level features.
Experiments conducted on SD-saliency-900, Magnetic tile, and DAGM 2007 indicate that our method achieves promising performance in comparison with other state-of-the-art methods.
arXiv Detail & Related papers (2024-02-05T08:10:16Z)
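For intuition, the adaptive fusion of low-level and high-level features described in the JAFFNet entry above can be caricatured with a per-channel gate. This is a generic attention-guided fusion sketch with invented weights, not the JAFF module itself.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def attention_guided_fuse(low_feat, high_feat, w=(1.5, -1.5), b=0.0):
    """Per-channel gated fusion of low-level and high-level features.

    gate = sigmoid(w0*low + w1*high + b) decides, channel by channel,
    how much low-level detail to keep versus high-level semantics.
    The weights are made up; a real module learns them.
    """
    fused = []
    for lo, hi in zip(low_feat, high_feat):
        gate = sigmoid(w[0] * lo + w[1] * hi + b)
        fused.append(gate * lo + (1.0 - gate) * hi)
    return fused

# Example: 4-channel features from an encoder-decoder.
low = [0.9, 0.1, 0.5, 0.3]    # fine details and edges
high = [0.2, 0.8, 0.4, 0.7]   # semantic evidence
print([round(v, 3) for v in attention_guided_fuse(low, high)])
```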
Uncertainty-Encoded Multi-Modal Fusion for Robust Object Detection in Autonomous Driving [8.991012799672713]
This paper proposes Uncertainty-Encoded Mixture-of-Experts (UMoE) that explicitly incorporates single-modal uncertainties into LiDAR-camera fusion.
UMoE achieves maximum performance gains of 10.67%, 3.17%, and 5.40% over state-of-the-art proposal-level multi-modal object detectors.
arXiv Detail & Related papers (2023-07-30T04:00:41Z)
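A minimal way to see what "explicitly incorporating single-modal uncertainties" buys in LiDAR-camera fusion is inverse-variance weighting, sketched below. UMoE itself uses a learned mixture-of-experts over proposal features; this closed-form toy only illustrates the uncertainty-aware weighting idea, and all numbers are invented.

```python
def uncertainty_weighted_fuse(estimates):
    """Fuse per-modality estimates by inverse-variance weighting.

    `estimates` maps a modality name to (value, variance); lower
    variance (higher confidence) earns a larger weight. A classic
    fixed-form fusion, shown only to illustrate uncertainty-aware
    weighting -- not the UMoE architecture.
    """
    weights = {m: 1.0 / var for m, (_, var) in estimates.items()}
    total = sum(weights.values())
    return sum(weights[m] * val for m, (val, _) in estimates.items()) / total

# Example: the camera is unsure at night, so lidar dominates the fused depth.
fused_depth = uncertainty_weighted_fuse({
    "camera": (21.0, 4.0),   # metres, high variance (low confidence)
    "lidar": (19.5, 0.25),   # metres, low variance (high confidence)
})
print(round(fused_depth, 2))  # lands close to the lidar estimate
```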
Multi-Modal 3D Object Detection by Box Matching [109.43430123791684]
We propose a novel Fusion network by Box Matching (FBMNet) for multi-modal 3D detection.
With the learned assignments between 3D and 2D object proposals, fusion for detection can be performed effectively by combining their ROI features.
arXiv Detail & Related papers (2023-05-12T18:08:51Z)
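The "learned assignments between 3D and 2D object proposals" in the FBMNet entry above can be approximated, purely for illustration, by greedy IoU matching between projected 3D boxes and 2D boxes, followed by ROI-feature concatenation. FBMNet learns the assignment end to end, so treat this as a hand-crafted stand-in; all boxes and features below are invented.

```python
def iou(a, b):
    """IoU of two axis-aligned 2D boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def match_and_fuse(proj_3d_boxes, boxes_2d, feats_3d, feats_2d, thresh=0.5):
    """Greedily match projected 3D proposals to 2D proposals by IoU and
    concatenate the ROI features of each matched pair."""
    fused, used = [], set()
    for i, b3 in enumerate(proj_3d_boxes):
        best_j = max((j for j in range(len(boxes_2d)) if j not in used),
                     key=lambda j: iou(b3, boxes_2d[j]), default=None)
        if best_j is not None and iou(b3, boxes_2d[best_j]) >= thresh:
            used.add(best_j)
            fused.append(feats_3d[i] + feats_2d[best_j])  # concat features
    return fused

# Example with two projected 3D proposals and two 2D proposals.
print(match_and_fuse(
    proj_3d_boxes=[(0, 0, 10, 10), (20, 20, 30, 30)],
    boxes_2d=[(1, 1, 11, 11), (100, 100, 110, 110)],
    feats_3d=[[0.1, 0.2], [0.3, 0.4]],
    feats_2d=[[0.5], [0.6]],
))
```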
Enhancing Road Safety through Accurate Detection of Hazardous Driving Behaviors with Graph Convolutional Recurrent Networks [0.2578242050187029]
We present a reliable Driving Behavior Detection (DBD) system based on Graph Convolutional Long Short-Term Memory Networks (GConvLSTM).
Our proposed model achieved 97.5% accuracy on public sensor data and an average of 98.1% on non-public sensor data, demonstrating consistent performance in both settings.
Our findings demonstrate that the proposed system can effectively detect hazardous and unsafe driving behavior, with potential applications in improving road safety and reducing the number of accidents caused by driver errors.
arXiv Detail & Related papers (2023-05-08T21:05:36Z)
AutoFed: Heterogeneity-Aware Federated Multimodal Learning for Robust Autonomous Driving [15.486799633600423]
AutoFed is a framework to fully exploit multimodal sensory data on autonomous vehicles.
We propose a novel model leveraging pseudo-labeling to avoid mistakenly treating unlabeled objects as the background.
We also propose an autoencoder-based data imputation method to fill in missing data modalities.
arXiv Detail & Related papers (2023-02-17T01:31:53Z)
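The autoencoder-based imputation in the AutoFed entry above can be illustrated with a tiny denoising autoencoder: during training, one modality's slots are sometimes zeroed while the loss still targets the complete vector, so at test time the reconstruction fills in a missing modality. This linear, pure-Python toy (invented data, hypothetical camera/radar slot layout) shows only the mechanism, not AutoFed's model.

```python
import random

def train_denoising_autoencoder(samples, k=2, lr=0.05, epochs=3000):
    """Tiny linear autoencoder trained to reconstruct complete vectors
    even when the radar slots (indices 2-3) are zeroed in the input --
    exactly the situation of a missing modality. Toy stand-in only."""
    n = len(samples[0])
    enc = [[random.gauss(0, 0.1) for _ in range(n)] for _ in range(k)]
    dec = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n)]
    for _ in range(epochs):
        x = random.choice(samples)
        x_in = list(x)
        if random.random() < 0.5:            # simulate missing radar
            x_in[2] = x_in[3] = 0.0
        h = [sum(enc[i][j] * x_in[j] for j in range(n)) for i in range(k)]
        y = [sum(dec[j][i] * h[i] for i in range(k)) for j in range(n)]
        err = [y[j] - x[j] for j in range(n)]        # target: complete x
        dh = [sum(err[j] * dec[j][i] for j in range(n)) for i in range(k)]
        for j in range(n):                           # SGD on both layers
            for i in range(k):
                dec[j][i] -= lr * err[j] * h[i]
        for i in range(k):
            for j in range(n):
                enc[i][j] -= lr * dh[i] * x_in[j]
    return enc, dec

def reconstruct(enc, dec, x):
    h = [sum(w[j] * x[j] for j in range(len(x))) for w in enc]
    return [sum(dec[j][i] * h[i] for i in range(len(h)))
            for j in range(len(x))]

# Complete vectors: [camera_a, camera_b, radar_a, radar_b], with the
# radar half correlated with the camera half (invented toy data).
data = [[a, b, a, b] for a in (0.1, 0.5, 0.9) for b in (0.2, 0.6)]
enc, dec = train_denoising_autoencoder(data)

# Impute a sample whose radar modality is missing.
filled = reconstruct(enc, dec, [0.5, 0.6, 0.0, 0.0])
print([round(v, 2) for v in filled[2:]])   # roughly [0.5, 0.6]
```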
Safety-Enhanced Autonomous Driving Using Interpretable Sensor Fusion Transformer [28.15612357340141]
We propose a safety-enhanced autonomous driving framework, named Interpretable Sensor Fusion Transformer (InterFuser).
We process and fuse information from multi-modal multi-view sensors for achieving comprehensive scene understanding and adversarial event detection.
Our framework provides more semantics, which are exploited to better constrain actions to be within the safe set.
arXiv Detail & Related papers (2022-07-28T11:36:21Z)
HydraFusion: Context-Aware Selective Sensor Fusion for Robust and Efficient Autonomous Vehicle Perception [9.975955132759385]
Techniques to fuse sensor data from camera, radar, and lidar sensors have been proposed to improve autonomous vehicle (AV) perception.
Existing methods are insufficiently robust in difficult driving contexts due to rigidity in their fusion implementations.
We propose HydraFusion: a selective sensor fusion framework that learns to identify the current driving context and fuses the best combination of sensors.
arXiv Detail & Related papers (2022-01-17T22:19:53Z)
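The selective-fusion idea in the HydraFusion entry above, inferring the driving context and then fusing only the sensors trusted in that context, can be shown with a rule-based gate. HydraFusion learns both the context classifier and the gating; the rules, branch table, and sensor readings below are all invented.

```python
def infer_context(weather, light):
    """Toy context classifier; HydraFusion learns this from data."""
    if weather == "rain":
        return "rainy"
    return "night" if light < 0.3 else "clear"

# Hypothetical per-context sensor choices: cameras degrade at night,
# lidar degrades in rain, radar is weather-robust but low-resolution.
BRANCHES = {
    "clear": ("camera", "lidar"),
    "rainy": ("camera", "radar"),
    "night": ("lidar", "radar"),
}

def selective_fuse(readings, weather, light):
    """Average only the readings from the sensors selected for the context."""
    chosen = BRANCHES[infer_context(weather, light)]
    vals = [readings[s] for s in chosen if s in readings]
    return sum(vals) / len(vals), chosen

# Example: at night the (unreliable) camera estimate is ignored entirely.
dist, used = selective_fuse(
    {"camera": 31.0, "lidar": 19.8, "radar": 20.4},  # distances (m)
    weather="clear", light=0.1)
print(round(dist, 2), used)  # fuses lidar + radar only
```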
Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks [62.923992740383966]
We present the first study of security issues of MSF-based perception in AD systems.
We generate a physically-realizable, adversarial 3D-printed object that misleads an AD system to fail in detecting it and thus crash into it.
Our results show that the attack achieves over a 90% success rate across different object types and MSF algorithms.
arXiv Detail & Related papers (2021-06-17T05:11:07Z)
Multimodal Object Detection via Bayesian Fusion [59.31437166291557]
We study multimodal object detection with RGB and thermal cameras, since the latter can provide much stronger object signatures under poor illumination.
Our key contribution is a non-learned late-fusion method that fuses together bounding box detections from different modalities.
We apply our approach to benchmarks containing both aligned (KAIST) and unaligned (FLIR) multimodal sensor data.
arXiv Detail & Related papers (2021-04-07T04:03:20Z)
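The non-learned late fusion in the entry above can be illustrated by multiplying the posterior odds of per-modality detection scores for the same box under a conditional-independence assumption. This is a simplified reading of probabilistic score fusion, not the paper's full method (which also fuses the boxes themselves and handles missing detections); the prior and scores are invented.

```python
def bayes_fuse_scores(scores, prior=0.1):
    """Fuse per-modality detection confidences for the same box.

    Assuming the modalities are conditionally independent given the
    label, the posterior odds multiply:
      odds(obj | all) = odds(prior) * prod(odds(obj | m) / odds(prior))
    A toy illustration of non-learned probabilistic late fusion.
    """
    def odds(p):
        return p / (1.0 - p)
    o = odds(prior)
    for p in scores:
        o *= odds(p) / odds(prior)
    return o / (1.0 + o)

# Two modest single-modality detections reinforce each other strongly:
print(round(bayes_fuse_scores([0.6, 0.7], prior=0.1), 3))
# while a near-prior thermal score barely moves the RGB score:
print(round(bayes_fuse_scores([0.6, 0.11], prior=0.1), 3))
```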
Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
It is of primary importance that the resulting driving decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z)
Learning Selective Sensor Fusion for States Estimation [47.76590539558037]
We propose SelectFusion, an end-to-end selective sensor fusion module.
During prediction, the network is able to assess the reliability of the latent features from different sensor modalities.
We extensively evaluate all fusion strategies on both public datasets and progressively degraded datasets.
arXiv Detail & Related papers (2019-12-30T20:25:16Z)
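Finally, the reliability assessment in the SelectFusion entry above can be caricatured as a soft gate over per-modality latent features: a reliability score per modality becomes a softmax weight, so a degraded modality is down-weighted rather than dropped. SelectFusion learns these scores from the features themselves; here they are supplied by hand.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def reliability_weighted_fuse(features, reliability_logits):
    """Fuse per-modality latent features with soft reliability weights.

    A toy stand-in for learned selective fusion: one score per modality
    (given directly here, learned in the real system) becomes a softmax
    weight, so an unreliable modality contributes little without being
    removed outright.
    """
    w = softmax(reliability_logits)
    dim = len(features[0])
    return [sum(w[m] * features[m][d] for m in range(len(features)))
            for d in range(dim)]

# Example: the second modality is judged unreliable (blurred images, say)
# and is strongly down-weighted in the fused latent feature.
print(reliability_weighted_fuse(
    features=[[0.8, 0.1], [0.1, 0.9]],
    reliability_logits=[2.0, -1.0]))
```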