Benchmarking Robustness of AI-Enabled Multi-sensor Fusion Systems:
Challenges and Opportunities
- URL: http://arxiv.org/abs/2306.03454v2
- Date: Tue, 29 Aug 2023 01:20:04 GMT
- Title: Benchmarking Robustness of AI-Enabled Multi-sensor Fusion Systems:
Challenges and Opportunities
- Authors: Xinyu Gao, Zhijie Wang, Yang Feng, Lei Ma, Zhenyu Chen, Baowen Xu
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-Sensor Fusion (MSF) based perception systems have been the
foundation for many industrial applications and domains, such as self-driving
cars, robotic arms, and unmanned aerial vehicles. Over the past few years,
rapid progress in data-driven artificial intelligence (AI) has driven a
fast-growing trend of empowering MSF systems with deep learning techniques to
further improve their performance, especially in intelligent systems and their
perception components. Although quite a few AI-enabled MSF perception systems
and techniques have been proposed, to date few benchmarks that focus on MSF
perception are publicly available. Given that many intelligent systems such as
self-driving cars operate in safety-critical contexts where perception plays an
important role, there is an urgent need for a deeper understanding of the
performance and reliability of these MSF systems. To bridge this gap, we take
an early step in this direction and construct a public benchmark of AI-enabled
MSF-based perception systems covering three commonly adopted tasks (i.e.,
object detection, object tracking, and depth completion). On this basis, to
comprehensively understand the robustness and reliability of MSF systems, we
design 14 common and realistic corruption patterns to synthesize large-scale
corrupted datasets, and we systematically evaluate these systems on them at
scale. Our results reveal the vulnerability of current AI-enabled MSF
perception systems, calling for researchers and practitioners to take
robustness and reliability into account when designing AI-enabled MSF.
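The corruption-based evaluation above can be illustrated with a minimal sketch. The function names, severity parameters, and corruption choices below are illustrative assumptions, not the benchmark's actual 14 patterns: here, Gaussian noise on a camera image and random point dropout on a LiDAR sweep.

```python
import numpy as np

def corrupt_camera_gaussian(image: np.ndarray, severity: float = 0.1) -> np.ndarray:
    """Add zero-mean Gaussian noise to an RGB image with values in [0, 1]."""
    noise = np.random.normal(0.0, severity, size=image.shape)
    return np.clip(image + noise, 0.0, 1.0)

def corrupt_lidar_dropout(points: np.ndarray, drop_ratio: float = 0.2) -> np.ndarray:
    """Randomly drop a fraction of LiDAR points (N x 4: x, y, z, intensity)."""
    keep = np.random.rand(points.shape[0]) >= drop_ratio
    return points[keep]

# Synthesize a corrupted sample from a (stand-in) clean camera/LiDAR pair.
image = np.random.rand(64, 64, 3)    # stand-in for a camera frame
points = np.random.rand(1000, 4)     # stand-in for a LiDAR sweep
corrupted_image = corrupt_camera_gaussian(image, severity=0.05)
corrupted_points = corrupt_lidar_dropout(points, drop_ratio=0.2)
```

Applying such operators to a clean dataset at several severity levels yields the large-scale corrupted datasets on which MSF systems can then be re-evaluated.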
Related papers
- Highlighting the Safety Concerns of Deploying LLMs/VLMs in Robotics [54.57914943017522]
We highlight the critical issues of robustness and safety associated with integrating large language models (LLMs) and vision-language models (VLMs) into robotics applications.
arXiv Detail & Related papers (2024-02-15T22:01:45Z)
- MultiTest: Physical-Aware Object Insertion for Testing Multi-sensor Fusion Perception Systems [23.460181958075566]
Multi-sensor fusion (MSF) is a key technique in addressing numerous safety-critical tasks and applications, e.g., self-driving cars and automated robotic arms.
Existing testing methods primarily concentrate on single-sensor perception systems.
We introduce MultiTest, a fitness-guided metamorphic testing method for complex MSF perception systems.
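The metamorphic-testing idea can be sketched with a toy relation: inserting an object into a scene should add at least one detection, so fewer detections in the follow-up run signals a violation. The `detect`/`insert_object` interface below is hypothetical; MultiTest's actual physical-aware, fitness-guided insertion is far more involved.

```python
def metamorphic_violation(detect, sample, insert_object) -> bool:
    """Metamorphic relation: inserting an object into the scene should
    yield strictly more detections; otherwise the relation is violated."""
    baseline = detect(sample)
    followup = detect(insert_object(sample))
    return len(followup) <= len(baseline)

# Toy stand-ins: a "detector" that returns the scene's object list, and an
# insertion operator that adds one synthetic object to the scene.
toy_detect = lambda scene: scene["objects"]
toy_insert = lambda scene: {"objects": scene["objects"] + ["synthetic_car"]}
print(metamorphic_violation(toy_detect, {"objects": ["pedestrian"]}, toy_insert))  # False
```

A fitness-guided tester would additionally prioritize insertions (position, pose, occlusion) that are most likely to expose such violations.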
arXiv Detail & Related papers (2024-01-25T17:03:02Z)
- AI Maintenance: A Robustness Perspective [91.28724422822003]
We introduce highlighted robustness challenges in the AI lifecycle and motivate AI maintenance by making analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z)
- MMRNet: Improving Reliability for Multimodal Object Detection and Segmentation for Bin Picking via Multimodal Redundancy [68.7563053122698]
We propose a reliable object detection and segmentation system with MultiModal Redundancy (MMRNet).
This is the first system that introduces the concept of multimodal redundancy to address sensor failure issues during deployment.
We present a new label-free multi-modal consistency (MC) score that utilizes the output from all modalities to measure the overall system output reliability and uncertainty.
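The consistency idea can be sketched as mean pairwise agreement among per-modality outputs; this is an illustrative simplification with assumed inputs (binary prediction masks), not MMRNet's actual MC score definition.

```python
import itertools
import numpy as np

def consistency_score(outputs: list) -> float:
    """Label-free consistency: mean pairwise agreement between per-modality
    prediction maps (e.g., binary masks). 1.0 means all modalities agree."""
    pairs = list(itertools.combinations(outputs, 2))
    agreements = [float(np.mean(a == b)) for a, b in pairs]
    return sum(agreements) / len(agreements)

rgb_pred = np.array([1, 1, 0, 0])
depth_pred = np.array([1, 1, 0, 1])
print(consistency_score([rgb_pred, depth_pred]))  # 0.75
```

A low score flags disagreement between modalities, which can be surfaced as output uncertainty, e.g., after a sensor failure.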
arXiv Detail & Related papers (2022-10-19T19:15:07Z)
- Multi-AI Complex Systems in Humanitarian Response [0.0]
We describe how multi-AI systems can arise, even in relatively simple real-world humanitarian response scenarios, and lead to potentially emergent, erratic, and erroneous behavior.
This paper is designed to be a first exposition on this topic in the field of humanitarian response, raising awareness, exploring the possible landscape of this domain, and providing a starting point for future work within the wider community.
arXiv Detail & Related papers (2022-08-24T03:01:21Z)
- Statistical Perspectives on Reliability of Artificial Intelligence Systems [6.284088451820049]
We provide statistical perspectives on the reliability of AI systems.
We introduce a so-called SMART statistical framework for AI reliability research.
We discuss recent developments in modeling and analysis of AI reliability.
arXiv Detail & Related papers (2021-11-09T20:00:14Z)
- Trustworthy AI: From Principles to Practices [44.67324097900778]
Many current AI systems were found vulnerable to imperceptible attacks, biased against underrepresented groups, lacking in user privacy protection, etc.
In this review, we strive to provide AI practitioners a comprehensive guide towards building trustworthy AI systems.
To unify the current fragmented approaches towards trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems.
arXiv Detail & Related papers (2021-10-04T03:20:39Z)
- Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks [62.923992740383966]
We present the first study of security issues of MSF-based perception in AD systems.
We generate a physically-realizable, adversarial 3D-printed object that misleads an AD system to fail in detecting it and thus crash into it.
Our results show that the attack achieves over 90% success rate across different object types and MSF algorithms.
arXiv Detail & Related papers (2021-06-17T05:11:07Z)
- Technology Readiness Levels for Machine Learning Systems [107.56979560568232]
Development and deployment of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
We have developed a proven systems engineering approach for machine learning development and deployment.
Our "Machine Learning Technology Readiness Levels" framework defines a principled process to ensure robust, reliable, and responsible systems.
arXiv Detail & Related papers (2021-01-11T15:54:48Z)
- Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
It is of primary importance that the resulting decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z)
- Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z) - AAAI FSS-19: Human-Centered AI: Trustworthiness of AI Models and Data
Proceedings [8.445274192818825]
It is crucial for predictive models to be uncertainty-aware and yield trustworthy predictions.
The focus of this symposium was on AI systems to improve data quality and technical robustness and safety.
Submissions from broadly defined areas also discussed approaches addressing requirements such as explainable models, human trust, and ethical aspects of AI.
arXiv Detail & Related papers (2020-01-15T15:30:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.