An Exploratory Study of AI System Risk Assessment from the Lens of Data
Distribution and Uncertainty
- URL: http://arxiv.org/abs/2212.06828v1
- Date: Tue, 13 Dec 2022 03:34:25 GMT
- Title: An Exploratory Study of AI System Risk Assessment from the Lens of Data
Distribution and Uncertainty
- Authors: Zhijie Wang, Yuheng Huang, Lei Ma, Haruki Yokoyama, Susumu Tokumoto,
Kazuki Munakata
- Abstract summary: Deep learning (DL) has become a driving force and has been widely adopted in many domains and applications.
This paper initiates an early exploratory study of AI system risk assessment from both the data distribution and uncertainty angles.
- Score: 4.99372598361924
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning (DL) has become a driving force and has been widely adopted in
many domains and applications with competitive performance. In practice, to
solve nontrivial and complicated real-world tasks, DL is often not used
standalone; instead, it contributes as one component of a larger, more complex
AI system. Although there is a fast-growing trend of studying the quality
issues of deep neural networks (DNNs) at the model level, few
studies have been performed to investigate the quality of DNNs at both the unit
level and the potential impacts at the system level. More importantly, there is
also a lack of systematic investigation into how to perform risk assessment for
AI systems from the unit level to the system level. To bridge this gap, this paper
initiates an early exploratory study of AI system risk assessment from both the
data distribution and uncertainty angles. We propose a general framework,
together with an exploratory study, for analyzing AI systems. After
large-scale (700+ experimental configurations and 5000+ GPU hours) experiments
and in-depth analyses, we reached several key findings that highlight the
practical need and opportunities for deeper investigation into AI systems.
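The abstract frames risk along two axes: how far an input drifts from the training distribution and how uncertain a DNN unit is about its own prediction. The sketch below is only a minimal illustration of those two angles using common proxy scores (nearest-centroid feature distance and softmax entropy); it is not the framework or the metrics proposed in the paper, and all names, data, and thresholds are assumptions for illustration.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(logits):
    # Uncertainty angle: entropy of the unit's softmax output (higher = less confident).
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def distribution_shift_score(test_feats, train_feats, train_labels):
    # Data-distribution angle: distance from each test feature vector to the
    # nearest training-class centroid (higher = more out-of-distribution).
    centroids = np.stack([train_feats[train_labels == c].mean(axis=0)
                          for c in np.unique(train_labels)])
    dists = np.linalg.norm(test_feats[:, None, :] - centroids[None, :, :], axis=-1)
    return dists.min(axis=-1)

# Toy usage: flag an input to a DNN unit as risky when either score is high.
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(200, 8))
train_labels = rng.integers(0, 3, size=200)
test_feats = rng.normal(loc=3.0, size=(5, 8))   # distribution-shifted inputs
test_logits = rng.normal(size=(5, 3))           # the unit's raw outputs

risky = (distribution_shift_score(test_feats, train_feats, train_labels) > 4.0) \
        | (predictive_entropy(test_logits) > 1.0)
print(risky)
```

In a system-level view, such per-unit scores could be propagated to downstream components; the thresholds above are arbitrary placeholders, not values from the study.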
Related papers
- A Survey on Failure Analysis and Fault Injection in AI Systems [28.30817443151044]
The complexity of AI systems has exposed their vulnerabilities, necessitating robust methods for failure analysis (FA) and fault injection (FI) to ensure resilience and reliability.
This study fills this gap by presenting a detailed survey of existing FA and FI approaches across six layers of AI systems.
Our findings reveal a taxonomy of AI system failures, assess the capabilities of existing FI tools, and highlight discrepancies between real-world and simulated failures.
arXiv Detail & Related papers (2024-06-28T00:32:03Z)
- Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- SAIH: A Scalable Evaluation Methodology for Understanding AI Performance Trend on HPC Systems [18.699431277588637]
We propose a scalable evaluation methodology (SAIH) for analyzing the AI performance trend of HPC systems.
As the data and model constantly scale, we can investigate the trend and range of AI performance on HPC systems.
arXiv Detail & Related papers (2022-12-07T02:42:29Z)
- A Survey on Large-Population Systems and Scalable Multi-Agent Reinforcement Learning [18.918558716102144]
We will shed light on current approaches to tractably understanding and analyzing large-population systems.
We will survey potential areas of application for large-scale control and identify fruitful future applications of learning algorithms in practical systems.
arXiv Detail & Related papers (2022-09-08T14:58:50Z)
- Proceedings of the Robust Artificial Intelligence System Assurance (RAISA) Workshop 2022 [0.0]
The RAISA workshop will focus on research, development and application of robust artificial intelligence (AI) and machine learning (ML) systems.
Rather than studying robustness with respect to particular ML algorithms, our approach will be to explore robustness assurance at the system architecture level.
arXiv Detail & Related papers (2022-02-10T01:15:50Z)
- Robustness testing of AI systems: A case study for traffic sign recognition [13.395753930904108]
This paper presents how the robustness of AI systems can be practically examined and which methods and metrics can be used to do so.
The robustness testing methodology is described and analysed for the example use case of traffic sign recognition in autonomous driving.
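For a concrete flavor of this kind of check, the sketch below perturbs a batch of inputs and measures how often a classifier's prediction survives the perturbation. It is not the methodology or the metrics of the cited case study; `classify`, the perturbation set, and the stand-in data are all hypothetical.

```python
import numpy as np

def classify(images):
    # Hypothetical stand-in for a traffic-sign recognition model:
    # maps mean pixel intensity to one of three classes.
    return (images.mean(axis=(1, 2, 3)) * 10).astype(int) % 3

def robustness_rate(images, perturb, rng):
    # Fraction of inputs whose predicted label is unchanged after perturbation.
    clean = classify(images)
    perturbed = classify(np.clip(perturb(images, rng), 0.0, 1.0))
    return float((clean == perturbed).mean())

rng = np.random.default_rng(0)
signs = rng.random((32, 64, 64, 3))  # stand-in batch of sign images in [0, 1]

perturbations = {
    "gaussian_noise": lambda x, r: x + r.normal(scale=0.05, size=x.shape),
    "brightness_shift": lambda x, r: x + 0.2,
}
for name, perturb in perturbations.items():
    print(name, robustness_rate(signs, perturb, rng))
```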
arXiv Detail & Related papers (2021-08-13T10:29:09Z)
- Artificial Intelligence for IT Operations (AIOPS) Workshop White Paper [50.25428141435537]
Artificial Intelligence for IT Operations (AIOps) is an emerging interdisciplinary field arising in the intersection between machine learning, big data, streaming analytics, and the management of IT operations.
The main aim of the AIOps workshop is to bring together researchers from both academia and industry to present their experiences, results, and work in progress in this field.
arXiv Detail & Related papers (2021-01-15T10:43:10Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- AutoOD: Automated Outlier Detection via Curiosity-guided Search and Self-imitation Learning [72.99415402575886]
Outlier detection is an important data mining task with numerous practical applications.
We propose AutoOD, an automated outlier detection framework, which aims to search for an optimal neural network model.
Experimental results on various real-world benchmark datasets demonstrate that the deep model identified by AutoOD achieves the best performance.
arXiv Detail & Related papers (2020-06-19T18:57:51Z)
- Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z)