A Survey on AI Assurance
- URL: http://arxiv.org/abs/2111.07505v1
- Date: Mon, 15 Nov 2021 02:45:34 GMT
- Title: A Survey on AI Assurance
- Authors: Feras A. Batarseh, and Laura Freeman
- Abstract summary: An important notion for the adoption of AI algorithms into operational decision processes is the concept of assurance.
This manuscript provides a systematic review of research works relevant to AI assurance between 1985 and 2021.
A new AI assurance definition is adopted and presented, and assurance methods are contrasted and tabulated.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial Intelligence (AI) algorithms are increasingly providing decision
making and operational support across multiple domains. AI includes a wide
library of algorithms for different problems. One important notion for the
adoption of AI algorithms into operational decision processes is the concept of
assurance. The literature on assurance, unfortunately, conceals its outcomes
within a tangled landscape of conflicting approaches, driven by contradictory
motivations, assumptions, and intuitions. Accordingly, although it addresses a rising and
novel area, this manuscript provides a systematic review of research works
relevant to AI assurance between 1985 and 2021, and aims to provide a
structured alternative to that landscape. A new AI assurance definition is
adopted and presented, and assurance methods are contrasted and tabulated.
Additionally, a ten-metric scoring system is developed and introduced to
evaluate and compare existing methods. Lastly, we provide
foundational insights, discussions, future directions, a roadmap, and
applicable recommendations for the development and deployment of AI assurance.
Related papers
- A Decision-driven Methodology for Designing Uncertainty-aware AI Self-Assessment [8.482630532500057]
It is unclear if a given AI system's predictions can be trusted by decision-makers in downstream applications.
This manuscript is a practical guide for machine learning engineers and AI system users to select the ideal self-assessment techniques.
arXiv Detail & Related papers (2024-08-02T14:43:45Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions [101.67121669727354]
Recent advancements in AI have highlighted the importance of guiding AI systems towards the intended goals, ethical principles, and values of individuals and groups, a concept broadly recognized as alignment.
The lack of clarified definitions and scopes of human-AI alignment poses a significant obstacle, hampering collaborative efforts across research domains to achieve this alignment.
We introduce a systematic review of over 400 papers published between 2019 and January 2024, spanning multiple domains such as Human-Computer Interaction (HCI), Natural Language Processing (NLP), and Machine Learning (ML).
arXiv Detail & Related papers (2024-06-13T16:03:25Z)
- Participatory Approaches in AI Development and Governance: Case Studies [9.824305892501686]
This paper forms the second of a two-part series on the value of a participatory approach to AI development and deployment.
The first paper crafted a principled, as well as pragmatic, justification for deploying participatory methods in these two exercises.
This paper tests those preliminary conclusions in two sectors: the use of facial recognition technology in law enforcement and the use of large language models in healthcare.
arXiv Detail & Related papers (2024-06-03T10:10:23Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI appears to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review [12.38351931894004]
We present the first systematic literature review of explainable methods for safe and trustworthy autonomous driving.
We identify five key contributions of XAI for safe and trustworthy AI in AD: interpretable design, interpretable surrogate models, interpretable monitoring, auxiliary explanations, and interpretable validation.
We propose a modular framework called SafeX to integrate these contributions, enabling explanation delivery to users while simultaneously ensuring the safety of AI models.
arXiv Detail & Related papers (2024-02-08T09:08:44Z)
- Trustworthy AI: Deciding What to Decide [41.10597843436572]
We propose a novel framework of Trustworthy AI (TAI) encompassing crucial components of AI.
We aim to use this framework to conduct TAI experiments through quantitative and qualitative research methods.
We formulate an optimal prediction model for applying the strategic investment decision of credit default swaps (CDS) in the technology sector.
arXiv Detail & Related papers (2023-11-21T13:43:58Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges, and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Physically Unclonable Functions and AI: Two Decades of Marriage [7.601937548486356]
The main focus here is to explore the methods borrowed from AI to assess the security of a hardware primitive.
By reviewing PUFs designed by applying AI techniques, we give insight into future research directions.
arXiv Detail & Related papers (2020-08-26T02:53:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.