Cybertrust: From Explainable to Actionable and Interpretable AI (AI2)
- URL: http://arxiv.org/abs/2201.11117v1
- Date: Wed, 26 Jan 2022 18:53:09 GMT
- Title: Cybertrust: From Explainable to Actionable and Interpretable AI (AI2)
- Authors: Stephanie Galaitsi, Benjamin D. Trump, Jeffrey M. Keisler, Igor
Linkov, Alexander Kott
- Abstract summary: Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
- Score: 58.981120701284816
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: To benefit from AI advances, users and operators of AI systems must have reason to trust them. Trust arises from multiple interactions, where predictable
and desirable behavior is reinforced over time. Providing the system's users
with some understanding of AI operations can support predictability, but
forcing AI to explain itself risks constraining AI capabilities to only those
reconcilable with human cognition. We argue that AI systems should be designed
with features that build trust by bringing decision-analytic perspectives and
formal tools into AI. Instead of trying to achieve explainable AI, we should
develop interpretable and actionable AI. Actionable and Interpretable AI (AI2)
will incorporate explicit quantifications and visualizations of user confidence
in AI recommendations. In doing so, it will allow examining and testing of AI
system predictions to establish a basis for trust in the systems' decision
making and ensure broad benefits from deploying and advancing their computational capabilities.
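The abstract does not describe an implementation, but the idea of pairing an AI recommendation with an explicit, user-facing confidence quantification and visualization might look like the minimal Python sketch below. The ensemble probabilities, the ACT/HOLD threshold, and the text-bar rendering are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: attach an explicit confidence quantification and a
# simple visualization to an AI recommendation, in the spirit of AI2.
# The ensemble probabilities below are illustrative placeholders.
import numpy as np

def summarize_confidence(member_probs: np.ndarray, threshold: float = 0.5) -> str:
    """Aggregate per-model probabilities into a recommendation plus a
    quantified, visualized confidence statement for the user."""
    mean_p = float(member_probs.mean())
    spread = float(member_probs.std(ddof=1))        # disagreement across ensemble members
    lo, hi = np.percentile(member_probs, [5, 95])   # simple 90% band over members
    decision = "ACT" if mean_p >= threshold else "HOLD"
    bar = "#" * round(mean_p * 20)                  # crude text "visualization"
    return (f"recommendation: {decision}\n"
            f"confidence: {mean_p:.2f} (90% band {lo:.2f}-{hi:.2f}, spread {spread:.2f})\n"
            f"[{bar:<20}] {mean_p:.0%}")

if __name__ == "__main__":
    # e.g. five ensemble members scoring the same case
    probs = np.array([0.81, 0.77, 0.85, 0.72, 0.80])
    print(summarize_confidence(probs))
```

The point of the sketch is only that the confidence statement is computed and shown alongside the recommendation, so the user can examine and test it rather than being asked to accept an unquantified explanation.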
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Never trust, always verify : a roadmap for Trustworthy AI? [12.031113181911627]
We examine trust in the context of AI-based systems to understand what it means for an AI system to be trustworthy.
We propose a trust (resp. zero-trust) model for AI and suggest a set of properties that should be satisfied to ensure the trustworthiness of AI systems.
arXiv Detail & Related papers (2022-06-23T21:13:10Z) - Confident AI [0.0]
We propose "Confident AI" as a means to designing Artificial Intelligence (AI) and Machine Learning (ML) systems with both algorithm and user confidence in model predictions and reported results.
The 4 basic tenets of Confident AI are Repeatability, Believability, Sufficiency, and Adaptability.
arXiv Detail & Related papers (2022-02-12T02:26:46Z) - Structured access to AI capabilities: an emerging paradigm for safe AI
deployment [0.0]
Instead of openly disseminating AI systems, developers facilitate controlled, arm's length interactions with their AI systems.
The aim is to prevent dangerous AI capabilities from becoming widely accessible while preserving access to those capabilities that can be used safely.
arXiv Detail & Related papers (2022-01-13T19:30:16Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Know Your Model (KYM): Increasing Trust in AI and Machine Learning [4.93786553432578]
We analyze each element of trustworthiness and provide a set of 20 guidelines that can be leveraged to ensure optimal AI functionality.
The guidelines help ensure that trustworthiness is provable and can be demonstrated, they are implementation agnostic, and they can be applied to any AI system in any sector.
arXiv Detail & Related papers (2021-05-31T14:08:22Z) - Does Explainable Artificial Intelligence Improve Human Decision-Making? [17.18994675838646]
We compare objective human decision accuracy without AI (control), with an AI prediction (no explanation), and with an AI prediction accompanied by an explanation.
We find that any kind of AI prediction tends to improve user decision accuracy, but we find no conclusive evidence that explainable AI has a meaningful impact.
Our results indicate that, at least in some situations, the "why" information provided in explainable AI may not enhance user decision-making.
arXiv Detail & Related papers (2020-06-19T15:46:13Z) - Effect of Confidence and Explanation on Accuracy and Trust Calibration
in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that a confidence score can help calibrate people's trust in an AI model, but that trust calibration alone is not sufficient to improve AI-assisted decision making (a minimal calibration-check sketch follows this list).
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
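The last related paper above treats confidence scores as a signal for calibrating user trust. A natural prerequisite is that the reported scores are themselves calibrated; the hypothetical sketch below computes Expected Calibration Error on simulated data and is not code or data from the cited study.

```python
# Hypothetical sketch: check whether a model's confidence scores are well
# calibrated (a prerequisite for using them to calibrate user trust).
# Expected Calibration Error (ECE) is a standard metric; the sample data
# below is an illustrative placeholder.
import numpy as np

def expected_calibration_error(confidences: np.ndarray,
                               correct: np.ndarray,
                               n_bins: int = 10) -> float:
    """Bin predictions by confidence and compare mean confidence to accuracy."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap   # weight by fraction of samples in the bin
    return float(ece)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    conf = rng.uniform(0.5, 1.0, size=1000)
    # Simulate outcomes whose accuracy roughly tracks the reported confidence.
    correct = (rng.uniform(size=1000) < conf).astype(float)
    print(f"ECE ~= {expected_calibration_error(conf, correct):.3f}")
```

A low ECE suggests the displayed confidence can meaningfully inform user trust; a high ECE suggests the scores should be recalibrated before being surfaced to decision makers.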