It is not "accuracy vs. explainability" -- we need both for trustworthy
AI systems
- URL: http://arxiv.org/abs/2212.11136v2
- Date: Fri, 23 Dec 2022 22:55:33 GMT
- Title: It is not "accuracy vs. explainability" -- we need both for trustworthy AI systems
- Authors: D. Petkovic
- Abstract summary: We are witnessing the emergence of an AI economy and society where AI technologies are increasingly impacting health care, business, transportation, and many aspects of everyday life.
However, AI systems may produce errors, can exhibit bias, may be sensitive to noise in the data, and often lack technical and judicial transparency, resulting in reduced trust and challenges to their adoption.
These shortcomings and concerns have been documented in both the scientific and the general press: accidents with self-driving cars; biases against people of color in healthcare, hiring, and face-recognition systems; and seemingly correct medical decisions later found to have been made for the wrong reasons.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We are witnessing the emergence of an AI economy and society where AI
technologies are increasingly impacting health care, business, transportation,
and many aspects of everyday life. Many successes have been reported where AI
systems even surpassed the accuracy of human experts. However, AI systems may
produce errors, can exhibit bias, may be sensitive to noise in the data, and
often lack technical and judicial transparency, resulting in reduced trust and
challenges to their adoption. These shortcomings and concerns have been
documented in both the scientific and the general press: accidents with
self-driving cars; biases against people of color in healthcare, hiring, and
face-recognition systems; and seemingly correct medical decisions later found
to have been made for the wrong reasons. This has led to the emergence of many
government and regulatory initiatives requiring trustworthy and ethical AI,
i.e., AI that provides accuracy and robustness, some form of explainability,
human control and oversight, elimination of bias, judicial transparency, and
safety. The challenges of delivering trustworthy AI systems have motivated
intense research on explainable AI (XAI), whose aim is to provide
human-understandable information about how AI systems make their decisions. In
this paper we first briefly summarize current XAI work and then challenge
recent arguments that frame accuracy and explainability as mutually exclusive
and that focus only on deep learning. We then present our recommendations for
the use of XAI across the full lifecycle of high-stakes trustworthy AI system
delivery: development, validation and certification, and trustworthy
production and maintenance.
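As an illustration of the kind of explanation XAI aims to provide, below is a minimal sketch of one widely used model-agnostic technique, permutation feature importance. The dataset and model are assumptions chosen for illustration; the paper surveys XAI broadly and does not prescribe this particular method.

```python
# Minimal sketch of a model-agnostic XAI technique (permutation feature
# importance); illustrative only -- not code from the paper itself.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy:
# features whose shuffling hurts most are the ones the model relies on,
# giving a human-understandable account of what drives its decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f} "
          f"+/- {result.importances_std[idx]:.3f}")
```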
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - Trustworthy XAI and Application [0.0]
The article explores XAI, reliable XAI, and several practical uses for reliable XAI.
We go over the three main components we determined to be pertinent in this context: transparency, explainability, and trustworthiness.
In the end, trustworthiness is crucial for establishing and maintaining trust between humans and AI systems.
arXiv Detail & Related papers (2024-10-22T16:10:10Z) - Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems [2.444630714797783]
We review and discuss the intricacies of AI bias: its definitions, methods for its detection and mitigation, and metrics for evaluating it.
We also discuss open challenges with regard to the trustworthiness and widespread application of AI across diverse domains of human-centric decision making.
arXiv Detail & Related papers (2024-08-28T06:04:25Z) - Trust in AI: Progress, Challenges, and Future Directions [6.724854390957174]
The increasing use of artificial intelligence (AI) systems in our daily life explains the significance of trust/distrust in AI from a user perspective.
Trust/distrust in AI plays the role of a regulator and could significantly control the extent of AI's diffusion into daily life.
arXiv Detail & Related papers (2024-03-12T20:26:49Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Never trust, always verify : a roadmap for Trustworthy AI? [12.031113181911627]
We examine trust in the context of AI-based systems to understand what it means for an AI system to be trustworthy.
We propose a trust (resp. zero-trust) model for AI and a set of properties that should be satisfied to ensure the trustworthiness of AI systems.
arXiv Detail & Related papers (2022-06-23T21:13:10Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Explainable AI meets Healthcare: A Study on Heart Disease Dataset [0.0]
The aim is to enlighten practitioners on the understandability and interpretability of explainable AI systems using a variety of techniques.
Our paper contains examples based on the heart disease dataset and elucidates how explainability techniques should be chosen to create trustworthiness.
arXiv Detail & Related papers (2020-11-06T05:18:43Z) - Effect of Confidence and Explanation on Accuracy and Trust Calibration
in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence scores can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making (a minimal sketch of such calibration follows this list).
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.