Explainable AI is Responsible AI: How Explainability Creates Trustworthy and Socially Responsible Artificial Intelligence
- URL: http://arxiv.org/abs/2312.01555v1
- Date: Mon, 4 Dec 2023 00:54:04 GMT
- Title: Explainable AI is Responsible AI: How Explainability Creates Trustworthy and Socially Responsible Artificial Intelligence
- Authors: Stephanie Baker, Wei Xiang
- Abstract summary: Responsible AI emphasizes the need to develop trustworthy AI systems.
XAI has been broadly considered a building block for responsible AI (RAI).
Our findings lead us to conclude that XAI is an essential foundation for every pillar of RAI.
- Score: 9.844540637074836
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial intelligence (AI) has been clearly established as a technology
with the potential to revolutionize fields from healthcare to finance - if
developed and deployed responsibly. This is the topic of responsible AI, which
emphasizes the need to develop trustworthy AI systems that minimize bias,
protect privacy, support security, and enhance transparency and accountability.
Explainable AI (XAI) has been broadly considered a building block for
responsible AI (RAI), with most of the literature treating it as a solution for
improved transparency. This work proposes that XAI and RAI are far more deeply
entwined. We explore the state-of-the-art literature on RAI and XAI
technologies and, based on our findings, demonstrate
that XAI can be utilized to ensure fairness, robustness, privacy, security, and
transparency in a wide range of contexts. Our findings lead us to conclude that
XAI is an essential foundation for every pillar of RAI.
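As one concrete illustration of the fairness pillar, explanation techniques can be used to audit whether a model relies on different features for different demographic groups. The following minimal Python sketch compares permutation-based feature importances across two groups; markedly different importance profiles are a signal to investigate for disparate treatment. The synthetic data, group attribute, and model here are all hypothetical, and this is a generic auditing pattern, not a method taken from the paper.

```python
# Hypothetical fairness audit: compare which features drive a model's
# predictions for two demographic groups via permutation importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: four features plus a binary group attribute. Labels
# depend on feature 1 only within group 1, simulating a group-dependent
# shortcut that a trained model may pick up.
X = rng.normal(size=(2000, 4))
group = rng.integers(0, 2, size=2000)
y = (X[:, 0] + 0.8 * group * X[:, 1]
     + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_full = np.column_stack([X, group])
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X_full, y, group, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)

# Explain the model separately for each group and compare attributions.
for g in (0, 1):
    mask = g_te == g
    result = permutation_importance(
        model, X_te[mask], y_te[mask], n_repeats=10, random_state=0)
    print(f"group {g} importances:", np.round(result.importances_mean, 3))
# A large gap between the two importance profiles (here, feature 1
# mattering only for group 1) flags the model for a fairness review.
```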
Related papers
- Trustworthy XAI and Application [0.0]
The article explores XAI, reliable XAI, and several practical applications of reliable XAI.
We cover the three main components we determined to be pertinent in this context: transparency, explainability, and trustworthiness.
In the end, trustworthiness is crucial for establishing and maintaining trust between humans and AI systems.
arXiv Detail & Related papers (2024-10-22T16:10:10Z) - Dataset | Mindset = Explainable AI | Interpretable AI [36.001670039529586]
"explainable" Artificial Intelligence (XAI)" and "interpretable AI (IAI)" interchangeably when we apply various XAI tools for a given dataset to explain the reasons that underpin machine learning (ML) outputs.
We argue that XAI is a subset of IAI. The concept of IAI is beyond the sphere of a dataset. It includes the domain of a mindset.
We aim to clarify these notions and lay the foundation of XAI, IAI, EAI, and TAI for many practitioners and policymakers in future AI applications and research.
arXiv Detail & Related papers (2024-08-22T14:12:53Z) - False Sense of Security in Explainable Artificial Intelligence (XAI) [3.298597939573779]
We argue that AI regulations and current market conditions threaten effective AI governance and safety.
Unless governments explicitly tackle the issue of explainability through clear legislative and policy statements, AI governance risks becoming a vacuous "box-ticking" exercise.
arXiv Detail & Related papers (2024-05-06T20:02:07Z) - Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review [12.38351931894004]
We present the first systematic literature review of explainable methods for safe and trustworthy autonomous driving.
We identify five key contributions of XAI for safe and trustworthy AI in AD: interpretable design, interpretable surrogate models (a minimal sketch of the surrogate idea appears after this list), interpretable monitoring, auxiliary explanations, and interpretable validation.
We propose a modular framework called SafeX to integrate these contributions, enabling explanation delivery to users while simultaneously ensuring the safety of AI models.
arXiv Detail & Related papers (2024-02-08T09:08:44Z) - VerifAI: Verified Generative AI [22.14231506649365]
Generative AI has made significant strides, yet concerns about its accuracy and reliability continue to grow.
We propose that verifying the outputs of generative AI from a data management perspective is an emerging research issue.
Our vision is to promote the development of verifiable generative AI and contribute to a more trustworthy and responsible use of AI.
arXiv Detail & Related papers (2023-07-06T06:11:51Z) - AI Maintenance: A Robustness Perspective [91.28724422822003]
We highlight robustness challenges across the AI lifecycle and motivate AI maintenance by analogy to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - A User-Centred Framework for Explainable Artificial Intelligence in
Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy or quality of the information presented and is not responsible for any consequences arising from its use.