Bridging the Transparency Gap: What Can Explainable AI Learn From the AI Act?
- URL: http://arxiv.org/abs/2302.10766v5
- Date: Sat, 29 Jul 2023 21:28:54 GMT
- Title: Bridging the Transparency Gap: What Can Explainable AI Learn From the AI Act?
- Authors: Balint Gyevnar, Nick Ferguson, Burkhard Schafer
- Abstract summary: The European Union's proposed AI Act introduces detailed transparency requirements for AI systems.
There is a fundamental difference between XAI and the Act regarding what transparency is.
By comparing the disparate views of XAI and regulation, we arrive at four axes where practical work could bridge the transparency gap.
- Score: 0.8287206589886881
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The European Union has proposed the Artificial Intelligence Act, which
introduces detailed transparency requirements for AI systems. Many of these
requirements can be addressed by the field of explainable AI (XAI); however,
there is a fundamental difference between XAI and the Act regarding what
transparency is. The Act views transparency as a means that supports wider
values, such as accountability, human rights, and sustainable innovation. In
contrast, XAI views transparency narrowly as an end in itself, focusing on
explaining complex algorithmic properties without considering the
socio-technical context. We call this difference the "transparency gap".
Failing to address the transparency gap, XAI risks leaving a range of
transparency issues unaddressed. To begin to bridge this gap, we overview and
clarify the terminology of how XAI and European regulation -- the Act and the
related General Data Protection Regulation (GDPR) -- view basic definitions of
transparency. By comparing the disparate views of XAI and regulation, we arrive
at four axes where practical work could bridge the transparency gap: defining
the scope of transparency, clarifying the legal status of XAI, addressing
issues with conformity assessment, and building explainability for datasets.
Related papers
- Algorithmic Transparency and Participation through the Handoff Lens: Lessons Learned from the U.S. Census Bureau's Adoption of Differential Privacy [1.999925939110439]
We look at the U.S. Census Bureau's adoption of differential privacy in its updated disclosure avoidance system for the 2020 census.
This case study seeks to expand our understanding of how technical shifts implicate values.
We draw three lessons from this case study to ground understandings of algorithmic transparency and participation.
arXiv Detail & Related papers (2024-05-29T15:29:16Z) - False Sense of Security in Explainable Artificial Intelligence (XAI) [3.298597939573779]
We argue that AI regulations and current market conditions threaten effective AI governance and safety.
Unless governments explicitly tackle the issue of explainability through clear legislative and policy statements, AI governance risks becoming a vacuous "box-ticking" exercise.
arXiv Detail & Related papers (2024-05-06T20:02:07Z) - Foundation Model Transparency Reports [61.313836337206894]
We propose Foundation Model Transparency Reports, drawing upon the transparency reporting practices in social media.
We identify 6 design principles given the successes and shortcomings of social media transparency reporting.
Well-designed transparency reports could reduce compliance costs, in part due to overlapping regulatory requirements across different jurisdictions.
arXiv Detail & Related papers (2024-02-26T03:09:06Z) - Explainable AI is Responsible AI: How Explainability Creates Trustworthy and Socially Responsible Artificial Intelligence [9.844540637074836]
Responsible AI (RAI) emphasizes the need to develop trustworthy AI systems.
XAI has been broadly considered a building block for RAI.
Our findings lead us to conclude that XAI is an essential foundation for every pillar of RAI.
arXiv Detail & Related papers (2023-12-04T00:54:04Z) - Users are the North Star for AI Transparency [111.5679109784322]
Despite widespread calls for transparent artificial intelligence systems, the term is too overburdened with disparate meanings to express precise policy aims or to orient concrete lines of research.
This is partly because a clear ideal of AI transparency goes unstated in this body of work.
We explicitly name such a north star -- transparency that is user-centered, user-appropriate, and honest.
arXiv Detail & Related papers (2023-03-09T18:53:29Z) - Think About the Stakeholders First! Towards an Algorithmic Transparency Playbook for Regulatory Compliance [14.043062659347427]
Laws are being proposed and passed by governments around the world to regulate Artificial Intelligence (AI) systems implemented into the public and private sectors.
Many of these regulations address the transparency of AI systems and related citizen-facing issues, such as granting individuals the right to an explanation of how an AI system makes decisions that affect them.
We propose a novel stakeholder-first approach that assists technologists in designing transparent, regulatory-compliant systems.
arXiv Detail & Related papers (2022-06-10T09:39:00Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that would need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of the actions needed to realize the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations; a minimal illustrative sketch of counterfactual search appears after this list.
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - Trustworthy Transparency by Design [57.67333075002697]
We propose a transparency framework for software design, incorporating research on user trust and experience.
Our framework enables the development of software that incorporates transparency into its design.
arXiv Detail & Related papers (2021-03-19T12:34:01Z) - Dimensions of Transparency in NLP Applications [64.16277166331298]
Broader transparency in descriptions of and communication regarding AI systems is widely considered desirable.
Previous work has suggested a trade-off: greater system transparency can lead to user confusion.
arXiv Detail & Related papers (2021-01-02T11:46:17Z) - Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
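To make the counterfactual-explanation entry above concrete, here is a minimal sketch of the underlying idea: given a trained classifier and a rejected input, search for a nearby input that flips the prediction. This toy is not the CEILS method, which performs the search in a latent space capturing causal relations among features so that the suggested changes are feasible; it uses a plain gradient search in feature space, and the classifier weights and feature values are hypothetical.

```python
# Toy counterfactual search for a fixed logistic classifier.
# Hypothetical example only; not the CEILS algorithm.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def counterfactual(x, w, b, target=1.0, lam=0.1, lr=0.5, steps=500):
    """Find x_cf close to x such that sigmoid(w @ x_cf + b) approaches target.

    Minimizes (p - target)^2 + lam * ||x_cf - x||^2 by gradient descent.
    """
    x_cf = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_cf + b)
        grad = 2 * (p - target) * p * (1 - p) * w + 2 * lam * (x_cf - x)
        x_cf -= lr * grad
    return x_cf

# Assumed loan-scoring model with two features (e.g. income, debt ratio).
w = np.array([1.5, -2.0])
b = -0.5
x = np.array([0.2, 0.8])          # applicant currently scored negatively
x_cf = counterfactual(x, w, b)
print("original prediction:      ", sigmoid(w @ x + b))
print("counterfactual features:  ", x_cf)
print("counterfactual prediction:", sigmoid(w @ x_cf + b))
```

The distance penalty lam keeps the counterfactual close to the original input; CEILS additionally constrains the search so that the recommended feature changes correspond to actions a user could actually take.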