Dimensions of Transparency in NLP Applications
- URL: http://arxiv.org/abs/2101.00433v1
- Date: Sat, 2 Jan 2021 11:46:17 GMT
- Title: Dimensions of Transparency in NLP Applications
- Authors: Michael Saxon, Sharon Levy, Xinyi Wang, Alon Albalak, William Yang Wang
- Abstract summary: Broader transparency in descriptions of and communication regarding AI systems is widely considered desirable.
Previous work has suggested that a trade-off exists between greater system transparency and user confusion.
- Score: 64.16277166331298
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Broader transparency in descriptions of and communication regarding AI
systems is widely considered desirable. This is particularly the case in
discussions of fairness and accountability in systems exposed to the general
public. However, previous work has suggested that a trade-off exists between
greater system transparency and user confusion, where "too much information"
clouds a reader's understanding of what a system description means.
Unfortunately, transparency is a nebulous concept, difficult to both define and
quantify. In this work we address these two issues by proposing a framework for
quantifying transparency in system descriptions and applying it to analyze the
trade-off between transparency and end-user confusion using NLP conference
abstracts.
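The framework itself is not reproduced in this abstract. Purely as a hypothetical sketch of what quantifying transparency in a system description could look like, the snippet below scores an abstract by counting hits against invented lexicons for a few candidate transparency dimensions; the dimension names, term lists, and scoring rule are all assumptions made for illustration, not the paper's method.

```python
# Hypothetical sketch only: score a system description along invented
# transparency "dimensions" by counting lexicon hits. These lexicons are
# NOT the paper's framework; they illustrate the general idea.
import re

LEXICONS = {
    "data":       {"dataset", "corpus", "annotation", "training data"},
    "model":      {"architecture", "parameters", "hyperparameters", "model"},
    "evaluation": {"accuracy", "baseline", "benchmark", "error analysis"},
}

def transparency_profile(abstract: str) -> dict:
    """Count how often each dimension's terms appear in the description."""
    text = abstract.lower()
    return {
        dim: sum(len(re.findall(r"\b" + re.escape(term) + r"\b", text))
                 for term in terms)
        for dim, terms in LEXICONS.items()
    }

profile = transparency_profile(
    "We train a transformer model on a newly annotated corpus and "
    "report accuracy against a strong baseline."
)
print(profile)  # {'data': 1, 'model': 1, 'evaluation': 2}
```

A profile like this makes the transparency/confusion trade-off concrete: the more dimensions a description covers, the more information a reader must absorb at once.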
Related papers
- AI data transparency: an exploration through the lens of AI incidents [2.255682336735152]
This research explores the state of public documentation about data practices in AI systems that have generated public concern.
We highlight a need to develop systematic ways of monitoring AI data transparency that account for the diversity of AI system types.
arXiv Detail & Related papers (2024-09-05T07:23:30Z)
- Algorithmic Transparency and Participation through the Handoff Lens: Lessons Learned from the U.S. Census Bureau's Adoption of Differential Privacy [1.999925939110439]
We look at the U.S. Census Bureau's adoption of differential privacy in its updated disclosure avoidance system for the 2020 census.
This case study seeks to expand our understanding of how technical shifts implicate values.
We present three lessons from this case study toward grounding understandings of algorithmic transparency and participation.
arXiv Detail & Related papers (2024-05-29T15:29:16Z)
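The Bureau's production disclosure-avoidance system (the TopDown algorithm) is far more involved than any short snippet; as background only, here is the textbook Laplace mechanism for an epsilon-differentially-private count, the basic primitive such systems build on. The data and epsilon below are made up.

```python
# Textbook Laplace mechanism for an epsilon-differentially-private count.
# Background sketch only; the Census Bureau's TopDown algorithm is far
# more complex than this single primitive.
import numpy as np

def dp_count(values, predicate, epsilon: float, rng: np.random.Generator):
    """Noisy count; a count query has sensitivity 1, so scale = 1/epsilon."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

rng = np.random.default_rng(0)
ages = [34, 71, 19, 45, 62, 23, 80]  # toy data
noisy = dp_count(ages, lambda a: a >= 65, epsilon=0.5, rng=rng)
print(f"noisy count of residents 65+: {noisy:.2f}")  # true count is 2
```

Smaller epsilon means more noise and stronger privacy, which is the accuracy/privacy tension at the center of the 2020 census debate.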
- Through the Looking-Glass: Transparency Implications and Challenges in Enterprise AI Knowledge Systems [3.7640559288894524]
We present the looking-glass metaphor and use it to conceptualize AI knowledge systems as systems that reflect and distort.
We identify three transparency dimensions necessary to realize the value of AI knowledge systems, namely system transparency, procedural transparency and transparency of outcomes.
arXiv Detail & Related papers (2024-01-17T18:47:30Z)
- Users are the North Star for AI Transparency [111.5679109784322]
Despite widespread calls for transparent artificial intelligence systems, the term is too overburdened with disparate meanings to express precise policy aims or to orient concrete lines of research.
Part of why this happens is that a clear ideal of AI transparency goes unsaid in this body of work.
We explicitly name such a north star -- transparency that is user-centered, user-appropriate, and honest.
arXiv Detail & Related papers (2023-03-09T18:53:29Z)
- Bridging the Transparency Gap: What Can Explainable AI Learn From the AI Act? [0.8287206589886881]
The European Union has introduced detailed transparency requirements for AI systems.
There is a fundamental difference between XAI and the Act regarding what transparency is.
By comparing the disparate views of XAI and regulation, we arrive at four axes where practical work could bridge the transparency gap.
arXiv Detail & Related papers (2023-02-21T16:06:48Z)
- FAT Forensics: A Python Toolbox for Implementing and Deploying Fairness, Accountability and Transparency Algorithms in Predictive Systems [69.24490096929709]
We developed an open source Python package called FAT Forensics.
It can inspect important fairness, accountability and transparency aspects of predictive algorithms.
Our toolbox can evaluate all elements of a predictive pipeline.
arXiv Detail & Related papers (2022-09-08T13:25:02Z)
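The snippet below deliberately does not use the FAT Forensics API (consult the package's documentation for that); it is a generic NumPy illustration of one kind of check such a toolbox automates, the demographic-parity gap between two groups' positive-prediction rates.

```python
# Generic illustration (NOT the FAT Forensics API) of a fairness check
# such toolboxes automate: the demographic-parity gap, i.e. the difference
# in positive-prediction rates between two protected groups.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in P(y_pred = 1) between group 0 and group 1."""
    rate0 = y_pred[group == 0].mean()
    rate1 = y_pred[group == 1].mean()
    return abs(rate0 - rate1)

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # toy model outputs
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # toy group labels
print(demographic_parity_gap(y_pred, group))  # 0.75 vs 0.25 -> 0.5
```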
- "Why Here and Not There?" -- Diverse Contrasting Explanations of Dimensionality Reduction [75.97774982432976]
We introduce the concept of contrasting explanations for dimensionality reduction.
We apply a realization of this concept to the specific application of explaining two-dimensional data visualizations.
arXiv Detail & Related papers (2022-06-15T08:54:39Z)
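The paper's general method is not reproduced here. For the special case of linear PCA with orthonormal components W, a "why here and not there?" contrast has a closed form: the minimum-norm input perturbation moving a point's 2-D embedding y = W^T x to a target t is delta = W (t - y). The sketch below assumes exactly this linear setting; the target location is arbitrary.

```python
# Sketch of a contrastive "why here and not there?" explanation for the
# linear-PCA special case (NOT the paper's general algorithm). With
# orthonormal components W (d x 2) and embedding y = W.T @ x, the
# minimum-norm perturbation reaching target t is delta = W @ (t - y).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))              # toy data
pca = PCA(n_components=2).fit(X)
W = pca.components_.T                      # (5, 2), orthonormal columns

x = X[0] - pca.mean_                       # work in centered coordinates
y = W.T @ x                                # "here": where the point lands
t = np.array([2.0, -1.0])                  # "there": an arbitrary contrast
delta = W @ (t - y)                        # minimal feature change needed

print(W.T @ (x + delta))                   # equals t (up to float error)
print(np.round(delta, 3))                  # per-feature changes to report
```

The per-feature entries of delta are the explanation: they say which input changes, and how large, would move the point to the contrast location instead.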
- Trustworthy Transparency by Design [57.67333075002697]
We propose a transparency framework for software design, incorporating research on user trust and experience.
Our framework enables developing software that incorporates transparency in its design.
arXiv Detail & Related papers (2021-03-19T12:34:01Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
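As a toy example of uncertainty serving as transparency (not the survey's own method), the sketch below averages a small ensemble's class probabilities, reports the predictive entropy, and abstains, deferring to a human, when entropy exceeds an arbitrary threshold.

```python
# Toy sketch: communicate uncertainty by reporting predictive entropy of
# an ensemble average, and abstain (defer to a human) when it is too high.
# The 0.6-nat threshold is arbitrary and for illustration only.
import numpy as np

def predict_with_uncertainty(prob_list, abstain_entropy: float = 0.6):
    """prob_list: one class-probability vector per ensemble member."""
    p = np.mean(prob_list, axis=0)                # ensemble average
    entropy = -np.sum(p * np.log(p + 1e-12))      # predictive entropy (nats)
    label = int(np.argmax(p)) if entropy < abstain_entropy else None
    return label, entropy

# Members agree -> low entropy, confident prediction.
print(predict_with_uncertainty([[0.9, 0.1], [0.85, 0.15], [0.95, 0.05]]))
# Members disagree -> high entropy, label is None (abstain).
print(predict_with_uncertainty([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]))
```

Surfacing the entropy alongside the label is one concrete way a system can communicate what it does not know.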