The Limits of AI Data Transparency Policy: Three Disclosure Fallacies
- URL: http://arxiv.org/abs/2601.18127v1
- Date: Mon, 26 Jan 2026 04:14:53 GMT
- Title: The Limits of AI Data Transparency Policy: Three Disclosure Fallacies
- Authors: Judy Hanwen Shen, Ken Liu, Angelina Wang, Sarah H. Cen, Andy K. Zhang, Caroline Meinhardt, Daniel Zhang, Kevin Klyman, Rishi Bommasani, Daniel E. Ho
- Abstract summary: Data transparency has emerged as a rallying cry for addressing concerns about AI. While these calls are crucial for accountability, current transparency policies often fall short of their intended aims. Similar to nutrition facts for food, policies aimed at nutrition facts for AI currently suffer from a limited consideration of research on effective disclosures.
- Score: 16.486301766411223
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data transparency has emerged as a rallying cry for addressing concerns about AI: data quality, privacy, and copyright chief among them. Yet while these calls are crucial for accountability, current transparency policies often fall short of their intended aims. Similar to nutrition facts for food, policies aimed at nutrition facts for AI currently suffer from a limited consideration of research on effective disclosures. We offer an institutional perspective and identify three common fallacies in policy implementations of data disclosures for AI. First, many data transparency proposals exhibit a specification gap between the stated goals of data transparency and the actual disclosures necessary to achieve such goals. Second, reform attempts exhibit an enforcement gap between required disclosures on paper and enforcement to ensure compliance in fact. Third, policy proposals manifest an impact gap between disclosed information and meaningful changes in developer practices and public understanding. Informed by the social science on transparency, our analysis identifies affirmative paths for transparency that are effective rather than merely symbolic.
Related papers
- A Confidential Computing Transparency Framework for a Comprehensive Trust Chain [7.9699781371465965]
Confidential Computing enhances privacy of data in-use through hardware-based Trusted Execution Environments (TEEs). TEEs require user trust, as they cannot guarantee the absence of vulnerabilities or backdoors. We propose a three-level conceptual framework providing organisations with a practical pathway to incrementally improve Confidential Computing transparency.
arXiv Detail & Related papers (2024-09-05T17:24:05Z)
- AI data transparency: an exploration through the lens of AI incidents [2.255682336735152]
This research explores the status of public documentation about data practices within AI systems generating public concern.
We highlight a need to develop systematic ways of monitoring AI data transparency that account for the diversity of AI system types.
arXiv Detail & Related papers (2024-09-05T07:23:30Z)
- Algorithmic Transparency and Participation through the Handoff Lens: Lessons Learned from the U.S. Census Bureau's Adoption of Differential Privacy [1.999925939110439]
We look at the U.S. Census Bureau's adoption of differential privacy in its updated disclosure avoidance system for the 2020 census.
This case study seeks to expand our understanding of how technical shifts implicate values.
We present three lessons from this case study toward grounding understandings of algorithmic transparency and participation.
arXiv Detail & Related papers (2024-05-29T15:29:16Z)
- Legally Binding but Unfair? Towards Assessing Fairness of Privacy Policies [0.0]
Privacy policies are expected to inform data subjects about their data protection rights and explain data management practices.
This implies that a privacy policy is written in a fair way, e.g., it does not use polarizing terms, does not require a certain education, or does not assume a particular social background.
Drawing on fundamental legal sources and fairness research, we identify how the dimensions of informational fairness, representational fairness, and ethics/morality relate to privacy policies.
We propose options to automatically assess policies in these fairness dimensions, based on text statistics, linguistic methods and artificial intelligence.
arXiv Detail & Related papers (2024-03-12T22:53:32Z)
- Foundation Model Transparency Reports [61.313836337206894]
We propose Foundation Model Transparency Reports, drawing upon the transparency reporting practices in social media.
We identify 6 design principles given the successes and shortcomings of social media transparency reporting.
Well-designed transparency reports could reduce compliance costs, in part due to overlapping regulatory requirements across different jurisdictions.
arXiv Detail & Related papers (2024-02-26T03:09:06Z)
- Users are the North Star for AI Transparency [111.5679109784322]
Despite widespread calls for transparent artificial intelligence systems, the term is too overburdened with disparate meanings to express precise policy aims or to orient concrete lines of research.
Part of why this happens is that a clear ideal of AI transparency goes unsaid in this body of work.
We explicitly name such a north star -- transparency that is user-centered, user-appropriate, and honest.
arXiv Detail & Related papers (2023-03-09T18:53:29Z)
- Bridging the Transparency Gap: What Can Explainable AI Learn From the AI Act? [0.8287206589886881]
The European Union has introduced detailed transparency requirements for AI systems.
There is a fundamental difference between XAI and the Act regarding what transparency is.
By comparing the disparate views of XAI and regulation, we arrive at four axes where practical work could bridge the transparency gap.
arXiv Detail & Related papers (2023-02-21T16:06:48Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- Achieving Transparency Report Privacy in Linear Time [1.9981375888949475]
We first investigate and demonstrate potential privacy hazards brought on by the deployment of transparency and fairness measures in released ATRs.
We then propose a linear-time optimal-privacy scheme, built upon standard linear fractional programming (LFP) theory, for announcing ATRs.
We quantify the privacy-utility trade-offs induced by our scheme, and analyze the impact of privacy perturbation on fairness measures in ATRs.
arXiv Detail & Related papers (2021-03-31T22:05:10Z)
- Trustworthy Transparency by Design [57.67333075002697]
We propose a transparency framework for software design, incorporating research on user trust and experience.
Our framework enables developing software that incorporates transparency in its design.
arXiv Detail & Related papers (2021-03-19T12:34:01Z)
- Dimensions of Transparency in NLP Applications [64.16277166331298]
Broader transparency in descriptions of and communication regarding AI systems is widely considered desirable.
Previous work has suggested that a trade-off exists between greater system transparency and user confusion.
arXiv Detail & Related papers (2021-01-02T11:46:17Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, limited ability to explain decisions, and bias in training data are some of the most prominent limitations.
We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.