The Accountability Paradox: How Platform API Restrictions Undermine AI Transparency Mandates
- URL: http://arxiv.org/abs/2505.11577v1
- Date: Fri, 16 May 2025 14:30:20 GMT
- Title: The Accountability Paradox: How Platform API Restrictions Undermine AI Transparency Mandates
- Authors: Florian A. D. Burnat, Brittany I. Davidson
- Abstract summary: API restrictions on major social media platforms challenge compliance with the EU Digital Services Act [20], which mandates data access for algorithmic transparency. We develop a structured audit framework to assess the growing misalignment between regulatory requirements and platform implementations. We propose targeted policy interventions aligned with the AI Risk Management Framework of the National Institute of Standards and Technology.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent application programming interface (API) restrictions on major social media platforms challenge compliance with the EU Digital Services Act [20], which mandates data access for algorithmic transparency. We develop a structured audit framework to assess the growing misalignment between regulatory requirements and platform implementations. Our comparative analysis of X/Twitter, Reddit, TikTok, and Meta identifies critical "audit blind-spots" where platform content moderation and algorithmic amplification remain inaccessible to independent verification. Our findings reveal an "accountability paradox": as platforms increasingly rely on AI systems, they simultaneously restrict the capacity for independent oversight. We propose targeted policy interventions aligned with the AI Risk Management Framework of the National Institute of Standards and Technology [80], emphasizing federated access models and enhanced regulatory enforcement.
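The comparative analysis described above can be pictured as an audit matrix: platforms on one axis, DSA data-access criteria on the other, with "audit blind-spots" as the criteria a platform's API leaves unverifiable. The sketch below is purely illustrative and not from the paper; the criterion names and the per-platform access flags are hypothetical placeholders, not the authors' actual coding.

```python
# Illustrative audit matrix (hypothetical flags, not the paper's data):
# a blind-spot is any DSA data-access criterion the platform API does not meet.

DSA_CRITERIA = ["public_data_api", "researcher_access",
                "moderation_logs", "ranking_signals"]

platform_access = {
    "X/Twitter": {"public_data_api": False, "researcher_access": False,
                  "moderation_logs": False, "ranking_signals": False},
    "Reddit":    {"public_data_api": True,  "researcher_access": False,
                  "moderation_logs": False, "ranking_signals": False},
}

def audit_blind_spots(access: dict) -> dict:
    """Return, per platform, the criteria that cannot be independently verified."""
    return {platform: [c for c in DSA_CRITERIA if not flags.get(c, False)]
            for platform, flags in access.items()}

print(audit_blind_spots(platform_access))
```

A structured matrix like this makes the "accountability paradox" concrete: the rows with the most AI-driven curation are also the rows with the most blind-spot entries.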
Related papers
- Aegis: Towards Governance, Integrity, and Security of AI Voice Agents [52.7512082818639]
We propose Aegis, a framework for the governance, integrity, and security of voice agents. We evaluate the framework through case studies in banking call centers, IT support, and logistics. We observe systematic differences across model families, with open-weight models exhibiting higher susceptibility.
arXiv Detail & Related papers (2026-02-07T05:51:36Z) - Beyond the Checkbox: Strengthening DSA Compliance Through Social Media Algorithmic Auditing [7.686169912623941]
Algorithmic platforms are required to comply with obligations concerning algorithmic transparency, user protection, and privacy. Little is known about current auditing practices and their effectiveness in ensuring such compliance. Our analysis shows significant inconsistencies in methodologies and a lack of technical depth when evaluating AI-powered systems. We propose employing algorithmic auditing to enhance the depth, scale, and independence of compliance assessments.
arXiv Detail & Related papers (2026-01-26T12:00:29Z) - Compliance as a Trust Metric [1.0264137858888513]
This paper bridges this research gap by operationalizing regulatory compliance as a quantitative and dynamic trust metric. Our contribution is a quantitative model that assesses the severity of each violation along multiple dimensions, including its Volume, Duration, Breadth, and Criticality. We evaluate ACE on a synthetic hospital dataset, demonstrating its ability to accurately detect a range of complex HIPAA violations.
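A multi-dimensional severity score of the kind this abstract describes might look as follows. This is a minimal sketch under stated assumptions: the equal weights and the linear aggregation into a [0, 1] trust score are illustrative choices, not the paper's actual ACE model.

```python
# Hedged sketch: severity as a weighted combination of the four dimensions the
# abstract names (Volume, Duration, Breadth, Criticality), each normalised to
# [0, 1]; trust is one minus the mean severity of observed violations.

from dataclasses import dataclass

@dataclass
class Violation:
    volume: float       # fraction of records affected, 0..1
    duration: float     # fraction of the audit window, 0..1
    breadth: float      # fraction of systems/departments touched, 0..1
    criticality: float  # regulator-assigned weight, 0..1

def severity(v: Violation, weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    dims = (v.volume, v.duration, v.breadth, v.criticality)
    return sum(w * d for w, d in zip(weights, dims))

def trust_score(violations: list) -> float:
    """Compliance-as-trust: 1.0 with no violations, falling with mean severity."""
    if not violations:
        return 1.0
    return 1.0 - sum(severity(v) for v in violations) / len(violations)
```

Because severity is recomputed as violations appear and age out, the metric is dynamic rather than a one-off checkbox assessment.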
arXiv Detail & Related papers (2026-01-03T21:14:40Z) - An Agentic Software Framework for Data Governance under DPDP [0.0]
In India, the Digital Personal Data Protection Act mandates rigorous data privacy and compliance requirements. From a software development perspective, traditional compliance tools often rely on hard-coded rules and static configurations. This paper introduces a novel agentic framework that embeds compliance logic directly into software agents that govern and adapt data policies.
arXiv Detail & Related papers (2026-01-03T07:46:43Z) - Analyzing and Internalizing Complex Policy Documents for LLM Agents [53.14898416858099]
Large Language Model (LLM)-based agentic systems rely on in-context policy documents encoding diverse business rules. This motivates developing internalization methods that embed policy documents into model priors while preserving performance. We introduce CC-Gen, an agentic benchmark generator with Controllable Complexity across four levels.
arXiv Detail & Related papers (2025-10-13T16:30:07Z) - Safe and Certifiable AI Systems: Concepts, Challenges, and Lessons Learned [45.44933002008943]
This white paper presents the TÜV AUSTRIA Trusted AI framework, an end-to-end audit catalog and methodology for assessing and certifying machine learning systems. Building on three pillars - Secure Software Development, Functional Requirements, and Ethics & Data Privacy - it translates the high-level obligations of the EU AI Act into specific, testable criteria.
arXiv Detail & Related papers (2025-09-08T17:52:08Z) - PETLP: A Privacy-by-Design Pipeline for Social Media Data in AI Research [2.185322080975722]
PETLP (Privacy-by-design Extract, Transform, Load, and Present) is a compliance framework that embeds legal safeguards directly into extended pipelines. We demonstrate how extraction rights fundamentally differ between qualifying research organisations. We show why true anonymisation remains unachievable for social media data.
arXiv Detail & Related papers (2025-08-12T08:33:40Z) - Explainable AI Systems Must Be Contestable: Here's How to Make It Happen [2.5875936082584623]
This paper presents the first rigorous formal definition of contestability in explainable AI. We introduce a modular framework of by-design and post-hoc mechanisms spanning human-centered interfaces, technical processes, and organizational architectures. Our work equips practitioners with the tools to embed genuine recourse and accountability into AI systems.
arXiv Detail & Related papers (2025-06-02T13:32:05Z) - Ethical AI: Towards Defining a Collective Evaluation Framework [0.3413711585591077]
Artificial Intelligence (AI) is transforming sectors such as healthcare, finance, and autonomous systems. Yet its rapid integration raises urgent ethical concerns related to data ownership, privacy, and systemic bias. This article proposes a modular ethical assessment framework built on ontological blocks of meaning - discrete, interpretable units.
arXiv Detail & Related papers (2025-05-30T21:10:47Z) - Watermarking Without Standards Is Not AI Governance [46.71493672772134]
We argue that current implementations risk serving as symbolic compliance rather than delivering effective oversight. We propose a three-layer framework encompassing technical standards, audit infrastructure, and enforcement mechanisms.
arXiv Detail & Related papers (2025-05-27T18:10:04Z) - Are You Getting What You Pay For? Auditing Model Substitution in LLM APIs [60.881609323604685]
Large Language Models (LLMs) accessed via black-box APIs introduce a trust challenge. Users pay for services based on advertised model capabilities, yet providers may covertly substitute the specified model with a cheaper, lower-quality alternative to reduce operational costs. This lack of transparency undermines fairness, erodes trust, and complicates reliable benchmarking.
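One simple black-box check in the spirit of this entry is to compare the accuracy a provider's endpoint achieves on a held-out benchmark against a reference accuracy previously recorded for the advertised model. The sketch below is an assumption-laden illustration, not the paper's actual audit method: it uses a one-sided normal approximation, and the reference numbers in the test are invented.

```python
# Illustrative substitution check: a one-sided z-test asking whether observed
# benchmark accuracy is significantly *below* the accuracy the advertised
# model is known to achieve. A small p-value suggests possible substitution.

import math

def substitution_pvalue(correct: int, total: int, reference_acc: float) -> float:
    """Lower-tail p-value under a normal approximation to the binomial."""
    observed = correct / total
    se = math.sqrt(reference_acc * (1 - reference_acc) / total)
    z = (observed - reference_acc) / se
    # standard normal CDF via the complementary error function
    return 0.5 * math.erfc(-z / math.sqrt(2))
```

A persistent, statistically significant gap across several benchmarks would be grounds for a closer audit rather than proof of substitution on its own.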
arXiv Detail & Related papers (2025-04-07T03:57:41Z) - In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI [93.33036653316591]
We call for three interventions to advance system safety. First, we propose using standardized AI flaw reports and rules of engagement for researchers. Second, we propose that GPAI system providers adopt broadly scoped flaw disclosure programs. Third, we advocate for the development of improved infrastructure to coordinate the distribution of flaw reports.
arXiv Detail & Related papers (2025-03-21T05:09:46Z) - Policy Frameworks for Transparent Chain-of-Thought Reasoning in Large Language Models [1.0088912103548195]
Chain-of-Thought (CoT) reasoning enhances large language models (LLMs) by decomposing complex problems into step-by-step solutions. Current CoT disclosure policies vary widely across models in visibility, API access, and pricing strategies, and lack a unified policy framework. We propose a tiered-access policy framework that balances transparency, accountability, and security by tailoring CoT availability to academic, business, and general users.
arXiv Detail & Related papers (2025-03-14T19:54:18Z) - Compliance of AI Systems [0.0]
This paper systematically examines the compliance of AI systems with relevant legislation, focusing on the EU's AI Act. The analysis highlights many challenges associated with edge devices, which are increasingly used to deploy AI applications ever closer to the data sources. Dataset compliance emerges as a cornerstone for ensuring the trustworthiness, transparency, and explainability of AI systems.
arXiv Detail & Related papers (2025-03-07T16:53:36Z) - Semantic Integrity Constraints: Declarative Guardrails for AI-Augmented Data Processing Systems [39.23499993745249]
We introduce semantic integrity constraints (SICs) for specifying and enforcing correctness conditions over LLM outputs in semantic queries. SICs generalize traditional database integrity constraints to semantic settings, supporting common types of constraints such as grounding, soundness, and exclusion. We present a system design for integrating SICs into query planning and runtime, and discuss its realization in AI-augmented DPSs.
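The core idea of declarative constraints over LLM outputs can be sketched in a few lines. This is a simplified stand-in for illustration only, not the paper's system: the constraint names mirror the abstract (grounding, exclusion), but the checking logic and the output schema are hypothetical.

```python
# Minimal sketch of semantic-integrity-style checks: declarative predicates
# evaluated against a structured LLM output before it is accepted.

def grounded_in(source: str):
    """Grounding: every claimed quote must literally appear in the source text."""
    def check(output: dict) -> bool:
        return all(q in source for q in output.get("quotes", []))
    return check

def excludes(forbidden: set):
    """Exclusion: the answer must not mention any forbidden term."""
    def check(output: dict) -> bool:
        return not any(t in output.get("answer", "") for t in forbidden)
    return check

def enforce(output: dict, constraints: list) -> bool:
    """Accept the output only if every constraint holds."""
    return all(check(output) for check in constraints)
```

In a real system the constraints would be compiled into the query plan rather than checked post hoc, which is the integration the paper's design addresses.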
arXiv Detail & Related papers (2025-03-01T19:59:25Z) - Peer-induced Fairness: A Causal Approach for Algorithmic Fairness Auditing [0.0]
The European Union's Artificial Intelligence Act takes effect on 1 August 2024.
High-risk AI applications must adhere to stringent transparency and fairness standards.
We propose a novel framework that combines the strengths of counterfactual fairness with a peer comparison strategy.
arXiv Detail & Related papers (2024-08-05T15:35:34Z) - Accountability in Offline Reinforcement Learning: Explaining Decisions with a Corpus of Examples [70.84093873437425]
This paper introduces the Accountable Offline Controller (AOC) that employs the offline dataset as the Decision Corpus.
AOC operates effectively in low-data scenarios, can be extended to the strictly offline imitation setting, and displays qualities of both conservation and adaptability.
We assess AOC's performance in both simulated and real-world healthcare scenarios, emphasizing its capability to manage offline control tasks with high levels of performance while maintaining accountability.
arXiv Detail & Related papers (2023-10-11T17:20:32Z) - Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z) - Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z) - An Automatic Attribute Based Access Control Policy Extraction from Access Logs [5.142415132534397]
An attribute-based access control (ABAC) model provides a more flexible approach for addressing the authorization needs of complex and dynamic systems.
We present a methodology for automatically learning ABAC policy rules from access logs of a system to simplify the policy development process.
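The kind of log-to-policy mining this entry describes can be illustrated with a toy rule extractor. Real ABAC mining (including the paper's method) handles noisy logs and generalises over attribute values; the sketch below only keeps the attribute triples that were consistently permitted, and its log schema is an assumption for illustration.

```python
# Toy ABAC rule mining: from (role, resource, action, decision) log entries,
# extract the triples that were only ever permitted, never denied.

def mine_rules(log):
    """Return the set of (role, resource, action) triples always permitted."""
    granted, denied = set(), set()
    for role, resource, action, decision in log:
        target = granted if decision == "permit" else denied
        target.add((role, resource, action))
    return granted - denied
```

Generalisation would then merge triples that differ in one attribute into a single attribute-based rule, which is where the real complexity of the policy-extraction problem lies.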
arXiv Detail & Related papers (2020-03-16T15:08:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.