TAPAS: A Pattern-Based Approach to Assessing Government Transparency
- URL: http://arxiv.org/abs/2505.16413v1
- Date: Thu, 22 May 2025 09:01:42 GMT
- Title: TAPAS: A Pattern-Based Approach to Assessing Government Transparency
- Authors: Jos Zuijderwijk, Iris Beerepoot, Thomas Martens, Eva Knies, Tanja van der Lippe, Hajo A. Reijers
- Abstract summary: We present the Transparency Anti-Pattern Assessment System (TAPAS), a data-driven methodology designed to evaluate government transparency through the identification of behavioral patterns that impede transparency. We show that TAPAS enables continuous monitoring and provides actionable insights without requiring significant resource investments.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Government transparency, widely recognized as a cornerstone of open government, depends on robust information management practices. Yet effective assessment of information management remains challenging, as existing methods fail to consider the actual working behavior of civil servants and are resource-intensive. Using a design science research approach, we present the Transparency Anti-Pattern Assessment System (TAPAS) -- a novel, data-driven methodology designed to evaluate government transparency through the identification of behavioral patterns that impede transparency. We demonstrate TAPAS's real-world applicability at a Dutch ministry, analyzing their electronic document management system data from the past two decades. We identify eight transparency anti-patterns grouped into four categories: Incomplete Documentation, Limited Accessibility, Unclear Information, and Delayed Documentation. We show that TAPAS enables continuous monitoring and provides actionable insights without requiring significant resource investments.
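The abstract does not describe how the anti-patterns are detected, but one of the named categories, Delayed Documentation, can be illustrated with a minimal event-log check. The field names, threshold, and detection rule below are illustrative assumptions for the sketch, not the paper's actual method:

```python
from datetime import datetime, timedelta

# Hypothetical records from an electronic document management system:
# each row links a registered document to the activity it documents.
events = [
    {"doc_id": "A1", "activity_date": "2024-01-10", "registered_date": "2024-01-12"},
    {"doc_id": "A2", "activity_date": "2024-01-10", "registered_date": "2024-03-25"},
    {"doc_id": "A3", "activity_date": "2024-02-01", "registered_date": "2024-02-05"},
]

def delayed_documentation(events, max_delay_days=30):
    """Flag documents registered long after the activity they describe
    (an illustrative 'Delayed Documentation' anti-pattern check)."""
    flagged = []
    for e in events:
        activity = datetime.fromisoformat(e["activity_date"])
        registered = datetime.fromisoformat(e["registered_date"])
        if registered - activity > timedelta(days=max_delay_days):
            flagged.append(e["doc_id"])
    return flagged

print(delayed_documentation(events))  # → ['A2']
```

A real assessment along these lines would mine such rules from two decades of system data rather than apply a single fixed threshold, but the sketch shows why this style of check is cheap to run continuously.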
Related papers
- DATABench: Evaluating Dataset Auditing in Deep Learning from an Adversarial Perspective [59.66984417026933]
We introduce a novel taxonomy, classifying existing methods based on their reliance on internal features (IF), inherent to the data, versus external features (EF), artificially introduced for auditing. We formulate two primary attack types: evasion attacks, designed to conceal the use of a dataset, and forgery attacks, intending to falsely implicate an unused dataset. Building on the understanding of existing methods and attack objectives, we further propose systematic attack strategies: decoupling, removal, and detection for evasion; adversarial example-based methods for forgery. Our benchmark, DATABench, comprises 17 evasion attacks, 5 forgery attacks, and 9
arXiv Detail & Related papers (2025-07-08T03:07:15Z) - Improving Regulatory Oversight in Online Content Moderation [2.1082552608122542]
The European Union introduced the Digital Services Act (DSA) to address the risks associated with digital platforms and promote a safer online environment. Despite the potential of components such as the Transparency Database, Transparency Reports, and Article 40 of the DSA to improve platform transparency, significant challenges remain. These include data inconsistencies and a lack of detailed information, which hinder transparency in content moderation practices.
arXiv Detail & Related papers (2025-06-04T16:38:25Z) - Word-level Annotation of GDPR Transparency Compliance in Privacy Policies using Large Language Models [0.0]
We introduce a large language model (LLM)-based framework for word-level transparency compliance annotation. This pipeline enables systematic identification and fine-grained annotation of transparency-related content in privacy policies. We conduct a comparative analysis of eight high-profile LLMs, providing insights into their effectiveness in identifying transparency disclosures.
arXiv Detail & Related papers (2025-03-13T11:41:25Z) - Balancing Confidentiality and Transparency for Blockchain-based Process-Aware Information Systems [46.404531555921906]
We propose an architecture for blockchain-based PAISs aimed at preserving both confidentiality and transparency. Smart contracts enact, enforce, and store public interactions, while attribute-based encryption techniques are adopted to specify access grants to confidential information.
arXiv Detail & Related papers (2024-12-07T20:18:36Z) - A Confidential Computing Transparency Framework for a Comprehensive Trust Chain [7.9699781371465965]
Confidential Computing enhances the privacy of data in use through hardware-based Trusted Execution Environments (TEEs). TEEs require user trust, as they cannot guarantee the absence of vulnerabilities or backdoors. We propose a three-level conceptual framework providing organisations with a practical pathway to incrementally improve Confidential Computing transparency.
arXiv Detail & Related papers (2024-09-05T17:24:05Z) - AI data transparency: an exploration through the lens of AI incidents [2.255682336735152]
This research explores the status of public documentation about data practices within AI systems generating public concern.
We highlight a need to develop systematic ways of monitoring AI data transparency that account for the diversity of AI system types.
arXiv Detail & Related papers (2024-09-05T07:23:30Z) - Algorithmic Transparency and Participation through the Handoff Lens: Lessons Learned from the U.S. Census Bureau's Adoption of Differential Privacy [1.999925939110439]
We look at the U.S. Census Bureau's adoption of differential privacy in its updated disclosure avoidance system for the 2020 census.
This case study seeks to expand our understanding of how technical shifts implicate values.
We present three lessons from this case study toward grounding understandings of algorithmic transparency and participation.
arXiv Detail & Related papers (2024-05-29T15:29:16Z) - Foundation Model Transparency Reports [61.313836337206894]
We propose Foundation Model Transparency Reports, drawing upon the transparency reporting practices in social media.
We identify 6 design principles given the successes and shortcomings of social media transparency reporting.
Well-designed transparency reports could reduce compliance costs, in part due to overlapping regulatory requirements across different jurisdictions.
arXiv Detail & Related papers (2024-02-26T03:09:06Z) - Rethinking People Analytics With Inverse Transparency by Design [57.67333075002697]
We propose a new design approach for workforce analytics we refer to as inverse transparency by design.
We find that architectural changes are made without inhibiting core functionality.
We conclude that inverse transparency by design is a promising approach to realize accepted and responsible people analytics.
arXiv Detail & Related papers (2023-05-16T21:37:35Z) - Trustworthy Transparency by Design [57.67333075002697]
We propose a transparency framework for software design, incorporating research on user trust and experience.
Our framework enables developing software that incorporates transparency in its design.
arXiv Detail & Related papers (2021-03-19T12:34:01Z) - Explainable Patterns: Going from Findings to Insights to Support Data Analytics Democratization [60.18814584837969]
We present Explainable Patterns (ExPatt), a new framework to support lay users in exploring and creating data storytelling.
ExPatt automatically generates plausible explanations for observed or selected findings using an external (textual) source of information.
arXiv Detail & Related papers (2021-01-19T16:13:44Z) - Beyond Privacy Trade-offs with Structured Transparency [3.5087540566347513]
We argue that many of these concerns reduce to 'the copy problem'.
We find that while the copy problem is not solvable, aspects of these amplifying problems have been addressed in a variety of disconnected fields.
We propose a five-part framework which groups these efforts into specific capabilities and offers a foundation for their integration into an overarching vision we call "structured transparency".
arXiv Detail & Related papers (2020-12-15T15:03:25Z) - Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.