Algorithmic Transparency and Participation through the Handoff Lens: Lessons Learned from the U.S. Census Bureau's Adoption of Differential Privacy
- URL: http://arxiv.org/abs/2405.19187v1
- Date: Wed, 29 May 2024 15:29:16 GMT
- Title: Algorithmic Transparency and Participation through the Handoff Lens: Lessons Learned from the U.S. Census Bureau's Adoption of Differential Privacy
- Authors: Amina A. Abdu, Lauren M. Chambers, Deirdre K. Mulligan, Abigail Z. Jacobs
- Abstract summary: We look at the U.S. Census Bureau's adoption of differential privacy in its updated disclosure avoidance system for the 2020 census.
This case study seeks to expand our understanding of how technical shifts implicate values.
We present three lessons from this case study toward grounding understandings of algorithmic transparency and participation.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Emerging discussions on the responsible government use of algorithmic technologies propose transparency and public participation as key mechanisms for preserving accountability and trust. But in practice, the adoption and use of any technology shifts the social, organizational, and political context in which it is embedded. Therefore, translating transparency and participation efforts into meaningful, effective accountability must take these shifts into account. We adopt two theoretical frames, Mulligan and Nissenbaum's handoff model and Star and Griesemer's boundary objects, to reveal such shifts during the U.S. Census Bureau's adoption of differential privacy (DP) in its updated disclosure avoidance system (DAS) for the 2020 census. This update preserved (and arguably strengthened) the confidentiality protections that the Bureau is mandated to uphold, and the Bureau engaged in a range of activities to facilitate public understanding of and participation in the system design process. Using publicly available documents concerning the Census Bureau's implementation of DP, this case study seeks to expand our understanding of how technical shifts implicate values, how such shifts can afford (or fail to afford) greater transparency and participation in system design, and the importance of localized expertise throughout. We present three lessons from this case study toward grounding understandings of algorithmic transparency and participation: (1) efforts towards transparency and participation in algorithmic governance must center values and policy decisions, not just technical design decisions; (2) the handoff model is a useful tool for revealing how such values may be cloaked beneath technical decisions; and (3) boundary objects alone cannot bridge distant communities without trusted experts traveling alongside to broker their adoption.
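To make the technical shift at the center of the case study concrete: differential privacy protects individuals by injecting calibrated random noise into published statistics, governed by a privacy-loss budget epsilon. The Bureau's production TopDown Algorithm is far more elaborate (it uses discrete noise distributions and hierarchical post-processing, which the sketch below does not reproduce), but a minimal Laplace mechanism on a single hypothetical block-level count illustrates the core primitive. All names and numbers here are illustrative assumptions, not drawn from the DAS.

```python
import numpy as np

def laplace_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy via the Laplace mechanism.

    Adding or removing one person changes a count query by at most
    `sensitivity`, so Laplace noise with scale sensitivity/epsilon
    satisfies epsilon-DP for that query.
    """
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical block-level population count (illustrative, not census data).
block_count = 137
for eps in (0.1, 1.0, 10.0):  # smaller epsilon -> stronger privacy, noisier release
    print(f"epsilon={eps:>5}: noisy count = {laplace_count(block_count, eps):.1f}")
```

Running the sketch makes the accuracy-privacy trade-off visible: at epsilon = 0.1 the released count is often tens of people off, while at epsilon = 10 it is nearly exact. Choosing epsilon is thus exactly the kind of value-laden policy decision the paper argues should not be cloaked as a purely technical one.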
Related papers
- The Pitfalls of "Security by Obscurity" And What They Mean for Transparent AI (2025-01-30)
  We identify three key themes in the security community's perspective on the benefits of transparency.
  We then provide a case study discussion on how transparency has shaped the research subfield of anonymization.
- Making Transparency Advocates: An Educational Approach Towards Better Algorithmic Transparency in Practice (2024-12-19)
  Concerns about the risks and harms posed by artificial intelligence (AI) have resulted in significant research on algorithmic transparency.
  Despite a decade of development in explainable AI (XAI), this research has not been fully translated into the implementation of algorithmic transparency.
  We test an approach for addressing this challenge by creating transparency advocates.
- Balancing Confidentiality and Transparency for Blockchain-based Process-Aware Information Systems (2024-12-07)
  We propose an architecture for blockchain-based PAISs aimed at preserving both confidentiality and transparency.
  Smart contracts enact, enforce, and store public interactions, while attribute-based encryption techniques specify access grants to confidential information.
- A Confidential Computing Transparency Framework for a Comprehensive Trust Chain (2024-09-05)
  Confidential Computing enhances the privacy of data in use through hardware-based Trusted Execution Environments (TEEs).
  TEEs require user trust, as they cannot guarantee the absence of vulnerabilities or backdoors.
  We propose a three-level conceptual framework that gives organisations a practical pathway to incrementally improve Confidential Computing transparency.
- Through the Looking-Glass: Transparency Implications and Challenges in Enterprise AI Knowledge Systems (2024-01-17)
  We present a reflective analysis of transparency requirements and impacts in AI knowledge systems.
  We formulate transparency as a key mediator in shaping different ways of seeing.
  We identify three transparency dimensions necessary to realize the value of AI knowledge systems.
- Explaining by Imitating: Understanding Decisions by Interpretable Policy Learning (2023-10-28)
  Understanding human behavior from observed data is critical for transparency and accountability in decision-making.
  Consider real-world settings such as healthcare, in which modeling a decision-maker's policy is challenging.
  We propose a data-driven representation of decision-making behavior that is transparent by design, accommodates partial observability, and operates completely offline.
- Users are the North Star for AI Transparency (2023-03-09)
  Despite widespread calls for transparent artificial intelligence systems, the term is too overburdened with disparate meanings to express precise policy aims or to orient concrete lines of research.
  Part of why this happens is that a clear ideal of AI transparency goes unsaid in this body of work.
  We explicitly name such a north star: transparency that is user-centered, user-appropriate, and honest.
- Trustworthy Transparency by Design (2021-03-19)
  We propose a transparency framework for software design that incorporates research on user trust and experience, enabling the development of software with transparency built into its design.
- Dimensions of Transparency in NLP Applications (2021-01-02)
  Broader transparency in descriptions of and communication regarding AI systems is widely considered desirable.
  Previous work has suggested a trade-off: greater system transparency may increase user confusion.
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty (2020-11-15)
  We argue for a complementary form of transparency: estimating and communicating the uncertainty associated with model predictions.
  We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
  This work is an interdisciplinary review drawing on literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.