Studying Up Public Sector AI: How Networks of Power Relations Shape Agency Decisions Around AI Design and Use
- URL: http://arxiv.org/abs/2405.12458v1
- Date: Tue, 21 May 2024 02:31:26 GMT
- Title: Studying Up Public Sector AI: How Networks of Power Relations Shape Agency Decisions Around AI Design and Use
- Authors: Anna Kawakami, Amanda Coston, Hoda Heidari, Kenneth Holstein, Haiyi Zhu
- Abstract summary: We reorient the study of public sector AI around those who have the power and responsibility to make decisions about the role that AI tools will play in their agency.
Our findings shed light on how infrastructural, legal, and social factors create barriers and disincentives to the involvement of a broader range of stakeholders in decisions about AI design and adoption.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: As public sector agencies rapidly introduce new AI tools in high-stakes domains like social services, it becomes critical to understand how decisions to adopt these tools are made in practice. We borrow from the anthropological practice to "study up" those in positions of power, and reorient our study of public sector AI around those who have the power and responsibility to make decisions about the role that AI tools will play in their agency. Through semi-structured interviews and design activities with 16 agency decision-makers, we examine how decisions about AI design and adoption are influenced by their interactions with and assumptions about other actors within these agencies (e.g., frontline workers and agency leaders), as well as those above (legal systems and contracted companies), and below (impacted communities). By centering these networks of power relations, our findings shed light on how infrastructural, legal, and social factors create barriers and disincentives to the involvement of a broader range of stakeholders in decisions about AI design and adoption. Agency decision-makers desired more practical support for stakeholder involvement around public sector AI to help overcome the knowledge and power differentials they perceived between them and other stakeholders (e.g., frontline workers and impacted community members). Building on these findings, we discuss implications for future research and policy around actualizing participatory AI approaches in public sector contexts.
Related papers
- Strategic AI Governance: Insights from Leading Nations
Artificial Intelligence (AI) has the potential to revolutionize various sectors, yet its adoption is often hindered by concerns about data privacy, security, and the understanding of AI capabilities.
This paper synthesizes AI governance approaches, strategic themes, and enablers and challenges for AI adoption by reviewing national AI strategies from leading nations.
arXiv Detail & Related papers (2024-09-16T06:00:42Z)
- Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize these stakeholders' beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- The Role of Large Language Models in the Recognition of Territorial Sovereignty: An Analysis of the Construction of Legitimacy
We argue that technology tools like Google Maps and Large Language Models (LLMs) are often perceived as impartial and objective.
We highlight the case of three controversial territories: Crimea, the West Bank, and Transnistria, by comparing the responses of ChatGPT against Wikipedia information and United Nations resolutions.
arXiv Detail & Related papers (2023-03-17T08:46:49Z)
- Improving Human-AI Partnerships in Child Welfare: Understanding Worker Practices, Challenges, and Desires for Algorithmic Decision Support
We present findings from a series of interviews at a child welfare agency to understand how workers currently make AI-assisted child maltreatment screening decisions.
We observe how workers' reliance upon the ADS is guided by (1) their knowledge of rich, contextual information beyond what the AI model captures, (2) their beliefs about the ADS's capabilities and limitations relative to their own, and (3) their awareness of misalignments between algorithmic predictions and their own decision-making objectives.
arXiv Detail & Related papers (2022-04-05T16:10:49Z)
- Stakeholder Participation in AI: Beyond "Add Diverse Stakeholders and Stir"
This paper aims to ground what we dub a 'participatory turn' in AI design by synthesizing existing literature on participation and through empirical analysis of its current practices.
Based on our literature synthesis and empirical research, this paper presents a conceptual framework for analyzing participatory approaches to AI design.
arXiv Detail & Related papers (2021-11-01T17:57:04Z)
- An Ethical Framework for Guiding the Development of Affectively-Aware Artificial Intelligence
We propose guidelines for evaluating the (moral and) ethical consequences of affectively-aware AI.
We propose a multi-stakeholder analysis framework that separates the ethical responsibilities of AI Developers vis-à-vis the entities that deploy such AI.
We end with recommendations for researchers, developers, operators, as well as regulators and law-makers.
arXiv Detail & Related papers (2021-07-29T03:57:53Z)
- Building Bridges: Generative Artworks to Explore AI Ethics
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help surface and bridge these differing stakeholder perspectives by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Where Responsible AI meets Reality: Practitioner Perspectives on Enablers for shifting Organizational Practices
This paper offers a framework for analyzing how organizational culture and structure impact the effectiveness of responsible AI initiatives in practice.
We present the results of semi-structured qualitative interviews with practitioners working in industry, investigating common challenges, ethical tensions, and effective enablers for responsible AI initiatives.
arXiv Detail & Related papers (2020-06-22T15:57:30Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.