Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders
- URL: http://arxiv.org/abs/2408.12047v1
- Date: Thu, 22 Aug 2024 00:14:37 GMT
- Title: Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders
- Authors: Anna Kawakami, Daricia Wilkinson, Alexandra Chouldechova, et al.
- Abstract summary: The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
- Score: 59.17981603969404
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The responsible AI (RAI) community has introduced numerous processes and artifacts (e.g., Model Cards, Transparency Notes, Data Cards) to facilitate transparency and support the governance of AI systems. While originally designed to scaffold and document AI development processes in technology companies, these artifacts are becoming central components of regulatory compliance under recent regulations such as the EU AI Act. Much prior work has explored the design of new RAI artifacts or their use by practitioners within technology companies. However, as RAI artifacts begin to play key roles in enabling external oversight, it becomes critical to understand how stakeholders--particularly those situated outside of technology companies who govern and audit industry AI deployments--perceive the efficacy of RAI artifacts. In this study, we conduct semi-structured interviews and design activities with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts. While participants believe that RAI artifacts are a valuable contribution to the broader AI governance ecosystem, many are concerned about their potential unintended, longer-term impacts on actors outside of technology companies (e.g., downstream end-users, policymakers, civil society stakeholders). We organize these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry, impeding civil society and legal stakeholders' ability to protect downstream end-users from potential AI harms. Participants envision how structural changes, along with changes in how RAI artifacts are designed, used, and governed, could help redirect the role of artifacts to support more collaborative and proactive external oversight of AI systems. We discuss research and policy implications for RAI artifacts.
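To make the notion of an "RAI artifact" concrete, the following is a minimal, hypothetical sketch of a Model Card-style artifact as a data structure. The field names loosely follow the commonly cited Model Cards proposal (Mitchell et al., 2019); the exact structure and example values are illustrative assumptions, not part of this paper.

```python
# Minimal, hypothetical sketch of a Model Card-style RAI artifact.
# Field names loosely follow the Model Cards proposal (Mitchell et al., 2019);
# the specific structure and values here are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ModelCard:
    model_details: Dict[str, str]          # developer, version, license, contact
    intended_use: List[str]                # supported use cases
    out_of_scope_use: List[str]            # uses the developer disclaims
    evaluation_metrics: Dict[str, float]   # e.g., overall and per-group performance
    ethical_considerations: List[str]      # known risks, sensitive attributes
    caveats_and_recommendations: List[str] = field(default_factory=list)


# Hypothetical example instance for illustration.
card = ModelCard(
    model_details={"developer": "ExampleCorp", "version": "1.0"},
    intended_use=["resume screening support with human review"],
    out_of_scope_use=["fully automated hiring decisions"],
    evaluation_metrics={"accuracy_overall": 0.91, "accuracy_group_a": 0.88},
    ethical_considerations=["performance varies across demographic groups"],
)
print(card.intended_use)
```

The point of the sketch is only that such artifacts are structured disclosures produced by the developing company; the paper's findings concern how stakeholders outside those companies perceive and are affected by them.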
Related papers
- Assistive AI for Augmenting Human Decision-making [3.379906135388703]
The paper shows how AI can assist in the complex process of decision-making while maintaining human oversight.
Central to our framework are the principles of privacy, accountability, and credibility.
arXiv Detail & Related papers (2024-10-18T10:16:07Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It draws on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Studying Up Public Sector AI: How Networks of Power Relations Shape Agency Decisions Around AI Design and Use [29.52245155918532]
We study public sector AI by focusing on those who have the power and responsibility to make decisions about the role that AI tools will play in their agency.
Our findings shed light on how infrastructural, legal, and social factors create barriers and disincentives to the involvement of a broader range of stakeholders in decisions about AI design and adoption.
arXiv Detail & Related papers (2024-05-21T02:31:26Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting independent AI evaluation and red-teaming research, or releasing their findings, will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z)
- Investigating Algorithm Review Boards for Organizational Responsible Artificial Intelligence Governance [0.16385815610837165]
We interviewed 17 technical contributors across organization types about their experiences with internal RAI governance.
We summarized the first detailed findings on algorithm review boards (ARBs) and similar review committees in practice.
Our results suggest that integration with existing internal regulatory approaches and leadership buy-in are among the most important attributes for success.
arXiv Detail & Related papers (2024-01-23T20:53:53Z)
- An Ethical Framework for Guiding the Development of Affectively-Aware Artificial Intelligence [0.0]
We propose guidelines for evaluating the (moral and) ethical consequences of affectively-aware AI.
We propose a multi-stakeholder analysis framework that separates the ethical responsibilities of AI Developers from those of the entities that deploy such AI.
We end with recommendations for researchers, developers, operators, as well as regulators and law-makers.
arXiv Detail & Related papers (2021-07-29T03:57:53Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help bridge these differing stakeholder perspectives by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Where Responsible AI meets Reality: Practitioner Perspectives on Enablers for shifting Organizational Practices [3.119859292303396]
This paper offers a framework for analyzing how organizational culture and structure impact the effectiveness of responsible AI initiatives in practice.
We present the results of semi-structured qualitative interviews with practitioners working in industry, investigating common challenges, ethical tensions, and effective enablers for responsible AI initiatives.
arXiv Detail & Related papers (2020-06-22T15:57:30Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.