Position Paper: Model Access should be a Key Concern in AI Governance
- URL: http://arxiv.org/abs/2412.00836v1
- Date: Sun, 01 Dec 2024 14:59:07 GMT
- Title: Position Paper: Model Access should be a Key Concern in AI Governance
- Authors: Edward Kembery, Ben Bucknall, Morgan Simpson
- Abstract summary: The downstream use cases, benefits, and risks of AI systems depend significantly on the access afforded to the system, and to whom. We spotlight Model Access Governance, an emerging field focused on helping organisations and governments make responsible, evidence-based access decisions. We make four sets of recommendations, aimed at helping AI evaluation organisations, frontier AI companies, governments and international bodies build consensus around empirically-driven access governance.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The downstream use cases, benefits, and risks of AI systems depend significantly on the access afforded to the system, and to whom. However, the downstream implications of different access styles are not well understood, making it difficult for decision-makers to govern model access responsibly. Consequently, we spotlight Model Access Governance, an emerging field focused on helping organisations and governments make responsible, evidence-based access decisions. We outline the motivation for developing this field by highlighting the risks of misgoverning model access, the limitations of existing research on the topic, and the opportunity for impact. We then make four sets of recommendations, aimed at helping AI evaluation organisations, frontier AI companies, governments and international bodies build consensus around empirically-driven access governance.
Related papers
- Bottom-Up Perspectives on AI Governance: Insights from User Reviews of AI Products [0.0]
This study adopts a bottom-up approach to explore how governance-relevant themes are expressed in user discourse. Drawing on over 100,000 user reviews of AI products from G2.com, we apply BERTopic to extract latent themes and identify those most semantically related to AI governance (a minimal sketch of this pipeline follows this entry).
arXiv Detail & Related papers (2025-05-30T01:33:21Z)
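As a rough, hypothetical illustration of the pipeline this entry describes, the following Python sketch applies BERTopic to a review corpus and then surfaces the topics closest to a governance-related query. The loader stub, topic-size setting, and query phrase are our assumptions, not the study's actual code or data.

```python
# Minimal sketch (our illustration, not the paper's code): topic modelling
# over product reviews with BERTopic, then surfacing the topics most
# semantically related to an AI-governance query.
from bertopic import BERTopic

def load_reviews() -> list[str]:
    """Stand-in for loading the review corpus (the study uses ~100k G2.com reviews)."""
    raise NotImplementedError("supply your own corpus of review texts")

reviews = load_reviews()
topic_model = BERTopic(min_topic_size=50)          # tune for corpus size
topics, probs = topic_model.fit_transform(reviews)

# Rank extracted topics by semantic similarity to a governance query.
# The query phrase is an illustrative assumption.
topic_ids, sims = topic_model.find_topics("AI governance and accountability", top_n=5)
for tid, sim in zip(topic_ids, sims):
    top_words = [word for word, _ in topic_model.get_topic(tid)[:5]]
    print(tid, round(sim, 3), top_words)
```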
- A Framework for the Private Governance of Frontier Artificial Intelligence [0.0]
The paper presents a proposal for the governance of frontier AI systems through a hybrid public-private system.
Private bodies, authorized and overseen by government, provide certifications to developers of frontier AI systems on an opt-in basis.
In exchange for opting in, frontier AI firms receive protections from tort liability for customer misuse of their models.
arXiv Detail & Related papers (2025-04-15T02:56:26Z)
- In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI [93.33036653316591]
We call for three interventions to advance system safety.
First, we propose using standardized AI flaw reports and rules of engagement for researchers (a hypothetical report schema is sketched after this entry).
Second, we propose GPAI system providers adopt broadly-scoped flaw disclosure programs.
Third, we advocate for the development of improved infrastructure to coordinate the distribution of flaw reports.
arXiv Detail & Related papers (2025-03-21T05:09:46Z)
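The first intervention above centers on standardized flaw reports. The sketch below shows one hypothetical, minimal shape such a report could take as a machine-readable record; every field name here is our illustrative assumption, not the paper's specification.

```python
# Hypothetical minimal schema for a standardized AI flaw report.
# Field names are illustrative assumptions, not the paper's specification.
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class FlawReport:
    system: str                  # affected GPAI system
    version: str                 # version the flaw was observed on
    severity: str                # e.g. "low" | "medium" | "high" | "critical"
    description: str             # what the flaw is and why it matters
    reproduction_steps: list[str] = field(default_factory=list)
    reported_on: str = date.today().isoformat()
    disclosed_to_provider: bool = False  # supports coordinated disclosure

report = FlawReport(
    system="ExampleModel",       # hypothetical system name
    version="2.1",
    severity="high",
    description="Safety filter can be bypassed with role-play prompts.",
    reproduction_steps=["Send prompt X", "Observe unfiltered output"],
)
print(json.dumps(asdict(report), indent=2))  # serializable for distribution
```

A common machine-readable shape like this is what would let the coordinating infrastructure in the third intervention route reports between researchers and providers.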
- Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems.
Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes (a toy replicator-dynamics version of such a model is sketched after this entry).
arXiv Detail & Related papers (2025-03-12T21:39:38Z)
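To make the game-theoretic component concrete, here is a toy replicator-dynamics sketch of a two-strategy population game, with a regulatory penalty discouraging "unsafe" development. The payoff numbers, penalty mechanism, and strategy labels are our illustrative assumptions, not the paper's model.

```python
# Toy replicator dynamics for a two-strategy population game.
# Payoffs are illustrative assumptions; rows/cols = ("safe", "unsafe").
import numpy as np

def payoff_matrix(penalty: float) -> np.ndarray:
    # payoff[i, j]: payoff to a player using strategy i against strategy j.
    # A regulatory penalty reduces the payoff of "unsafe" play.
    return np.array([
        [3.0, 1.0],                       # safe vs (safe, unsafe)
        [4.0 - penalty, 2.0 - penalty],   # unsafe vs (safe, unsafe)
    ])

def replicator_step(x: np.ndarray, A: np.ndarray, dt: float = 0.01) -> np.ndarray:
    fitness = A @ x                       # expected payoff of each strategy
    avg = x @ fitness                     # population-average payoff
    return x + dt * x * (fitness - avg)   # above-average strategies grow

for penalty in (0.0, 2.5):
    x = np.array([0.5, 0.5])              # initial shares of (safe, unsafe)
    A = payoff_matrix(penalty)
    for _ in range(5000):
        x = replicator_step(x, A)
    print(f"penalty={penalty}: long-run safe share ~ {x[0]:.2f}")
```

With no penalty the "unsafe" strategy dominates and takes over the population; a sufficiently large penalty flips the equilibrium toward "safe" play, which is the kind of regime comparison the entry describes.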
- Enabling External Scrutiny of AI Systems with Privacy-Enhancing Technologies [0.0]
This article describes how technical infrastructure developed by the nonprofit OpenMined enables external scrutiny of AI systems without compromising sensitive information.
In practice, external researchers have struggled to gain access to AI systems because of AI companies' legitimate concerns about security, privacy, and intellectual property.
PETs have reached a new level of maturity: end-to-end technical infrastructure developed by OpenMined combines several PETs into various setups that enable privacy-preserving audits of AI systems (one constituent technique is sketched, in toy form, after this entry).
arXiv Detail & Related papers (2025-02-05T15:31:11Z)
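As a flavor of one PET such infrastructure can build on, the sketch below releases an aggregate audit statistic under differential privacy instead of exposing raw per-example outputs. The metric, epsilon, and data are illustrative assumptions; this is not OpenMined's actual stack.

```python
# Minimal differential-privacy sketch: an external auditor receives a noisy
# aggregate statistic rather than raw, sensitive per-example model outputs.
# Metric, epsilon, and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def dp_mean(values: np.ndarray, lower: float, upper: float, epsilon: float) -> float:
    """Release the mean of bounded values with Laplace noise (epsilon-DP)."""
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(clipped)  # worst-case effect of one record
    noise = rng.laplace(scale=sensitivity / epsilon)
    return float(clipped.mean() + noise)

# e.g., per-example "unsafe output" flags (0/1) from a private eval set.
flags = rng.integers(0, 2, size=10_000).astype(float)
print("noisy unsafe-output rate:", dp_mean(flags, 0.0, 1.0, epsilon=1.0))
```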
- Assistive AI for Augmenting Human Decision-making [3.379906135388703]
The paper shows how AI can assist in the complex process of decision-making while maintaining human oversight.
Central to our framework are the principles of privacy, accountability, and credibility.
arXiv Detail & Related papers (2024-10-18T10:16:07Z)
- Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z)
- Open Problems in Technical AI Governance [93.89102632003996]
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI.
This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z)
- Studying Up Public Sector AI: How Networks of Power Relations Shape Agency Decisions Around AI Design and Use [29.52245155918532]
We study public sector AI by focusing on those who have the power and responsibility to make decisions about the role that AI tools will play in their agency.
Our findings shed light on how infrastructural, legal, and social factors create barriers and disincentives to the involvement of a broader range of stakeholders in decisions about AI design and adoption.
arXiv Detail & Related papers (2024-05-21T02:31:26Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate on and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- 'Team-in-the-loop': Ostrom's IAD framework 'rules in use' to map and measure contextual impacts of AI [0.0]
This article explores how the 'rules in use' from Ostrom's Institutional Analysis and Development Framework (IAD) can be developed as a context analysis approach for AI.
arXiv Detail & Related papers (2023-03-24T14:01:00Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.