Bringing AI Participation Down to Scale: A Comment on Open AIs Democratic Inputs to AI Project
- URL: http://arxiv.org/abs/2407.11613v1
- Date: Tue, 16 Jul 2024 11:22:34 GMT
- Title: Bringing AI Participation Down to Scale: A Comment on Open AIs Democratic Inputs to AI Project
- Authors: David Moats, Chandrima Ganguly
- Abstract summary: We review the Open AI Democratic Inputs programme, which funded 10 teams to design procedures for public participation in generative AI.
We identify several shared assumptions including the generality of LLMs, extracting abstract values, soliciting solutions not problems and equating participation with democracy.
We call instead for AI participation which involves specific communities and use cases and solicits concrete problems to be remedied.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This commentary piece reviews the recent Open AI Democratic Inputs programme, which funded 10 teams to design procedures for public participation in generative AI. While applauding the technical innovations in these projects, we identify several shared assumptions including the generality of LLMs, extracting abstract values, soliciting solutions not problems and equating participation with democracy. We call instead for AI participation which involves specific communities and use cases and solicits concrete problems to be remedied. We also find it important that these communities have a stake in the outcome, including ownership of data or models.
Related papers
- Constraining Participation: Affordances of Feedback Features in Interfaces to Large Language Models [49.74265453289855]
Large language models (LLMs) are now accessible to anyone with a computer, a web browser, and an internet connection via browser-based interfaces.
This paper examines the affordances of interactive feedback features in ChatGPT's interface, analysing how they shape user input and participation in iteration.
arXiv Detail & Related papers (2024-08-27T13:50:37Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice [64.29355073494125]
This article aims to ground what we dub the "participatory turn" in AI design by synthesizing existing theoretical literature on participation.
We articulate empirical findings concerning the current state of participatory practice in AI design based on an analysis of recently published research and semi-structured interviews with 12 AI researchers and practitioners.
arXiv Detail & Related papers (2023-10-02T05:30:42Z)
- Going public: the role of public participation approaches in commercial AI labs [0.17205106391379024]
There is a dearth of evidence on attitudes to and approaches for participation in the sites driving major AI developments.
This paper explores how commercial AI labs understand participatory AI approaches and the obstacles they have faced implementing these practices.
arXiv Detail & Related papers (2023-06-16T14:34:28Z)
- Queer In AI: A Case Study in Community-Led Participatory AI [40.38471083181686]
Queer in AI is a case study for community-led participatory design in AI.
We examine how participatory design and intersectional tenets started and shaped this community's programs.
arXiv Detail & Related papers (2023-03-29T19:12:13Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Should Machine Learning Models Report to Us When They Are Clueless? [0.0]
We report that AI models extrapolate outside their range of familiar data.
Knowing whether a model has extrapolated or not is a fundamental insight that should be included in explaining AI models.
arXiv Detail & Related papers (2022-03-23T01:50:24Z)
- Stakeholder Participation in AI: Beyond "Add Diverse Stakeholders and Stir" [76.44130385507894]
This paper aims to ground what we dub a 'participatory turn' in AI design by synthesizing existing literature on participation and through empirical analysis of its current practices.
Based on our literature synthesis and empirical research, this paper presents a conceptual framework for analyzing participatory approaches to AI design.
arXiv Detail & Related papers (2021-11-01T17:57:04Z)
- Empowering Local Communities Using Artificial Intelligence [70.17085406202368]
Exploring the impact of AI on society from a people-centered perspective has become an important topic.
Previous works in citizen science have identified methods of using AI to engage the public in research.
This article discusses the challenges of applying AI in Community Citizen Science.
arXiv Detail & Related papers (2021-10-05T12:51:11Z)