Envisioning Stakeholder-Action Pairs to Mitigate Negative Impacts of AI: A Participatory Approach to Inform Policy Making
- URL: http://arxiv.org/abs/2502.14869v1
- Date: Fri, 24 Jan 2025 22:57:18 GMT
- Title: Envisioning Stakeholder-Action Pairs to Mitigate Negative Impacts of AI: A Participatory Approach to Inform Policy Making
- Authors: Julia Barnett, Kimon Kieslich, Natali Helberger, Nicholas Diakopoulos
- Abstract summary: The potential for negative impacts of AI has rapidly become more pervasive around the world. This has intensified a need for responsible AI governance. Ensuring that AI policies align with democratic expectations requires methods that prioritize the voices and needs of those impacted.
- Score: 2.981139602986498
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The potential for negative impacts of AI has rapidly become more pervasive around the world, and this has intensified a need for responsible AI governance. While many regulatory bodies endorse risk-based approaches and a multitude of risk mitigation practices are proposed by companies and academic scholars, these approaches are commonly expert-centered and thus lack the inclusion of a significant group of stakeholders. Ensuring that AI policies align with democratic expectations requires methods that prioritize the voices and needs of those impacted. In this work we develop a participatory and forward-looking approach to inform policy-makers and academics that puts the needs of lay stakeholders at the forefront and enriches the development of risk mitigation strategies. Our approach (1) maps potential mitigation and prevention strategies for negative AI impacts that assign responsibility to various stakeholders, (2) explores the importance and prioritization thereof in the eyes of laypeople, and (3) presents these insights in policy fact sheets, i.e., a digestible format for informing policy processes. We emphasize that this approach is not intended to replace policy-makers; rather, our aim is to present an informative method that enriches mitigation strategies and enables a more participatory approach to policy development.
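To make the notion of a stakeholder-action pair concrete, here is a minimal sketch of how such pairs and lay prioritizations might be represented and turned into a fact-sheet-style digest. All stakeholders, actions, and ratings below are hypothetical illustrations, not items from the study.

```python
from collections import defaultdict
from statistics import mean

# A stakeholder-action pair assigns a mitigation action to a responsible party.
# Every entry here is an invented example, not data from the paper.
pairs = [
    {"id": "P1", "stakeholder": "government", "action": "mandate AI impact audits"},
    {"id": "P2", "stakeholder": "developers", "action": "publish model limitations"},
    {"id": "P3", "stakeholder": "media", "action": "label AI-generated content"},
]

# Lay respondents rate each pair's importance on a 1-5 scale (invented data).
ratings = {"P1": [5, 4, 5, 3], "P2": [4, 4, 3, 5], "P3": [2, 3, 3, 4]}

def prioritize(pairs, ratings):
    """Rank stakeholder-action pairs by mean lay-rated importance."""
    scored = [(mean(ratings[p["id"]]), p) for p in pairs]
    return sorted(scored, key=lambda sp: sp[0], reverse=True)

# A "policy fact sheet" is approximated as a ranked digest per stakeholder.
fact_sheet = defaultdict(list)
for score, p in prioritize(pairs, ratings):
    fact_sheet[p["stakeholder"]].append(f"{p['action']} (priority {score:.1f})")

for stakeholder, actions in fact_sheet.items():
    print(f"{stakeholder}:")
    for action in actions:
        print(f"  - {action}")
```

Grouping the ranked pairs by responsible stakeholder mirrors the fact-sheet idea: each party receives a short, ranked list of the actions laypeople consider most important.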
Related papers
- Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems.
Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
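As a rough, hypothetical illustration of the evolutionary-game-theory component described above, the sketch below runs discrete-time replicator dynamics for a population of developers choosing between complying and defecting under regulatory fines of different sizes. All payoffs are invented and are not taken from the paper.

```python
import numpy as np

# Hypothetical 2x2 payoff matrix for developers; rows/cols = (comply, defect).
# A regulatory fine reduces a defector's payoff; all values are illustrative.
def payoff_matrix(fine: float) -> np.ndarray:
    return np.array([
        [3.0, 1.0],                 # comply vs (comply, defect)
        [4.0 - fine, 2.0 - fine],   # defect vs (comply, defect)
    ])

def replicator(x: float, A: np.ndarray, steps: int = 500, dt: float = 0.1) -> float:
    """Discrete-time replicator dynamics for the share x of compliant developers."""
    for _ in range(steps):
        pop = np.array([x, 1.0 - x])
        fitness = A @ pop                  # expected payoff of each strategy
        avg = pop @ fitness                # population-average payoff
        x += dt * x * (fitness[0] - avg)   # compliant share grows with its edge
        x = min(max(x, 0.0), 1.0)
    return x

for fine in (0.0, 1.5, 3.0):
    share = replicator(0.5, payoff_matrix(fine))
    print(f"fine={fine}: long-run compliant share = {share:.2f}")
```

Sweeping the fine stands in for comparing regulatory regimes: with no fine, defection dominates and compliance dies out; once the fine outweighs the gain from defecting, compliance takes over.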
arXiv Detail & Related papers (2025-03-12T21:39:38Z)
- Securing External Deeper-than-black-box GPAI Evaluations [49.1574468325115]
This paper examines the critical challenges and potential solutions for conducting secure and effective external evaluations of general-purpose AI (GPAI) models.
With the exponential growth of these models in size, capability, reach, and accompanying risk, ensuring accountability, safety, and public trust requires frameworks that go beyond traditional black-box methods.
arXiv Detail & Related papers (2025-03-10T16:13:45Z)
- Global Perspectives of AI Risks and Harms: Analyzing the Negative Impacts of AI Technologies as Prioritized by News Media [3.2566808526538873]
AI technologies have the potential to drive economic growth and innovation but can also pose significant risks to society. One way to understand these nuances is by looking at how the media reports on AI. We analyze a broad and diverse sample of global news media spanning 27 countries across Asia, Africa, Europe, the Middle East, North America, and Oceania.
arXiv Detail & Related papers (2025-01-23T19:14:11Z)
- Towards Responsible Governing AI Proliferation [0.0]
The paper introduces the 'Proliferation' paradigm, which anticipates the rise of smaller, decentralized, open-sourced AI models.
It posits that these developments are probable and likely to introduce both benefits and novel risks.
arXiv Detail & Related papers (2024-12-18T13:10:35Z)
- Implications for Governance in Public Perceptions of Societal-scale AI Risks [0.29022435221103454]
Voters perceive AI risks as both more likely and more impactful than experts do, and also advocate for slower AI development.
Policy interventions may best assuage collective concerns if they attempt to more carefully balance mitigation efforts across all classes of societal-scale risks.
arXiv Detail & Related papers (2024-06-10T11:52:25Z)
- Simulating Policy Impacts: Developing a Generative Scenario Writing Method to Evaluate the Perceived Effects of Regulation [3.2566808526538873]
We use GPT-4 to generate scenarios both before and after the introduction of a policy.
We then run a user study to evaluate these scenarios across four risk-assessment dimensions.
We find that this transparency legislation is perceived to be effective at mitigating harms in areas such as labor and well-being, but largely ineffective in areas such as social cohesion and security.
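As a hypothetical illustration of how such a pre/post comparison might be scored, the sketch below compares mean severity ratings of scenarios written before and after a policy is introduced, grouped by impact area. All numbers are invented and do not reproduce the paper's results.

```python
from statistics import mean

# Invented user-study ratings (perceived harm severity, 1-5) for scenarios
# generated before ("pre") and after ("post") a policy is introduced.
ratings = {
    "labor":           {"pre": [4, 5, 4], "post": [2, 3, 2]},
    "well-being":      {"pre": [4, 4, 5], "post": [3, 2, 3]},
    "social cohesion": {"pre": [4, 4, 3], "post": [4, 3, 4]},
    "security":        {"pre": [5, 4, 4], "post": [4, 4, 5]},
}

# The policy is judged effective in an area if post-policy scenarios
# are rated noticeably less severe than pre-policy ones.
for area, r in ratings.items():
    delta = mean(r["post"]) - mean(r["pre"])
    verdict = "mitigated" if delta < -0.5 else "largely unchanged"
    print(f"{area:16s} delta = {delta:+.2f} -> {verdict}")
```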
arXiv Detail & Related papers (2024-05-15T19:44:54Z)
- LLM as a Mastermind: A Survey of Strategic Reasoning with Large Language Models [75.89014602596673]
Strategic reasoning requires understanding and predicting adversary actions in multi-agent settings while adjusting strategies accordingly.
We explore the scopes, applications, methodologies, and evaluation metrics related to strategic reasoning with Large Language Models.
The survey underscores the importance of strategic reasoning as a critical cognitive capability and offers insights into future research directions and potential improvements.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate about and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- On strategies for risk management and decision making under uncertainty shared across multiple fields [55.2480439325792]
The paper finds more than 110 examples of such strategies, and this approach to risk is termed RDOT: the Risk-reducing Design and Operations Toolkit.
RDOT strategies fall into six broad categories: structural, reactive, formal, adversarial, multi-stage, and positive.
Overall, RDOT represents an overlooked class of versatile responses to uncertainty.
arXiv Detail & Related papers (2023-09-06T16:14:32Z)
- On the Value of Myopic Behavior in Policy Reuse [67.37788288093299]
Leveraging learned strategies in unfamiliar scenarios is fundamental to human intelligence.
In this work, we present a framework called Selective Myopic bEhavior Control (SMEC).
SMEC adaptively aggregates the sharable short-term behaviors of prior policies and the long-term behaviors of the task policy, leading to coordinated decisions.
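As a toy illustration of gated policy reuse (not the authors' SMEC algorithm), the sketch below uses a prior policy's short-horizon behavior only when its estimated short-term value beats the task policy's; the policies and the value stub are hypothetical placeholders.

```python
import random

def select_action(state, task_policy, prior_policies, short_value, margin=0.1):
    """Gate between reusable short-term behavior and long-term task behavior.

    `short_value(policy, state)` stands in for a learned k-step value estimate.
    """
    scored = [(short_value(p, state), p) for p in prior_policies]
    best_score, best_prior = max(scored, key=lambda sp: sp[0])
    if best_score > short_value(task_policy, state) + margin:
        return best_prior(state)   # reuse a prior policy's short-term behavior
    return task_policy(state)      # default to the task policy's long-term plan

# Minimal usage with stub policies and a random value estimate:
task = lambda s: "task_action"
priors = [lambda s: "prior_action_a", lambda s: "prior_action_b"]
value = lambda p, s: random.random()  # placeholder for a trained estimator
print(select_action(state=0, task_policy=task, prior_policies=priors, short_value=value))
```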
arXiv Detail & Related papers (2023-05-28T03:59:37Z)
- Institutionalising Ethics in AI through Broader Impact Requirements [8.793651996676095]
We reflect on a novel governance initiative by one of the world's largest AI conferences.
NeurIPS introduced a requirement for submitting authors to include a statement on the broader societal impacts of their research.
We investigate the risks, challenges and potential benefits of such an initiative.
arXiv Detail & Related papers (2021-05-30T12:36:43Z)
- Learning Goal-oriented Dialogue Policy with Opposite Agent Awareness [116.804536884437]
We propose an opposite-behavior-aware framework for policy learning in goal-oriented dialogues.
We estimate the opposite agent's policy from its observed behavior and use this estimate to improve the target agent by treating it as part of the target policy.
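As a hypothetical illustration (not the paper's method), the sketch below estimates an opponent's policy as empirical action frequencies and lets the target agent best-respond to that estimate; the dialogue actions and payoffs are invented.

```python
from collections import Counter

# Estimate the opposite agent's policy as empirical action frequencies.
observed_opponent_actions = ["refuse", "accept", "refuse", "refuse", "counter"]
counts = Counter(observed_opponent_actions)
total = sum(counts.values())
opponent_policy = {a: c / total for a, c in counts.items()}

# Invented payoff table: target action -> payoff against each opponent action.
payoff = {
    "offer_discount": {"refuse": 1.0, "accept": 3.0, "counter": 2.0},
    "hold_price":     {"refuse": 0.0, "accept": 4.0, "counter": 1.0},
}

def best_response(payoff, opponent_policy):
    """Pick the target action with the highest expected payoff under the
    estimated opponent policy, i.e., treat the estimate as part of the
    target agent's decision rule."""
    expected = {
        action: sum(opponent_policy.get(o, 0.0) * v for o, v in row.items())
        for action, row in payoff.items()
    }
    return max(expected, key=expected.get), expected

action, expected = best_response(payoff, opponent_policy)
print(f"estimated opponent policy: {opponent_policy}")
print(f"best response: {action} (expected payoffs: {expected})")
```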
arXiv Detail & Related papers (2020-04-21T03:13:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.