Implications for Governance in Public Perceptions of Societal-scale AI Risks
- URL: http://arxiv.org/abs/2406.06199v1
- Date: Mon, 10 Jun 2024 11:52:25 GMT
- Title: Implications for Governance in Public Perceptions of Societal-scale AI Risks
- Authors: Ross Gruetzemacher, Toby D. Pilditch, Huigang Liang, Christy Manning, Vael Gates, David Moss, James W. B. Elsey, Willem W. A. Sleegers, Kyle Kilian
- Abstract summary: Voters perceive AI risks as both more likely and more impactful than experts do, and also advocate for slower AI development.
Policy interventions may best assuage collective concerns if they attempt to more carefully balance mitigation efforts across all classes of societal-scale risks.
- Score: 0.29022435221103454
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Amid growing concerns over AI's societal risks--ranging from civilizational collapse to misinformation and systemic bias--this study explores the perceptions of AI experts and US registered voters on the likelihood and impact of 18 specific AI risks, alongside their policy preferences for managing these risks. While both groups favor international oversight over national or corporate governance, our survey reveals a discrepancy: voters perceive AI risks as both more likely and more impactful than experts do, and also advocate for slower AI development. Specifically, our findings indicate that policy interventions may best assuage collective concerns if they attempt to more carefully balance mitigation efforts across all classes of societal-scale risks, effectively nullifying the near-vs-long-term debate over AI risks. More broadly, our results will serve not only to enable more substantive policy discussions for preventing and mitigating AI risks, but also to underscore the challenge of consensus building for effective policy implementation.
Related papers
- Reasons to Doubt the Impact of AI Risk Evaluations [0.0]
This paper asks whether evaluations significantly improve our understanding of AI risks and our ability to mitigate those risks.
It concludes with considerations for improving evaluation practices and 12 recommendations for AI labs, external evaluators, regulators, and academic researchers.
arXiv Detail & Related papers (2024-08-05T15:42:51Z)
- AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies [88.32153122712478]
We identify 314 unique risk categories organized into a four-tiered taxonomy.
At the highest level, this taxonomy encompasses System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks.
We aim to advance AI safety through information sharing across sectors and the promotion of best practices in risk mitigation for generative AI models and systems.
arXiv Detail & Related papers (2024-06-25T18:13:05Z)
- US-China perspectives on extreme AI risks and global governance [0.0]
We sought to better understand how experts in each country describe safety and security threats from advanced artificial intelligence.
We focused our analysis on advanced forms of artificial intelligence, such as artificial general intelligence (AGI)
Experts in both countries expressed concern about risks from AGI, risks from intelligence explosions, and risks from AI systems that escape human control.
arXiv Detail & Related papers (2024-06-23T17:31:27Z)
- Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about the potential risks of the technology, and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- Taking control: Policies to address extinction risks from AI [0.0]
We argue that voluntary commitments from AI companies would be an inappropriate and insufficient response.
We describe three policy proposals that would meaningfully address the threats from advanced AI.
arXiv Detail & Related papers (2023-10-31T15:53:14Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- An International Consortium for Evaluations of Societal-Scale Risks from Advanced AI [10.550015825854837]
A regulatory gap has permitted AI labs to conduct research, development, and deployment activities with minimal oversight.
Frontier AI system evaluations have been proposed as a way of assessing risks from the development and deployment of such systems.
This paper proposes a solution in the form of an international consortium for AI risk evaluations, comprising both AI developers and third-party AI risk evaluators.
arXiv Detail & Related papers (2023-10-22T23:37:48Z)
- An Overview of Catastrophic AI Risks [38.84933208563934]
This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories.
These are: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans.
arXiv Detail & Related papers (2023-06-21T03:35:06Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)