Public Opinion and The Rise of Digital Minds: Perceived Risk, Trust, and Regulation Support
- URL: http://arxiv.org/abs/2504.21849v1
- Date: Wed, 30 Apr 2025 17:56:23 GMT
- Title: Public Opinion and The Rise of Digital Minds: Perceived Risk, Trust, and Regulation Support
- Authors: Justin B. Bullock, Janet V. T. Pauketat, Hsini Huang, Yi-Fan Wang, Jacy Reese Anthis
- Abstract summary: This study examines how public trust in institutions and AI technologies, along with perceived risks, shapes preferences for AI regulation. Individuals with higher trust in government favor regulation, while those with greater trust in AI companies and AI technologies are less inclined to support restrictions.
- Score: 4.982210700018631
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Governance institutions must respond to societal risks, including those posed by generative AI. This study empirically examines how public trust in institutions and AI technologies, along with perceived risks, shapes preferences for AI regulation. Using the nationally representative 2023 Artificial Intelligence, Morality, and Sentience (AIMS) survey, we assess trust in government, AI companies, and AI technologies, as well as public support for regulatory measures such as slowing AI development or outright bans on advanced AI. Our findings reveal broad public support for AI regulation, with risk perception playing a significant role in shaping policy preferences. Individuals with higher trust in government favor regulation, while those with greater trust in AI companies and AI technologies are less inclined to support restrictions. Trust in government and perceived risks significantly predict preferences for both soft (e.g., slowing development) and strong (e.g., banning AI systems) regulatory interventions. These results highlight the importance of public opinion in AI governance. As AI capabilities advance, effective regulation will require balancing public concerns about risks with trust in institutions. This study provides a foundational empirical baseline for policymakers navigating AI governance and underscores the need for further research into public trust, risk perception, and regulatory strategies in the evolving AI landscape.
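To make the abstract's regression-style findings concrete, below is a minimal sketch of the kind of model it implies: regulation support estimated as a function of institutional trust and perceived risk. All variable names, scales, and coefficients here are hypothetical stand-ins, not the actual AIMS survey items or the paper's estimates.

```python
# Hypothetical sketch of the kind of model the abstract implies:
# support for regulation as a function of institutional trust and
# perceived risk. Data, variable names, and scales are illustrative
# assumptions, not the AIMS survey items or the paper's estimates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000  # stand-in for survey respondents

df = pd.DataFrame({
    "trust_gov": rng.integers(1, 8, n),          # assumed 1-7 Likert scales
    "trust_ai_companies": rng.integers(1, 8, n),
    "trust_ai_tech": rng.integers(1, 8, n),
    "perceived_risk": rng.integers(1, 8, n),
})

# Simulate a binary "support slowing AI development" outcome with signs
# matching the reported direction of effects: trust in government and
# perceived risk raise support; trust in AI companies/technologies lowers it.
latent = (0.4 * df["trust_gov"]
          - 0.3 * df["trust_ai_companies"]
          - 0.2 * df["trust_ai_tech"]
          + 0.5 * df["perceived_risk"]
          + rng.normal(0, 1, n))
df["support_slowdown"] = (latent > latent.median()).astype(int)

# Fit the logistic regression and inspect coefficient signs.
model = smf.logit(
    "support_slowdown ~ trust_gov + trust_ai_companies"
    " + trust_ai_tech + perceived_risk",
    data=df,
).fit()
print(model.summary())
```

The same specification could be refit with an ordinal or continuous outcome for a graded support measure; the abstract's distinction between soft and strong interventions would correspond to swapping in different dependent variables.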
Related papers
- Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems.
Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
arXiv Detail & Related papers (2025-03-12T21:39:38Z)
- AI Procurement Checklists: Revisiting Implementation in the Age of AI Governance [18.290959557311552]
Public sector use of AI has been on the rise for the past decade, but only recently have efforts to regulate it entered the cultural zeitgeist.
While simple to articulate, promoting ethical and effective rollouts of AI systems in government is a notoriously elusive task.
arXiv Detail & Related papers (2024-04-23T01:45:38Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting such research or releasing their findings will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z)
- Regulating AI-Based Remote Biometric Identification. Investigating the Public Demand for Bans, Audits, and Public Database Registrations [0.0]
The study focuses on the role of trust in AI as well as trust in law enforcement as potential factors that may lead to demands for regulation of AI technology.
We show that perceptions of discrimination lead to a demand for stronger regulation, while trust in AI and trust in law enforcement have the opposite effect, reducing demand for a ban on RBI systems.
arXiv Detail & Related papers (2024-01-24T17:22:33Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Both eyes open: Vigilant Incentives help Regulatory Markets improve AI Safety [69.59465535312815]
Regulatory Markets for AI is a proposal designed with adaptability in mind.
It involves governments setting outcome-based targets for AI companies to achieve.
We warn that it is alarmingly easy to stumble on incentives that would prevent Regulatory Markets from achieving this goal.
arXiv Detail & Related papers (2023-03-06T14:42:05Z)
- Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers [0.0]
Machine learning (ML) and artificial intelligence (AI) researchers play an important role in the ethics and governance of AI.
We conducted a survey of those who published in the top AI/ML conferences.
We find that AI/ML researchers place high levels of trust in international organizations and scientific organizations.
arXiv Detail & Related papers (2021-05-05T15:23:12Z)
- The Sanction of Authority: Promoting Public Trust in AI [4.729969944853141]
We argue that public distrust of AI originates from the under-development of a regulatory ecosystem that would guarantee the trustworthiness of the AIs that pervade society.
We elaborate on the pivotal role of externally auditable AI documentation within this model and the work to be done to ensure it is effective.
arXiv Detail & Related papers (2021-01-22T22:01:30Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
- U.S. Public Opinion on the Governance of Artificial Intelligence [0.0]
Existing studies find that the public's trust in institutions can play a major role in shaping the regulation of emerging technologies.
We examined Americans' perceptions of 13 AI governance challenges and their trust in governmental, corporate, and multistakeholder institutions to responsibly develop and manage AI.
arXiv Detail & Related papers (2019-12-30T07:38:38Z)