U.S. Public Opinion on the Governance of Artificial Intelligence
- URL: http://arxiv.org/abs/1912.12835v1
- Date: Mon, 30 Dec 2019 07:38:38 GMT
- Title: U.S. Public Opinion on the Governance of Artificial Intelligence
- Authors: Baobao Zhang and Allan Dafoe
- Abstract summary: Existing studies find that the public's trust in institutions can play a major role in shaping the regulation of emerging technologies.
We examined Americans' perceptions of 13 AI governance challenges and their trust in governmental, corporate, and multistakeholder institutions to responsibly develop and manage AI.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial intelligence (AI) has widespread societal implications, yet social
scientists are only beginning to study public attitudes toward the technology.
Existing studies find that the public's trust in institutions can play a major
role in shaping the regulation of emerging technologies. Using a large-scale
survey (N=2000), we examined Americans' perceptions of 13 AI governance
challenges as well as their trust in governmental, corporate, and
multistakeholder institutions to responsibly develop and manage AI. While
Americans perceive all of the AI governance issues to be important for tech
companies and governments to manage, they have only low to moderate trust in
these institutions to manage AI applications.
Related papers
- AI, Global Governance, and Digital Sovereignty [1.3976439685325095]
We argue that AI systems will become embedded in global governance, creating dueling dynamics of public/private cooperation and contestation.
We conclude by sketching future directions for IR research on AI and global governance.
arXiv Detail & Related papers (2024-10-23T00:05:33Z)
- AI Governance in Higher Education: Case Studies of Guidance at Big Ten Universities [14.26619701452836]
Generative AI has drawn significant attention from stakeholders in higher education.
It simultaneously poses challenges to academic integrity and leads to ethical issues.
Leading universities have already published guidelines on Generative AI.
This study focuses on strategies for responsible AI governance as demonstrated in these guidelines.
arXiv Detail & Related papers (2024-09-03T16:06:45Z)
- Public Perception of AI: Sentiment and Opportunity [0.0]
We present results of public perception of AI from a survey conducted with 10,000 respondents across ten countries in four continents around the world.
Results show that an equal percentage of respondents believe AI will change the world as we know it and believe AI needs to be heavily regulated.
arXiv Detail & Related papers (2024-07-22T19:11:28Z)
- AI Procurement Checklists: Revisiting Implementation in the Age of AI Governance [18.290959557311552]
Public sector use of AI has been on the rise for the past decade, but only recently have these efforts entered the cultural zeitgeist.
While simple to articulate, promoting ethical and effective rollouts of AI systems in government is a notoriously elusive task.
arXiv Detail & Related papers (2024-04-23T01:45:38Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Both eyes open: Vigilant Incentives help Regulatory Markets improve AI Safety [69.59465535312815]
Regulatory Markets for AI is a proposal designed with adaptability in mind.
It involves governments setting outcome-based targets for AI companies to achieve.
We warn that it is alarmingly easy to stumble on incentive structures that would prevent Regulatory Markets from achieving this goal.
arXiv Detail & Related papers (2023-03-06T14:42:05Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines potential ways in which generative artworks can serve as accessible and powerful tools for educating stakeholders about AI ethics.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers [0.0]
Machine learning (ML) and artificial intelligence (AI) researchers play an important role in the ethics and governance of AI.
We conducted a survey of researchers who published in the top AI/ML conferences.
We find that AI/ML researchers place high levels of trust in international organizations and scientific organizations.
arXiv Detail & Related papers (2021-05-05T15:23:12Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.