Ethics and Governance of Artificial Intelligence: Evidence from a Survey
of Machine Learning Researchers
- URL: http://arxiv.org/abs/2105.02117v1
- Date: Wed, 5 May 2021 15:23:12 GMT
- Title: Ethics and Governance of Artificial Intelligence: Evidence from a Survey
of Machine Learning Researchers
- Authors: Baobao Zhang, Markus Anderljung, Lauren Kahn, Noemi Dreksler, Michael
C. Horowitz, Allan Dafoe
- Abstract summary: Machine learning (ML) and artificial intelligence (AI) researchers play an important role in the ethics and governance of AI.
We conducted a survey of those who published in the top AI/ML conferences.
We find that AI/ML researchers place high levels of trust in international organizations and scientific organizations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning (ML) and artificial intelligence (AI) researchers play an
important role in the ethics and governance of AI, including taking action
against what they perceive to be unethical uses of AI (Belfield, 2020; Van
Noorden, 2020). Nevertheless, this influential group's attitudes are not well
understood, which undermines our ability to discern consensuses or
disagreements between AI/ML researchers. To examine these researchers' views,
we conducted a survey of those who published in the top AI/ML conferences (N =
524). We compare these results with those from a 2016 survey of AI/ML
researchers (Grace, Salvatier, Dafoe, Zhang, & Evans, 2018) and a 2018 survey
of the US public (Zhang & Dafoe, 2020). We find that AI/ML researchers place
high levels of trust in international organizations and scientific
organizations to shape the development and use of AI in the public interest;
moderate trust in most Western tech companies; and low trust in national
militaries, Chinese tech companies, and Facebook. While the respondents were
overwhelmingly opposed to AI/ML researchers working on lethal autonomous
weapons, they were less opposed to researchers working on other military
applications of AI, particularly logistics algorithms. A strong majority of
respondents think that AI safety research should be prioritized and that ML
institutions should conduct pre-publication review to assess potential harms.
Being closer to the technology itself, AI/ML researchers are well placed to
highlight new risks and develop technical solutions, so this novel attempt to
measure their attitudes has broad relevance. The findings should help to
improve how researchers, private sector executives, and policymakers think
about regulations, governance frameworks, guiding principles, and national and
international governance strategies for AI.
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It draws on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- The Narrow Depth and Breadth of Corporate Responsible AI Research [3.364518262921329]
We find that the majority of AI firms show limited or no engagement in this critical subfield of AI.
Leading AI firms exhibit significantly lower output in responsible AI research compared to their conventional AI research.
Our results highlight the urgent need for industry to publicly engage in responsible AI research.
arXiv Detail & Related papers (2024-05-20T17:26:43Z)
- Now, Later, and Lasting: Ten Priorities for AI Research, Policy, and Practice [63.20307830884542]
The next several decades may well be a turning point for humanity, comparable to the industrial revolution.
Launched a decade ago, the project is committed to a perpetual series of studies by multidisciplinary experts.
We offer ten recommendations for action that collectively address both the short- and long-term potential impacts of AI technologies.
arXiv Detail & Related papers (2024-04-06T22:18:31Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Investigating Responsible AI for Scientific Research: An Empirical Study [4.597781832707524]
The push for Responsible AI (RAI) in such institutions underscores the increasing emphasis on integrating ethical considerations within AI design and development.
This paper aims to assess the awareness and preparedness regarding the ethical risks inherent in AI design and development.
Our results have revealed certain knowledge gaps concerning ethical, responsible, and inclusive AI, with limitations in awareness of the available AI ethics frameworks.
arXiv Detail & Related papers (2023-12-15T06:40:27Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- An Ethical Framework for Guiding the Development of Affectively-Aware Artificial Intelligence [0.0]
We propose guidelines for evaluating the (moral and) ethical consequences of affectively-aware AI.
We propose a multi-stakeholder analysis framework that separates the ethical responsibilities of AI Developers vis-a-vis the entities that deploy such AI.
We end with recommendations for researchers, developers, operators, as well as regulators and law-makers.
arXiv Detail & Related papers (2021-07-29T03:57:53Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- The State of AI Ethics Report (October 2020) [30.265104923077185]
The State of AI Ethics captures the most relevant developments in the field of AI Ethics since July 2020.
This report aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the ever-changing developments in the field.
The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments.
arXiv Detail & Related papers (2020-11-05T12:36:16Z)
- U.S. Public Opinion on the Governance of Artificial Intelligence [0.0]
Existing studies find that the public's trust in institutions can play a major role in shaping the regulation of emerging technologies.
We examined Americans' perceptions of 13 AI governance challenges and their trust in governmental, corporate, and multistakeholder institutions to responsibly develop and manage AI.
arXiv Detail & Related papers (2019-12-30T07:38:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.