Forecasting AI Progress: Evidence from a Survey of Machine Learning
Researchers
- URL: http://arxiv.org/abs/2206.04132v1
- Date: Wed, 8 Jun 2022 19:05:12 GMT
- Title: Forecasting AI Progress: Evidence from a Survey of Machine Learning
Researchers
- Authors: Baobao Zhang, Noemi Dreksler, Markus Anderljung, Lauren Kahn, Charlie
Giattino, Allan Dafoe, Michael C. Horowitz
- Abstract summary: We report the results from a large survey of AI and machine learning (ML) researchers on their beliefs about progress in AI.
In aggregate, AI/ML researchers surveyed placed a 50% likelihood of human-level machine intelligence being achieved by 2060.
Forecasts of several near-term AI milestones have moved earlier in time, suggesting more optimism about AI progress.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Advances in artificial intelligence (AI) are shaping modern life, from
transportation, health care, science, finance, to national defense. Forecasts
of AI development could help improve policy- and decision-making. We report the
results from a large survey of AI and machine learning (ML) researchers on
their beliefs about progress in AI. The survey, fielded in late 2019, elicited
forecasts for near-term AI development milestones and high- or human-level
machine intelligence, defined as when machines are able to accomplish every or
almost every task humans are able to do currently. As part of this study, we
re-contacted respondents from a highly-cited study by Grace et al. (2018), in
which AI/ML researchers gave forecasts about high-level machine intelligence
and near-term milestones in AI development. Results from our 2019 survey show
that, in aggregate, AI/ML researchers surveyed placed a 50% likelihood of
human-level machine intelligence being achieved by 2060. The results show
researchers newly contacted in 2019 expressed similar beliefs about the
progress of advanced AI as respondents in the Grace et al. (2018) survey. For
the recontacted participants from the Grace et al. (2018) study, the aggregate
forecast for a 50% likelihood of high-level machine intelligence shifted from
2062 to 2076, although this change is not statistically significant, likely due
to the small size of our panel sample. Forecasts of several near-term AI
milestones have moved earlier in time, suggesting more optimism about AI
progress.
Finally, AI/ML researchers also exhibited significant optimism about how
human-level machine intelligence will impact society.
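
The "50% likelihood by 2060" figure is an aggregate over many individual forecasts. The abstract does not spell out the aggregation procedure, so the sketch below is only a minimal illustration, assuming (as in Grace et al.-style fixed-years framings) that each respondent reports a cumulative probability of human-level machine intelligence arriving by a few fixed horizon years, that responses are pooled by averaging those probabilities, and that the aggregate "50% year" is where the interpolated mean crosses 0.5. The horizon years and response values are fabricated placeholders, not survey data.

```python
# Minimal illustrative sketch (not the authors' method) of turning
# per-respondent "P(HLMI by year X)" judgments into an aggregate year
# at which the pooled probability first reaches 50%.
import numpy as np

# Assumed fixed horizon years and fabricated example responses:
# each row is one respondent's cumulative probability at each horizon.
years = np.array([2029, 2049, 2119])
responses = np.array([
    [0.10, 0.40, 0.90],
    [0.05, 0.55, 0.95],
    [0.20, 0.50, 0.80],
])

# Pool respondents by averaging the cumulative probabilities.
mean_cdf = responses.mean(axis=0)

# Linearly interpolate onto a yearly grid and report the first year
# at which the pooled cumulative probability reaches 0.5.
grid = np.arange(years.min(), years.max() + 1)
pooled = np.interp(grid, years, mean_cdf)
year_50 = grid[np.searchsorted(pooled, 0.5)]
print(f"Illustrative aggregate 50% year: {year_50}")
```

Real elicitation designs also use fixed-probabilities framings (asking for the year at which a given probability is reached) and more careful distribution fitting; the point here is only to make concrete what an "aggregate forecast for a 50% likelihood" refers to.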
Related papers
- What Do People Think about Sentient AI? [0.0]
We present the first nationally representative survey data on the topic of sentient AI.
Across one wave of data collection in 2021 and two in 2023, we found that mind perception and moral concern for AI well-being were higher than predicted.
We argue that, whether or not AIs become sentient, the discussion itself may overhaul human-computer interaction.
arXiv Detail & Related papers (2024-07-11T21:04:39Z) - AI for social science and social science of AI: A Survey [47.5235291525383]
Recent advancements in artificial intelligence have prompted a rethinking of the possibility of artificial general intelligence.
The increasing human-like capabilities of AI are also attracting attention in social science research.
arXiv Detail & Related papers (2024-01-22T10:57:09Z) - Thousands of AI Authors on the Future of AI [1.0717301750064765]
Most respondents expressed substantial uncertainty about the long-term value of AI progress.
More than half suggested that "substantial" or "extreme" concern is warranted about six different AI-related scenarios.
There was disagreement about whether faster or slower AI progress would be better for the future of humanity.
arXiv Detail & Related papers (2024-01-05T14:53:09Z) - Artificial intelligence adoption in the physical sciences, natural
sciences, life sciences, social sciences and the arts and humanities: A
bibliometric analysis of research publications from 1960-2021 [73.06361680847708]
In 1960 14% of 333 research fields were related to AI (many in computer science), but this increased to over half of all research fields by 1972, over 80% by 1986 and over 98% in current times.
We conclude that the context of the current surge appears different, and that interdisciplinary AI application is likely to be sustained.
arXiv Detail & Related papers (2023-06-15T14:08:07Z) - Artificial Intelligence and Life in 2030: The One Hundred Year Study on
Artificial Intelligence [74.2630823914258]
The report examines eight domains of typical urban settings on which AI is likely to have impact over the coming years.
It aims to provide the general public with a scientifically and technologically accurate portrayal of the current state of AI.
The charge for this report was given to the panel by the AI100 Standing Committee, chaired by Barbara Grosz of Harvard University.
arXiv Detail & Related papers (2022-10-31T18:35:36Z) - Gathering Strength, Gathering Storms: The One Hundred Year Study on
Artificial Intelligence (AI100) 2021 Study Panel Report [40.38252510399319]
"Gathering Strength, Gathering Storms" is the second report in the "One Hundred Year Study on Artificial Intelligence" project.
It was written by a panel of 17 study authors, each of whom is deeply rooted in AI research.
The report concludes that AI has made a major leap from the lab to people's lives in recent years.
arXiv Detail & Related papers (2022-10-27T21:00:36Z) - Predicting the Future of AI with AI: High-quality link prediction in an
exponentially growing knowledge network [15.626884746513712]
We use AI techniques to predict the future research directions of AI itself.
For that, we use more than 100,000 research papers and build up a knowledge network with more than 64,000 concept nodes.
The most powerful methods use a carefully curated set of network features, rather than an end-to-end AI approach.
arXiv Detail & Related papers (2022-09-23T14:04:37Z) - On the Influence of Explainable AI on Automation Bias [0.0]
We aim to shed light on the potential of explainable AI (XAI) to influence automation bias.
We conduct an online experiment on hotel review classification and discuss initial results.
arXiv Detail & Related papers (2022-04-19T12:54:23Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Empowering Things with Intelligence: A Survey of the Progress,
Challenges, and Opportunities in Artificial Intelligence of Things [98.10037444792444]
We show how AI can empower the IoT to make it faster, smarter, greener, and safer.
First, we present progress in AI research for IoT from four perspectives: perceiving, learning, reasoning, and behaving.
Finally, we summarize some promising applications of AIoT that are likely to profoundly reshape our world.
arXiv Detail & Related papers (2020-11-17T13:14:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.