When Will AI Transform Society? Swedish Public Predictions on AI Development Timelines
- URL: http://arxiv.org/abs/2504.04180v1
- Date: Sat, 05 Apr 2025 13:57:04 GMT
- Title: When Will AI Transform Society? Swedish Public Predictions on AI Development Timelines
- Authors: Filip Fors Connolly, Mikael Hjerm, Sara Kalucza
- Abstract summary: This study investigates public expectations regarding the likelihood and timing of major artificial intelligence (AI) developments among Swedes. We examined expectations across six key scenarios: medical breakthroughs, mass unemployment, democratic deterioration, living standard improvements, artificial general intelligence (AGI), and uncontrollable superintelligent AI. Findings reveal strong consensus on AI-driven medical breakthroughs (82.6%), while expectations for other major developments are significantly lower.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study investigates public expectations regarding the likelihood and timing of major artificial intelligence (AI) developments among Swedes. Through a mixed-mode survey (web/paper) of 1,026 respondents, we examined expectations across six key scenarios: medical breakthroughs, mass unemployment, democratic deterioration, living standard improvements, artificial general intelligence (AGI), and uncontrollable superintelligent AI. Findings reveal strong consensus on AI-driven medical breakthroughs (82.6%), while expectations for other major developments are significantly lower, ranging from 40.9% for mass unemployment down to 28.4% for AGI. Timeline expectations varied significantly, with major medical advances anticipated within 6-10 years, while more transformative developments like AGI were projected beyond 20 years. Latent class analysis identified three distinct groups: optimists (46.7%), ambivalents (42.2%), and skeptics (11.2%). The optimist group showed higher levels of self-rated AI knowledge and education, while gender differences were also observed across classes. The study addresses a critical gap in understanding temporal expectations of AI development among the general public, offering insights for policymakers and stakeholders.
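The grouping of respondents into optimists, ambivalents, and skeptics comes from a latent class analysis of the survey responses. As a rough, hypothetical illustration of that technique (not the authors' actual analysis), the sketch below fits a basic latent class model, a mixture of independent Bernoullis estimated with EM, to simulated binary "expects this scenario" indicators; the simulated data, the choice of three classes, and all variable names are assumptions made for illustration only.

```python
# Illustrative sketch only: a minimal latent class analysis (mixture of
# independent Bernoullis) fitted with EM. The data below are simulated
# stand-ins for 1,026 respondents x 6 scenarios; nothing here reproduces
# the paper's actual analysis.
import numpy as np

rng = np.random.default_rng(0)
X = rng.binomial(1, 0.5, size=(1026, 6)).astype(float)  # 1 = expects scenario

def fit_latent_classes(X, n_classes=3, n_iter=200, tol=1e-6, seed=0):
    """EM for a mixture of independent Bernoullis (a basic latent class model)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    weights = np.full(n_classes, 1.0 / n_classes)          # class shares
    probs = rng.uniform(0.25, 0.75, size=(n_classes, d))   # item-response probabilities
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E-step: posterior class membership for each respondent.
        log_lik = (X @ np.log(probs).T) + ((1 - X) @ np.log(1 - probs).T)
        log_lik += np.log(weights)
        log_norm = np.logaddexp.reduce(log_lik, axis=1, keepdims=True)
        resp = np.exp(log_lik - log_norm)
        # M-step: update class shares and item-response probabilities.
        weights = resp.mean(axis=0)
        probs = (resp.T @ X) / resp.sum(axis=0)[:, None]
        probs = np.clip(probs, 1e-6, 1 - 1e-6)
        ll = log_norm.sum()
        if ll - prev_ll < tol:  # stop once the log-likelihood plateaus
            break
        prev_ll = ll
    return weights, probs, resp

weights, probs, resp = fit_latent_classes(X, n_classes=3)
print("estimated class shares:", np.round(weights, 3))
print("per-class scenario probabilities:\n", np.round(probs, 2))
```

In practice, dedicated tools such as R's poLCA package are typically used for this kind of analysis, and the number of classes is chosen by comparing model fit (e.g., BIC) rather than being fixed in advance as it is in this sketch.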
Related papers
- Neuro-Symbolic AI in 2024: A Systematic Review [0.29260385019352086]
The review followed the PRISMA methodology, utilizing databases such as IEEE Xplore, Google Scholar, arXiv, ACM, and SpringerLink.
From an initial pool of 1,428 papers, 167 met the inclusion criteria and were analyzed in detail.
The majority of research efforts are concentrated in the areas of learning and inference, logic and reasoning, and knowledge representation.
arXiv Detail & Related papers (2025-01-09T18:48:35Z) - Perceptions of Sentient AI and Other Digital Minds: Evidence from the AI, Morality, and Sentience (AIMS) Survey [0.0]
One in five U.S. adults believed some AI systems are currently sentient, and 38% supported legal rights for sentient AI. The median 2023 forecast was that sentient AI would arrive in just five years. The development of safe and beneficial AI requires not just technical study but understanding the complex ways in which humans perceive and coexist with digital minds.
arXiv Detail & Related papers (2024-07-11T21:04:39Z) - AIGIQA-20K: A Large Database for AI-Generated Image Quality Assessment [54.93996119324928]
We create the largest AIGI subjective quality database to date, AIGIQA-20K, with 20,000 AIGIs and 420,000 subjective ratings.
We conduct benchmark experiments on this database to assess the correspondence between 16 mainstream AIGI quality models and human perception.
arXiv Detail & Related papers (2024-04-04T12:12:24Z) - Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z) - Thousands of AI Authors on the Future of AI [1.0717301750064765]
Most respondents expressed substantial uncertainty about the long-term value of AI progress.
More than half suggested that "substantial" or "extreme" concern is warranted about six different AI-related scenarios.
There was disagreement about whether faster or slower AI progress would be better for the future of humanity.
arXiv Detail & Related papers (2024-01-05T14:53:09Z) - Approaches to Generative Artificial Intelligence, A Social Justice
Perspective [0.0]
The rise of AI-driven writing assistance will make AI-assisted plagiarism, which Chan dubs 'AI-giarism', more accessible and less detectable.
This paper aims to explore generative AI from a social justice perspective, examining the training of these models, the inherent biases, and the potential injustices in detecting AI-generated writing.
arXiv Detail & Related papers (2023-08-17T06:30:46Z) - Artificial intelligence adoption in the physical sciences, natural
sciences, life sciences, social sciences and the arts and humanities: A
bibliometric analysis of research publications from 1960-2021 [73.06361680847708]
In 1960 14% of 333 research fields were related to AI (many in computer science), but this increased to over half of all research fields by 1972, over 80% by 1986 and over 98% in current times.
We conclude that the context of the current surge appears different, and that interdisciplinary AI application is likely to be sustained.
arXiv Detail & Related papers (2023-06-15T14:08:07Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of it can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Artificial Intelligence and Life in 2030: The One Hundred Year Study on
Artificial Intelligence [74.2630823914258]
The report examines eight domains of typical urban settings on which AI is likely to have an impact over the coming years.
It aims to provide the general public with a scientifically and technologically accurate portrayal of the current state of AI.
The charge for this report was given to the panel by the AI100 Standing Committee, chaired by Barbara Grosz of Harvard University.
arXiv Detail & Related papers (2022-10-31T18:35:36Z) - Forecasting AI Progress: Evidence from a Survey of Machine Learning
Researchers [0.0]
We report the results from a large survey of AI and machine learning (ML) researchers on their beliefs about progress in AI.
In aggregate, AI/ML researchers surveyed placed a 50% likelihood of human-level machine intelligence being achieved by 2060.
Forecasts for several near-term AI milestones have moved earlier, suggesting more optimism about AI progress.
arXiv Detail & Related papers (2022-06-08T19:05:12Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.