What Do People Think about Sentient AI?
- URL: http://arxiv.org/abs/2407.08867v2
- Date: Mon, 15 Jul 2024 09:39:44 GMT
- Title: What Do People Think about Sentient AI?
- Authors: Jacy Reese Anthis, Janet V. T. Pauketat, Ali Ladak, Aikaterina Manoli
- Abstract summary: We present the first nationally representative survey data on the topic of sentient AI.
Across one wave of data collection in 2021 and two in 2023, we found mind perception and moral concern for AI well-being were higher than predicted.
We argue that, whether or not AIs become sentient, the discussion itself may overhaul human-computer interaction.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With rapid advances in machine learning, many people in the field have been discussing the rise of digital minds and the possibility of artificial sentience. Future developments in AI capabilities and safety will depend on public opinion and human-AI interaction. To begin to fill this research gap, we present the first nationally representative survey data on the topic of sentient AI: initial results from the Artificial Intelligence, Morality, and Sentience (AIMS) survey, a preregistered and longitudinal study of U.S. public opinion that began in 2021. Across one wave of data collection in 2021 and two in 2023 (total N = 3,500), we found mind perception and moral concern for AI well-being in 2021 were higher than predicted and significantly increased in 2023: for example, 71% agree sentient AI deserve to be treated with respect, and 38% support legal rights. People have become more threatened by AI, and there is widespread opposition to new technologies: 63% support a ban on smarter-than-human AI, and 69% support a ban on sentient AI. Expected timelines are surprisingly short and shortening, with a median forecast of sentient AI in only five years and artificial general intelligence in only two years. We argue that, whether or not AIs become sentient, the discussion itself may overhaul human-computer interaction and shape the future trajectory of AI technologies, including existential risks and opportunities.
Related papers
- Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI versus human generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z) - Thousands of AI Authors on the Future of AI [1.0717301750064765]
Most respondents expressed substantial uncertainty about the long-term value of AI progress.
More than half suggested that "substantial" or "extreme" concern is warranted about six different AI-related scenarios.
There was disagreement about whether faster or slower AI progress would be better for the future of humanity.
arXiv Detail & Related papers (2024-01-05T14:53:09Z) - Artificial intelligence adoption in the physical sciences, natural sciences, life sciences, social sciences and the arts and humanities: A bibliometric analysis of research publications from 1960-2021 [73.06361680847708]
In 1960 14% of 333 research fields were related to AI (many in computer science), but this increased to over half of all research fields by 1972, over 80% by 1986 and over 98% in current times.
We conclude that the context of the current surge appears different, and that interdisciplinary AI application is likely to be sustained.
arXiv Detail & Related papers (2023-06-15T14:08:07Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Artificial Intelligence and Life in 2030: The One Hundred Year Study on Artificial Intelligence [74.2630823914258]
The report examines eight domains of typical urban settings on which AI is likely to have impact over the coming years.
It aims to provide the general public with a scientifically and technologically accurate portrayal of the current state of AI.
The charge for this report was given to the panel by the AI100 Standing Committee, chaired by Barbara Grosz of Harvard University.
arXiv Detail & Related papers (2022-10-31T18:35:36Z) - Forecasting AI Progress: Evidence from a Survey of Machine Learning Researchers [0.0]
We report the results from a large survey of AI and machine learning (ML) researchers on their beliefs about progress in AI.
In aggregate, AI/ML researchers surveyed placed a 50% likelihood of human-level machine intelligence being achieved by 2060.
Forecasts of several near-term AI milestones have shortened, suggesting more optimism about AI progress.
arXiv Detail & Related papers (2022-06-08T19:05:12Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Challenges of Artificial Intelligence -- From Machine Learning and Computer Vision to Emotional Intelligence [0.0]
We believe that AI is a helper, not a ruler of humans.
Computer vision has been central to the development of AI.
Emotions are central to human intelligence, but they have seen little use in AI.
arXiv Detail & Related papers (2022-01-05T06:00:22Z) - Making AI 'Smart': Bridging AI and Cognitive Science [0.0]
With the integration of cognitive science, the 'artificial' characteristic of Artificial Intelligence might soon be replaced with 'smart'.
This will help develop more powerful AI systems and simultaneously give us a better understanding of how the human brain works.
We argue that the possibility of AI taking over human civilization is low as developing such an advanced system requires a better understanding of the human brain first.
arXiv Detail & Related papers (2021-12-31T09:30:44Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - A clarification of misconceptions, myths and desired status of artificial intelligence [0.0]
We present a perspective on the desired and current status of AI in relation to machine learning and statistics.
Our discussion is intended to uncurtain the veil of vagueness surrounding AI to see its true countenance.
arXiv Detail & Related papers (2020-08-03T17:22:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.