Perceptions of Sentient AI and Other Digital Minds: Evidence from the AI, Morality, and Sentience (AIMS) Survey
- URL: http://arxiv.org/abs/2407.08867v3
- Date: Mon, 10 Mar 2025 17:10:28 GMT
- Title: Perceptions of Sentient AI and Other Digital Minds: Evidence from the AI, Morality, and Sentience (AIMS) Survey
- Authors: Jacy Reese Anthis, Janet V. T. Pauketat, Ali Ladak, Aikaterina Manoli
- Abstract summary: One in five U.S. adults believed some AI systems are currently sentient, and 38% supported legal rights for sentient AI. The median 2023 forecast was that sentient AI would arrive in just five years. The development of safe and beneficial AI requires not just technical study but understanding the complex ways in which humans perceive and coexist with digital minds.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Humans now interact with a variety of digital minds, AI systems that appear to have mental faculties such as reasoning, emotion, and agency, and public figures are discussing the possibility of sentient AI. We present initial results from 2021 and 2023 for the nationally representative AI, Morality, and Sentience (AIMS) survey (N = 3,500). Mind perception and moral concern for AI welfare were surprisingly high and significantly increased: in 2023, one in five U.S. adults believed some AI systems are currently sentient, and 38% supported legal rights for sentient AI. People became more opposed to building digital minds: in 2023, 63% supported banning smarter-than-human AI, and 69% supported banning sentient AI. The median 2023 forecast was that sentient AI would arrive in just five years. The development of safe and beneficial AI requires not just technical study but understanding the complex ways in which humans perceive and coexist with digital minds.
Related papers
- Almost AI, Almost Human: The Challenge of Detecting AI-Polished Writing [55.2480439325792]
Misclassification can lead to false plagiarism accusations and misleading claims about AI prevalence in online content.
We systematically evaluate eleven state-of-the-art AI-text detectors using our AI-Polished-Text Evaluation dataset.
Our findings reveal that detectors frequently misclassify even minimally polished text as AI-generated, struggle to differentiate between degrees of AI involvement, and exhibit biases against older and smaller models.
arXiv Detail & Related papers (2025-02-21T18:45:37Z)
- The Societal Response to Potentially Sentient AI [0.0]
Currently, public skepticism about AI sentience remains high.
As AI systems advance and become increasingly skilled at human-like interactions, public attitudes may shift.
A key question is whether public beliefs about AI sentience will diverge from expert opinions.
arXiv Detail & Related papers (2025-02-01T10:22:04Z)
- The AI Double Standard: Humans Judge All AIs for the Actions of One [0.0]
As AI proliferates, perceptions may become entangled via moral spillover, with attitudes towards one AI carrying over to attitudes towards other AIs.
We tested how the seemingly harmful and immoral actions of an AI or human agent spill over to attitudes towards other AIs or humans in two preregistered experiments.
arXiv Detail & Related papers (2024-12-08T19:26:52Z)
- Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI-generated versus human-generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z)
- On the consistent reasoning paradox of intelligence and optimal trust in AI: The power of 'I don't know' [79.69412622010249]
Consistent reasoning, which lies at the core of human intelligence, is the ability to handle tasks that are equivalent, i.e., the same task posed in different ways.
The consistent reasoning paradox (CRP) asserts that consistent reasoning implies fallibility -- in particular, human-like intelligence in AI necessarily comes with human-like fallibility.
arXiv Detail & Related papers (2024-08-05T10:06:53Z)
- Navigating AI Fallibility: Examining People's Reactions and Perceptions of AI after Encountering Personality Misrepresentations [7.256711790264119]
Hyper-personalized AI systems profile people's characteristics to provide personalized recommendations.
These systems are not immune to errors when making inferences about people's most personal traits.
We present two studies examining how people react to and perceive AI after encountering personality misrepresentations.
arXiv Detail & Related papers (2024-05-25T21:27:15Z)
- Thousands of AI Authors on the Future of AI [1.0717301750064765]
Most respondents expressed substantial uncertainty about the long-term value of AI progress.
More than half suggested that "substantial" or "extreme" concern is warranted about six different AI-related scenarios.
There was disagreement about whether faster or slower AI progress would be better for the future of humanity.
arXiv Detail & Related papers (2024-01-05T14:53:09Z)
- Artificial intelligence adoption in the physical sciences, natural sciences, life sciences, social sciences and the arts and humanities: A bibliometric analysis of research publications from 1960-2021 [73.06361680847708]
In 1960 14% of 333 research fields were related to AI (many in computer science), but this increased to over half of all research fields by 1972, over 80% by 1986 and over 98% in current times.
We conclude that the context of the current surge appears different, and that interdisciplinary AI application is likely to be sustained.
arXiv Detail & Related papers (2023-06-15T14:08:07Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Artificial Intelligence and Life in 2030: The One Hundred Year Study on Artificial Intelligence [74.2630823914258]
The report examines eight domains of typical urban settings on which AI is likely to have impact over the coming years.
It aims to provide the general public with a scientifically and technologically accurate portrayal of the current state of AI.
The charge for this report was given to the panel by the AI100 Standing Committee, chaired by Barbara Grosz of Harvard University.
arXiv Detail & Related papers (2022-10-31T18:35:36Z)
- Forecasting AI Progress: Evidence from a Survey of Machine Learning Researchers [0.0]
We report the results from a large survey of AI and machine learning (ML) researchers on their beliefs about progress in AI.
In aggregate, AI/ML researchers surveyed placed a 50% likelihood of human-level machine intelligence being achieved by 2060.
Forecasts for several near-term AI milestones have shifted earlier, suggesting more optimism about AI progress.
arXiv Detail & Related papers (2022-06-08T19:05:12Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow AI system predictions to be examined and tested, establishing a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Challenges of Artificial Intelligence -- From Machine Learning and Computer Vision to Emotional Intelligence [0.0]
We believe that AI is a helper, not a ruler of humans.
Computer vision has been central to the development of AI.
Emotions are central to human intelligence, but they have seen little use in AI.
arXiv Detail & Related papers (2022-01-05T06:00:22Z)
- Making AI 'Smart': Bridging AI and Cognitive Science [0.0]
With the integration of cognitive science, the 'artificial' characteristic of Artificial Intelligence might soon be replaced with 'smart'.
This will help develop more powerful AI systems while simultaneously giving us a better understanding of how the human brain works.
We argue that the possibility of AI taking over human civilization is low, as developing such an advanced system requires a better understanding of the human brain first.
arXiv Detail & Related papers (2021-12-31T09:30:44Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- A clarification of misconceptions, myths and desired status of artificial intelligence [0.0]
We present a perspective on the desired and current status of AI in relation to machine learning and statistics.
Our discussion is intended to lift the veil of vagueness surrounding AI and reveal its true countenance.
arXiv Detail & Related papers (2020-08-03T17:22:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.