Reduced AI Acceptance After the Generative AI Boom: Evidence From a Two-Wave Survey Study
- URL: http://arxiv.org/abs/2510.23578v1
- Date: Mon, 27 Oct 2025 17:47:58 GMT
- Title: Reduced AI Acceptance After the Generative AI Boom: Evidence From a Two-Wave Survey Study
- Authors: Joachim Baumann, Aleksandra Urman, Ulrich Leicht-Deobald, Zachary J. Roman, Anikó Hannák, Markus Christen
- Abstract summary: We examine shifts in public attitudes toward AI before and after the launch of ChatGPT. The proportion of respondents finding AI "not acceptable at all" increased from 23% to 30%. These shifts have amplified existing social inequalities, widening educational, linguistic, and gender gaps post-boom.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid adoption of generative artificial intelligence (GenAI) technologies has led many organizations to integrate AI into their products and services, often without considering user preferences. Yet, public attitudes toward AI use, especially in impactful decision-making scenarios, are underexplored. Using a large-scale two-wave survey study (n_wave1=1514, n_wave2=1488) representative of the Swiss population, we examine shifts in public attitudes toward AI before and after the launch of ChatGPT. We find that the GenAI boom is significantly associated with reduced public acceptance of AI (see Figure 1) and increased demand for human oversight in various decision-making contexts. The proportion of respondents finding AI "not acceptable at all" increased from 23% to 30%, while support for human-only decision-making rose from 18% to 26%. These shifts have amplified existing social inequalities in terms of widened educational, linguistic, and gender gaps post-boom. Our findings challenge industry assumptions about public readiness for AI deployment and highlight the critical importance of aligning technological development with evolving public preferences.
Related papers
- Subjective Experience in AI Systems: What Do AI Researchers and the Public Believe? [0.42131793931438133]
We surveyed 582 AI researchers and 838 nationally representative US participants about their views on the potential development of AI systems with subjective experience. When asked to estimate the chances that such systems will exist on specific dates, the median responses were 1% (AI researchers) and 5% (public) by 2024. The median member of the public thought there was a higher chance that AI systems with subjective experience would never exist (25%) than the median AI researcher did (10%).
arXiv Detail & Related papers (2025-06-13T16:53:28Z)
- Longitudinal Study on Social and Emotional Use of AI Conversational Agent [12.951074799361994]
We studied the impact of four commercially available AI tools on users' perceived attachment towards AI and AI empathy. Our findings underscore the importance of developing consumer-facing AI tools that support emotional well-being responsibly.
arXiv Detail & Related papers (2025-04-19T00:03:48Z)
- What do people expect from Artificial Intelligence? Public opinion on alignment in AI moderation from Germany and the United States [0.0]
We present evidence from two surveys of public preferences for key functional features of AI-enabled systems in Germany and the United States. We examine support for four types of alignment in AI moderation: accuracy and reliability, safety, bias mitigation, and the promotion of aspirational imaginaries. In both countries, accuracy and safety enjoy the strongest support, while more normatively charged goals -- like fairness and aspirational imaginaries -- receive more cautious backing.
arXiv Detail & Related papers (2025-04-16T20:27:03Z)
- When Will AI Transform Society? Swedish Public Predictions on AI Development Timelines [0.0]
This study investigates public expectations regarding the likelihood and timing of major artificial intelligence (AI) developments among Swedes. We examined expectations across six key scenarios: medical breakthroughs, mass unemployment, democratic deterioration, living standard improvements, artificial general intelligence (AGI), and uncontrollable superintelligent AI. Findings reveal strong consensus on AI-driven medical breakthroughs (82.6%), while expectations for other major developments are significantly lower.
arXiv Detail & Related papers (2025-04-05T13:57:04Z)
- Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI [67.58673784790375]
We argue that the 'bigger is better' AI paradigm is not only fragile scientifically, but comes with undesirable consequences. First, it is not sustainable, as, despite efficiency improvements, its compute demands increase faster than model performance. Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate.
arXiv Detail & Related papers (2024-09-21T14:43:54Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- AI for social science and social science of AI: A Survey [47.5235291525383]
Recent advancements in artificial intelligence have sparked a rethinking of artificial general intelligence possibilities.
The increasing human-like capabilities of AI are also attracting attention in social science research.
arXiv Detail & Related papers (2024-01-22T10:57:09Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- FATE in AI: Towards Algorithmic Inclusivity and Accessibility [0.0]
To prevent algorithmic disparities, fairness, accountability, transparency, and ethics (FATE) in AI are being implemented.
This study examines FATE-related desiderata, particularly transparency and ethics, in areas of the global South that are underserved by AI.
To promote inclusivity, a community-led strategy is proposed to collect and curate representative data for responsible AI design.
arXiv Detail & Related papers (2023-01-03T15:08:10Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.