Trust, Experience, and Innovation: Key Factors Shaping American Attitudes About AI
- URL: http://arxiv.org/abs/2503.05815v1
- Date: Tue, 04 Mar 2025 16:08:20 GMT
- Title: Trust, Experience, and Innovation: Key Factors Shaping American Attitudes About AI
- Authors: Risa Palm, Justin Kingsland, Toby Bolsen
- Abstract summary: The paper explores the degree of concern regarding specific potential outcomes of the new advances in AI technology. Key variables associated with the direction and intensity of concern include prior experience using a large language model such as ChatGPT.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A large survey of American adults explored the complex landscape of attitudes towards artificial intelligence (AI). It explored the degree of concern regarding specific potential outcomes of the new advances in AI technology and correlates of these concerns. Key variables associated with the direction and intensity of concern include prior experience using a large language model such as ChatGPT, general trust in science, adherence to the precautionary principle versus support for unrestricted innovation, and demographic factors such as gender. By analyzing these relationships, the paper provides valuable insights into the American public's response to AI that are particularly important in the development of policy to regulate or further encourage its development.
Related papers
- Information Retrieval in the Age of Generative AI: The RGB Model [77.96475639967431]
This paper presents a novel quantitative approach to shed light on the complex information dynamics arising from the growing use of generative AI tools.
We propose a model to characterize the generation, indexing, and dissemination of information in response to new topics.
Our findings suggest that the rapid pace of generative AI adoption, combined with increasing user reliance, can outpace human verification, escalating the risk of inaccurate information proliferation.
arXiv Detail & Related papers (2025-04-29T10:21:40Z) - Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems.
Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
arXiv Detail & Related papers (2025-03-12T21:39:38Z) - AI Governance in the Context of the EU AI Act: A Bibliometric and Literature Review Approach [0.0]
This study analyzed the research trends in AI governance within the framework of the EU AI Act. Our findings reveal that research on AI governance, particularly concerning AI systems regulated by the EU AI Act, remains relatively limited compared to the broader AI research landscape.
arXiv Detail & Related papers (2025-01-08T11:01:11Z) - Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z) - Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z) - Exploring Public Opinion on Responsible AI Through The Lens of Cultural Consensus Theory [0.1813006808606333]
We applied Cultural Consensus Theory to a nationally representative survey dataset on various aspects of AI.
Our results offer valuable insights by identifying shared and contrasting views on responsible AI.
arXiv Detail & Related papers (2024-01-06T20:57:35Z) - From Google Gemini to OpenAI Q* (Q-Star): A Survey of Reshaping the Generative Artificial Intelligence (AI) Research Landscape [5.852005817069381]
The study critically examined the current state and future trajectory of generative Artificial Intelligence (AI)
It explored how innovations like Google's Gemini and the anticipated OpenAI Q* project are reshaping research priorities and applications across various domains.
The study highlighted the importance of incorporating ethical and human-centric methods in AI development, ensuring alignment with societal norms and welfare.
arXiv Detail & Related papers (2023-12-18T01:11:39Z) - Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z) - Predictable Artificial Intelligence [77.1127726638209]
This paper introduces the ideas and challenges of Predictable AI. It explores the ways in which we can anticipate key validity indicators of present and future AI ecosystems. We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems.
arXiv Detail & Related papers (2023-10-09T21:36:21Z) - Artificial Intelligence across Europe: A Study on Awareness, Attitude and Trust [39.35990066478082]
The aim of the study is to gain a better understanding of people's views and perceptions within the European context.
We design and validate a new questionnaire (PAICE) structured around three dimensions: people's awareness, attitude, and trust.
We highlight implicit contradictions and identify trends that may interfere with the creation of an ecosystem of trust.
arXiv Detail & Related papers (2023-08-19T11:00:32Z) - How Different Groups Prioritize Ethical Values for Responsible AI [75.40051547428592]
Private companies, public sector organizations, and academic groups have outlined ethical values they consider important for responsible AI technologies.
While their recommendations converge on a set of central values, little is known about the values a more representative public would find important for the AI technologies they interact with and might be affected by.
We conducted a survey examining how individuals perceive and prioritize responsible AI values across three groups.
arXiv Detail & Related papers (2022-05-16T14:39:37Z) - Exciting, Useful, Worrying, Futuristic: Public Perception of Artificial Intelligence in 8 Countries [2.6202699243422023]
We present results of an in-depth survey of public opinion of artificial intelligence conducted with 10,005 respondents spanning eight countries and six continents.
We report widespread perception that AI will have significant impact on society, accompanied by strong support for the responsible development and use of AI.
arXiv Detail & Related papers (2019-12-27T10:27:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.