The Hall of AI Fears and Hopes: Comparing the Views of AI Influencers and those of Members of the U.S. Public Through an Interactive Platform
- URL: http://arxiv.org/abs/2504.06016v1
- Date: Tue, 08 Apr 2025 13:21:31 GMT
- Title: The Hall of AI Fears and Hopes: Comparing the Views of AI Influencers and those of Members of the U.S. Public Through an Interactive Platform
- Authors: Gustavo Moreira, Edyta Paulina Bogucka, Marios Constantinides, Daniele Quercia
- Abstract summary: The public fears AI getting out of control, while influencers emphasize regulation. The views of AI influencers from underrepresented groups such as women and people of color often differ from the views of underrepresented groups in the public.
- Score: 3.253198855869374
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: AI development is shaped by academics and industry leaders - let us call them "influencers" - but it is unclear how their views align with those of the public. To address this gap, we developed an interactive platform that served as a data collection tool for exploring public views on AI, including their fears, hopes, and overall sense of hopefulness. We made the platform available to 330 participants representative of the U.S. population in terms of age, sex, ethnicity, and political leaning, and compared their views with those of 100 AI influencers identified by Time magazine. The public fears AI getting out of control, while influencers emphasize regulation, seemingly to deflect attention from their alleged focus on monetizing AI's potential. Interestingly, the views of AI influencers from underrepresented groups such as women and people of color often differ from the views of underrepresented groups in the public.
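As a minimal illustration of the kind of group comparison the abstract describes (a sketch, not the authors' analysis: the counts, the fear category, and the SciPy-based test are assumptions for the example), one could tabulate how many respondents in each sample report a given fear and test whether the proportions differ:

```python
# Hypothetical sketch: compare how often a given fear (e.g., "AI getting out
# of control") is named by the public sample vs. the influencer sample.
# All counts below are made up for illustration.
from scipy.stats import chi2_contingency

public_yes, public_n = 142, 330          # hypothetical: public respondents naming the fear
influencer_yes, influencer_n = 18, 100   # hypothetical: influencers naming the fear

# 2x2 contingency table: rows = group, columns = named the fear / did not
table = [
    [public_yes, public_n - public_yes],
    [influencer_yes, influencer_n - influencer_yes],
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"public: {public_yes / public_n:.1%}, "
      f"influencers: {influencer_yes / influencer_n:.1%}, p = {p_value:.4f}")
```

A chi-square test of independence is one standard choice for such count data; the paper may well use a different procedure, and subgroup comparisons (e.g., by gender or ethnicity) would repeat the same tabulation within each subgroup.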
Related papers
- What do people expect from Artificial Intelligence? Public opinion on alignment in AI moderation from Germany and the United States [0.0]
We present evidence from two surveys of public preferences for key functional features of AI-enabled systems in Germany and the United States.
We examine support for four types of alignment in AI moderation: accuracy and reliability, safety, bias mitigation, and the promotion of aspirational imaginaries.
In both countries, accuracy and safety enjoy the strongest support, while more normatively charged goals -- like fairness and aspirational imaginaries -- receive more cautious backing.
arXiv Detail & Related papers (2025-04-16T20:27:03Z)
- Artificial Intelligence in Deliberation: The AI Penalty and the Emergence of a New Deliberative Divide [0.0]
Digital deliberation has expanded democratic participation, yet challenges remain.
Recent advances in artificial intelligence (AI) offer potential solutions, but public perceptions of AI's role in deliberation remain underexplored.
If AI is integrated into deliberation, public trust, acceptance, and willingness to participate may be affected.
arXiv Detail & Related papers (2025-03-10T16:33:15Z)
- Biased AI can Influence Political Decision-Making [64.9461133083473]
This paper presents two experiments investigating the effects of partisan bias in AI language models on political decision-making.
We found that participants exposed to politically biased models were significantly more likely to adopt opinions and make decisions aligning with the AI's bias.
arXiv Detail & Related papers (2024-10-08T22:56:00Z)
- From Experts to the Public: Governing Multimodal Language Models in Politically Sensitive Video Analysis [48.14390493099495]
This paper examines the governance of multimodal large language models (MM-LLMs) through individual and collective deliberation.
We conducted a two-step study: first, interviews with 10 journalists established a baseline understanding of expert video interpretation; second, 114 individuals from the general public engaged in deliberation using Inclusive.AI.
arXiv Detail & Related papers (2024-09-15T03:17:38Z)
- Artificial Intelligence in Election Campaigns: Perceptions, Penalties, and Implications [44.99833362998488]
We identify three categories of AI use -- campaign operations, voter outreach, and deception.
While people generally dislike AI in campaigns, they are especially critical of deceptive uses, which they perceive as norm violations.
Deceptive AI use increases public support for stricter AI regulation, including calls for an outright ban on AI development.
arXiv Detail & Related papers (2024-08-08T12:58:20Z)
- Public Perception of AI: Sentiment and Opportunity [0.0]
We present results on public perception of AI from a survey conducted with 10,000 respondents across ten countries on four continents.
Results show that currently an equal percentage of respondents who believe AI will change the world as we know it also believe AI needs to be heavily regulated.
arXiv Detail & Related papers (2024-07-22T19:11:28Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Public Perception of Generative AI on Twitter: An Empirical Study Based on Occupation and Usage [7.18819534653348]
This paper investigates users' perceptions of generative AI using 3M posts on Twitter from January 2019 to March 2023.
We find that people across various occupations, not just IT-related ones, show a strong interest in generative AI.
After the release of ChatGPT, people's interest in AI in general has increased dramatically.
arXiv Detail & Related papers (2023-05-16T15:30:12Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- FATE in AI: Towards Algorithmic Inclusivity and Accessibility [0.0]
To prevent algorithmic disparities, fairness, accountability, transparency, and ethics (FATE) principles are being implemented in AI.
This study examines FATE-related desiderata, particularly transparency and ethics, in areas of the global South that are underserved by AI.
To promote inclusivity, a community-led strategy is proposed to collect and curate representative data for responsible AI design.
arXiv Detail & Related papers (2023-01-03T15:08:10Z)
- How Different Groups Prioritize Ethical Values for Responsible AI [75.40051547428592]
Private companies, public sector organizations, and academic groups have outlined ethical values they consider important for responsible AI technologies.
While their recommendations converge on a set of central values, little is known about the values a more representative public would find important for the AI technologies they interact with and might be affected by.
We conducted a survey examining how individuals perceive and prioritize responsible AI values across three groups.
arXiv Detail & Related papers (2022-05-16T14:39:37Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help bridge these stakeholder perspectives by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.