AI Perceptions Across Cultures: Similarities and Differences in Expectations, Risks, Benefits, Tradeoffs, and Value in Germany and China
- URL: http://arxiv.org/abs/2412.13841v1
- Date: Wed, 18 Dec 2024 13:34:44 GMT
- Title: AI Perceptions Across Cultures: Similarities and Differences in Expectations, Risks, Benefits, Tradeoffs, and Value in Germany and China
- Authors: Philipp Brauner, Felix Glawe, Gian Luca Liehner, Luisa Vervier, Martina Ziefle
- Abstract summary: This study explores public mental models of AI using micro scenarios to assess reactions to 71 statements about AI's potential future impacts.
German participants tended toward more cautious assessments, whereas Chinese participants expressed greater optimism regarding AI's societal benefits.
- Abstract: As artificial intelligence (AI) continues to advance, understanding public perceptions -- including biases, risks, and benefits -- is critical for guiding research priorities, shaping public discourse, and informing policy. This study explores public mental models of AI using micro scenarios to assess reactions to 71 statements about AI's potential future impacts. Drawing on cross-cultural samples from Germany (N=52) and China (N=60), we identify significant differences in expectations, evaluations, and risk-utility tradeoffs. German participants tended toward more cautious assessments, whereas Chinese participants expressed greater optimism regarding AI's societal benefits. Chinese participants exhibited relatively balanced risk-benefit tradeoffs ($\beta=-0.463$ for risk and $\beta=+0.484$ for benefit, $r^2=.630$). In contrast, German participants showed a stronger emphasis on AI benefits and less on risks ($\beta=-0.337$ for risk and $\beta=+0.715$ for benefit, $r^2=.839$). Visual cognitive maps illustrate these contrasts, offering new perspectives on how cultural contexts shape AI acceptance. Our findings underline key factors influencing public perception and provide actionable insights for fostering equitable and culturally sensitive integration of AI technologies.
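The reported tradeoff coefficients correspond to a linear regression of overall value judgments on standardized risk and benefit ratings, with r² as the variance explained. The paper does not publish its analysis code, so the following is a minimal sketch of that kind of model on synthetic data; the function name, the sample size, and the generated ratings are all illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def tradeoff_regression(risk, benefit, value):
    """Fit value ~ beta_risk * risk + beta_benefit * benefit (plus intercept)
    on z-standardized variables; return standardized betas and r^2."""
    z = lambda x: (np.asarray(x, float) - np.mean(x)) / np.std(x)
    # Design matrix: intercept column plus standardized predictors.
    X = np.column_stack([np.ones(len(risk)), z(risk), z(benefit)])
    y = z(value)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    pred = X @ coef
    r2 = 1.0 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return coef[1], coef[2], r2

# Synthetic illustration: value rises with perceived benefit, falls with risk.
rng = np.random.default_rng(0)
risk = rng.uniform(0, 100, 200)
benefit = rng.uniform(0, 100, 200)
value = -0.4 * risk + 0.5 * benefit + rng.normal(0, 10, 200)

beta_risk, beta_benefit, r2 = tradeoff_regression(risk, benefit, value)
```

On data like this, `beta_risk` comes out negative and `beta_benefit` positive, mirroring the sign pattern of the coefficients reported for both samples in the abstract.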
Related papers
- Human Decision-making is Susceptible to AI-driven Manipulation [71.20729309185124]
AI systems may exploit users' cognitive biases and emotional vulnerabilities to steer them toward harmful outcomes.
This study examined human susceptibility to such manipulation in financial and emotional decision-making contexts.
arXiv Detail & Related papers (2025-02-11T15:56:22Z)
- Nationality, Race, and Ethnicity Biases in and Consequences of Detecting AI-Generated Self-Presentations [1.6772190302364975]
Content, such as linguistic style, played a dominant role in AI detection.
Asian and Hispanic applicants were more likely to be judged as AI users when labeled as domestic students.
arXiv Detail & Related papers (2024-12-24T18:31:44Z)
- Misalignments in AI Perception: Quantitative Findings and Visual Mapping of How Experts and the Public Differ in Expectations and Risks, Benefits, and Value Judgments [0.20971479389679332]
This study examines how the general public and academic AI experts perceive AI's capabilities and impact across 71 scenarios.
Participants evaluated each scenario on four dimensions: expected probability, perceived risk, perceived benefit, and overall sentiment (or value).
The findings reveal significant quantitative differences: experts anticipate higher probabilities, perceive lower risks, report greater utility, and express more favorable sentiment toward AI than non-experts.
arXiv Detail & Related papers (2024-12-02T12:51:45Z)
- Mapping Public Perception of Artificial Intelligence: Expectations, Risk-Benefit Tradeoffs, and Value As Determinants for Societal Acceptance [0.20971479389679332]
Using a representative sample of 1100 participants from Germany, this study examines mental models of AI.
Participants quantitatively evaluated 71 statements about AI's future capabilities.
We present rankings of these projections alongside visual mappings illustrating public risk-benefit tradeoffs.
arXiv Detail & Related papers (2024-11-28T20:03:01Z)
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI appears to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- The Impact of Generative Artificial Intelligence on Market Equilibrium: Evidence from a Natural Experiment [19.963531237647103]
Generative artificial intelligence (AI) exhibits the capability to generate creative content akin to human output with greater efficiency and reduced costs.
This paper empirically investigates the impact of generative AI on market equilibrium, in the context of China's leading art outsourcing platform.
Our analysis shows that the advent of generative AI led to a 64% reduction in average prices, yet it simultaneously spurred a 121% increase in order volume and a 56% increase in overall revenue.
arXiv Detail & Related papers (2023-11-13T04:31:53Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can lead to a deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- How Different Groups Prioritize Ethical Values for Responsible AI [75.40051547428592]
Private companies, public sector organizations, and academic groups have outlined ethical values they consider important for responsible AI technologies.
While their recommendations converge on a set of central values, little is known about the values a more representative public would find important for the AI technologies they interact with and might be affected by.
We conducted a survey examining how individuals perceive and prioritize responsible AI values across three groups.
arXiv Detail & Related papers (2022-05-16T14:39:37Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines ways in which generative artworks can help navigate these competing stakeholder interests by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.