Key Factors Affecting European Reactions to AI in European Full and
Flawed Democracies
- URL: http://arxiv.org/abs/2311.09231v1
- Date: Wed, 4 Oct 2023 22:11:28 GMT
- Title: Key Factors Affecting European Reactions to AI in European Full and
Flawed Democracies
- Authors: Long Pham, Barry O'Sullivan, Tai Tan Mai
- Abstract summary: This study examines the key factors that affect European reactions to artificial intelligence (AI) in the context of full and flawed democracies in Europe.
Respondents in flawed democracies tend to exhibit higher levels of trust in government entities than their counterparts in full democracies.
Individuals residing in flawed democracies demonstrate a more positive attitude toward AI when compared to respondents from full democracies.
- Score: 1.104960878651584
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study examines the key factors that affect European reactions to
artificial intelligence (AI) in the context of both full and flawed democracies
in Europe. Analysing a dataset of 4,006 respondents, categorised into full
democracies and flawed democracies based on the Democracy Index developed by
the Economist Intelligence Unit (EIU), this research identifies crucial factors
that shape European attitudes toward AI in these two types of democracies. The
analysis reveals noteworthy findings. Firstly, respondents in flawed
democracies tend to exhibit higher levels of trust in government entities than
their counterparts in full democracies. Additionally, individuals
residing in flawed democracies demonstrate a more positive attitude toward AI
when compared to respondents from full democracies. However, the study finds no
significant difference in AI awareness between the two types of democracies,
indicating a similar level of general knowledge about AI technologies among
European citizens. Moreover, the study reveals that trust in AI measures,
specifically "Trust AI Solution", does not significantly vary between full and
flawed democracies. This suggests that despite the differences in democratic
quality, both types of democracies have similar levels of confidence in AI
solutions.
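
To make the two-group comparison described above concrete, the sketch below shows one way respondents could be classified by their country's EIU Democracy Index score and compared on the survey measures. The EIU bands (full democracy at 8.00 and above, flawed democracy at 6.00-7.99) follow the published index; the column names, example values, and the choice of a Mann-Whitney U test are illustrative assumptions, not the paper's reported methodology.

```python
# Illustrative sketch only: column names, values, and the statistical test
# are assumptions, not the paper's actual data or methodology.
import pandas as pd
from scipy.stats import mannwhitneyu

# Hypothetical survey data: one row per respondent.
df = pd.DataFrame({
    "country_democracy_index": [8.5, 7.1, 9.0, 6.4, 8.2, 6.9],  # EIU score
    "trust_in_government":     [3.0, 4.0, 2.5, 4.5, 3.2, 4.1],  # e.g. 1-5 scale
    "attitude_toward_ai":      [3.5, 4.2, 3.1, 4.4, 3.6, 4.0],
    "ai_awareness":            [3.8, 3.7, 4.0, 3.6, 3.9, 3.8],
    "trust_ai_solution":       [3.4, 3.5, 3.3, 3.6, 3.5, 3.4],
})

# EIU classification: full democracy >= 8.00, flawed democracy 6.00-7.99.
df["regime_type"] = pd.cut(
    df["country_democracy_index"],
    bins=[6.0, 8.0, 10.0],
    labels=["flawed", "full"],
    right=False,
)

# Compare each measure between the two groups with a rank-based test.
for measure in ["trust_in_government", "attitude_toward_ai",
                "ai_awareness", "trust_ai_solution"]:
    full = df.loc[df["regime_type"] == "full", measure]
    flawed = df.loc[df["regime_type"] == "flawed", measure]
    stat, p = mannwhitneyu(full, flawed, alternative="two-sided")
    print(f"{measure}: U={stat:.1f}, p={p:.3f}")
```

A rank-based test is used here only because Likert-style survey responses are ordinal; the paper may well rely on a different analysis.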
Related papers
- Perceptions of Discriminatory Decisions of Artificial Intelligence: Unpacking the Role of Individual Characteristics [0.0]
Personal differences (digital self-efficacy, technical knowledge, belief in equality, political ideology) are associated with perceptions of AI outcomes.
Digital self-efficacy and technical knowledge are positively associated with attitudes toward AI.
Liberal ideology is associated with lower outcome trust, higher negative emotion, and greater skepticism.
arXiv Detail & Related papers (2024-10-17T06:18:26Z) - Biased AI can Influence Political Decision-Making [64.9461133083473]
This paper presents two experiments investigating the effects of partisan bias in AI language models on political decision-making.
We found that participants exposed to politically biased models were significantly more likely to adopt opinions and make decisions aligning with the AI's bias.
arXiv Detail & Related papers (2024-10-08T22:56:00Z) - From Experts to the Public: Governing Multimodal Language Models in Politically Sensitive Video Analysis [48.14390493099495]
This paper examines the governance of multimodal large language models (MM-LLMs) through individual and collective deliberation.
We conducted a two-step study: first, interviews with 10 journalists established a baseline understanding of expert video interpretation; second, 114 individuals from the general public engaged in deliberation using Inclusive.AI.
arXiv Detail & Related papers (2024-09-15T03:17:38Z) - How will advanced AI systems impact democracy? [16.944248678780614]
We discuss the impacts that generative artificial intelligence may have on democratic processes.
We ask how AI might be used to destabilise or support democratic mechanisms like elections.
Finally, we discuss whether AI will strengthen or weaken democratic principles.
arXiv Detail & Related papers (2024-08-27T12:05:59Z) - Representation Bias in Political Sample Simulations with Large Language Models [54.48283690603358]
This study seeks to identify and quantify biases in simulating political samples with Large Language Models.
Using the GPT-3.5-Turbo model, we leverage data from the American National Election Studies, German Longitudinal Election Study, Zuobiao dataset, and China Family Panel Studies.
arXiv Detail & Related papers (2024-07-16T05:52:26Z) - Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z) - Artificial Intelligence across Europe: A Study on Awareness, Attitude
and Trust [39.35990066478082]
The aim of the study is to gain a better understanding of people's views and perceptions within the European context.
We design and validate a new questionnaire (PAICE) structured around three dimensions: people's awareness, attitude, and trust.
We highlight implicit contradictions and identify trends that may interfere with the creation of an ecosystem of trust.
arXiv Detail & Related papers (2023-08-19T11:00:32Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Democratising AI: Multiple Meanings, Goals, and Methods [0.0]
Numerous parties are calling for the democratisation of AI, but the phrase is used to refer to a variety of goals whose pursuits sometimes conflict.
This paper identifies four kinds of AI democratisation that are commonly discussed.
The main takeaway is that AI democratisation is a multifarious and sometimes conflicting concept.
arXiv Detail & Related papers (2023-03-22T15:23:22Z) - Effect of Confidence and Explanation on Accuracy and Trust Calibration
in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.