Key Factors Affecting European Reactions to AI in European Full and
Flawed Democracies
- URL: http://arxiv.org/abs/2311.09231v1
- Date: Wed, 4 Oct 2023 22:11:28 GMT
- Authors: Long Pham, Barry O'Sullivan, Tai Tan Mai
- Abstract summary: This study examines the key factors that affect European reactions to artificial intelligence (AI) in the context of full and flawed democracies in Europe.
It is observed that flawed democracies tend to exhibit higher levels of trust in government entities compared to their counterparts in full democracies.
Individuals residing in flawed democracies demonstrate a more positive attitude toward AI when compared to respondents from full democracies.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study examines the key factors that affect European reactions to
artificial intelligence (AI) in the context of both full and flawed democracies
in Europe. Analysing a dataset of 4,006 respondents, categorised into full
democracies and flawed democracies based on the Democracy Index developed by
the Economist Intelligence Unit (EIU), this research identifies crucial factors
that shape European attitudes toward AI in these two types of democracies. The
analysis reveals noteworthy findings. Firstly, it is observed that flawed
democracies tend to exhibit higher levels of trust in government entities
compared to their counterparts in full democracies. Additionally, individuals
residing in flawed democracies demonstrate a more positive attitude toward AI
when compared to respondents from full democracies. However, the study finds no
significant difference in AI awareness between the two types of democracies,
indicating a similar level of general knowledge about AI technologies among
European citizens. Moreover, the study reveals that trust in AI measures,
specifically "Trust AI Solution", does not significantly vary between full and
flawed democracies. This suggests that despite the differences in democratic
quality, both types of democracies have similar levels of confidence in AI
solutions.
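The grouping step described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's code: the country scores below are made-up placeholders, and `regime_type`/`categorise` are names chosen here. Only the EIU band boundaries (above 8.0 for full democracies, above 6.0 up to 8.0 for flawed democracies) reflect the published index methodology.

```python
# Illustrative sketch of splitting survey respondents into "full" and
# "flawed" democracies by EIU Democracy Index score. Scores are placeholders.
EIU_SCORES = {
    "Sweden": 9.39,   # illustrative values, not the paper's data
    "France": 7.99,
    "Poland": 7.04,
}

def regime_type(score: float) -> str:
    """Map an EIU Democracy Index score to a regime category.

    The EIU classifies scores above 8.0 as full democracies and scores
    in (6.0, 8.0] as flawed democracies; lower bands (hybrid and
    authoritarian regimes) fall outside this study's scope.
    """
    if score > 8.0:
        return "full democracy"
    if score > 6.0:
        return "flawed democracy"
    return "other"

def categorise(respondents):
    """Attach a regime label to each respondent record by country."""
    return [
        {**r, "regime": regime_type(EIU_SCORES[r["country"]])}
        for r in respondents
    ]
```

With respondents labelled this way, group-level comparisons of trust and attitude measures (as in the study) reduce to a split on the `regime` field.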
Related papers
- Representation Bias in Political Sample Simulations with Large Language Models [54.48283690603358]
This study seeks to identify and quantify biases in simulating political samples with Large Language Models.
Using the GPT-3.5-Turbo model, we leverage data from the American National Election Studies, German Longitudinal Election Study, Zuobiao dataset, and China Family Panel Studies.
arXiv Detail & Related papers (2024-07-16T05:52:26Z)
- Generative AI Voting: Fair Collective Choice is Resilient to LLM Biases and Inconsistencies [21.444936180683147]
We show that different LLMs come with biases and significant inconsistencies in complex preferential ballot formats.
Strikingly, fair voting aggregation methods, such as equal shares, prove to be a win-win: fairer voting outcomes for humans with fairer AI representation.
arXiv Detail & Related papers (2024-05-31T01:41:48Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
Particip-AI is a framework for gathering current and future AI use cases, along with their harms and benefits, from the non-expert public.
We gather responses from 295 demographically diverse participants.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- On the meaning of uncertainty for ethical AI: philosophy and practice [10.591284030838146]
We argue that this is a significant way to bring ethical considerations into mathematical reasoning.
We demonstrate these ideas within the context of competing models used to advise the UK government on the spread of the Omicron variant of COVID-19 during December 2021.
arXiv Detail & Related papers (2023-09-11T15:13:36Z)
- Artificial Intelligence across Europe: A Study on Awareness, Attitude and Trust [39.35990066478082]
The aim of the study is to gain a better understanding of people's views and perceptions within the European context.
We design and validate a new questionnaire (PAICE) structured around three dimensions: people's awareness, attitude, and trust.
We highlight implicit contradictions and identify trends that may interfere with the creation of an ecosystem of trust.
arXiv Detail & Related papers (2023-08-19T11:00:32Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks, with severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Democratising AI: Multiple Meanings, Goals, and Methods [0.0]
Numerous parties are calling for the democratisation of AI, but the phrase is used to refer to a variety of goals whose pursuit sometimes conflicts.
This paper identifies four kinds of AI democratisation that are commonly discussed.
The main takeaway is that AI democratisation is a multifarious and sometimes conflicting concept.
arXiv Detail & Related papers (2023-03-22T15:23:22Z) - How Different Groups Prioritize Ethical Values for Responsible AI [75.40051547428592]
Private companies, public sector organizations, and academic groups have outlined ethical values they consider important for responsible AI technologies.
While their recommendations converge on a set of central values, little is known about the values a more representative public would find important for the AI technologies they interact with and might be affected by.
We conducted a survey examining how individuals perceive and prioritize responsible AI values across three groups.
arXiv Detail & Related papers (2022-05-16T14:39:37Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z) - Artificial Intelligence for EU Decision-Making. Effects on Citizens
Perceptions of Input, Throughput and Output Legitimacy [0.0]
Lack of political legitimacy undermines the ability of the European Union to resolve major crises.
By integrating digital data into political processes, the EU seeks to base decision-making increasingly on sound empirical evidence.
This paper investigates how citizens' perceptions of EU input, throughput, and output legitimacy are influenced by three decision-making arrangements.
arXiv Detail & Related papers (2020-03-25T10:56:28Z) - Effect of Confidence and Explanation on Accuracy and Trust Calibration
in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.