Artificial Intelligence for EU Decision-Making. Effects on Citizens
Perceptions of Input, Throughput and Output Legitimacy
- URL: http://arxiv.org/abs/2003.11320v1
- Date: Wed, 25 Mar 2020 10:56:28 GMT
- Authors: Christopher Starke, Marco Luenich
- Abstract summary: Lack of political legitimacy undermines the ability of the European Union to resolve major crises.
By integrating digital data into political processes, the EU seeks to base decision-making increasingly on sound empirical evidence.
This paper investigates how citizens' perceptions of EU input, throughput, and output legitimacy are influenced by three decision-making arrangements.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A lack of political legitimacy undermines the ability of the European Union
to resolve major crises and threatens the stability of the system as a whole.
By integrating digital data into political processes, the EU seeks to base
decision-making increasingly on sound empirical evidence. In particular,
artificial intelligence systems have the potential to increase political
legitimacy by identifying pressing societal issues, forecasting potential
policy outcomes, informing the policy process, and evaluating policy
effectiveness. This paper investigates how citizens' perceptions of EU input,
throughput, and output legitimacy are influenced by three distinct
decision-making arrangements: first, independent human decision-making (HDM);
second, independent algorithmic decision-making (ADM); and third, hybrid
decision-making by EU politicians and AI-based systems together. The results of
a pre-registered online experiment with 572 respondents suggest that existing
EU decision-making arrangements are still perceived as the most democratic
(input legitimacy). However, regarding the decision-making process itself
(throughput legitimacy) and its policy outcomes (output legitimacy), no
difference was observed between the status quo and hybrid decision-making
involving both ADM and democratically elected EU institutions. Where ADM
systems are the sole decision-maker, respondents tend to perceive these as
illegitimate. The paper discusses the implications of these findings for EU
legitimacy and data-driven policy-making.
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Explaining by Imitating: Understanding Decisions by Interpretable Policy Learning [72.80902932543474]
Understanding human behavior from observed data is critical for transparency and accountability in decision-making.
Consider real-world settings such as healthcare, in which modeling a decision-maker's policy is challenging.
We propose a data-driven representation of decision-making behavior that inheres transparency by design, accommodates partial observability, and operates completely offline.
arXiv Detail & Related papers (2023-10-28T13:06:14Z)
- Key Factors Affecting European Reactions to AI in European Full and Flawed Democracies [1.104960878651584]
This study examines the key factors that affect European reactions to artificial intelligence (AI) in the context of full and flawed democracies in Europe.
It is observed that flawed democracies tend to exhibit higher levels of trust in government entities compared to their counterparts in full democracies.
Individuals residing in flawed democracies demonstrate a more positive attitude toward AI when compared to respondents from full democracies.
arXiv Detail & Related papers (2023-10-04T22:11:28Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- A Human-Centric Perspective on Fairness and Transparency in Algorithmic Decision-Making [0.0]
Automated decision systems (ADS) are increasingly used for consequential decision-making.
Non-transparent systems are prone to yield unfair outcomes because their sanity is challenging to assess and calibrate.
I aim to make the following three main contributions through my doctoral thesis.
arXiv Detail & Related papers (2022-04-29T18:31:04Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse to this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Differential Privacy and Fairness in Decisions and Learning Tasks: A Survey [50.90773979394264]
It reviews the conditions under which privacy and fairness may have aligned or contrasting goals.
It analyzes how and why DP may exacerbate bias and unfairness in decision problems and learning tasks.
arXiv Detail & Related papers (2022-02-16T16:50:23Z)
- "A cold, technical decision-maker": Can AI provide explainability, negotiability, and humanity? [47.36687555570123]
We present results of a qualitative study of algorithmic decision-making, comprised of five workshops conducted with a total of 60 participants.
We discuss participants' consideration of humanity in decision-making, and introduce the concept of 'negotiability,' the ability to go beyond formal criteria and work flexibly around the system.
arXiv Detail & Related papers (2020-12-01T22:36:54Z)
- Contestable Black Boxes [10.552465253379134]
This paper investigates the type of assurances that are needed in the contesting process when algorithmic black-boxes are involved.
We argue that specialised complementary methodologies to evaluate automated decision-making in the case of a particular decision being contested need to be developed.
arXiv Detail & Related papers (2020-06-09T09:09:00Z)
- Off-policy Policy Evaluation For Sequential Decisions Under Unobserved Confounding [33.58862183373374]
We assess robustness of OPE methods under unobserved confounding.
We show that even small amounts of per-decision confounding can heavily bias OPE methods.
We propose an efficient loss-minimization-based procedure for computing worst-case bounds.
arXiv Detail & Related papers (2020-03-12T05:20:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.