Legitimate Power, Illegitimate Automation: The problem of ignoring legitimacy in automated decision systems
- URL: http://arxiv.org/abs/2404.15680v1
- Date: Wed, 24 Apr 2024 06:29:54 GMT
- Title: Legitimate Power, Illegitimate Automation: The problem of ignoring legitimacy in automated decision systems
- Authors: Jake Stone, Brent Mittelstadt
- Abstract summary: Machine learning and artificial intelligence have spurred the widespread adoption of automated decision systems (ADS).
This paper shows that theorists often incorrectly conflate legitimacy with either public acceptance or other substantive values such as fairness, accuracy, expertise or efficiency.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Progress in machine learning and artificial intelligence has spurred the widespread adoption of automated decision systems (ADS). An extensive literature explores what conditions must be met for these systems' decisions to be fair. However, questions of legitimacy -- why those in control of ADS are entitled to make such decisions -- have received comparatively little attention. This paper shows that when such questions are raised theorists often incorrectly conflate legitimacy with either public acceptance or other substantive values such as fairness, accuracy, expertise or efficiency. In search of better theories, we conduct a critical analysis of the philosophical literature on the legitimacy of the state, focusing on consent, public reason, and democratic authorisation. This analysis reveals that the prevailing understanding of legitimacy in analytical political philosophy is also ill-suited to the task of establishing whether and when ADS are legitimate. The paper thus clarifies expectations for theories of ADS legitimacy and charts a path for a future research programme on the topic.
Related papers
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical importance of addressing bias as part of developing a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Trustworthy human-centric based Automated Decision-Making Systems [0.7048747239308888]
Automated Decision-Making Systems (ADS) have become pervasive across various fields, activities, and occupations, to enhance performance.
This research paper presents a thorough examination of the implications, distinctions, and ethical considerations associated with digitalization, digital transformation, and the utilization of ADS in contemporary society and future contexts.
arXiv Detail & Related papers (2023-12-22T11:02:57Z)
- AI Ethics and Ordoliberalism 2.0: Towards A 'Digital Bill of Rights' [0.0]
This article analyzes AI ethics from a distinct business ethics perspective, i.e., 'ordoliberalism 2.0'
It argues that the ongoing discourse on (generative) AI relies too much on corporate self-regulation and voluntary codes of conduct.
The paper suggests merging already existing AI guidelines with an ordoliberal-inspired regulatory and competition policy.
arXiv Detail & Related papers (2023-10-27T10:26:12Z)
- A Critical Examination of the Ethics of AI-Mediated Peer Review [0.0]
Recent advancements in artificial intelligence (AI) systems offer promise and peril for scholarly peer review.
Human peer review systems are also fraught with related problems, such as biases, abuses, and a lack of transparency.
The legitimacy of AI-driven peer review hinges on the alignment with the scientific ethos.
arXiv Detail & Related papers (2023-09-02T18:14:10Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of disparities in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Compliance Challenges in Forensic Image Analysis Under the Artificial Intelligence Act [8.890638003061605]
We review why the use of machine learning in forensic image analysis is classified as high-risk.
Under the draft AI Act, high-risk AI systems for use in law enforcement are permitted but subject to compliance with mandatory requirements.
arXiv Detail & Related papers (2022-03-01T14:03:23Z)
- Appropriate Fairness Perceptions? On the Effectiveness of Explanations in Enabling People to Assess the Fairness of Automated Decision Systems [0.0]
We argue that for an effective explanation, perceptions of fairness should increase if and only if the underlying ADS is fair.
In this in-progress work, we introduce the desideratum of appropriate fairness perceptions, propose a novel study design for evaluating it, and outline next steps towards a comprehensive experiment.
arXiv Detail & Related papers (2021-08-14T09:39:59Z)
- On Consequentialism and Fairness [64.35872952140677]
We provide a consequentialist critique of common definitions of fairness within machine learning.
We conclude with a broader discussion of the issues of learning and randomization.
arXiv Detail & Related papers (2020-01-02T05:39:48Z)
- Learning from Learning Machines: Optimisation, Rules, and Social Norms [91.3755431537592]
It appears that the area of AI that is most analogous to the behaviour of economic entities is that of morally good decision-making.
Recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.
arXiv Detail & Related papers (2019-12-29T17:42:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.