Legitimacy, Authority, and Democratic Duties of Explanation
- URL: http://arxiv.org/abs/2208.08628v4
- Date: Wed, 11 Oct 2023 12:06:30 GMT
- Title: Legitimacy, Authority, and Democratic Duties of Explanation
- Authors: Seth Lazar
- Abstract summary: Secret, complex and inscrutable computational systems are being used to intensify existing power relations.
This paper first sets out an argument that at least some duties of explanation are democratic duties, then applies it to opaque computational systems and clarifies precisely what kinds of explanations are necessary to fulfil democratic values.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Increasingly secret, complex and inscrutable computational systems are being
used to intensify existing power relations and to create new ones; in
particular, they are being used to govern. To be all-things-considered morally
permissible, new, or newly intense, power relations must meet standards of
procedural legitimacy and proper authority. This is necessary for them to
protect and realise democratic values of individual liberty, relational
equality, and collective self-determination. For governing power in particular
to be legitimate and have proper authority, it must meet a publicity
requirement: reasonably competent members of the governed community must be
able to determine that they are being governed legitimately and with proper
authority. The publicity requirement can be satisfied only if the powerful can
explain their decision-making to members of their political community. At least
some duties of explanation are therefore democratic duties. This paper first
sets out the foregoing argument, then applies it to opaque computational
systems, and clarifies precisely what kinds of explanations are necessary to
fulfil these democratic values.
Related papers
- Can LLMs advance democratic values? [0.0]
We argue that LLMs should be kept well clear of formal democratic decision-making processes.
They can be put to good use in strengthening the informal public sphere.
arXiv Detail & Related papers (2024-10-10T23:24:06Z)
- From Experts to the Public: Governing Multimodal Language Models in Politically Sensitive Video Analysis [48.14390493099495]
This paper examines the governance of multimodal large language models (MM-LLMs) through individual and collective deliberation.
We conducted a two-step study: first, interviews with 10 journalists established a baseline understanding of expert video interpretation; second, 114 individuals from the general public engaged in deliberation using Inclusive.AI.
arXiv Detail & Related papers (2024-09-15T03:17:38Z)
- Public Constitutional AI [0.0]
We are increasingly subjected to the power of AI authorities.
How can we ensure AI systems have the legitimacy necessary for effective governance?
This essay argues that to secure AI legitimacy, we need methods that engage the public in designing and constraining AI systems.
arXiv Detail & Related papers (2024-06-24T15:00:01Z)
- Automatic Authorities: Power and AI [0.0]
Machine learning and related computational technologies now underpin vital government services.
They determine how we find out about everything from how to vote to where to get vaccinated.
A new wave of products based on Large Language Models (LLMs) will further transform our economic and political lives.
arXiv Detail & Related papers (2024-04-09T03:48:42Z)
- Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties [68.66719970507273]
Value pluralism is the view that multiple correct values may be held in tension with one another.
As statistical learners, AI systems fit to averages by default, washing out potentially irreducible value conflicts.
We introduce ValuePrism, a large-scale dataset of 218k values, rights, and duties connected to 31k human-written situations.
arXiv Detail & Related papers (2023-09-02T01:24:59Z)
- Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
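As a rough illustration (not the paper's actual method), one common way to read this notion of benefit is as the counterfactual contrast between the predicted outcome under a positive decision and under a negative one. The sketch below estimates it with two separately fitted outcome models on synthetic data; the variable names, the two-model estimator, and the data-generating process are all assumptions made for the example.

```python
# Illustrative sketch only: estimate each individual's "benefit" as the
# contrast between the predicted outcome under a positive decision (D=1)
# and under a negative decision (D=0), using two separately fitted outcome
# models. Data and modeling choices here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical observational data: features X, binary decision D, outcome Y.
n = 5000
X = rng.normal(size=(n, 3))
D = rng.integers(0, 2, size=n)
p_y = 1 / (1 + np.exp(-(X[:, 0] + 0.8 * D - 0.2)))
Y = rng.binomial(1, p_y)

# Fit one outcome model per decision arm (a "T-learner"-style setup).
model_d1 = LogisticRegression().fit(X[D == 1], Y[D == 1])
model_d0 = LogisticRegression().fit(X[D == 0], Y[D == 0])

# Benefit of a positive decision for each individual:
#   delta(x) = P(Y = 1 | x, D = 1) - P(Y = 1 | x, D = 0)
benefit = model_d1.predict_proba(X)[:, 1] - model_d0.predict_proba(X)[:, 1]
print("mean estimated benefit:", round(float(benefit.mean()), 3))
```

A two-model estimator is only one of several ways to estimate such contrasts from observational data, and it rests on untestable causal assumptions; analyzing how the benefit itself depends on a protected attribute is where the paper's causal tools come in.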
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
- Explanatory Publics: Explainability and Democratic Thought [0.0]
I gesture toward the need for frameworks of knowledge to be justified through a social right to explanation.
For a polity to be considered democratic, it must ensure that its citizens are able to develop a capacity for explanatory thought.
This is to extend the notion of a public sphere where citizens are able to question ideas, practices, and institutions in society more generally.
arXiv Detail & Related papers (2023-04-04T20:25:13Z)
- Users are the North Star for AI Transparency [111.5679109784322]
Despite widespread calls for transparent artificial intelligence systems, the term is too overburdened with disparate meanings to express precise policy aims or to orient concrete lines of research.
This is partly because a clear ideal of AI transparency goes unsaid in this body of work.
We explicitly name such a north star: transparency that is user-centered, user-appropriate, and honest.
arXiv Detail & Related papers (2023-03-09T18:53:29Z)
- Interpretable Reinforcement Learning with Multilevel Subgoal Discovery [77.34726150561087]
We propose a novel Reinforcement Learning model for discrete environments.
In the model, an agent learns information about the environment in the form of probabilistic rules.
No reward function is required for learning; the agent only needs to be given a primary goal to achieve.
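As a rough illustration of reward-free, goal-directed learning in a discrete environment (not the model proposed in the paper), the sketch below has an agent tally transitions into probabilistic rules and then plan a path to a given goal state over the learned rules; the chain environment, rule representation, and planner are assumptions made for the example.

```python
# Illustrative sketch only: an agent explores a small discrete chain
# environment, records transition counts as probabilistic rules P(s' | s, a),
# and reaches a primary goal by planning over the learned rules.
# No reward function is involved anywhere.
import random
from collections import defaultdict, deque

N_STATES, ACTIONS, GOAL = 6, (-1, +1), 5

def step(s, a):
    # Noisy chain dynamics: the intended move succeeds 80% of the time.
    move = a if random.random() < 0.8 else -a
    return min(max(s + move, 0), N_STATES - 1)

# 1. Learn probabilistic rules from random exploration.
counts = defaultdict(lambda: defaultdict(int))
s = 0
for _ in range(5000):
    a = random.choice(ACTIONS)
    s2 = step(s, a)
    counts[(s, a)][s2] += 1
    s = s2

rules = {k: {s2: c / sum(v.values()) for s2, c in v.items()}
         for k, v in counts.items()}

# 2. Plan to the goal with breadth-first search over likely transitions.
def plan(start, goal, min_prob=0.5):
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        s, path = frontier.popleft()
        if s == goal:
            return path
        for a in ACTIONS:
            for s2, p in rules.get((s, a), {}).items():
                if p >= min_prob and s2 not in seen:
                    seen.add(s2)
                    frontier.append((s2, path + [a]))
    return None

print("plan to goal:", plan(0, GOAL))
```

The learned rules double as an interpretable world model: each entry reads as "in state s, action a leads to s2 with probability p", which is one way to make the agent's knowledge inspectable.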
arXiv Detail & Related papers (2022-02-15T14:04:44Z)
- Governing online goods: Maturity and formalization in Minecraft, Reddit, and World of Warcraft communities [0.0]
Building a successful community means governing active populations and limited resources.
This study applies institutional analysis frameworks to 80,000 communities across 3 platforms: the sandbox game Minecraft, the MMO game World of Warcraft, and Reddit.
We find that online communities employ similar governance styles across platforms, strongly favoring "weak" norms over "strong" requirements.
arXiv Detail & Related papers (2022-02-02T22:45:21Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.