Automatic Authorities: Power and AI
- URL: http://arxiv.org/abs/2404.05990v1
- Date: Tue, 9 Apr 2024 03:48:42 GMT
- Title: Automatic Authorities: Power and AI
- Authors: Seth Lazar
- Abstract summary: Machine learning and related computational technologies now underpin vital government services.
They determine how we find out about everything from how to vote to where to get vaccinated.
A new wave of products based on Large Language Models (LLMs) will further transform our economic and political lives.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As rapid advances in Artificial Intelligence and the rise of some of history's most potent corporations meet the diminished neoliberal state, people are increasingly subject to power exercised by means of automated systems. Machine learning and related computational technologies now underpin vital government services. They connect consumers and producers in new algorithmic markets. They determine how we find out about everything from how to vote to where to get vaccinated, and whose speech is amplified, reduced, or restricted. And a new wave of products based on Large Language Models (LLMs) will further transform our economic and political lives. Automatic Authorities are automated computational systems used to exercise power over us by determining what we may know, what we may have, and what our options will be. In response to their rise, scholars working on the societal impacts of AI and related technologies have advocated shifting attention from how to make AI systems beneficial or fair towards a critical analysis of these new power relations. But power is everywhere, and is not necessarily bad. On what basis should we object to new or intensified power relations, and what can be done to justify them? This paper introduces the philosophical materials with which to formulate these questions, and offers preliminary answers. It starts by pinning down the concept of power, focusing on the ability that some agents have to shape others' lives. It then explores how AI enables and intensifies the exercise of power so understood, and sketches three problems with power and three ways to solve those problems. It emphasises, in particular, that justifying power requires more than satisfying substantive justificatory criteria; standards of proper authority and procedural legitimacy must also be met. We need to know not only what power may be used for, but how it may be used, and by whom.
Related papers
- A Community-driven vision for a new Knowledge Resource for AI [59.29703403953085]
Despite the success of knowledge resources like WordNet, verifiable, general-purpose, widely available sources of knowledge remain a critical deficiency in AI infrastructure.
This paper synthesizes our findings and outlines a community-driven vision for a new knowledge infrastructure.
arXiv Detail & Related papers (2025-06-19T20:51:28Z) - Generative AI for Autonomous Driving: Frontiers and Opportunities [145.6465312554513]
This survey delivers a comprehensive synthesis of the emerging role of GenAI across the autonomous driving stack.
We begin by distilling the principles and trade-offs of modern generative modeling, encompassing VAEs, GANs, Diffusion Models, and Large Language Models.
We categorize practical applications, such as synthetic data generation, end-to-end driving strategies, high-fidelity digital twin systems, smart transportation networks, and cross-domain transfer to embodied AI.
arXiv Detail & Related papers (2025-05-13T17:59:20Z) - Explainable AI: The Latest Advancements and New Trends [0.0]
The concept of trustworthiness is cross-disciplinary; it must meet societal standards and principles.
We elaborate on the strong link between the explainability of AI and the meta-reasoning of autonomous systems.
The integration of these approaches could pave the way for future interpretable AI systems.
arXiv Detail & Related papers (2025-05-11T15:01:12Z) - Agency in Artificial Intelligence Systems [0.0]
There is a general concern that present developments in artificial intelligence (AI) research will lead to sentient AI systems.
But why could sentient AI systems not benefit humanity instead?
I ask whether a putative AI system would develop an altruistic or a malicious disposition towards our society, or what the nature of its agency would be.
arXiv Detail & Related papers (2025-02-09T02:21:14Z) - Shaping AI's Impact on Billions of Lives [27.78474296888659]
We argue for the community of AI practitioners to consciously and proactively work for the common good.
This paper offers a blueprint for a new type of innovation infrastructure.
arXiv Detail & Related papers (2024-12-03T16:29:37Z) - Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI [67.58673784790375]
We argue that the 'bigger is better' AI paradigm is not only fragile scientifically, but comes with undesirable consequences.
First, it is not sustainable, as its compute demands increase faster than model performance, leading to unreasonable economic requirements and a disproportionate environmental footprint.
Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate.
arXiv Detail & Related papers (2024-09-21T14:43:54Z) - Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z) - Trust, Accountability, and Autonomy in Knowledge Graph-based AI for Self-determination [1.4305544869388402]
Knowledge Graphs (KGs) have emerged as fundamental platforms for powering intelligent decision-making.
The integration of KGs with neuronal learning is currently a topic of active research.
This paper conceptualises the foundational topics and research pillars to support KG-based AI for self-determination.
arXiv Detail & Related papers (2023-10-30T12:51:52Z) - AI Explainability and Governance in Smart Energy Systems: A Review [0.36832029288386137]
The lack of explainability and governability of AI is a major concern for stakeholders.
This paper provides a review of AI explainability and governance in smart energy systems.
arXiv Detail & Related papers (2022-10-24T05:09:13Z) - Intelligent Decision Assistance Versus Automated Decision-Making: Enhancing Knowledge Work Through Explainable Artificial Intelligence [0.0]
We propose a new class of decision support systems (DSS), namely Intelligent Decision Assistance (IDA).
IDA supports knowledge workers without influencing them through automated decision-making.
Specifically, we propose to use techniques of Explainable AI (XAI) while withholding concrete AI recommendations.
arXiv Detail & Related papers (2021-09-28T15:57:21Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - AI Ethics Needs Good Data [0.8701566919381224]
We argue that discourse on AI must transcend the language of 'ethics' and engage with power and political economy.
We offer four 'economys' on which Good Data AI can be built: community, rights, usability and politics.
arXiv Detail & Related papers (2021-02-15T04:16:27Z) - Reasonable Machines: A Research Manifesto [0.0]
A sound ecosystem of trust requires ways for autonomous systems to justify their own actions.
This builds on social reasoning models from moral and legal philosophy.
Enabling normative communication creates trust and opens new dimensions for AI applications.
arXiv Detail & Related papers (2020-08-14T08:51:33Z)