Examining Popular Arguments Against AI Existential Risk: A Philosophical Analysis
- URL: http://arxiv.org/abs/2501.04064v1
- Date: Tue, 07 Jan 2025 11:15:26 GMT
- Title: Examining Popular Arguments Against AI Existential Risk: A Philosophical Analysis
- Authors: Torben Swoboda, Risto Uuk, Lode Lauwaert, Andrew P. Rebera, Ann-Katrien Oimann, Bartlomiej Chomanski, Carina Prunkl
- Abstract summary: Despite extensive media coverage, skepticism toward the existential risk discourse has received limited rigorous treatment in academic literature. This paper reconstructs and evaluates three common arguments against the existential risk perspective. It aims to provide a foundation for more balanced academic discourse and further research on AI.
- Score: 0.6831861881190009
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Concerns about artificial intelligence (AI) and its potential existential risks have garnered significant attention, with figures like Geoffrey Hinton and Demis Hassabis advocating for robust safeguards against catastrophic outcomes. Prominent scholars, such as Nick Bostrom and Max Tegmark, have further advanced the discourse by exploring the long-term impacts of superintelligent AI. However, this existential risk narrative faces criticism, particularly in popular media, where scholars like Timnit Gebru, Melanie Mitchell, and Nick Clegg argue, among other things, that it distracts from pressing current issues. Despite extensive media coverage, skepticism toward the existential risk discourse has received limited rigorous treatment in academic literature. Addressing this imbalance, this paper reconstructs and evaluates three common arguments against the existential risk perspective: the Distraction Argument, the Argument from Human Frailty, and the Checkpoints for Intervention Argument. By systematically reconstructing and assessing these arguments, the paper aims to provide a foundation for more balanced academic discourse and further research on AI.
Related papers
- Insidious Imaginaries: A Critical Overview of AI Speculations [0.0]
Speculative thinking about the capabilities and implications of artificial intelligence (AI) influences computer science research, drives AI industry practices, feeds academic studies of existential hazards, and stirs a global political debate. It permeates technophilic philosophies and social movements, fuels corporate and pundit rhetoric, and remains a potent source of inspiration for the media, popular culture, and the arts. This paper offers a critical overview of AI speculations. In three central sections, it traces the intertwined sway of science fiction, religiosity, intellectual charlatanism, dubious academic research, suspicious entrepreneurship, and ominous sociopolitical worldviews that make AI speculations troublesome.
arXiv Detail & Related papers (2026-02-19T14:08:57Z) - SAD: A Large-Scale Strategic Argumentative Dialogue Dataset [60.33125467375306]
In practice, argumentation is often realized as multi-turn dialogue. We present the first large-scale Strategic Argumentative Dialogue dataset, consisting of 392,822 examples.
arXiv Detail & Related papers (2026-01-12T11:11:37Z) - Decoding Human and AI Persuasion in National College Debate: Analyzing Prepared Arguments Through Aristotle's Rhetorical Principles [9.91280795515591]
This study explores the potential of leveraging artificial intelligence to generate effective arguments. The evidence cards outline the arguments students will present and how those arguments will be delivered. We compared the quality of the arguments in the evidence cards created by GPT and student debaters using Aristotle's rhetorical principles.
arXiv Detail & Related papers (2025-12-14T19:46:16Z) - The Provenance Problem: LLMs and the Breakdown of Citation Norms [0.0]
The increasing use of generative AI in scientific writing raises urgent questions about attribution and intellectual credit. We argue that such cases exemplify the 'provenance problem': a systematic breakdown in the chain of scholarly credit. This Perspective analyzes how AI challenges established norms of authorship, introduces conceptual tools for understanding the provenance problem, and proposes strategies to preserve integrity and fairness in scholarly communication.
arXiv Detail & Related papers (2025-09-15T18:01:03Z) - Advancing Science- and Evidence-based AI Policy [163.43609502905707]
This paper tackles the problem of how to optimize the relationship between evidence and policy to address the opportunities and challenges of AI. An increasing number of efforts address this problem by often either (i) contributing research into the risks of AI and their effective mitigation or (ii) advocating for policy to address these risks.
arXiv Detail & Related papers (2025-08-02T23:20:58Z) - Persuasion and Safety in the Era of Generative AI [0.0]
The EU AI Act prohibits AI systems that use manipulative or deceptive techniques to undermine informed decision-making. My dissertation addresses the lack of empirical studies in this area by developing a taxonomy of persuasive techniques. It provides resources to mitigate the risks of persuasive AI and fosters discussions on ethical persuasion in the age of generative AI.
arXiv Detail & Related papers (2025-05-18T06:04:46Z) - Must Read: A Systematic Survey of Computational Persuasion [60.83151988635103]
AI-driven persuasion can be leveraged for beneficial applications, but it also poses threats through manipulation and unethical influence. Our survey outlines future research directions to enhance the safety, fairness, and effectiveness of AI-powered persuasion.
arXiv Detail & Related papers (2025-05-12T17:26:31Z) - Risks and NLP Design: A Case Study on Procedural Document QA [52.557503571760215]
We argue that clearer assessments of risks and harms to users will be possible when we specialize the analysis to more concrete applications and their plausible users.
We conduct a risk-oriented error analysis that could then inform the design of a future system to be deployed with lower risk of harm and better performance.
arXiv Detail & Related papers (2024-08-16T17:23:43Z) - Generative AI and the problem of existential risk [0.0]
Generative AI has been a focal point for concerns about AI's perceived existential risk.
This chapter aims to demystify the debate by highlighting the key worries that underpin existential risk fears in relation to generative AI.
arXiv Detail & Related papers (2024-07-18T10:16:24Z) - Argument Quality Assessment in the Age of Instruction-Following Large Language Models [45.832808321166844]
A critical task in any such application is the assessment of an argument's quality.
We identify the diversity of quality notions and the subjectiveness of their perception as the main hurdles towards substantial progress on argument quality assessment.
We argue that the capabilities of instruction-following large language models (LLMs) to leverage knowledge across contexts enable a much more reliable assessment.
arXiv Detail & Related papers (2024-03-24T10:43:21Z) - Mapping the Ethics of Generative AI: A Comprehensive Scoping Review [0.0]
We conduct a scoping review on the ethics of generative artificial intelligence, including especially large language models and text-to-image models.
Our analysis provides a taxonomy of 378 normative issues in 19 topic areas and ranks them according to their prevalence in the literature.
The study offers a comprehensive overview for scholars, practitioners, or policymakers, condensing the ethical debates surrounding fairness, safety, harmful content, hallucinations, privacy, interaction risks, security, alignment, societal impacts, and others.
arXiv Detail & Related papers (2024-02-13T09:38:17Z) - Unmasking the Shadows of AI: Investigating Deceptive Capabilities in Large Language Models [0.0]
This research critically navigates the intricate landscape of AI deception, concentrating on deceptive behaviours of Large Language Models (LLMs).
My objective is to elucidate this issue, examine the discourse surrounding it, and subsequently delve into its categorization and ramifications.
arXiv Detail & Related papers (2024-02-07T00:21:46Z) - A Unifying Framework for Learning Argumentation Semantics [50.69905074548764]
We present a novel framework, which uses an Inductive Logic Programming approach to learn the acceptability semantics for several abstract and structured argumentation frameworks in an interpretable way.
Our framework outperforms existing argumentation solvers, thus opening up new future research directions in the area of formal argumentation and human-machine dialogues.
arXiv Detail & Related papers (2023-10-18T20:18:05Z) - Regulation and NLP (RegNLP): Taming Large Language Models [51.41095330188972]
We argue how NLP research can benefit from proximity to regulatory studies and adjacent fields.
We advocate for the development of a new multidisciplinary research space on regulation and NLP.
arXiv Detail & Related papers (2023-10-09T09:22:40Z) - Identifying and Mitigating the Security Risks of Generative AI [179.2384121957896]
This paper reports the findings of a workshop held at Google on the dual-use dilemma posed by GenAI.
GenAI can be used just as well by attackers to generate new attacks and increase the velocity and efficacy of existing attacks.
We discuss short-term and long-term goals for the community on this topic.
arXiv Detail & Related papers (2023-08-28T18:51:09Z) - Towards CausalGPT: A Multi-Agent Approach for Faithful Knowledge Reasoning via Promoting Causal Consistency in LLMs [60.244412212130264]
Causal-Consistency Chain-of-Thought harnesses multi-agent collaboration to bolster the faithfulness and causality of foundation models.
Our framework demonstrates significant superiority over state-of-the-art methods through extensive and comprehensive evaluations.
arXiv Detail & Related papers (2023-08-23T04:59:21Z) - AI Risk Skepticism, A Comprehensive Survey [1.370633147306388]
The study takes into account different points of view on the topic and draws parallels with other forms of skepticism that have shown up in science.
We categorize the various skepticisms regarding the dangers of AI by the type of mistaken thinking involved.
arXiv Detail & Related papers (2023-02-16T16:32:38Z) - The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z) - Arguments about Highly Reliable Agent Designs as a Useful Path to Artificial Intelligence Safety [0.0]
Highly Reliable Agent Designs (HRAD) is one of the most controversial and ambitious approaches.
We have titled the arguments (1) incidental utility, (2) deconfusion, (3) precise specification, and (4) prediction.
We have explained the assumptions and claims based on a review of published and informal literature, along with experts who have stated positions on the topic.
arXiv Detail & Related papers (2022-01-09T07:42:37Z) - What Changed Your Mind: The Roles of Dynamic Topics and Discourse in Argumentation Process [78.4766663287415]
This paper presents a study that automatically analyzes the key factors in argument persuasiveness.
We propose a novel neural model that is able to track the changes of latent topics and discourse in argumentative conversations.
arXiv Detail & Related papers (2020-02-10T04:27:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.