AI Must not be Fully Autonomous
- URL: http://arxiv.org/abs/2507.23330v1
- Date: Thu, 31 Jul 2025 08:22:49 GMT
- Title: AI Must not be Fully Autonomous
- Authors: Tosin Adewumi, Lama Alkhaled, Florent Imbert, Hui Han, Nudrat Habib, Karl Löwenmark,
- Abstract summary: Fully autonomous AI, which can develop its own objectives, is at level 3 and without responsible human oversight. We offer 12 distinct arguments and 6 counterarguments with rebuttals to the counterarguments. We also present 15 pieces of recent evidence of AI misaligned values and other risks in the appendix.
- Score: 1.1466554742327897
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous Artificial Intelligence (AI) has many benefits. It also has many risks. In this work, we identify the 3 levels of autonomous AI. We are of the position that AI must not be fully autonomous because of the many risks, especially as artificial superintelligence (ASI) is speculated to be just decades away. Fully autonomous AI, which can develop its own objectives, is at level 3 and without responsible human oversight. However, responsible human oversight is crucial for mitigating the risks. To argue for our position, we discuss theories of autonomy, AI and agents. Then, we offer 12 distinct arguments and 6 counterarguments with rebuttals to the counterarguments. We also present 15 pieces of recent evidence of AI misaligned values and other risks in the appendix.
Related papers
- Bare Minimum Mitigations for Autonomous AI Development [25.968739026333004]
In 2024, international scientists, including Turing Award recipients, warned of risks from autonomous AI research and development. There is limited analysis on the specific risks of autonomous AI R&D, how they arise, and how to mitigate them. We propose four minimum safeguard recommendations applicable when AI agents significantly automate or accelerate AI development.
arXiv Detail & Related papers (2025-04-21T20:01:17Z) - Evaluating Intelligence via Trial and Error [59.80426744891971]
We introduce Survival Game as a framework to evaluate intelligence based on the number of failed attempts in a trial-and-error process. When the expectation and variance of failure counts are both finite, it signals the ability to consistently find solutions to new challenges. Our results show that while AI systems achieve the Autonomous Level in simple tasks, they are still far from it in more complex tasks.
arXiv Detail & Related papers (2025-02-26T05:59:45Z) - Fully Autonomous AI Agents Should Not be Developed [58.88624302082713]
This paper argues that fully autonomous AI agents should not be developed. In support of this position, we build from prior scientific literature and current product marketing to delineate different AI agent levels. Our analysis reveals that risks to people increase with the autonomy of a system.
arXiv Detail & Related papers (2025-02-04T19:00:06Z) - Frontier AI systems have surpassed the self-replicating red line [20.041289047504673]
We evaluate two AI systems driven by Meta's Llama31-70B-Instruct and Alibaba's Qwen25-72B-Instruct. We observe the AI systems under evaluation already exhibit sufficient self-perception, situational awareness and problem-solving capabilities. Our findings are a timely alert on existing yet previously unknown severe AI risks.
arXiv Detail & Related papers (2024-12-09T15:01:37Z) - Taking AI Welfare Seriously [0.5617572524191751]
We argue that there is a realistic possibility that some AI systems will be conscious and/or robustly agentic in the near future.
It is an issue for the near future, and AI companies and other actors have a responsibility to start taking it seriously.
arXiv Detail & Related papers (2024-11-04T17:57:57Z) - Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z) - Silico-centric Theory of Mind [0.2209921757303168]
Theory of Mind (ToM) refers to the ability to attribute mental states, such as beliefs, desires, intentions, and knowledge, to oneself and others.
We investigate ToM in environments with multiple, distinct, independent AI agents.
arXiv Detail & Related papers (2024-03-14T11:22:51Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - AI and the Sense of Self [0.0]
We focus on the cognitive sense of "self" and its role in autonomous decision-making leading to responsible behaviour.
The authors hope to make a case for greater research interest in building richer computational models of AI agents with a sense of self.
arXiv Detail & Related papers (2022-01-07T10:54:06Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.