Artificial Intelligence and Arms Control
- URL: http://arxiv.org/abs/2211.00065v1
- Date: Sat, 22 Oct 2022 16:09:41 GMT
- Title: Artificial Intelligence and Arms Control
- Authors: Paul Scharre and Megan Lamberth
- Abstract summary: The idea of AI-enabled military systems has motivated some activists to call for restrictions or bans on some weapon systems.
This paper argues that while a ban on all military applications of AI is likely infeasible, there may be specific cases where arms control is possible.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Potential advancements in artificial intelligence (AI) could have profound
implications for how countries research and develop weapons systems, and how
militaries deploy those systems on the battlefield. The idea of AI-enabled
military systems has motivated some activists to call for restrictions or bans
on some weapon systems, while others have argued that AI may be too diffuse to
control. This paper argues that while a ban on all military applications of AI
is likely infeasible, there may be specific cases where arms control is
possible. Throughout history, the international community has attempted to ban
or regulate weapons or military systems for a variety of reasons. This paper
analyzes both successes and failures and offers several criteria that seem to
influence why arms control works in some cases and not others. We argue that
success or failure depends on the desirability (i.e., a weapon's military value
versus its perceived horribleness) and feasibility (i.e., sociopolitical
factors that influence its success) of arms control. Based on these criteria,
and the historical record of past attempts at arms control, we analyze the
potential for AI arms control in the future and offer recommendations for what
policymakers can do today.
Related papers
- Towards evaluations-based safety cases for AI scheming [37.399946932069746]
We propose three arguments that safety cases could use in relation to scheming.
First, developers of frontier AI systems could argue that AI systems are not capable of scheming.
Second, one could argue that AI systems are not capable of causing harm through scheming.
Third, one could argue that control measures around the AI systems would prevent unacceptable outcomes even if the AI systems intentionally attempted to subvert them.
arXiv Detail & Related papers (2024-10-29T17:55:29Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It draws on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training/learning framework, this technology can improve both the AI's decisions and its explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- AI-Powered Autonomous Weapons Risk Geopolitical Instability and Threaten AI Research [6.96356867602455]
We argue that the recent embrace of machine learning in the development of autonomous weapons systems (AWS) creates serious risks to geopolitical stability and the free exchange of ideas in AI research.
ML is already enabling the substitution of AWS for human soldiers in many battlefield roles, reducing the upfront human cost, and thus political cost, of waging offensive war.
Further, the military value of AWS raises the specter of an AI-powered arms race and the misguided imposition of national security restrictions on AI research.
arXiv Detail & Related papers (2024-05-03T05:19:45Z)
- A Technological Perspective on Misuse of Available AI [41.94295877935867]
Potential malicious misuse of civilian artificial intelligence (AI) poses serious threats to security on a national and international level.
We show how already existing and openly available AI technology could be misused.
We develop three exemplary use cases of potentially misused AI that threaten political, digital and physical security.
arXiv Detail & Related papers (2024-03-22T16:30:58Z)
- A Call to Arms: AI Should be Critical for Social Media Analysis of Conflict Zones [5.479613761646247]
This paper presents preliminary, transdisciplinary work using computer vision to identify specific weapon systems and the insignias of the armed groups using them.
There is potential not only to track how weapons are distributed through networks of armed units, but also to identify which types of weapons are being used by different state and non-state military actors in Ukraine.
Such a system could ultimately be used to understand conflicts in real-time, including where humanitarian and medical aid is most needed.
arXiv Detail & Related papers (2023-11-01T19:49:32Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of it can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- On Controllability of AI [1.370633147306388]
We present arguments, as well as supporting evidence, indicating that advanced AI cannot be fully controlled.
The consequences of AI's uncontrollability are discussed with respect to the future of humanity, research on AI, and AI safety and security.
arXiv Detail & Related papers (2020-07-19T02:49:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.