Military AI Needs Technically-Informed Regulation to Safeguard AI Research and its Applications
- URL: http://arxiv.org/abs/2505.18371v1
- Date: Fri, 23 May 2025 20:58:38 GMT
- Title: Military AI Needs Technically-Informed Regulation to Safeguard AI Research and its Applications
- Authors: Riley Simmons-Edler, Jean Dong, Paul Lushenko, Kanaka Rajan, Ryan P. Badman
- Abstract summary: We focus on a subset of lethal autonomous weapon systems (LAWS) that use AI for targeting or battlefield decisions. These systems introduce novel risks that threaten both military effectiveness and the openness of AI research. We propose a behavior-based definition of AI-LAWS as a foundation for technically grounded regulation.
- Score: 1.4347154947542533
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Military weapon systems and command-and-control infrastructure augmented by artificial intelligence (AI) have seen rapid development and deployment in recent years. However, the sociotechnical impacts of AI on combat systems, military decision-making, and the norms of warfare have been understudied. We focus on a specific subset of lethal autonomous weapon systems (LAWS) that use AI for targeting or battlefield decisions. We refer to this subset as AI-powered lethal autonomous weapon systems (AI-LAWS) and argue that they introduce novel risks -- including unanticipated escalation, poor reliability in unfamiliar environments, and erosion of human oversight -- all of which threaten both military effectiveness and the openness of AI research. These risks cannot be addressed by high-level policy alone; effective regulation must be grounded in the technical behavior of AI models. We argue that AI researchers must be involved throughout the regulatory lifecycle. Thus, we propose a clear, behavior-based definition of AI-LAWS -- systems that introduce unique risks through their use of modern AI -- as a foundation for technically grounded regulation, given that existing frameworks do not distinguish them from conventional LAWS. Using this definition, we propose several technically-informed policy directions and invite greater participation from the AI research community in military AI policy discussions.
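The abstract argues for a behavior-based definition of AI-LAWS but, as an abstract, does not spell one out. As a reading aid only, here is a minimal sketch of how such a definition could be phrased as checks over observable system behavior rather than stated design intent; every criterion name below is an assumption of this digest, not terminology from the paper.

```python
# Hypothetical sketch only: one way a behavior-based AI-LAWS test could be
# phrased as checks over observable behavior rather than design intent.
# All criterion names are this digest's assumptions, not the paper's terms.
from dataclasses import dataclass

@dataclass
class ObservedBehavior:
    """Behaviors established by testing a fielded system."""
    learned_model_selects_targets: bool    # targeting is driven by a trained ML model
    acts_without_human_confirmation: bool  # can engage with no human sign-off
    degrades_out_of_distribution: bool     # reliability drops in unfamiliar environments

def is_ai_laws(b: ObservedBehavior) -> bool:
    """Flag a system as AI-LAWS when a learned model drives targeting and
    the system can act on that output without human confirmation."""
    return b.learned_model_selects_targets and b.acts_without_human_confirmation

system = ObservedBehavior(True, True, True)
print(is_ai_laws(system))                   # True: falls under the definition
print(system.degrades_out_of_distribution)  # True: one of the novel risks named above
```

The point of a behavioral test like this is that it applies to any system exhibiting the flagged behaviors, regardless of how the vendor describes the design, which is what distinguishes AI-LAWS regulation from frameworks written for conventional LAWS.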
Related papers
- Rethinking Technological Readiness in the Era of AI Uncertainty [0.0]
We argue that current technology readiness assessments fail to capture critical AI-specific factors. We propose a new AI Readiness Framework to evaluate the maturity and trustworthiness of AI components in military systems.
arXiv Detail & Related papers (2025-04-15T14:09:50Z)
- AI threats to national security can be countered through an incident regime [55.2480439325792]
We propose a legally mandated post-deployment AI incident regime that aims to counter potential national security threats from AI systems. The proposed regime is split into three phases: the first revolves around a novel operationalization of what counts as an 'AI incident'; the second and third spell out that AI providers should notify a government agency about incidents, and that the agency should be involved in amending providers' security and safety procedures. (A schematic sketch of these phases follows this entry.)
arXiv Detail & Related papers (2025-03-25T17:51:50Z)
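The three-phase regime summarized above is a protocol in outline. The following is a minimal sketch, assuming hypothetical phase and field names (none come from the paper), of how an incident record could move through those phases.

```python
# Hypothetical sketch of the three-phase incident regime described above.
# Phase names and record fields are illustrative assumptions, not the paper's schema.
from dataclasses import dataclass, field
from enum import Enum, auto

class Phase(Enum):
    CLASSIFIED_AS_INCIDENT = auto()  # Phase 1: event meets the 'AI incident' definition
    AGENCY_NOTIFIED = auto()         # Phase 2: provider notifies the government agency
    PROCEDURES_AMENDED = auto()      # Phase 3: agency-guided changes to safety procedures

@dataclass
class AIIncident:
    description: str
    phase: Phase = Phase.CLASSIFIED_AS_INCIDENT
    log: list = field(default_factory=list)

    def notify_agency(self) -> None:
        self.log.append("Provider filed report with agency.")
        self.phase = Phase.AGENCY_NOTIFIED

    def amend_procedures(self, change: str) -> None:
        self.log.append(f"Procedure amended: {change}")
        self.phase = Phase.PROCEDURES_AMENDED

incident = AIIncident("Model produced unsafe output in deployment.")
incident.notify_agency()
incident.amend_procedures("Added pre-release red-teaming step.")
print(incident.phase, incident.log)
```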
- Alignment, Agency and Autonomy in Frontier AI: A Systems Engineering Perspective [0.0]
Concepts of alignment, agency, and autonomy have become central to AI safety, governance, and control. This paper traces the historical, philosophical, and technical evolution of these concepts, emphasizing how their definitions influence AI development, deployment, and oversight.
arXiv Detail & Related papers (2025-02-20T21:37:20Z)
- Position: Mind the Gap-the Growing Disconnect Between Established Vulnerability Disclosure and AI Security [56.219994752894294]
We argue that adapting existing processes for AI security reporting is doomed to fail because of fundamental shortcomings in handling the distinctive characteristics of AI systems. Based on our proposal to address these shortcomings, we discuss an approach to AI security reporting, and how the new AI paradigm of AI agents will further reinforce the need for specialized AI security incident reporting.
arXiv Detail & Related papers (2024-12-19T13:50:26Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act) using insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence. As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
Within a training/learning framework, this technology can improve both the AI's decisions and its explanations through human interaction. (A minimal robustness-probe sketch follows this entry.)
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
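The paper above extends adversarial explanations (AE) to reinforcement learning frameworks; its actual method is not reproduced here. As a generic illustration of what probing a learned control policy's robustness to input tampering can look like, here is a minimal sketch with a stand-in policy; `policy` and all parameters are assumptions of this digest.

```python
# Generic sketch of probing a learned control policy's robustness to small
# input perturbations; illustrative only, not the paper's AE methodology.
import numpy as np

def policy(state: np.ndarray) -> int:
    """Stand-in policy: picks the index of the largest state feature."""
    return int(np.argmax(state))

def is_locally_robust(state: np.ndarray, eps: float = 0.05, trials: int = 100) -> bool:
    """Return True if no sampled perturbation within +/-eps changes the action."""
    rng = np.random.default_rng(0)
    base_action = policy(state)
    for _ in range(trials):
        perturbed = state + rng.uniform(-eps, eps, size=state.shape)
        if policy(perturbed) != base_action:
            return False
    return True

state = np.array([0.2, 0.9, 0.1])
print(is_locally_robust(state))  # True when the chosen action is stable under tampering
```

Random sampling like this only gives evidence of robustness, not a guarantee; adversarial methods such as AE search for worst-case perturbations directly rather than sampling uniformly.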
- AI-Powered Autonomous Weapons Risk Geopolitical Instability and Threaten AI Research [6.96356867602455]
We argue that the recent embrace of machine learning in the development of autonomous weapons systems (AWS) creates serious risks to geopolitical stability and the free exchange of ideas in AI research.
ML is already enabling the substitution of AWS for human soldiers in many battlefield roles, reducing the upfront human cost, and thus political cost, of waging offensive war.
Further, the military value of AWS raises the specter of an AI-powered arms race and the misguided imposition of national security restrictions on AI research.
arXiv Detail & Related papers (2024-05-03T05:19:45Z)
- A Technological Perspective on Misuse of Available AI [41.94295877935867]
Potential malicious misuse of civilian artificial intelligence (AI) poses serious threats to security on a national and international level.
We show how existing, openly available AI technology could be misused.
We develop three exemplary use cases of potentially misused AI that threaten political, digital and physical security.
arXiv Detail & Related papers (2024-03-22T16:30:58Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- AI Deception: A Survey of Examples, Risks, and Potential Solutions [20.84424818447696]
This paper argues that a range of current AI systems have learned how to deceive humans.
We define deception as the systematic inducement of false beliefs in the pursuit of some outcome other than the truth.
arXiv Detail & Related papers (2023-08-28T17:59:35Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles that AI regulation should play to make the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.