Military AI Cyber Agents (MAICAs) Constitute a Global Threat to Critical Infrastructure
- URL: http://arxiv.org/abs/2506.12094v1
- Date: Thu, 12 Jun 2025 09:51:06 GMT
- Title: Military AI Cyber Agents (MAICAs) Constitute a Global Threat to Critical Infrastructure
- Authors: Timothy Dubber, Seth Lazar
- Abstract summary: It explains why geopolitics and the nature of cyberspace make MAICAs a catastrophic risk. It proposes political, defensive-AI and analogue-resilience measures to blunt the threat.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper argues that autonomous AI cyber-weapons - Military-AI Cyber Agents (MAICAs) - create a credible pathway to catastrophic risk. It sets out the technical feasibility of MAICAs, explains why geopolitics and the nature of cyberspace make MAICAs a catastrophic risk, and proposes political, defensive-AI and analogue-resilience measures to blunt the threat.
Related papers
- Autonomous AI-based Cybersecurity Framework for Critical Infrastructure: Real-Time Threat Mitigation [1.4999444543328293]
We propose a hybrid AI-driven cybersecurity framework to enhance real-time vulnerability detection, threat modelling, and automated remediation. Our findings provide actionable insights to strengthen the security and resilience of critical infrastructure systems against emerging cyber threats.
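The detect-and-remediate loop this abstract describes can be illustrated with a minimal sketch. All names here (`detect_anomaly`, `remediate`, `monitor`, `THRESHOLD`) are illustrative assumptions, not the paper's actual framework: a real system would use learned models and concrete response playbooks, whereas this toy flags traffic samples that deviate sharply from a baseline and records a placeholder remediation action.

```python
# Minimal sketch of a real-time threat-mitigation loop: score incoming
# traffic metrics against a learned baseline and trigger an automated
# remediation action when the anomaly score crosses a threshold.
# All names and thresholds are illustrative, not from the paper.
import statistics

THRESHOLD = 3.0  # flag samples more than 3 standard deviations from baseline


def detect_anomaly(baseline: list[float], sample: float) -> bool:
    """Return True if `sample` deviates strongly from the baseline traffic."""
    mean = statistics.fmean(baseline)
    stdev = statistics.pstdev(baseline) or 1e-9  # avoid division by zero
    return abs(sample - mean) / stdev > THRESHOLD


def remediate(event_id: int) -> str:
    """Placeholder remediation: in practice this might isolate a host or
    revoke credentials; here it just records the action taken."""
    return f"isolated source of event {event_id}"


def monitor(baseline: list[float], stream: list[float]) -> list[str]:
    """Scan a stream of metrics and remediate each anomalous sample."""
    actions = []
    for i, sample in enumerate(stream):
        if detect_anomaly(baseline, sample):
            actions.append(remediate(i))
    return actions


# Example: steady baseline traffic, then one sudden spike in the stream.
baseline = [100.0, 102.0, 98.0, 101.0, 99.0]
print(monitor(baseline, [100.5, 500.0, 99.0]))  # only the spike at index 1 is flagged
```

The design choice worth noting is that detection and remediation are decoupled behind small interfaces, so either half can be swapped for a learned detector or a real response system without touching the loop.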
arXiv Detail & Related papers (2025-07-10T04:17:29Z)
- AI threats to national security can be countered through an incident regime [55.2480439325792]
We propose a legally mandated post-deployment AI incident regime that aims to counter potential national security threats from AI systems. Our proposed AI incident regime is split into three phases. The first phase revolves around a novel operationalization of what counts as an 'AI incident'. The second and third phases spell out that AI providers should notify a government agency about incidents, and that the government agency should be involved in amending AI providers' security and safety procedures.
arXiv Detail & Related papers (2025-03-25T17:51:50Z)
- Superintelligence Strategy: Expert Version [64.7113737051525]
Destabilizing AI developments could raise the odds of great-power conflict. Superintelligence -- AI vastly better than humans at nearly all cognitive tasks -- is now anticipated by AI researchers. We introduce the concept of Mutual Assured AI Malfunction.
arXiv Detail & Related papers (2025-03-07T17:53:24Z)
- Agentic AI and the Cyber Arms Race [3.0198881680567635]
Agentic AI is shifting the cybersecurity landscape as attackers and defenders leverage AI agents to augment humans and automate common tasks. In this article, we examine the implications for cyber warfare and global politics as Agentic AI becomes more powerful and enables the broad proliferation of capabilities available today only to the most well-resourced actors.
arXiv Detail & Related papers (2025-02-10T16:06:29Z)
- Cyber Shadows: Neutralizing Security Threats with AI and Targeted Policy Measures [0.0]
Cyber threats pose risks at individual, organizational, and societal levels. This paper proposes a comprehensive cybersecurity strategy that integrates AI-driven solutions with targeted policy interventions.
arXiv Detail & Related papers (2025-01-03T09:26:50Z)
- Position: Mind the Gap-the Growing Disconnect Between Established Vulnerability Disclosure and AI Security [56.219994752894294]
We argue that adapting existing processes for AI security reporting is doomed to fail due to fundamental shortcomings in addressing the distinctive characteristics of AI systems. Based on our proposal to address these shortcomings, we discuss an approach to AI security reporting and how the new AI paradigm, AI agents, will further reinforce the need for specialized advancements in AI security incident reporting.
arXiv Detail & Related papers (2024-12-19T13:50:26Z)
- Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks [0.0]
This paper delves into the escalating threat posed by the misuse of AI, specifically through the use of Large Language Models (LLMs).
Through a series of controlled experiments, the paper demonstrates how these models can be manipulated to bypass ethical and privacy safeguards to effectively generate cyber attacks.
We also introduce Occupy AI, a customized, fine-tuned LLM specifically engineered to automate and execute cyberattacks.
arXiv Detail & Related papers (2024-08-23T02:56:13Z)
- AI-Powered Autonomous Weapons Risk Geopolitical Instability and Threaten AI Research [6.96356867602455]
We argue that the recent embrace of machine learning in the development of autonomous weapons systems (AWS) creates serious risks to geopolitical stability and the free exchange of ideas in AI research.
ML is already enabling the substitution of AWS for human soldiers in many battlefield roles, reducing the upfront human cost, and thus political cost, of waging offensive war.
Further, the military value of AWS raises the specter of an AI-powered arms race and the misguided imposition of national security restrictions on AI research.
arXiv Detail & Related papers (2024-05-03T05:19:45Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data; utilizing it effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
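The class of evasion attack surveyed in the entry above can be illustrated with a small sketch. This is not code from the paper: it is a fast-gradient-sign (FGSM-style) perturbation against a toy logistic-regression "malware detector" whose weights and feature vector are invented for illustration.

```python
# Minimal sketch of an adversarial evasion attack: perturb each input
# feature by a small step eps against the gradient of the detector's
# malicious-class score.  For a linear model that gradient is simply
# the weight vector w, so the attack steps along -sign(w).
# The detector, weights, and sample here are all made up.
import numpy as np


def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))


def fgsm_perturb(x: np.ndarray, w: np.ndarray, eps: float) -> np.ndarray:
    """Shift each feature by eps in the direction that lowers the score."""
    return x - eps * np.sign(w)


# Toy detector: score > 0.5 means "classified as malicious".
w = np.array([2.0, -1.0, 0.5])
b = -0.5
x = np.array([1.0, 0.2, 1.5])  # original sample, correctly flagged

score_before = sigmoid(w @ x + b)
x_adv = fgsm_perturb(x, w, eps=0.6)
score_after = sigmoid(w @ x_adv + b)

# The perturbed sample now slips under the detection threshold.
print(score_before > 0.5, score_after < 0.5)
```

In the cyber-security setting the survey covers, the extra challenge is that feature perturbations must map back to a valid artifact (e.g. a working binary), a constraint this toy example deliberately ignores.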
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.