Mind the Gap: Foundation Models and the Covert Proliferation of Military Intelligence, Surveillance, and Targeting
- URL: http://arxiv.org/abs/2410.14831v1
- Date: Fri, 18 Oct 2024 19:04:30 GMT
- Title: Mind the Gap: Foundation Models and the Covert Proliferation of Military Intelligence, Surveillance, and Targeting
- Authors: Heidy Khlaaf, Sarah Myers West, Meredith Whittaker
- Abstract summary: We show that the inability to prevent personally identifiable information from contributing to ISTAR capabilities may lead to the use and proliferation of military AI technologies by adversaries.
We conclude that in order to secure military systems and limit the proliferation of AI armaments, it may be necessary to insulate military AI systems and personal data from commercial foundation models.
- Abstract: Discussions regarding the dual use of foundation models and the risks they pose have overwhelmingly focused on a narrow set of use cases and national security directives, in particular how AI may enable the efficient construction of a class of systems referred to as CBRN: chemical, biological, radiological and nuclear weapons. The focus on these hypothetical and narrow themes has occluded a much-needed conversation regarding present uses of AI for military systems, specifically ISTAR: intelligence, surveillance, target acquisition, and reconnaissance. These are the uses most grounded in actual deployments of AI that pose life-or-death stakes for civilians, where misuses and failures carry geopolitical consequences and risk military escalation. This is particularly underscored by novel proliferation risks specific to the widespread availability of commercial models and the lack of effective approaches that reliably prevent them from contributing to ISTAR capabilities. In this paper, we outline the significant national security concerns emanating from current and envisioned uses of commercial foundation models outside of CBRN contexts, and critique the narrowing of the policy debate that has resulted from a CBRN focus (e.g., compute thresholds, model weight release). We demonstrate that the inability to prevent personally identifiable information from contributing to ISTAR capabilities within commercial foundation models may lead to the use and proliferation of military AI technologies by adversaries. We also show how the use of foundation models within military settings inherently expands the attack vectors of military systems and the defense infrastructures they interface with. We conclude that in order to secure military systems and limit the proliferation of AI armaments, it may be necessary to insulate military AI systems and personal data from commercial foundation models.
Related papers
- Safety at Scale: A Comprehensive Survey of Large Model Safety
We present a comprehensive taxonomy of safety threats to large models, including adversarial attacks, data poisoning, backdoor attacks, jailbreak and prompt injection attacks, energy-latency attacks, data and model extraction attacks, and emerging agent-specific threats.
We identify and discuss the open challenges in large model safety, emphasizing the need for comprehensive safety evaluations, scalable and effective defense mechanisms, and sustainable data practices.
arXiv Detail & Related papers (2025-02-02T05:14:22Z)
- Exploring Potential Prompt Injection Attacks in Federated Military LLMs and Their Mitigation
Federated Learning (FL) is increasingly being adopted in military collaborations to develop Large Language Models (LLMs).
Prompt injection attacks, malicious manipulations of input prompts, pose new threats that may undermine operational security, disrupt decision-making, and erode trust among allies.
We propose a human-AI collaborative framework that introduces both technical and policy countermeasures.
arXiv Detail & Related papers (2025-01-30T15:14:55Z)
- Fundamental Risks in the Current Deployment of General-Purpose AI Models: What Have We (Not) Learnt From Cybersecurity?
Large Language Models (LLMs) have seen rapid deployment in a wide range of use cases.
Systems such as OpenAI's Altera are just a few examples of increased autonomy, data access, and execution capabilities.
These capabilities come with a range of cybersecurity challenges.
arXiv Detail & Related papers (2024-12-19T14:44:41Z)
- Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI
As generative AI, particularly large language models (LLMs), becomes increasingly integrated into production applications, new attack surfaces and vulnerabilities emerge, placing a focus on adversarial threats in natural language and multi-modal systems.
Red-teaming has gained importance in proactively identifying weaknesses in these systems, while blue-teaming works to protect against such adversarial attacks.
This work aims to bridge the gap between academic insights and practical security measures for the protection of generative AI systems.
arXiv Detail & Related papers (2024-09-23T10:18:10Z)
- The GPT Dilemma: Foundation Models and the Shadow of Dual-Use
This paper examines the dual-use challenges of foundation models and the risks they pose for international security.
The paper analyzes four critical factors in the development cycle of foundation models: model inputs, capabilities, system use cases, and system deployment.
Using the Intermediate-Range Nuclear Forces (INF) Treaty as a case study, this paper proposes several strategies to mitigate the associated risks.
arXiv Detail & Related papers (2024-07-29T22:36:27Z)
- A Technological Perspective on Misuse of Available AI
Potential malicious misuse of civilian artificial intelligence (AI) poses serious threats to security on a national and international level.
We show how already existing and openly available AI technology could be misused.
We develop three exemplary use cases of potentially misused AI that threaten political, digital and physical security.
arXiv Detail & Related papers (2024-03-22T16:30:58Z)
- Position Paper: Agent AI Towards a Holistic Intelligence
We emphasize developing Agent AI, an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- Computing Power and the Governance of Artificial Intelligence
Governments and companies have started to leverage compute as a means to govern AI.
Compute-based policies and technologies have the potential to assist in these areas, but there is significant variation in their readiness for implementation.
Naive or poorly scoped approaches to compute governance carry significant risks in areas like privacy, economic impacts, and centralization of power.
arXiv Detail & Related papers (2024-02-13T21:10:21Z)
- Killer Apps: Low-Speed, Large-Scale AI Weapons
Artificial Intelligence (AI) and Machine Learning (ML) advancements present new challenges and opportunities in warfare and security.
This paper explores the concept of AI weapons, their deployment, detection, and potential countermeasures.
arXiv Detail & Related papers (2024-01-14T12:09:40Z)
- Escalation Risks from Language Models in Military and Diplomatic Decision-Making
This work aims to scrutinize the behavior of multiple AI agents in simulated wargames.
We design a novel wargame simulation and scoring framework to assess the risks of the escalation of actions taken by these agents.
We observe that models tend to develop arms-race dynamics, leading to greater conflict, and in rare cases, even to the deployment of nuclear weapons.
arXiv Detail & Related papers (2024-01-07T07:59:10Z)
- The Role of Large Language Models in the Recognition of Territorial Sovereignty: An Analysis of the Construction of Legitimacy
We argue that technology tools like Google Maps and Large Language Models (LLMs) are often perceived as impartial and objective.
We highlight the case of three controversial territories: Crimea, the West Bank, and Transnistria, by comparing the responses of ChatGPT against Wikipedia information and United Nations resolutions.
arXiv Detail & Related papers (2023-03-17T08:46:49Z)