AI Emergency Preparedness: Examining the federal government's ability to detect and respond to AI-related national security threats
- URL: http://arxiv.org/abs/2407.17347v2
- Date: Sat, 27 Jul 2024 15:49:58 GMT
- Title: AI Emergency Preparedness: Examining the federal government's ability to detect and respond to AI-related national security threats
- Authors: Akash Wasil, Everett Smith, Corin Katzke, Justin Bullock
- Abstract summary: Emergency preparedness can improve the government's ability to monitor and predict AI progress.
We focus on three plausible risk scenarios: (1) loss of control (threats from a powerful AI system that becomes capable of escaping human control), (2) cybersecurity threats from malicious actors, and (3) biological weapons proliferation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We examine how the federal government can enhance its AI emergency preparedness: the ability to detect and prepare for time-sensitive national security threats relating to AI. Emergency preparedness can improve the government's ability to monitor and predict AI progress, identify national security threats, and prepare effective response plans for plausible threats and worst-case scenarios. Our approach draws from fields in which experts prepare for threats despite uncertainty about their exact nature or timing (e.g., counterterrorism, cybersecurity, pandemic preparedness). We focus on three plausible risk scenarios: (1) loss of control (threats from a powerful AI system that becomes capable of escaping human control), (2) cybersecurity threats from malicious actors (threats from a foreign actor that steals the model weights of a powerful AI system), and (3) biological weapons proliferation (threats from users identifying a way to circumvent the safeguards of a publicly released model in order to develop biological weapons). We evaluate the federal government's ability to detect, prevent, and respond to these threats. Then, we highlight potential gaps and offer recommendations to improve emergency preparedness. We conclude by describing how future work on AI emergency preparedness can be applied to improve policymakers' understanding of risk scenarios, identify gaps in detection capabilities, and form preparedness plans to improve the effectiveness of federal responses to AI-related national security threats.
Related papers
- The Shadow of Fraud: The Emerging Danger of AI-powered Social Engineering and its Possible Cure
Social engineering (SE) attacks remain a significant threat to both individuals and organizations.
The advancement of Artificial Intelligence (AI) has potentially intensified these threats by enabling more personalized and convincing attacks.
This survey paper categorizes SE attack mechanisms, analyzes their evolution, and explores methods for measuring these threats.
arXiv Detail & Related papers (2024-07-22T17:37:31Z)
- Coordinated Disclosure of Dual-Use Capabilities: An Early Warning System for Advanced AI
We propose Coordinated Disclosure of Dual-Use Capabilities (CDDC) as a process to guide early information-sharing between advanced AI developers, U.S. government agencies, and other private sector actors.
This aims to provide the U.S. government, dual-use foundation model developers, and other actors with an overview of AI capabilities that could significantly impact public safety and security, as well as maximal time to respond; an illustrative disclosure record is sketched below.
arXiv Detail & Related papers (2024-07-01T16:09:54Z)
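To make the information-sharing process more concrete, here is a purely illustrative sketch of what a CDDC-style disclosure record might contain. The class name, field names, and example values are assumptions for exposition, not a schema from the paper.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CapabilityDisclosure:
    """Hypothetical CDDC-style early-warning record; every field name
    here is an assumption for illustration, not the paper's schema."""
    model_name: str
    capability: str                    # e.g., "automated vulnerability discovery"
    severity: str                      # e.g., "low" / "medium" / "high"
    evidence: str                      # evaluation results supporting the claim
    recipients: list[str] = field(default_factory=list)  # parties notified
    reported_on: date = field(default_factory=date.today)

# Usage: a developer files a record and shares it with relevant parties.
report = CapabilityDisclosure(
    model_name="frontier-model-x",     # hypothetical model identifier
    capability="automated spearphishing content generation",
    severity="high",
    evidence="internal red-team evaluation",
    recipients=["relevant U.S. government agency", "peer developer"],
)
print(report.capability, "->", report.recipients)
```

A shared, structured record along these lines is one way "maximal time to respond" could be operationalized: all recipients receive the same facts at the same time.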
- Near to Mid-term Risks and Opportunities of Open-Source Generative AI
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The prospect of these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z)
- Taking control: Policies to address extinction risks from AI
We argue that voluntary commitments from AI companies would be an inappropriate and insufficient response.
We describe three policy proposals that would meaningfully address the threats from advanced AI.
arXiv Detail & Related papers (2023-10-31T15:53:14Z)
- Managing extreme AI risks amid rapid progress
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- AI Potentiality and Awareness: A Position Paper from the Perspective of Human-AI Teaming in Cybersecurity
We argue that human-AI teaming is worthwhile in cybersecurity.
We emphasize the importance of a balanced approach that incorporates AI's computational power with human expertise.
arXiv Detail & Related papers (2023-09-28T01:20:44Z)
- An Overview of Catastrophic AI Risks
This paper provides an overview of the main sources of catastrophic AI risks, which it organizes into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause harm;
AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs;
organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty of controlling agents far more intelligent than humans.
arXiv Detail & Related papers (2023-06-21T03:35:06Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses; a toy detectability check is sketched below.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
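As a toy illustration of the detectability framing (not the paper's method), the sketch below scores a test rollout's observations against a clean reference distribution using a histogram-based KL-divergence estimate; a crude attack shifts this statistic sharply, while an illusory-style attack is designed to keep it small. The function names, bin count, and threshold are assumptions.

```python
import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray) -> float:
    """KL(p || q) for two histograms (smoothed and normalized)."""
    p = (p + 1.0) / (p + 1.0).sum()   # Laplace smoothing avoids log(0)
    q = (q + 1.0) / (q + 1.0).sum()
    return float(np.sum(p * np.log(p / q)))

def looks_attacked(clean_obs: np.ndarray, test_obs: np.ndarray,
                   bins: int = 32, threshold: float = 0.1) -> bool:
    """Flag test_obs if its empirical distribution diverges from the
    clean reference by more than `threshold` (an assumed cutoff)."""
    lo = min(clean_obs.min(), test_obs.min())
    hi = max(clean_obs.max(), test_obs.max())
    ref, _ = np.histogram(clean_obs, bins=bins, range=(lo, hi))
    test, _ = np.histogram(test_obs, bins=bins, range=(lo, hi))
    return kl_divergence(test.astype(float), ref.astype(float)) > threshold

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=5000)   # reference observations
crude = rng.normal(1.0, 1.0, size=5000)   # a crude, easy-to-detect shift
print(looks_attacked(clean, crude))       # True: the shift stands out
```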
- Towards Automated Classification of Attackers' TTPs by combining NLP with ML Techniques
We evaluate and compare different Natural Language Processing (NLP) and machine learning techniques used in research for security information extraction.
Based on our investigations, we propose a data processing pipeline that automatically classifies unstructured text according to attackers' tactics and techniques; a minimal sketch of such a pipeline follows this entry.
arXiv Detail & Related papers (2022-07-18T09:59:21Z)
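As a minimal sketch of such a pipeline (the toy data and labels are assumptions; the paper's actual features, models, and taxonomy may differ), TF-IDF features feeding a linear classifier can map unstructured report text to ATT&CK-style tactic labels:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: threat-report snippets labeled with an ATT&CK-style tactic.
reports = [
    "attacker sent a spearphishing email with a malicious attachment",
    "malware established persistence via a registry run key",
    "credentials were dumped from lsass process memory",
    "data was exfiltrated over an encrypted channel",
]
tactics = ["initial-access", "persistence", "credential-access", "exfiltration"]

# TF-IDF text features feeding a linear classifier.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(reports, tactics)

# Classify a new, unstructured sentence.
print(clf.predict(["a spearphishing email delivered a weaponized document"]))
# Likely ['initial-access'] on this toy data, via shared terms.
```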
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data, and utilizing it effectively is beyond human capability.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
- Certifiers Make Neural Networks Vulnerable to Availability Attacks
We show for the first time that fallback strategies can be deliberately triggered by an adversary.
In addition to naturally occurring abstains for some inputs and perturbations, the adversary can use training-time attacks to deliberately trigger the fallback.
We design two novel availability attacks, which show the practical relevance of these threats; the abstain-and-fall-back mechanism such attacks exploit is sketched below.
arXiv Detail & Related papers (2021-08-25T15:49:10Z)
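To illustrate the fallback mechanism at issue, here is a minimal sketch in which a model abstains whenever its logit margin is too small (a stand-in for a real certification procedure, not the paper's construction); an availability attack succeeds by pushing many inputs into that abstain region and flooding the fallback path:

```python
import numpy as np

def certified_predict(logits: np.ndarray, margin: float = 0.5):
    """Return a class only when the top-1/top-2 logit gap exceeds
    `margin` (a stand-in for a real certification check); otherwise
    abstain and hand the input to a fallback, e.g. human review."""
    top2 = np.sort(logits)[-2:]
    if top2[1] - top2[0] >= margin:
        return int(np.argmax(logits))
    return None  # abstain: the fallback path an adversary can flood

# A confident input is answered; a low-margin input triggers the fallback.
print(certified_predict(np.array([2.1, 0.3, 0.1])))   # 0
print(certified_predict(np.array([1.0, 0.9, 0.8])))   # None (abstain)
```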
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.