Surveying the Operational Cybersecurity and Supply Chain Threat Landscape when Developing and Deploying AI Systems
- URL: http://arxiv.org/abs/2508.20307v1
- Date: Wed, 27 Aug 2025 22:46:23 GMT
- Title: Surveying the Operational Cybersecurity and Supply Chain Threat Landscape when Developing and Deploying AI Systems
- Authors: Michael R Smith, Joe Ingram
- Abstract summary: This paper serves to raise awareness of the novel cyber threats that are introduced when incorporating AI into a software system. We explore the operational cybersecurity and supply chain risks across the AI lifecycle. By understanding these risks, organizations can better protect AI systems and ensure their reliability and resilience.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rise of AI has transformed the software and hardware landscape, enabling powerful capabilities through specialized infrastructures, large-scale data storage, and advanced hardware. However, these innovations introduce unique attack surfaces and objectives which traditional cybersecurity assessments often overlook. Cyber attackers are shifting their objectives from conventional goals like privilege escalation and network pivoting to manipulating AI outputs to achieve desired system effects, such as slowing system performance, flooding outputs with false positives, or degrading model accuracy. This paper serves to raise awareness of the novel cyber threats that are introduced when incorporating AI into a software system. We explore the operational cybersecurity and supply chain risks across the AI lifecycle, emphasizing the need for tailored security frameworks to address evolving threats in the AI-driven landscape. We highlight previous exploitations and provide insights from working in this area. By understanding these risks, organizations can better protect AI systems and ensure their reliability and resilience.
Related papers
- Neuro-Symbolic AI for Cybersecurity: State of the Art, Challenges, and Opportunities [13.175694396580184]
Neuro-Symbolic (NeSy) AI has emerged with the potential to revolutionize cybersecurity AI. We systematically characterize this field by analyzing 127 publications spanning 2019-July 2025. We show that causal reasoning integration is the most transformative advancement, enabling proactive defense beyond correlation-based approaches.
arXiv Detail & Related papers (2025-09-08T17:33:59Z)
- ANNIE: Be Careful of Your Robots [48.89876809734855]
We present the first systematic study of adversarial safety attacks on embodied AI systems. We show attack success rates exceeding 50% across all safety categories. Results expose a previously underexplored but highly consequential attack surface in embodied AI systems.
arXiv Detail & Related papers (2025-09-03T15:00:28Z) - Autonomous AI-based Cybersecurity Framework for Critical Infrastructure: Real-Time Threat Mitigation [1.4999444543328293]
We propose a hybrid AI-driven cybersecurity framework to enhance real-time vulnerability detection, threat modelling, and automated remediation. Our findings provide actionable insights to strengthen the security and resilience of critical infrastructure systems against emerging cyber threats.
arXiv Detail & Related papers (2025-07-10T04:17:29Z) - AI threats to national security can be countered through an incident regime [55.2480439325792]
We propose a legally mandated post-deployment AI incident regime that aims to counter potential national security threats from AI systems. Our proposed AI incident regime is split into three phases. The first phase revolves around a novel operationalization of what counts as an 'AI incident'. The second and third phases spell out that AI providers should notify a government agency about incidents, and that the government agency should be involved in amending AI providers' security and safety procedures.
arXiv Detail & Related papers (2025-03-25T17:51:50Z) - Transforming Cyber Defense: Harnessing Agentic and Frontier AI for Proactive, Ethical Threat Intelligence [0.0]
This manuscript explores how the convergence of agentic AI and Frontier AI is transforming cybersecurity. We examine the roles of real-time monitoring, automated incident response, and perpetual learning in forging a resilient, dynamic defense ecosystem. Our vision is to harmonize technological innovation with unwavering ethical oversight, ensuring that future AI-driven security solutions uphold core human values of fairness, transparency, and accountability while effectively countering emerging cyber threats.
arXiv Detail & Related papers (2025-02-28T20:23:35Z) - Cyber Shadows: Neutralizing Security Threats with AI and Targeted Policy Measures [0.0]
Cyber threats pose risks at individual, organizational, and societal levels. This paper proposes a comprehensive cybersecurity strategy that integrates AI-driven solutions with targeted policy interventions.
arXiv Detail & Related papers (2025-01-03T09:26:50Z) - Position: Mind the Gap-the Growing Disconnect Between Established Vulnerability Disclosure and AI Security [56.219994752894294]
We argue that adapting existing processes for AI security reporting is doomed to fail due to fundamental shortcomings in addressing the distinctive characteristics of AI systems. Based on our proposal to address these shortcomings, we discuss an approach to AI security reporting and how the new AI paradigm, AI agents, will further reinforce the need for specialized AI security incident reporting advancements.
arXiv Detail & Related papers (2024-12-19T13:50:26Z) - Considerations Influencing Offense-Defense Dynamics From Artificial Intelligence [0.0]
AI can enhance defensive capabilities but also presents avenues for malicious exploitation and large-scale societal harm. This paper proposes a taxonomy to map and examine the key factors that influence whether AI systems predominantly pose threats or offer protective benefits to society.
arXiv Detail & Related papers (2024-12-05T10:05:53Z) - Exploring the Adversarial Vulnerabilities of Vision-Language-Action Models in Robotics [68.36528819227641]
This paper systematically evaluates the robustness of Vision-Language-Action (VLA) models. We introduce two untargeted attack objectives that leverage spatial foundations to destabilize robotic actions, and a targeted attack objective that manipulates the robotic trajectory. We design an adversarial patch generation approach that places a small, colorful patch within the camera's view, effectively executing the attack in both digital and physical environments.
arXiv Detail & Related papers (2024-11-18T01:52:20Z) - Countering Autonomous Cyber Threats [40.00865970939829]
Foundation Models present dual-use concerns broadly and within the cyber domain specifically.
Recent research has shown the potential for these advanced models to inform or independently execute offensive cyberspace operations.
This work evaluates several state-of-the-art Foundation Models (FMs) on their ability to compromise machines in an isolated network and investigates defensive mechanisms to defeat such AI-powered attacks.
arXiv Detail & Related papers (2024-10-23T22:46:44Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.