Securing AI Systems: A Guide to Known Attacks and Impacts
- URL: http://arxiv.org/abs/2506.23296v1
- Date: Sun, 29 Jun 2025 15:32:03 GMT
- Title: Securing AI Systems: A Guide to Known Attacks and Impacts
- Authors: Naoto Kiribuchi, Kengo Zenitani, Takayuki Semitsu
- Abstract summary: This paper provides an overview of adversarial attacks unique to predictive and generative AI systems. We identify eleven major attack types and explicitly link attack techniques to their impacts. We aim to equip researchers, developers, security practitioners, and policymakers with foundational knowledge to recognize AI-specific risks and implement effective defenses.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Embedded into information systems, artificial intelligence (AI) faces security threats that exploit AI-specific vulnerabilities. This paper provides an accessible overview of adversarial attacks unique to predictive and generative AI systems. We identify eleven major attack types and explicitly link attack techniques to their impacts -- including information leakage, system compromise, and resource exhaustion -- mapped to the confidentiality, integrity, and availability (CIA) security triad. We aim to equip researchers, developers, security practitioners, and policymakers, even those without specialized AI security expertise, with foundational knowledge to recognize AI-specific risks and implement effective defenses, thereby enhancing the overall security posture of AI systems.
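To make the attack-to-CIA mapping concrete, here is a minimal sketch of how attack types might be tabulated against their impacts. The paper's eleven attack types are not enumerated in this abstract, so the entries below are illustrative assumptions, not the paper's actual taxonomy.

```python
# Illustrative sketch: tabulating AI-specific attack types against their
# impacts and CIA properties. Entries are assumptions for illustration only;
# the paper's full list of eleven attack types is not reproduced here.
from dataclasses import dataclass

@dataclass(frozen=True)
class AttackImpact:
    attack: str
    impact: str            # e.g. information leakage, system compromise
    cia_property: str      # "confidentiality" | "integrity" | "availability"

EXAMPLE_MAPPINGS = [
    AttackImpact("model inversion", "information leakage", "confidentiality"),
    AttackImpact("membership inference", "information leakage", "confidentiality"),
    AttackImpact("data poisoning", "system compromise", "integrity"),
    AttackImpact("adversarial evasion", "system compromise", "integrity"),
    AttackImpact("sponge examples", "resource exhaustion", "availability"),
]

def attacks_on(prop: str) -> list[str]:
    """Return the example attacks that threaten a given CIA property."""
    return [m.attack for m in EXAMPLE_MAPPINGS if m.cia_property == prop]

if __name__ == "__main__":
    print(attacks_on("confidentiality"))  # ['model inversion', 'membership inference']
```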
Related papers
- Offensive Security for AI Systems: Concepts, Practices, and Applications [0.0]
Traditional defensive measures often fall short against the unique and evolving threats facing AI-driven technologies. This paper emphasizes proactive threat simulation and adversarial testing to uncover vulnerabilities throughout the AI lifecycle.
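As a flavor of the adversarial testing this paper advocates, below is a minimal sketch of an FGSM-style evasion probe against a toy logistic-regression model. The model, data, and perturbation budget are all fabricated for illustration and are not taken from the paper.

```python
# A minimal sketch of adversarial testing: a fast-gradient-sign (FGSM) style
# evasion probe against a toy linear classifier. Everything here is an
# illustrative assumption, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)            # toy model weights (assumed)
x = rng.normal(size=8)            # a benign input
y = 1.0                           # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of binary cross-entropy w.r.t. the input for a linear model:
# dL/dx = (sigmoid(w.x) - y) * w
grad_x = (sigmoid(w @ x) - y) * w

eps = 0.25                        # perturbation budget
x_adv = x + eps * np.sign(grad_x) # FGSM step: move along the gradient sign

print("clean score:", sigmoid(w @ x))
print("adversarial score:", sigmoid(w @ x_adv))  # pushed toward misclassification
```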
arXiv Detail & Related papers (2025-05-09T18:58:56Z)
- Security-First AI: Foundations for Robust and Trustworthy Systems [0.0]
This manuscript posits that AI security must be prioritized as a foundational layer. We argue for a security-first approach to enable trustworthy and resilient AI systems.
arXiv Detail & Related papers (2025-04-17T22:53:01Z)
- An Approach to Technical AGI Safety and Security [72.83728459135101]
We develop an approach to address the risk of harms severe enough to significantly damage humanity. We focus on technical approaches to misuse and misalignment. We briefly outline how these ingredients could be combined to produce safety cases for AGI systems.
arXiv Detail & Related papers (2025-04-02T15:59:31Z)
- Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks. In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
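For intuition, here is a toy sketch of one common approximate-unlearning heuristic: gradient ascent on the examples to be forgotten. This is an illustrative assumption of technique, not the paper's own method; the paper's point is precisely that such approaches have key limitations.

```python
# A toy sketch of approximate unlearning via gradient ascent: raise the
# model's loss on a "forget" set so it no longer fits those examples.
# Model, data, and hyperparameters below are fabricated for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_w(w, X, y):
    """Gradient of mean binary cross-entropy w.r.t. the weights."""
    p = sigmoid(X @ w)
    return X.T @ (p - y) / len(y)

rng = np.random.default_rng(1)
w = rng.normal(size=4)                        # pretrained toy weights (assumed)
X_forget = rng.normal(size=(16, 4))           # examples to be "forgotten"
y_forget = (rng.random(16) > 0.5).astype(float)

for _ in range(50):
    w += 0.1 * grad_w(w, X_forget, y_forget)  # ascend the forget-set loss
```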
arXiv Detail & Related papers (2025-01-09T03:59:10Z) - Position: Mind the Gap-the Growing Disconnect Between Established Vulnerability Disclosure and AI Security [56.219994752894294]
We argue that adapting existing processes for AI security reporting is doomed to fail due to fundamental shortcomings rooted in the distinctive characteristics of AI systems. Based on our proposal to address these shortcomings, we discuss an approach to AI security reporting and how the new AI paradigm of AI agents will further reinforce the need for specialized advancements in AI security incident reporting.
arXiv Detail & Related papers (2024-12-19T13:50:26Z) - Security of and by Generative AI platforms [0.0]
This whitepaper highlights the dual importance of securing generative AI (genAI) platforms and leveraging genAI for cybersecurity.
As genAI technologies proliferate, their misuse poses significant risks, including data breaches, model tampering, and malicious content generation.
The whitepaper explores strategies for robust security frameworks around genAI systems, while also showcasing how genAI can empower organizations to anticipate, detect, and mitigate sophisticated cyber threats.
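As one concrete (and deliberately simple) example of the kind of safeguard such frameworks might include, below is a hedged sketch of screening user input for known prompt-injection phrases before it reaches a model. The marker list and approach are illustrative assumptions, not the whitepaper's method.

```python
# A hedged sketch of one simple genAI platform safeguard: keyword screening
# of user input for common prompt-injection phrases. The marker list is an
# illustrative assumption; real deployments need far more robust defenses.
INJECTION_MARKERS = [
    "ignore previous instructions",
    "disregard your system prompt",
    "reveal your hidden prompt",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input contains a known injection phrase."""
    text = user_input.lower()
    return any(marker in text for marker in INJECTION_MARKERS)

if __name__ == "__main__":
    print(looks_like_injection("Please ignore previous instructions and ..."))  # True
```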
arXiv Detail & Related papers (2024-10-15T15:27:05Z) - Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We introduce and define a family of approaches to AI safety, which we refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of the framework's three core components -- a world model, a safety specification, and a verifier -- describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - CTI4AI: Threat Intelligence Generation and Sharing after Red Teaming AI
Models [0.0]
There is a need to identify system vulnerabilities and potential threats, and to characterize properties that will enhance system robustness. A secondary need is to share this AI security threat intelligence among different stakeholders, such as model developers, users, and AI/ML security professionals. In this paper, we create and describe a prototype system, CTI4AI, to address the need to methodically identify and share AI/ML-specific vulnerabilities and threat intelligence.
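Since the abstract does not describe CTI4AI's data format, the following sketch merely illustrates what a shareable, machine-readable AI threat-intelligence record could look like; every field name and value is a hypothetical placeholder, not part of the system.

```python
# A hedged sketch of a shareable AI threat-intelligence record. All fields
# are hypothetical placeholders; CTI4AI's actual schema is not described
# in the abstract.
import json

record = {
    "type": "ai-vulnerability-report",         # hypothetical record type
    "model": "example-classifier-v2",          # affected model (fabricated)
    "attack_technique": "adversarial evasion",
    "discovered_by": "red-team exercise",
    "severity": "high",
    "suggested_mitigations": ["adversarial training", "input preprocessing"],
}

# Serialized for exchange between model developers, users, and security teams.
print(json.dumps(record, indent=2))
```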
arXiv Detail & Related papers (2022-08-16T00:16:58Z) - A System for Automated Open-Source Threat Intelligence Gathering and
Management [53.65687495231605]
SecurityKG is a system for automated open-source cyber threat intelligence (OSCTI) gathering and management.
It uses a combination of AI and NLP techniques to extract high-fidelity knowledge about threat behaviors.
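The abstract does not detail SecurityKG's extraction pipeline, but the following toy sketch conveys the general idea of pulling structured indicators out of unstructured OSCTI text. The regexes and sample report are illustrative assumptions, not the system's actual techniques.

```python
# A toy sketch of indicator extraction from unstructured OSCTI text.
# Regex-based extraction is purely illustrative; SecurityKG's actual
# AI/NLP techniques are not described in the abstract.
import re

report = (
    "The malware beacons to 203.0.113.7 and drops payload.exe, whose SHA-256 is "
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855."
)

indicators = {
    "ipv4": re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", report),
    "sha256": re.findall(r"\b[a-f0-9]{64}\b", report),
    "filename": re.findall(r"\b\w+\.(?:exe|dll|bat)\b", report),
}
print(indicators)  # structured knowledge extracted from free text
```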
arXiv Detail & Related papers (2021-01-19T18:31:35Z) - Vulnerabilities of Connectionist AI Applications: Evaluation and Defence [0.0]
This article deals with the IT security of connectionist artificial intelligence (AI) applications, focusing on threats to integrity.
A comprehensive list of threats and possible mitigations is presented by reviewing the state-of-the-art literature.
The discussion of mitigations is likewise not restricted to the level of the AI system itself but rather advocates viewing AI systems in the context of their supply chains.
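In the supply-chain spirit the article advocates, here is a minimal sketch of one such mitigation: verifying a model artifact against a known-good digest before loading it. The file path and expected digest below are placeholders, not real values.

```python
# A minimal sketch of a supply-chain integrity check: refuse to load a model
# artifact whose hash does not match a trusted digest. The expected digest
# here is a placeholder; in practice it comes from a trusted source.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

EXPECTED_DIGEST = "0" * 64  # placeholder: the publisher's known-good digest

def load_if_trusted(path: str) -> bytes:
    """Return the artifact bytes only if the integrity check passes."""
    if sha256_of(path) != EXPECTED_DIGEST:
        raise ValueError(f"integrity check failed for {path}")
    with open(path, "rb") as f:
        return f.read()
```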
arXiv Detail & Related papers (2020-03-18T12:33:59Z)