Securing the AI Supply Chain: What Can We Learn From Developer-Reported Security Issues and Solutions of AI Projects?
- URL: http://arxiv.org/abs/2512.23385v1
- Date: Mon, 29 Dec 2025 11:22:11 GMT
- Title: Securing the AI Supply Chain: What Can We Learn From Developer-Reported Security Issues and Solutions of AI Projects?
- Authors: The Anh Nguyen, Triet Huynh Minh Le, M. Ali Babar
- Abstract summary: We conduct an empirical investigation of developer-reported issues and solutions based on discussions from Hugging Face and GitHub. This pipeline produces a dataset of 312,868 security discussions. We conduct a thematic analysis of 753 posts sampled from our dataset and uncover a fine-grained taxonomy of 32 security issues and 24 solutions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid growth of Artificial Intelligence (AI) models and applications has led to an increasingly complex security landscape. Developers of AI projects must contend not only with traditional software supply chain issues but also with novel, AI-specific security threats. However, little is known about what security issues are commonly encountered and how they are resolved in practice. This gap hinders the development of effective security measures for each component of the AI supply chain. We bridge this gap by conducting an empirical investigation of developer-reported issues and solutions, based on discussions from Hugging Face and GitHub. To identify security-related discussions, we develop a pipeline that combines keyword matching with an optimal fine-tuned distilBERT classifier, which achieved the best performance in our extensive comparison of various deep learning and large language models. This pipeline produces a dataset of 312,868 security discussions, providing insights into the security reporting practices of AI applications and projects. We conduct a thematic analysis of 753 posts sampled from our dataset and uncover a fine-grained taxonomy of 32 security issues and 24 solutions across four themes: (1) System and Software, (2) External Tools and Ecosystem, (3) Model, and (4) Data. We reveal that many security issues arise from the complex dependencies and black-box nature of AI components. Notably, challenges related to Models and Data often lack concrete solutions. Our insights can offer evidence-based guidance for developers and researchers to address real-world security threats across the AI supply chain.
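The two-stage filtering pipeline described in the abstract (keyword matching followed by a fine-tuned DistilBERT classifier) can be sketched as below. The keyword list is an illustrative assumption, not the paper's actual list, and the classifier stage is a stand-in placeholder, since the paper's fine-tuned model is not reproduced here.

```python
import re

# Illustrative security keywords (assumed; the paper's actual keyword set is not public here).
SECURITY_KEYWORDS = ["vulnerability", "exploit", "cve", "injection", "malicious", "attack"]
KEYWORD_PATTERN = re.compile(r"\b(" + "|".join(SECURITY_KEYWORDS) + r")\b", re.IGNORECASE)

def keyword_match(post: str) -> bool:
    """Stage 1: cheap keyword pre-filter over a discussion post."""
    return bool(KEYWORD_PATTERN.search(post))

def classify(post: str) -> bool:
    """Stage 2 placeholder for the fine-tuned DistilBERT classifier.
    In the paper this would be a transformer inference call; here it
    simply re-applies the keyword check so the pipeline shape is runnable."""
    return keyword_match(post)

def security_pipeline(posts: list[str]) -> list[str]:
    """Keyword pre-filter, then classifier, mirroring the two-stage design."""
    candidates = [p for p in posts if keyword_match(p)]
    return [p for p in candidates if classify(p)]

posts = [
    "How do I fine-tune this model on my dataset?",
    "Loading this checkpoint triggers a pickle deserialization exploit.",
]
print(security_pipeline(posts))
```

In practice the placeholder `classify` would be replaced by inference with the fine-tuned DistilBERT model; the keyword stage exists only to prune the candidate set cheaply before the more expensive classifier runs.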
Related papers
- Cyber Threat Intelligence for Artificial Intelligence Systems
We investigate how cyber threat intelligence may evolve to address attacks that target AI systems. We outline what an AI-oriented threat intelligence knowledge base should contain. We discuss techniques for measuring similarity between collected indicators and newly observed AI artifacts.
arXiv Detail & Related papers (2026-03-05T11:37:47Z)
- Securing Agentic AI Systems -- A Multilayer Security Framework
Securing Agentic Artificial Intelligence (AI) systems requires addressing the complex cyber risks introduced by autonomous, decision-making, and adaptive behaviors. Existing AI security frameworks do not adequately address these challenges or the unique nuances of agentic AI. This research develops a lifecycle-aware security framework specifically designed for agentic AI systems.
arXiv Detail & Related papers (2025-12-19T20:22:25Z)
- Cisco Integrated AI Security and Safety Framework Report
This paper presents Cisco's Integrated AI Security and Safety Framework ("AI Security Framework"). The framework can be used to classify, integrate, and operationalize the full range of AI risks. It integrates AI security and AI safety across modalities, agents, pipelines, and the broader ecosystem.
arXiv Detail & Related papers (2025-12-15T02:12:12Z)
- Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z)
- AI Safety in the Eyes of the Downstream Developer: A First Look at Concerns, Practices, and Challenges
Pre-trained models (PTMs) have become a cornerstone of AI-based software, allowing for rapid integration and development with minimal training overhead. This study investigates developers' concerns, practices, and perceived challenges regarding AI safety issues during AI-based software development.
arXiv Detail & Related papers (2025-03-25T08:35:30Z)
- AISafetyLab: A Comprehensive Framework for AI Safety Evaluation and Improvement
We introduce AISafetyLab, a unified framework and toolkit that integrates representative attack, defense, and evaluation methodologies for AI safety. AISafetyLab features an intuitive interface that enables developers to seamlessly apply various techniques. We conduct empirical studies on Vicuna, analyzing different attack and defense strategies to provide valuable insights into their comparative effectiveness.
arXiv Detail & Related papers (2025-02-24T02:11:52Z)
- Computational Safety for Generative AI: A Signal Processing Perspective
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI. We show how sensitivity analysis and loss landscape analysis can be used to detect malicious prompts carrying jailbreak attempts. We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
arXiv Detail & Related papers (2025-02-18T02:26:50Z)
- Position: Mind the Gap-the Growing Disconnect Between Established Vulnerability Disclosure and AI Security
We argue that adapting existing processes for AI security reporting is doomed to fail because such processes have fundamental shortcomings with respect to the distinctive characteristics of AI systems. Based on our proposal to address these shortcomings, we discuss an approach to AI security reporting and how the new AI paradigm, AI agents, will further reinforce the need for specialized AI security incident reporting advancements.
arXiv Detail & Related papers (2024-12-19T13:50:26Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data, and utilizing this data effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
- Vulnerabilities of Connectionist AI Applications: Evaluation and Defence
This article deals with the IT security of connectionist artificial intelligence (AI) applications, focusing on threats to integrity.
A comprehensive list of threats and possible mitigations is presented by reviewing the state-of-the-art literature.
The discussion of mitigations is likewise not restricted to the level of the AI system itself but rather advocates viewing AI systems in the context of their supply chains.
arXiv Detail & Related papers (2020-03-18T12:33:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.