AIBoMGen: Generating an AI Bill of Materials for Secure, Transparent, and Compliant Model Training
- URL: http://arxiv.org/abs/2601.05703v1
- Date: Fri, 09 Jan 2026 10:46:42 GMT
- Title: AIBoMGen: Generating an AI Bill of Materials for Secure, Transparent, and Compliant Model Training
- Authors: Wiebe Vandendriessche, Jordi Thijsman, Laurens D'hooge, Bruno Volckaert, Merlijn Sebrechts,
- Abstract summary: The AI Bill of Materials (AIBOM) is introduced as a standardized, verifiable record of trained AI models and their environments. Our proof-of-concept platform, AIBoMGen, automates the generation of signed AIBOMs by capturing datasets, model metadata, and environment details during training.
- Score: 1.2520011735093362
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid adoption of complex AI systems has outpaced the development of tools to ensure their transparency, security, and regulatory compliance. In this paper, the AI Bill of Materials (AIBOM), an extension of the Software Bill of Materials (SBOM), is introduced as a standardized, verifiable record of trained AI models and their environments. Our proof-of-concept platform, AIBoMGen, automates the generation of signed AIBOMs by capturing datasets, model metadata, and environment details during training. The training platform acts as a neutral, third-party observer and root of trust. It enforces verifiable AIBOM creation for every job. The system uses cryptographic hashing, digital signatures, and in-toto attestations to ensure integrity and protect against threats such as artifact tampering by dishonest model creators. Our evaluation demonstrates that AIBoMGen reliably detects unauthorized modifications to all artifacts and can generate AIBOMs with negligible performance overhead. These results highlight the potential of AIBoMGen as a foundational step toward building secure and transparent AI ecosystems, enabling compliance with regulatory frameworks like the EU's AI Act.
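The core integrity mechanism the abstract describes, hashing each training artifact, bundling the digests with metadata into a signed record, and later rejecting any tampered artifact, can be sketched as follows. This is a minimal illustration, not AIBoMGen's actual implementation: the function names and record layout are hypothetical, and a keyed HMAC stands in for the paper's digital signatures and in-toto attestations.

```python
import hashlib
import hmac
import json


def artifact_digest(data: bytes) -> str:
    """SHA-256 hex digest of an artifact's raw contents."""
    return hashlib.sha256(data).hexdigest()


def build_aibom(artifacts: dict[str, bytes], metadata: dict) -> dict:
    """Collect per-artifact digests plus training metadata into one record
    (hypothetical layout; a real AIBOM would follow an SBOM-style schema)."""
    return {
        "artifacts": {name: artifact_digest(blob) for name, blob in artifacts.items()},
        "metadata": metadata,
    }


def sign_aibom(aibom: dict, key: bytes) -> str:
    """HMAC over a canonical JSON encoding; a stand-in for a real
    asymmetric signature held by the training platform."""
    payload = json.dumps(aibom, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()


def verify_aibom(aibom: dict, signature: str, key: bytes,
                 artifacts: dict[str, bytes]) -> bool:
    """Reject if the record itself was altered or any artifact no longer
    matches its recorded digest."""
    payload = json.dumps(aibom, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # the AIBOM record was tampered with
    return all(
        artifact_digest(artifacts.get(name, b"")) == digest
        for name, digest in aibom["artifacts"].items()
    )  # every artifact still matches its digest
```

Under this sketch, swapping out a dataset or model file after signing changes its digest and verification fails, which is the tampering-detection property the evaluation measures.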
Related papers
- CaMeLs Can Use Computers Too: System-level Security for Computer Use Agents [60.98294016925157]
AI agents are vulnerable to prompt injection attacks, where malicious content hijacks agent behavior to steal credentials or cause financial loss. We introduce Single-Shot Planning for CUAs, where a trusted planner generates a complete execution graph with conditional branches before any observation of potentially malicious content. Although this architectural isolation successfully prevents instruction injections, we show that additional measures are needed to prevent Branch Steering attacks.
arXiv Detail & Related papers (2026-01-14T23:06:35Z) - AI Bill of Materials and Beyond: Systematizing Security Assurance through the AI Risk Scanning (AIRS) Framework [31.261980405052938]
Assurance for artificial intelligence (AI) systems remains fragmented across software supply-chain security, adversarial machine learning, and governance documentation. This paper introduces the AI Risk Scanning (AIRS) Framework, a threat-model-based, evidence-generating framework designed to operationalize AI assurance.
arXiv Detail & Related papers (2025-11-16T16:10:38Z) - TAIBOM: Bringing Trustworthiness to AI-Enabled Systems [0.23332469289621785]
Software Bills of Materials (SBOMs) have become critical for enhancing transparency and traceability. Current frameworks fall short in capturing the unique characteristics of AI systems. We introduce Trusted AI Bill of Materials (TAIBOM) -- a novel framework extending SBOM principles to the AI domain.
arXiv Detail & Related papers (2025-10-02T16:17:07Z) - Agentic AI for Financial Crime Compliance [0.0]
This paper presents the design and deployment of an agentic AI system for financial crime compliance (FCC) in digitally native financial platforms. The contribution includes a reference architecture, a real-world prototype, and insights into how agentic AI can reconfigure FCC work under regulatory constraints.
arXiv Detail & Related papers (2025-09-16T14:53:51Z) - Safe and Certifiable AI Systems: Concepts, Challenges, and Lessons Learned [45.44933002008943]
This white paper presents the TÜV AUSTRIA Trusted AI framework. It is an end-to-end audit catalog and methodology for assessing and certifying machine learning systems. Building on three pillars - Secure Software Development, Functional Requirements, and Ethics & Data Privacy - it translates the high-level obligations of the EU AI Act into specific, testable criteria.
arXiv Detail & Related papers (2025-09-08T17:52:08Z) - TechOps: Technical Documentation Templates for the AI Act [4.205946699819021]
This paper introduces open-source templates and examples for documenting data, models, and applications. These templates track the system status over the entire AI lifecycle. They also promote discoverability and collaboration, reduce risks, and align with best practices in AI documentation and governance.
arXiv Detail & Related papers (2025-08-12T09:58:33Z) - Rethinking Data Protection in the (Generative) Artificial Intelligence Era [138.07763415496288]
We propose a four-level taxonomy that captures the diverse protection needs arising in modern (generative) AI models and systems. Our framework offers a structured understanding of the trade-offs between data utility and control, spanning the entire AI pipeline.
arXiv Detail & Related papers (2025-07-03T02:45:51Z) - AI Model Passport: Data and System Traceability Framework for Transparent AI in Health [4.024232575199211]
This paper introduces the concept of the AI Model Passport, a structured and standardized documentation framework. It captures essential metadata to uniquely identify, verify, trace and monitor AI models across their lifecycle. An implementation of this framework is presented through AIPassport, an MLOps tool developed within the ProCAncer-I EU project for medical imaging applications.
arXiv Detail & Related papers (2025-06-27T16:16:15Z) - Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act), drawing on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
Applying these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - Guiding AI-Generated Digital Content with Wireless Perception [69.51950037942518]
We introduce an integration of wireless perception with AI-generated content (AIGC) to improve the quality of digital content production.
The framework employs a novel multi-scale perception technology to read the user's posture, which is difficult to describe accurately in words, and transmits it to the AIGC model as skeleton images.
Since the production process imposes the user's posture as a constraint on the AIGC model, it makes the generated content more aligned with the user's requirements.
arXiv Detail & Related papers (2023-03-26T04:39:03Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.